Look Up Table Matching
Hi everyone,
Apologies all — it seems I have managed to figure out the below, and I have previously had help with a similar query; see the link for the solution.
Please disregard.
I am attempting to implement some kind of look up table where I can use VLOOKUP or INDEX and MATCH to find the associated value. I'm having a little trouble with the formula to do this.
The workflow is as follows (see image for a visual aid): a user inputs data through a Smart Form, which goes into my raw data sheet - let's call this Sheet A (LH side of the image). A helper column at the end of Sheet A (in the example below, the column is "result") is where I intend to run the formula down the list of entries.
Let's assume the user made three entries: FieldA, FieldB, FieldC.
Now, a separate look up table exists on a different sheet (RH side of the image) - let's call this Sheet B. On this sheet, FieldA associates with 1 on the same row, FieldB with 2 on the same row, and FieldC with 3 on the same row.
In this example, essentially, I am attempting to bring Column5 into "result" where Primary Column and Column4 match.
Assistance on this is most appreciated.
Best Answer
• @AlexP try the following formula:
=INDEX([Column5]:[Column5], MATCH([Primary Column]@row, [Column4]:[Column4], 0), 1)
You will need a cross-sheet formula to reference Sheet B from Sheet A.
Jenn Hilber
Smartsheet Overachievers Alumni
Normalized Numbers on Sheet 3-39
In the video "Floating Point Numbers (part 2)" it is said that the black numbers on Sheet 3-39 are denormalized. Are all numbers with exp = 0 denormalized? For example, the number 0 00 11 has the mantissa 11, and 1 < 11/(2^(1-2)) < B. Doesn't that mean that 0 00 11 is normalized?
Thanks for all answers, and please excuse my English.
Best regards,
Nick Jochum
The goal of normalization is to give each number a unique representation. We achieve this by shifting the mantissa until the leftmost digit is not zero. At the same time, the exponent is decremented to preserve the value. So for B = 2 the mantissa must start with 1. Unfortunately, this leaves a gap around 0. To fill this gap we allow denormalized numbers.
Imagine you are trying to represent a number smaller than the smallest normalized number. You start off with a mantissa and should normalize it. You are going to shift it left while decrementing the exponent. At some point you can't decrement the exponent any more, because it has already reached 0, but the mantissa still does not start with 1. Now you have a denormalized number.
Are all numbers with exp = 0 denormalized?
That depends. In the process above (shift and decrement) you might reach exponent zero at the same time when the first bit of the mantissa is 1. Then you have a normalized mantissa and exponent zero.
This becomes a problem when we make use of a hidden bit. Normalized numbers will have a hidden 1, while denormalized numbers will have a hidden 0. So we need some way of spotting denormalized
numbers. IEEE 754 (slide 44, 45, 46) deals with this by reserving exponent zero for denormalized numbers. So there, yes, exponent zero means denormalized. But without a hidden bit you might have a
normalized number with exponent zero.
[..] that the black numbers on Sheet 3-39 are denormalized
If the representation makes use of a hidden bit, then yes, all the black ones are denormalized. If this is a representation without a hidden bit, then only the two innermost (left and right of zero) are denormalized.
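The hidden-bit convention described here can be inspected directly in software. A minimal Python sketch (illustrative, using IEEE 754 binary64 rather than the small format on the sheet) reads the exponent field of a normalized and a denormalized double:

```python
import struct

def fields(x):
    """Return (exponent field, mantissa field) of a binary64 float."""
    bits = struct.unpack("<Q", struct.pack("<d", x))[0]
    return (bits >> 52) & 0x7FF, bits & ((1 << 52) - 1)

# Smallest positive normalized double: exponent field is 1 (hidden bit 1).
exp_n, _ = fields(2.0 ** -1022)
# A denormalized (subnormal) double: exponent field is 0 (hidden bit 0),
# and the value lives entirely in the explicit mantissa bits.
exp_d, man_d = fields(2.0 ** -1030)
print(exp_n, exp_d, man_d != 0)  # 1 0 True
```

This matches the IEEE 754 rule above: exponent field zero is reserved for denormalized numbers (and zero itself).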
Time Series Forecasting Results
When a model finishes training, click on the model to see the results.
The model report contains a visualization of the time series forecast vs. the ground truth of the target variable. If quantiles were specified, this graph also contains the forecast intervals.
If K-Fold cross-test is used for evaluation, the forecast and forecast intervals are shown for every fold.
For multiple time series datasets, one visualization per time series is provided.
For multiple time series datasets, metrics are aggregated over all time series.
If at least one time series has an undefined metric, then the aggregated metric is also undefined.
If K-Fold cross-test is used for evaluation, these aggregated metrics are then averaged over all folds, ignoring folds that yield undefined metric values.
Metric and aggregation method:
• Mean Absolute Scaled Error (MASE): average across all time series
• Mean Absolute Percentage Error (MAPE): average across all time series
• Symmetric MAPE: average across all time series
• Mean Absolute Error (MAE): average across all time series
• Mean Squared Error (MSE): average across all time series
• Mean Scaled Interval Score (MSIS): average across all time series
• Mean Absolute Quantile Loss: first compute the mean of each quantile loss across time series, then compute the mean across all quantiles
• Mean Weighted Quantile Loss (MWQL): first compute the mean of each quantile loss across time series, then compute the mean across all quantiles; finally, divide by the sum of the absolute target value across all time series
• Root Mean Squared Error (RMSE): square root of the aggregated Mean Squared Error (MSE)
• Normalized Deviation (ND): sum of the absolute error across all time series, divided by the sum of the absolute target value across all time series
For multiple time series datasets, DSS also shows the metrics of each individual time series.
If K-Fold cross-test is used for evaluation, per time series metrics are aggregated over each fold for each time series, ignoring folds that yield undefined metric values.
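The fold-averaging rule described above — aggregate per fold, then average over folds while ignoring folds that yield undefined values — can be sketched with NumPy (the numbers here are hypothetical):

```python
import numpy as np

# Hypothetical aggregated MAPE per fold for one evaluation; NaN marks a
# fold where the metric is undefined (e.g. the target contains zeros).
fold_mape = np.array([0.10, 0.12, np.nan, 0.08])

# Average over folds, ignoring the undefined one.
aggregated = np.nanmean(fold_mape)
print(round(float(aggregated), 4))  # 0.1
```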
For multiple time series datasets, some models train one algorithm per time series under the hood (mainly ARIMA and Seasonal LOESS). The resulting per-time-series hyperparameters, if any, are shown in this tab.
Change Date: 25 Oct 2024, 9:18 a.m.
volume_fraction_of_pelite_in_sea_floor_sediment 1
"Volume fraction" is used in the construction volume_fraction_of_X_in_Y, where X is a material constituent of Y. It is evaluated as the volume of X divided by the volume of Y (including X). It may be
expressed as a fraction, a percentage, or any other dimensionless representation of a fraction. "Sea floor sediment" is sediment deposited at the sea bed. "Pelite" is sediment less than 0.063
millimeters in diameter.
team ass 3 | lynnwhite
Connecting students to knowledge
and its application
Southern Utah University
Lynn White, Ph.D.
Statistics Lab
Scale Calculations
and Our Sample Slide
For the SOLO analysis: you will calculate the mean and standard deviation for the age of your sample. You will also calculate each participant's score on every scale of your survey. Depending on your
project, this might involve calculating a mean. In other cases, you might need to calculate a total. If you forgot how to do this, watch lab lecture 1. Once this is done, you will find the mean score
for all participants - for every scale. To find out exactly what and how to calculate these, click on your project below. If you forgot how to calculate the mean and standard deviation for a variable
using SPSS, watch lab lecture 4.
Once you have found the mean and standard deviation for age and the mean for every scale on your survey, copy and paste this SPSS output into a Word doc or PDF. Highlight the means (and the one standard deviation) and upload this to the solo analysis for team assignment 3.
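For anyone who wants to double-check the SPSS numbers by hand, the same calculations are easy to reproduce (illustrative Python with made-up values, not part of the lab itself):

```python
from statistics import mean, stdev

# Hypothetical ages for a small sample.
ages = [19, 21, 22, 20, 24, 23]
print(round(mean(ages), 2))   # 21.5
print(round(stdev(ages), 2))  # sample standard deviation: 1.87

# A participant's scale score computed as the mean of their item responses.
responses = [4, 5, 3, 4]
print(mean(responses))        # 4
```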
In the lab: you will finish the "our sample" slide that will become part of your end of semester presentation. What you need to include on this slide depends on your project. See below.
NOTE: Need help finding the frequencies for each category of a variable? Check out this short video!
After the "our sample" slide is finished, include another slide that has all the relevant SPSS output pasted onto it. Highlight the numbers that you used on the "our sample" slide.
Save the pptx as a pdf and submit the pdf to Canvas (only the team leader for this assignment submits)
CLIMATE CHANGE: report the number of males, females, and "missing or other". Next, use a frequency bar graph to report the number of participants in each category of the university status variable.
Include a category for "not reported" (if applicable). Use Excel to create a colorful and clear graph (do not forget to label the x and y axis).
DISCRIMINATION AGAINST IMMIGRANTS IN THE US: report the number of males, females, and "missing or other". Next, use a frequency bar graph to report the number of participants in each category of the
USA status variable. Include a category for "not reported" (if applicable). Use Excel to create a colorful and clear graph (do not forget to label the x and y axis).
DISCRIMINATION - EVERYWHERE EXCEPT MY BACKYARD: report the number of people in each category of "sex": males, females, and "missing or other". Next, use a frequency bar graph to report the number of participants in each category of the Religion variable. Include a category for "not reported" (if applicable). Use Excel to create a colorful and clear graph (do not forget to label the x and y axis).
GUN VIOLENCE IN THE U.S.: report the number of males, females, and "missing or other". Next, use a frequency bar graph to report the number of participants in each category of the Politics variable.
Include a category for "not reported" (if applicable). Use Excel to create a colorful and clear graph (do not forget to label the x and y axis).
TRANSGENDER DISCRIMINATION: report the number of people in each category of "sex": males, females, and "missing or other". Next, use a frequency bar graph to report the number of participants in each
category of the "gender identity" variable. Include a category for "not reported" (if applicable). Use Excel to create a colorful and clear graph (do not forget to label the x and y axis).
Minimal determining sets of locally finitely-determined functionals
Let N denote the set of natural numbers, and N^N the set of all total functions from N into N. By functional we mean any function whose domain is N^N. If F is a functional and A is a partial function from N into N, we say that A is a determining segment (ds) of F if F has the same value on any two total extensions of A. A ds is called minimal (mds) if it does not properly contain another ds. For α ∈ N^N, denote by F^*(α) the set of all mds's of F which are subsets of α. A functional F is called finitely-determined (fd) if every α ∈ N^N contains a finite ds. F is locally fd (lfd) if there exists a set {F[i] | i ∈ I} of fd functionals such that F(α) = {F[i](α) | i ∈ I} and F[i](α) ≠ F[j](β) for i ≠ j and α, β ∈ N^N. Total continuous operators (from N^N to N^N) are lfd. Examples for fd F show that F^*(α) may contain (even 2^ℵ₀) infinite mds's. The two main results for lfd functionals are that every ds contains a mds and that if F^*(α) consists only of finite sets then F^*(α) is itself finite. This follows from a combinatorial theorem: if A = ⋃_{n=1}^∞ A[n], where the A[n]'s are finite and A[n] ⊄ A[m] for n ≠ m, then there exists B ⊂ A such that A[n] ⊄ B for all n, and for an infinite sequence n[1], n[2], ..., (A[n[i]] - B) ∩ (A[n[j]] - B) = ∅ when i ≠ j. A partial recursive functional F, if undefined on α, behaves differently when fd or non-fd on α. From any oracle-machine for F we can effectively construct another which makes finitely many queries about α when F is undefined and fd on α.
ASJC Scopus subject areas
• Theoretical Computer Science
• Discrete Mathematics and Combinatorics
Dive into the research topics of 'Minimal determining sets of locally finitely-determined functionals'. Together they form a unique fingerprint.
Bernoulli, Daniel
From Encyclopedia of Mathematics
Copyright notice
This article Daniel Bernoulli was adapted from an original article by Joe Gani, which appeared in StatProb: The Encyclopedia Sponsored by Statistics and Probability Societies. The original article
([http://statprob.com/encyclopedia/DanielBERNOULLI.html StatProb Source], Local Files: pdf | tex) is copyrighted by the author(s), the article has been donated to Encyclopedia of Mathematics, and its
further issues are under the Creative Commons Attribution Share-Alike License. All pages from StatProb are contained in the Category StatProb.
Daniel BERNOULLI
b. 8 February 1700 - d. 17 March 1782
Summary. Daniel Bernoulli, well known as a mathematician, provided the earliest mathematical model describing an infectious disease. In 1760 he modelled the spread of smallpox, which was prevalent at
the time, and argued the advantages of variolation.
One of the purposes of modelling the spread of an infectious disease mathematically is to provide a framework within which predictions can be made about its progress within a population susceptible
to infection. Perhaps the first such model for an infectious disease was that for smallpox due to Daniel Bernoulli.
Daniel, who was born in Groningen in 1700, was a member of the famous Bernoulli mathematical family, the second son of Johann I (Jean) Bernoulli, brother of the older and more famous Jakob I
(Jacques). He studied mathematics under his father and medicine at Heidelberg receiving his M.D. in 1721. In 1724 he published "Exercitationes Mathematicae", solving Riccati's equation, and in 1725,
he was appointed to the Academy of Sciences of St. Petersburg, which had just been founded by Peter the Great. He remained there until 1732.
After Peter's death in 1725, there followed a period of unrest until the imperial succession was settled. Bernoulli returned to Basel in 1733, to accept a post in Anatomy and Botany; this was
followed in 1743 by one in Physiology and in 1750 by another in Physics. His interests remained very diverse, and he pursued his research into mathematics (including probability), medicine, physics,
botany, anatomy and philosophy, writing several treatises on these subjects. In 1734 he published his "Mémoire sur l'Inclinaison des Orbites Planetaires", followed in 1738 by his "Traité
d'Hydrodynamique", and a paper on the measurement of risk (1738). In 1740 his "Traité sur les Marées" appeared, and shortly after, his work on vibrating strings and rods. He was involved in quarrels
with his friend Euler on the mathematical theory of the vibrating string, and with D'Alembert on his work on risk and the merits of variolation.
Basel was very close to Alsace and the French frontier, and while the Swiss cantons maintained their independence, intellectual and particularly scientific life was centered on France and its Royal
French Academy of Sciences in Paris. Between 1725 and 1749, Bernoulli won 10 Prizes from this Academy, including the Prize of Honour for his clepsydra (water clock) to measure time at sea.
For most of Bernoulli's mature life, the intellectual climate was dominated by the "philosophes", who believed in rationalism and enlightenment; their views were presented in the "Encyclopédie" of Diderot (1713-1784), for which D'Alembert wrote the Introduction. Diderot himself was the author of an article on probability and inoculation commenting on D'Alembert's critiques of Bernoulli's
papers on these topics. Although France suffered from its defeat in the Seven Years War (1756-1763), its population had grown rapidly in the later half of the 18th century, as famines and epidemics
became rarer; it reached some 26 million by 1789. The size and health of the population was of vital concern in the recruitment of soldiers for the king's army.
Bernoulli was interested in the ravages caused by smallpox, which was prevalent at the time. He devised a simple deterministic model to relate the size of a cohort $w(t)$ of individuals at time $t$
after birth, with the number of susceptibles $x(t)$ among them who had not been infected with smallpox. To make his equations easier, he assumed that individuals infected by smallpox died
immediately, or recovered immediately and became immunes $z(t)$, so that $ w(t) = x(t) + z(t)$. By setting up and solving two ordinary differential equations he obtained the relation $$ x(t) = w(t)/
[(1-a)e^{bt} + a] $$ where $b$ is the rate of catching smallpox and $a$ the proportion of those infected with smallpox who die instantaneously. Setting $a = 1/8$ and $b = 1/8$, and using Halley's
(1693) life tables he was able to estimate the numbers of susceptibles in a cohort subject to smallpox at time $t >0$, as well as the increased expectation of life if smallpox were eliminated. He
calculated that this expectation would increase from 26 years 7 months to 29 years 9 months. If smallpox could be eliminated, then by age 26, the population would be some 14% larger. He used his
results to argue the advantages of variolation; in an earlier popular article (1760), he wrote "The two great motives for inoculation are humanity and the interest of the State". It should be
mentioned that the Bernoulli family in Basel had several of its younger members variolated.
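Bernoulli's closed-form result above is easy to evaluate numerically. The sketch below (a minimal illustration, not a reconstruction of his full life-table computation) computes the susceptible fraction $x(t)/w(t)$ using his values $a = b = 1/8$:

```python
import math

def susceptible_fraction(t, a=1/8, b=1/8):
    """Fraction of a cohort still susceptible to smallpox at age t:
    x(t)/w(t) = 1 / ((1 - a) * exp(b * t) + a)."""
    return 1.0 / ((1.0 - a) * math.exp(b * t) + a)

# At birth everyone is susceptible; the fraction then decays with age.
print(susceptible_fraction(0))            # 1.0
print(round(susceptible_fraction(24), 3)) # 0.056
```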
Variolation, or the inoculation of susceptibles by live virus from patients with a mild form of the disease, was becoming popular in Europe at that time. The practice had originated in China as early
as 1,000 AD; Chinese doctors had ground the scabs of dried smallpox pustules into powder, this being inhaled by individuals wishing to acquire immunity. The custom had migrated to India and Turkey in
a modified form involving inoculation with live virus from smallpox pustules. In the early 18th century, travellers to Turkey had reported on the success of variolation, and some doctors in Europe
began to practise it. Unfortunately, even virus from a patient with a mild form of smallpox did not guarantee a mild attack in the new host.
In a paper presented to the Royal French Academy of Sciences in Paris in 1760, Bernoulli set out to present the advantages of variolation. He became involved in a quarrel with D'Alembert on the
interpretation of the relative risks of death from smallpox and variolation; D'Alembert maintained that the real risks of the latter were 17 times greater than those of smallpox itself. However, he
eventually moderated his objections, and the paper was finally published in 1766. The potential benefits of variolation became irrelevant within 3 decades, as Jenner demonstrated the safety of the
cowpox vaccine against smallpox in England in 1796.
Among his many mathematical works, Daniel Bernoulli also made important contributions to probability theory, for example to what became known as the St. Petersburg paradox, to which his cousin
Nicholas Bernoulli also contributed usefully. Sheynin (1972) refers to 8 memoirs on probability published between 1738 and 1780, also briefly described in Todhunter (1865). Possibly the most
interesting are the memoirs on the normal law, the measurement of risk (1738), the theory of errors, and various urn problems. In his "Traité d'Hydrodynamique", Bernoulli studied the effect of
pressure and temperature on gases; assuming that a gas consisted of small particles, he treated the problem by the probabilistic methods of Pascal and Fermat. This was the first attempt to develop a
kinetic theory of gases, a feat later achieved by Maxwell and Boltzmann in the nineteenth century.
Bernoulli died in Basel in 1782, laden with honours, a polymath whose works on mathematics, probability and infectious disease modelling were recognized for their originality, depth and practical value.
Acknowledgement: My thanks are due to Professor K. Dietz for his help in preparing this article.
[1] Bernoulli, Daniel (1738). Exposition of a new theory on the measurement of risk. Translation from the original Latin. Econometrica 22(1954), 23-36.
[2] Bernoulli, Daniel (1760). Essai d'une nouvelle analyse de la mortalité causée par la petite vérole et des avantages de l'inoculation pour la prévenir. Mem. Math. Phys. Acad. Roy. Sci., Paris,
1-45. In Histoire de l'Academie Royale des Sciences, 1766.
[3] Bernoulli, Daniel (1760). Réflexions sur les avantages de l'inoculation. Mercure de France, June issue, 173-190.
Gani, J. (1978). Some problems of epidemic theory (with discussion). Journal of the Royal Statistical Society Series A, 141, 323-347.
[4] Halley, E. (1693). An estimate of the mortality of mankind, drawn from curious tables of the births and funerals at the city of Breslaw; with an attempt to ascertain the price of annuities upon
lives. Philosophical Transactions of the Royal Society of London, 17, No.196, 596-610.
[5] Seal, H. (1977). Studies in the history of probability and statistics. XXXV, Multiple decrements or competing risks. Biometrika, 64, 429-439.
[6] Shafer, G. (1982). The Bernoullis. Encyclopedia of Statistical Sciences, Wiley, New York, Vol. 1, 214-219.
[7] Sheynin, O.B. (1972). D. Bernoulli's work on probability. RETE Strukturgeschichte der Naturwissenschaften 1, 273-300. Reprinted in Studies in the History of Statistics and Probability, Vol. II
(1977), M.G. Kendall and R.L. Plackett, eds. Griffin, London, 105-132.
[8] Todhunter, I. (1865). A History of the Mathematical Theory of Probability. Reprinted by Chelsea, New York, 1949.
Reprinted with permission from Christopher Charles Heyde and Eugene William Seneta (Editors), Statisticians of the Centuries, Springer-Verlag Inc., New York, USA.
How to Cite This Entry:
Bernoulli, Daniel. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Bernoulli,_Daniel&oldid=55609
Printable Math Multiplication Chart
Printable Math Multiplication Chart – Getting a free multiplication chart is a great way to help your student learn their times tables. Here are some tips for using this helpful resource. First, look at the patterns in the multiplication table. Next, use the chart as an alternative to flashcard drills or as a homework helper. Finally, use it as a reference guide to practice the times tables. The free version of the multiplication chart only includes times tables for the numbers 1 through 12.
Download a free printable multiplication chart
Multiplication charts and tables are invaluable learning resources. Download a free multiplication chart PDF to help your child memorize the multiplication tables and charts. You can laminate the chart for durability and place it in your child's binder at home. These free printable resources are great for second-, third-, fourth-, and fifth-grade students. This article will explain how to use a multiplication chart to teach your child math facts.
You will find free printable multiplication charts in different shapes and sizes. You can get multiplication chart printables in 10×10 and 12×12, and there are also blank or mini charts for smaller children. Multiplication grids come in black and white, color, and mini versions. Most multiplication worksheets follow the Elementary Mathematics Benchmarks for Grade 3.
Patterns in a multiplication chart
Students who have learned the addition table may find it easier to recognize patterns in a multiplication chart. This lesson shows the properties of multiplication, such as the commutative property, to help students understand the patterns. For example, students may find that the product of a number multiplied by two will always come out as the same number. A similar pattern can be found for numbers multiplied by a factor of two.
Students may also find a pattern in a multiplication table worksheet. Those who have trouble remembering multiplication facts should work with a multiplication table worksheet. It helps students understand that there are patterns in rows, columns, and diagonals, and in multiples of two. In addition, they can use the patterns in the multiplication chart to share information with others. This activity will also help students remember that seven times 9 equals 63, as opposed to 70.
Using a multiplication table chart instead of flashcard drills
Using a multiplication table chart as an alternative to flashcard drills is an excellent way to help kids learn their multiplication facts. Kids often find that visualizing the answer helps them remember the fact. This method of learning works well as a stepping stone to more challenging multiplication facts. Picture climbing a huge pile of rocks – it's much easier to climb small stones than to scale a sheer rock face!
Kids learn better by using a variety of practice strategies. For example, they can combine multiplication facts and times tables to build a cumulative review, which cements the facts in long-term memory. You can spend hours planning a lesson and making worksheets. You can also look for fun multiplication games on Pinterest to engage your child. Once your child has mastered a particular times table, you can move on to the next.
Using a multiplication table chart as a homework helper
Using a multiplication table chart as a homework helper can be a very effective way to review and reinforce the concepts in your child's math class. Multiplication table charts highlight multiplication facts from 1 to 10 and fold into quarters. These charts also display multiplication facts in a grid format so that students can see patterns and make connections between multiples. By incorporating these tools into the home environment, your child can learn the multiplication facts while having fun.
Using a multiplication table chart as a homework helper is a great way to encourage students to practice problem-solving skills, learn new techniques, and make homework assignments easier. Kids can benefit from learning the tricks that will help them solve problems faster. These tricks can help them develop confidence and quickly find the correct product. This method is ideal for kids who are having trouble with handwriting or other fine motor skills.
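For readers who would rather generate a chart than download one, a few lines of Python (an illustrative sketch, not one of the printable resources described above) will print a times table of any size:

```python
def multiplication_chart(n=12):
    """Return an n-by-n times table as aligned text."""
    width = len(str(n * n)) + 1
    lines = []
    for i in range(1, n + 1):
        lines.append("".join(f"{i * j:>{width}}" for j in range(1, n + 1)))
    return "\n".join(lines)

print(multiplication_chart(3))
# prints:
#  1 2 3
#  2 4 6
#  3 6 9
```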
Acquiring an optimal amount of information for choosing from alternatives
An agent operating in the real world must often choose from among alternatives in incomplete information environments, and frequently it can obtain additional information about them. Obtaining
information can result in a better decision, but the agent may incur expenses for obtaining each unit of information. The problem of finding an optimal strategy for obtaining information appears in
many domains: for example, in e-commerce when choosing a seller, and in solving programming problems when choosing heuristics. We focus on cases where the agent has to decide in advance how much information to obtain about each alternative. In addition, each unit of information about an alternative gives the agent only partial information about the alternative, and the range of each information unit is continuous. We first formalize the problem of deciding how many information units to obtain about each alternative, and we specify the expected utility function of the agent, given
a combination of information units. This function should be maximized by choosing the optimal number of information units. We proceed by suggesting methods for finding the optimal allocation of
information units between the different alternatives.
Publication series
Name Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume 2446 LNAI
ISSN (Print) 0302-9743
ISSN (Electronic) 1611-3349
Conference 6th International Workshop on Cooperative Information Agents, CIA 2002
Country/Territory Spain
City Madrid
Period 18/09/02 → 20/09/02
Dive into the research topics of 'Acquiring an optimal amount of information for choosing from alternatives'. Together they form a unique fingerprint.
Identifying Nonlinearity in Data
The linear regression model assumes that there is a linear relationship between the predictors and the response variable. However, if the true relationship is nonlinear, then virtually all of the
conclusions that we draw from the fit do not hold much credibility. In addition, the prediction accuracy of the model can be reduced significantly. In the forthcoming video, Anjali explains how we
can identify nonlinearity in data.
So, in the video, you learnt how to identify nonlinearity in data for simple linear regression and multiple linear regression:
• For Simple Linear Regression
□ Plot the independent variable against the dependent variable to check for nonlinear patterns.
• For Multiple Linear Regression, since there are multiple predictors, we instead plot the residuals versus the predicted values, ŷ_i. Ideally, the residual plot will show no observable pattern. In case a pattern is observed, it may indicate a problem with some aspect of the linear model. Apart from that:
□ Residuals should be randomly scattered around 0.
□ The spread of the residuals should be constant.
□ There should be no outliers in the data.
If nonlinearity is present, then we may need to plot each predictor against the residuals to identify which predictor is nonlinear.
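As a concrete sketch of this residual check (a hypothetical NumPy example, not from the course material): fit a straight line to data with a quadratic relationship and inspect the residuals.

```python
import numpy as np

# Fit a straight line to data generated from a quadratic relationship,
# then inspect the residuals: for a correct linear model they should be
# randomly scattered around 0; here they retain a systematic pattern.

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = x ** 2 + rng.normal(scale=0.1, size=x.size)   # true relation is nonlinear

slope, intercept = np.polyfit(x, y, 1)            # simple linear fit
residuals = y - (slope * x + intercept)

# Correlation between the residuals and x**2 reveals the leftover
# curvature that the linear model failed to capture.
pattern = np.corrcoef(residuals, x ** 2)[0, 1]
print(round(pattern, 2))   # close to 1: the residuals are not patternless
```

In practice one would plot `residuals` against `x` (or against ŷ for multiple regression) and look for the pattern visually.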
How to handle nonlinear data?
Once we have the residual plots showing nonlinearity, we might need to make some changes, either to the model or to the data. In the upcoming video, you will learn how to do that.
In the above video, you learnt that there are three methods to handle nonlinear data:
• Polynomial regression
• Data transformation
• Nonlinear regression
In the next segment, we will deal with the first method to handle non-linearity in data that is Polynomial Regression. | {"url":"https://www.internetknowledgehub.com/identifying-nonlinearity-in-data/","timestamp":"2024-11-09T23:34:47Z","content_type":"text/html","content_length":"79798","record_id":"<urn:uuid:db78a6bf-57bc-4bc4-b3a7-116525f67985>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00594.warc.gz"} |
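The first of these methods, polynomial regression, can be sketched with NumPy (hypothetical data): adding an x² term lets the model capture the curvature a straight line misses.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 4, 100)
y = 1.0 + 2.0 * x - 0.5 * x ** 2 + rng.normal(scale=0.05, size=x.size)

# Degree-1 (linear) vs degree-2 (polynomial) least-squares fits:
# the quadratic term shrinks the residual sum of squares.
lin = np.polyval(np.polyfit(x, y, 1), x)
quad = np.polyval(np.polyfit(x, y, 2), x)

sse_lin = np.sum((y - lin) ** 2)
sse_quad = np.sum((y - quad) ** 2)
print(sse_quad < sse_lin)   # True: the polynomial fit explains the data better
```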
CSIS 212 Programming Assignment 4: Multiples, Circle Area, Computer-Assisted Instruction

Exercise 5.16 JHTP (Multiples): Write a method isMultiple that determines, for a pair of integers, whether the second integer is a multiple of the first. The method should take 2 integer arguments and return true if the second is a multiple of the first and false otherwise. [Hint: Use the remainder operator.] Incorporate this method into an application that inputs a series of pairs of integers (1 pair at a time) and determines whether the second value in each pair is a multiple of the first.
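The remainder-operator idea in the hint can be sketched as follows (shown in Python for brevity; the exercise itself asks for Java):

```python
def is_multiple(first, second):
    # second is a multiple of first iff dividing leaves no remainder.
    # Guard against division by zero when first == 0.
    if first == 0:
        return second == 0
    return second % first == 0

print(is_multiple(3, 9))   # True
print(is_multiple(4, 9))   # False
```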
Exercise 5.20 JHTP (Circle Area): Write an application that prompts the user for the radius of a circle and uses a method called circleArea to calculate the area of the circle.
Exercise 5.35 JHTP (Computer-Assisted Instruction): The use of computers in education is referred to as computer-assisted instruction (CAI). Write a program that will help an elementary school student learn multiplication. Use a Random object to produce 2 positive 1-digit integers. The program should then prompt the user with a question, such as "How much is 6 times 7?" The student then inputs the answer. Next, the program checks the student's answer. If it's correct, display the message "Very Good!" and ask another multiplication question. If the answer is wrong, display the message "No. Please try again." and let the student try the same question repeatedly until the student finally gets it right. A separate method should be used to generate each new question. This method should be called once when the application begins execution and each time the user answers the question correctly.
We study private computations in information-theoretical settings on networks that are not 2-connected. Non-2-connected networks are "non-private" in the sense that most functions cannot privately be
computed on them. We relax the notion of privacy by introducing lossy private protocols, which generalize private protocols. We measure the information each player gains during the computation. Good
protocols should minimize the amount of information they lose to the players. Throughout this work, privacy always means 1-privacy, i.e. players are not allowed to share their knowledge. Furthermore,
the players are honest but curious, thus they never deviate from the given protocol.
Through the protocol's use of randomness, the communication strings a certain player can observe on a particular input determine a probability distribution. We define the loss of a protocol to a player as
the logarithm of the number of different probability distributions the player can observe. For optimal protocols, this is justified by the following result: For a particular content of any player's
random tape, the distributions the player observes have pairwise fidelity zero. Thus the player can easily distinguish the distributions.
The simplest non-2-connected network consists of two blocks that share one bridge node. We prove that on such networks, communication complexity and the loss of a private protocol are closely
related: Up to constant factors, they are the same.
Then we study one-phase protocols, an analogue of one-round communication protocols. In such a protocol each bridge node may communicate with each block only once. We investigate in which order a
bridge node should communicate with the blocks to minimize the loss of information. In particular, for symmetric functions it is optimal to sort the components by increasing size. Then we design a
one-phase protocol that for symmetric functions simultaneously minimizes the loss at all nodes where the minimum is taken over all one-phase protocols.
Finally, we prove a phase hierarchy. For any k there is a function such that every (k−1)-phase protocol for this function has an information loss that is exponentially greater than that of the best k-phase protocol.
TR03-071 | 18th August 2003 00:00
Privacy in Non-Private Environments
The goal of biplotEZ is to provide users an EZ-to-use platform for visually representing their data with biplots. Currently, this package includes principal component analysis (PCA) and canonical
variate analysis (CVA) biplots. This is accompanied by various formatting options for the samples and axes. Alpha-bags and concentration ellipses are included for visual enhancement.
You can install the development version of biplotEZ like this:
This is a basic example which shows you how to construct a PCA biplot:
#> Attaching package: 'biplotEZ'
#> The following object is masked from 'package:stats':
#> biplot
biplot (iris[,1:4], Title="Test PCA biplot") |> PCA() |> plot()
While the PCA biplot provides a visual representation of the overall data set, optimally representing the variance in 1, 2 or 3 dimensions, the CVA biplot aims to optimally separate specified groups
in the data. This is a basic example which shows you how to construct a CVA biplot:
An over-the-top example of changing all the formatting and adding all the bells and whistles:
biplot (iris[,1:4], group.aes=iris[,5]) |> PCA() |>
samples(col="gold", pch=15) |>
axes(which=2:3, col="cyan", label.cex=1.2, tick.col="blue",
tick.label.col="purple") |>
alpha.bags (alpha=c(0.5,0.75,0.95), which=3, col="red", lty=1:3, lwd=3) |>
ellipses(alpha=0.9, which=1:2, col=c("green","olivedrab")) |>
legend.type(bags = TRUE, ellipses=TRUE) |>
plot()
#> Computing 0.5 -bag for virginica
#> Computing 0.75 -bag for virginica
#> Computing 0.95 -bag for virginica
#> Computing 2.15 -ellipse for setosa
#> Computing 2.15 -ellipse for versicolor
CA biplot
The default CA biplots represents row principal coordinates with a call such as:
To change to row standard coordinates use a call such as:
biplot(HairEyeColor[,,2], center = FALSE) |>
CA(variant = "Stand") |> samples(col=c("magenta","purple"), pch=c(15,18)) |> plot()
Regression biplot
With the function regress, linear regression biplot axes can be fitted to a biplot.
Report Bugs and Support
If you encounter any issues or have questions, please open an issue on the GitHub repository. | {"url":"https://cran.uib.no/web/packages/biplotEZ/readme/README.html","timestamp":"2024-11-06T01:54:19Z","content_type":"application/xhtml+xml","content_length":"14151","record_id":"<urn:uuid:97594b62-2397-4012-ac37-b198d9396176>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00153.warc.gz"} |
Maximum Entropy Matching: An Approach to
Fast Template Matching
Frans Lundberg
October 25, 2000
Contents

1 Introduction
2 Maximum Entropy Matching
  2.1 The cornerstones of Maximum Entropy Matching
  2.2 Bitset creation
  2.3 Bitset comparison
3 PAIRS and the details of the bitset comparison algorithm
  3.1 PAIRS
  3.2 Motivation for PAIRS
  3.3 The bitset comparison algorithm
  3.4 Implementation issues
  3.5 Speed
4 A comparison between PAIRS and normalized cross-correlation
  4.1 Test setup
  4.2 Generation of image distortions
    4.2.1 Gaussian noise, NOISE
    4.2.2 Rotation of the image, ROT
    4.2.3 Scaling of the image, ZOOM
    4.2.4 Perspective change, PERSP
    4.2.5 Salt and pepper noise, SALT
    4.2.6 A gamma correction of the intensity values, GAMMA
    4.2.7 NODIST and STD
  4.3 Relevance of the distortions
  4.4 Results
  4.5 Other template sizes
  4.6 Performance using other images
5 Statistics of PAIRS bitsets
  5.1 Statistics of acquired bitsets
Introduction

One important problem in image analysis is the localization of a template in a larger image. Applications where the solution of this problem can be used include tracking, optical flow, and stereo vision. The matching method studied here solves this problem by defining a new similarity measurement between a template and an image neighborhood. This similarity is computed for all possible integer positions of the template within the image. The position for which we get the highest similarity is considered to be the match. The similarity is not necessarily computed using the original pixel values directly, but can of course be derived from higher-level image features.

The similarity measurement can be computed in different ways, and the simplest approach is correlation-type algorithms. Aschwanden and Guggenbühl [2] have done a comparison between such algorithms. One of the best and simplest algorithms they tested is normalized cross-correlation (NCC). Therefore this algorithm has been used to compare with the PAIRS algorithm that is developed by the author and described in this text. It uses a completely different similarity measurement based on sets of bits extracted from the template and the image.
This work is done within WITAS, which is a project dealing with UAVs (unmanned aerial vehicles). Two specific applications of the developed template matching algorithm have been studied.

1. One application is tracking of cars in video sequences from a helicopter.
2. The other one is computing optical flow in such video sequences in order to detect moving objects, especially vehicles on roads.

The video from the helicopter is in color (RGB) and this fact is used in the presented tracking algorithm. The PAIRS algorithm has been applied to these two applications and the results are presented.
computer the longer it takes and therefore the data that we compare should have maximum average information, that is, maximum entropy. We will see that this approach can be useful to create template
matching algorithms which are in the order of 10 times faster then correlation (NCC) without decreasing the performance.
Maximum Entropy Matching
The cornerstones of Maximum Entropy Matching
The purpose of template matching in image processing is to find the displacement r such that the image function I(x − r) is as similar as possible to the template function T(x). This can be expressed as

    r_match = argmax_r similarity( T(x), I(x − r) )        (1)
Maximum Entropy Matching (MEM) and the PAIRS method described later are valid for all types of discretely sampled signals of arbitrary dimension, but here we will discuss the specific case of
template matching of RGB images.
The difficult part with template matching is to find a similarity measurement that will give a displacement of the template which corresponds to the real displacement of the signal in the world
around us. For many applications it is difficult to even define this ideal displacement, since the difference between the image neighborhood and the template does not consist of a pure translation.
This fact makes it difficult to compare different template matching algorithms. Furthermore, the similarity measurement that should be used is application dependent. For example, rotation invariance
might be wanted for one application, but not for another.
Maximum Entropy Matching does not necessarily lead to a similarity measurement that is better than others, but it aims to increase the speed of the template matching while keeping the performance. It
works by comparing derived image features of the image and the template for each possible displacement of the template. The approach is based on the following statements.
1. The less data we compare for each possible template position, the faster this comparison will be.
2. The data we compare should have high entropy.
3. On the average, less data needs to be compared to conclude two objects are dissimilar than to conclude they are similar. This statement will be called the fast dissimilarity principle.
4. The data that we use for the comparison should be chosen so that the similarity measurement will be distortion persistent.
Statement 1 is true in the sense that the time to compute a similarity measurement is usually proportional to the amount of data that is compared. For correlation-type template matching all of the pixel data in the template is used in the matching algorithm. We will see that the amount of data that is used for comparison can be decreased substantially using the MEM approach. The compare time also depends on the way the data is compared. Not counting normalizations, the similarity measurement for these algorithms is acquired by one multiplication and one addition for each byte of pixel data (assuming each intensity value is stored as one byte). The comparison of data for the MEM approach is done by an XOR operation and a lookup table, which is faster per byte than the correlation-type approaches, and simple to implement in hardware.
Statement 2 is intuitively appealing. To increase the speed of the matching algorithm we want to use as little data as possible in the comparisons, but we wish to use as much information as possible. Therefore the data used in the comparison should have high average information, that is, high entropy. Experiments show that it is possible to reduce the original amount of data used in the comparisons on the order of 10 to 100 times while keeping good performance.
It is not very difficult to prove that the maximum entropy of digital data is only achieved when the following two criteria are fulfilled. One, the probability of each bit in the data being 1 is
0.50. Two, all bits should be statistically independent. When we talk about entropy we view the data to be compared as one random variable, and when we talk about independence of bits, the single
bits are considered random variables.
Since statistical independence of the bits is necessary to achieve maximum entropy it is natural and necessary to view the compare data as a set of bits. This view is used in MEM where a bitset is
extracted from the template and from each neighborhood in
the image. The similarity measurement used to compare the image neighborhood bitset and the template bitset is simply the number of equal bits.
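The bitset similarity just defined — the number of equal bits — can be computed with an XOR and a bit count; a small Python sketch (a stand-in for the C implementation described later):

```python
def similarity(bs1: bytes, bs2: bytes) -> int:
    # Number of equal bits: XOR gives 1 where bits differ, so the
    # equal bits per byte are 8 - popcount(a XOR b).
    assert len(bs1) == len(bs2)
    equal = 0
    for a, b in zip(bs1, bs2):
        equal += 8 - bin(a ^ b).count("1")
    return equal

print(similarity(b"\x00", b"\x00"))   # 8  (all bits equal)
print(similarity(b"\xff", b"\x00"))   # 0  (no bits equal)
print(similarity(b"\xf0", b"\x00"))   # 4
```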
Lossy data compression of images is a large research area that I believe can be very useful in order to find high-entropy image features that are good to use for template matching. However, the problem of finding these compare features and the problem of how to compress image data are fundamentally different, since there is no demand for image reconstruction from the compare features.
Statement 3 (the fast dissimilarity principle) is an important and a very general statement that is valid for all types of objects that are built up by smaller parts. The statement comes from the
fact that two objects are considered similar only if all their parts are similar. If a part from object A is dissimilar to the corresponding part of object B, we can conclude that A and B are
dissimilar. If the part from A and the corresponding part of B are similar we cannot conclude anything about the similarity between the whole objects. Therefore it usually takes less data to conclude
that two objects are dissimilar than to conclude they are similar. We will see how this statement can be used to speed up the matching algorithm.
Statement 4. In template matching no image neighborhood is identical to the template. There is always some distortion (for example: noise, rotation or a shadow on the template) present and we must
try to choose the data we compare so that the similarity measurement is affected as little as possible by these distortions.
The following two sections will describe how to extract compare data, and then how to compare this data for fast, high performance template matching.
Bitset creation
Maximum Entropy Matching consists of two separate parts: bitset creation and bitset
comparison. In the bitset creation part a set of bits is produced for the template and
for each neighborhood in the image, directly or indirectly from the pixel data. How these bitsets are created is not determined by MEM. The optimal bitsets to extract depend on what image distortions are expected for the intended application. One example of a bitset creation algorithm is PAIRS. We demand three things from the bitset creation algorithm.
1. The created bitsets should have high entropy.
2. The created bitsets should be resistant to the image distortions that appears for the intended application.
3. The bitset creation time should be short.
In order to compare two different bitset creation algorithms we must have measurements of "high entropy", "distortion resistance" and "bitset creation time". We will suggest possible ways of measuring these quantities.
It is difficult to estimate the entropy of a bitset consisting of more than a few bits. If the extracted bitset only has, say, 8 bits, we can estimate the full discrete probability distribution using a database of image neighborhoods. The entropy is then computed by its definition from the probability distribution. This is possible for a 1-byte bitset which has only 256 possible states. But for a 4-byte bitset we have 2^32 ≈ 4·10^9 possible states, and an explicit estimation of the full probability distribution is not possible. There are other ways to estimate entropy. In [3] a method for estimating the entropy of one-dimensional information sequences is applied to gray-scale images. The method
uses pattern matching to estimate the entropy. More about pattern matching in infor-mation theory can be found in [4].
Since the entropy is difficult to estimate we can instead use a measurement of how close to maximum entropy the data is. Assuming we have a database of image neighborhoods, we can find the probabilities for each bit being set to 1. These probabilities should be close to 0.50 to achieve high entropy. Also, we can measure how independent the bits are by estimating the correlation ρ.
    ρ_ij = E( (b_i − E(b_i)) (b_j − E(b_j)) ) / sqrt( V(b_i) V(b_j) )        (2)

E denotes expectation value, V denotes variance, and b_k denotes the k'th bit in the bitset. Since we are dealing with binary distributions we can fortunately conclude that if ρ_ij = 0 the i'th and the j'th bits are independent.
We can construct a measurement of how much the bitsets deviate from having maximum entropy by studying how much they deviate from the assumption of 50 per cent probability of a bit set to one and from the desired independence of the bits.
If the bits in a bitset have a probability of being 1 equal to 0.50 and they are independent, the distribution of the number of ones in the bitset will follow a binomial distribution. Therefore we can define another measurement of how close to maximum entropy the bitsets are as the deviation from a binomial distribution of the number of ones in the bitsets. These two ways of measuring how close to maximum entropy the bitsets are will be exemplified.
Distortion persistence of the bits can be measured by performing experiments on
a number of templates subject to controlled distortions. A bitset is created from the template before and after the distortion. The number of bits that are equal of the two bitsets is a measurement
of how persistent the bitset creation algorithm is to the applied distortion.
The bitset creation time can be measured for a specific computer. However, if bitset creation method A is faster than method B on computer X, A is not necessarily faster on computer Y. Also, the implementations are often not trivial to optimize. So it is not always possible to determine which bitset creation method is generally the fastest.
Bitset comparison
The bitset comparison part of the MEM is not application dependent. For each image neighborhood and for the template a set of bits is generated somehow. The similarity measure in the template
matching algorithm is simply the number of equal bits in the template bitset and the image neighborhood bitset. The bitsets consist of a whole number of bytes for practical reasons.
It is possible to use the fast dissimilarity principle (MEM Statement 3 in section 2.1) to decrease the bitset compare time. This is done by comparing the first number of bytes in the neighborhood
and the template bitsets. If the number of equal bits in these parts of the bitsets is below a certain threshold the bitsets are considered dissimilar, and the similarity value is set to zero. If the
number of equal bits is not below the threshold the whole bitsets are compared and the similarity measure is the number of equal bits of the whole bitsets. This algorithm and its implementation is
described in detail in the next section.
PAIRS and the details of the bitset comparison algorithm

PAIRS
PAIRS is an algorithm to create bitsets of an arbitrary number of bytes from neighborhoods in RGB images. PAIRS can easily be modified to deal with other kinds of signals of arbitrary inner and outer dimension. The PAIRS method is based on random pairs of pixels within a neighborhood of an image. Each bit in a bitset is created from a certain pair of pixels. A bit is set to 1 if the first pixel value is larger than the other in the pair. Otherwise the bit is set to 0. The pair of pixel values are chosen in the same color band. The random pairs are chosen according to Algorithm 1, which is presented in C-like pseudo-code.
Algorithm 1
---
// Computes a list of pairs to be used
// for bitset creation.
INPUT VARIABLES
  N        Side length of the square neighborhood
  n        Number of bytes in each bitset
  colors   Number of colors (3 for RGB images)
OUTPUT VARIABLES
  list     List of pixel pair coordinates used to
           form the bitsets, size: 8*n, where each
           element contains the coordinates of the
           pixel pair and a color band index
FUNCTIONS CALLED
  rand     rand(low, high) returns a random integer
           between low and high (inclusive).
ALGORITHM
For i1 = 0 to n*8-1
{
  index1x = rand(0, N-1);
  index1y = rand(0, N-1);
  index2x = rand(0, N-1);
  index2y = rand(0, N-1);
  index3  = rand(0, colors-1);
  Store all five index variables in list[i1].
}
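The pseudocode above can be transcribed almost line for line into Python (a sketch; the tuple layout of the `list` entries is an assumption):

```python
import random

def make_pair_list(N, n, colors=3, seed=0):
    # One entry per bit: 8*n bits for an n-byte bitset. Each entry
    # holds the two pixel coordinates and the color band to compare.
    rng = random.Random(seed)
    pair_list = []
    for _ in range(8 * n):
        pair_list.append((
            rng.randint(0, N - 1), rng.randint(0, N - 1),  # first pixel
            rng.randint(0, N - 1), rng.randint(0, N - 1),  # second pixel
            rng.randint(0, colors - 1),                    # color band
        ))
    return pair_list

pairs = make_pair_list(N=16, n=64)
print(len(pairs))   # 512 entries = 8 * 64 bits
```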
---
When the list of pixel pairs has been created according to Algorithm 1, or a pre-computed list is loaded from a file, the actual bitsets are created according to Algorithm 2.
Algorithm 2
---
// Creates bitsets from image neighborhoods.
INPUT VARIABLES
  im     An RGB image,
         size: imSize1 x imSize2 x 3
  list   A list of pixel pairs created
         by Algorithm 1
OUTPUT VARIABLES
  bs     Image bitset,
         size: (imSize1-N+1) x (imSize2-N+1) x 8*n
ALGORITHM
For i1 = 0 to imSize1-N, i2 = 0 to imSize2-N
/* For all image neighborhoods */
{
  For i3 = 0 to 8*n-1
  /* For all bits in the bitset */
  {
    Get index1x, index1y, index2x, index2y and index3 from list[i3].
    If im[i1+index1x, i2+index1y, index3] > im[i1+index2x, i2+index2y, index3]
      { bs[i1,i2,i3] = 1; }
    Else
      { bs[i1,i2,i3] = 0; }
  }
}
---
Note that Algorithm 2 is used to create both the image bitset and the template bitset. The size of the resulting template bitset will be 1 × 1 × 8n bits, or simply 8n bits (n bytes) if we neglect the singleton dimensions.
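A Python transcription of Algorithm 2 in the same spirit (a sketch: bits are kept one per list element rather than packed into bytes as in the C implementation):

```python
def create_bitsets(im, pair_list, N):
    # im: 3-D list indexed [row][col][color]; returns a 2-D grid of
    # bitsets, one per N x N neighborhood, each a list of 0/1 bits.
    rows, cols = len(im) - N + 1, len(im[0]) - N + 1
    bs = [[None] * cols for _ in range(rows)]
    for i1 in range(rows):
        for i2 in range(cols):
            bits = []
            for (x1, y1, x2, y2, c) in pair_list:
                bits.append(1 if im[i1 + x1][i2 + y1][c] > im[i1 + x2][i2 + y2][c] else 0)
            bs[i1][i2] = bits
    return bs

# Tiny single-"color" example: im[r][c][0] = r*3 + c on a 3x3 grid.
im = [[[r * 3 + c] for c in range(3)] for r in range(3)]
bs = create_bitsets(im, [(0, 0, 1, 1, 0), (1, 1, 0, 0, 0)], N=2)
print(bs[0][0])   # [0, 1]
```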
Motivation for PAIRS
The PAIRS method for bitset creation that was described in the previous section has been developed since it is a good compromise between the desired properties of MEM bitsets as described in 2.2. The entropy of these bitsets is high, the similarity measurement is resistant to certain kinds of distortion, and the bitset creation time is low. I believe other ways to create bitsets can prove better than PAIRS for some applications, but PAIRS is fast and rather simple to implement, and it works on the original input intensity data. The method has proved useful in applications and is used to demonstrate the maximum entropy approach to template matching. Matching using bitsets created with PAIRS compares very well with correlation approaches according to experiments with controlled distortions, see Section 4. When high invariance against certain types of distortions, such as rotation, is needed I believe higher-level image features should be used when forming the bitsets.
The bitset comparison algorithm
The previous section described the PAIRS way to create bitsets. The bitset comparison algorithm is used to match the template bitset with the image bitsets. The algorithm is not dependent on what bitset creation method is used. There are two versions of the bitset comparison algorithm (Algorithm 3), with or without sort out. If sort out is not used, the similarity measurement between two bitsets is simply the number of equal bits. Sort out can be used to increase the speed of the algorithm by setting the similarity to zero if the number of equal bits in the first n1 bytes is less than a certain limit. The principle behind this is the fast dissimilarity principle discussed in Section 2.1. The average number of bytes that have to be compared can be reduced substantially by using sort out. Notice that a drawback with using sort out is that the execution time will be dependent on the input data.
---
// Computes a similarity measurement between
// the template bitset and the image bitsets.
INPUT VARIABLES
  im_bs    Image bitset, size: s1 x s2 x 8*n
  temp_bs  Template bitset, size: 8*n
  n1       Number of bytes to use for initial
           sort out.
  thres    Threshold
OUTPUT VARIABLES
  s        The similarity measurement,
           size: s1 x s2
FUNCTIONS CALLED
  simil    simil(bs1, bs2) computes the number
           of equal bits in bitsets bs1 and bs2.
ALGORITHM (without sort out)
For i1 = 0 to s1-1, i2 = 0 to s2-1
{
  s[i1,i2] = simil(im_bs[i1,i2,0:n-1], temp_bs[0:n-1]);
}
ALGORITHM (with sort out)
For i1 = 0 to s1-1, i2 = 0 to s2-1
{
  sortout_sim = simil(im_bs[i1,i2,0:n1-1], temp_bs[0:n1-1]);
  If sortout_sim < thres
    { s[i1,i2] = 0; }
  Else
    { s[i1,i2] = sortout_sim + simil(im_bs[i1,i2,n1:n-1], temp_bs[n1:n-1]); }
}
---
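Algorithm 3, including the sort-out variant, can be sketched in Python with bitsets held as byte strings (an illustrative stand-in for the C implementation):

```python
def simil(bs1, bs2):
    # Number of equal bits between two equal-length byte strings.
    return sum(8 - bin(a ^ b).count("1") for a, b in zip(bs1, bs2))

def match_similarity(im_bs, temp_bs, n1, thres):
    # im_bs: 2-D grid of n-byte bitsets; temp_bs: the template bitset.
    # First compare only n1 bytes; positions scoring below `thres`
    # are sorted out (similarity set to 0) without a full comparison.
    s = [[0] * len(row) for row in im_bs]
    for i1, row in enumerate(im_bs):
        for i2, bs in enumerate(row):
            partial = simil(bs[:n1], temp_bs[:n1])
            if partial < thres:
                s[i1, i2] if False else None  # sorted out, s stays 0
            else:
                s[i1][i2] = partial + simil(bs[n1:], temp_bs[n1:])
    return s

# With a 2-byte template and n1 = 1, a neighborhood whose first byte
# already disagrees completely is rejected without a full comparison.
print(match_similarity([[b"\xff\xff", b"\x00\xff"]], b"\xff\xff", n1=1, thres=5))
```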
Implementation issues
The implementations of the algorithms presented in this text do not follow the exact syntax presented, but they are functionally equivalent. To compare the speed of MEM-PAIRS with correlation-type approaches, Algorithms 1 through 3 and NCC have been implemented in C and run on a general-purpose computer of the type UltraSPARC-II, 333 MHz. Execution times mentioned in this text refer to runs on this computer. The details of the NCC algorithm are explained in section 4.1. Algorithm 1 is not time critical since the list of pixel pairs can be pre-computed. The bitsets created by Algorithm 2 are stored as arrays of unsigned char's, that is, as arrays of bytes. This is not necessarily the fastest way, but it is flexible. Some effort has been made to optimize the innermost loop of the algorithm. The bitset creation time t_cr has been measured to 0.40 µs/byte. This time does not depend much on the number of bytes per bitset or the size of the image.

Algorithm 3 is about counting the number of equal bits in two arrays of bytes. An XOR operation is performed between each of the bytes in the image array and the corresponding template array byte. The number of zeros in the resulting byte (which is the number of equal bits of the two compared bytes) is computed with a 256-item long lookup table. The number of equal bits from each byte in the arrays is summed, and the result is the similarity measurement between the bitsets. The time to compare two bytes in the bitsets, t_cmp, has been measured to 0.025 µs.

The NCC algorithm has also been implemented in C. Some efforts have been made to optimize the code. The time it takes to compare two bytes (two intensity values) is 0.120 µs, denoted t_cmpN. The compare time per byte is based on the amount of input data even though the bytes are converted to double's internally to do the multiplication in the NCC algorithm.
In this section we will compare the speed of MEM-PAIRS³ and NCC for template matching. Let tm and tmN denote the total time to match the template with the image for PAIRS and NCC. Let s denote the size of the template, sx the width of the search space, sy the height of the search space, and n the number of bytes used in the bitsets. By search space is meant the rectangular set of tested template positions. If no sort out is used in the PAIRS method and both the bitsets have to be created to do the matching, the match time for PAIRS is
tm = (n·sx·sy + n)·tcr + n·sx·sy·tcmp    (3)
Usually the time to create the template can be neglected and then we have
tm = n·sx·sy·(tcr + tcmp)    (4)
The match time for NCC is

tmN = 3·s²·sx·sy·tcmpN    (5)
We exemplify using the following values: s = 16, sx = sy = 41, n = 64, tcr = 0.40 µs, tcmp = 0.025 µs, and tcmpN = 0.120 µs. These values are used in the comparative test between PAIRS and NCC in section 4. The resulting match times for this example are:
tm = 46 ms
tmN = 155 ms
We see that PAIRS is 3 times faster for this case. We note that the time to compare the bitsets, tcmp, is much shorter than the time to create them, tcr. For certain applications the bitsets could be pre-computed, or a large number of templates used on one image. We will see later that if optical flow estimation is done by PAIRS template matching, the total time to create the bitsets is shorter than the total compare time, since each bitset is used in comparisons many times. For an application where the bitset creation time is of no importance the actual match time for PAIRS would instead be

tm = 2.7 ms

if the same parameters as above are used. For this case PAIRS is 60 times faster than NCC.
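The figures quoted above can be checked numerically against equations (4) and (5). The factor 3 in the NCC time below is an assumption (one comparison per color band of the RGB data), chosen because it reproduces the 155 ms figure; the helper functions are illustrative, not from the paper:

```c
#include <assert.h>

/* Match times from equations (4) and (5); all times in microseconds.
   n: bytes per bitset; s: template side; sx, sy: search-space size. */
static double pairs_match_time(double n, double sx, double sy,
                               double tcr, double tcmp) {
    return n * sx * sy * (tcr + tcmp);          /* eq. (4), PAIRS */
}

static double ncc_match_time(double s, double sx, double sy, double tcmpN) {
    return 3.0 * s * s * sx * sy * tcmpN;       /* eq. (5), 3 color bands */
}
```

With n = 64, s = 16, sx = sy = 41, tcr = 0.40 µs, tcmp = 0.025 µs and tcmpN = 0.120 µs this gives about 45.7 ms and 154.9 ms, i.e. the 46 ms and 155 ms quoted in the text.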
³ The MEM-PAIRS template matching algorithm will from now on often be abbreviated to just "PAIRS".
Figure 1: This figure shows how the bitsets can be created sparsely in the image if several template bitsets are used in the matching.
For the application of tracking an object in a video sequence only one template is matched with a certain image. Therefore the bitsets created from the image will be used only once, so the bitset creation will take most of the time. There is a remedy to speed up the process of creating the bitsets. Have a look at Figure 1. The figure depicts how we can reduce the total bitset creation time by creating fewer image neighborhood bitsets and more template bitsets. The image bitsets are only created at every dsparse'th pixel position in the x- and y-direction, as shown on the left side in the figure for the case when dsparse = 3. The right hand side of the figure shows a 10×10 template that we wish to match with the image. We view this template as a collection of dsparse² = 9 number of 8×8 templates with the upper left corner positioned within the gray square. One of these 8×8 templates is marked in the figure. The 8×8 templates are denoted with the position of their upper left corner in the 10×10 neighborhood. The position indices k and l run from 0 to dsparse − 1. The assumption we use now is that if the (k, l)-template matches the image in position (i, j) then the 10×10 template matches the image in position (i − k, j − l). This assumption is reasonable when dsparse is small compared to the template size. The assumption makes it possible to find a similarity measurement of all possible positions of the template in the image even though the image bitsets are created sparsely. The algorithms for bitset creation and comparison become somewhat more complicated to implement, but the total match time decreases substantially for many applications. We will create dsparse² times fewer image bitsets and dsparse² template bitsets instead of only one. Equation 3 can be modified for
the case of sparsely sampled image bitsets to

tm = n·(sx·sy / dsparse²)·tcr + n·dsparse²·tcr + n·sx·sy·tcmp    (6)
The terms in the sum are, from left to right, the time to create the image bitsets, the time to create the template bitsets, and the time to compare them. The equation neglects the edge effects that occur if sx or sy is not a multiple of dsparse. For the example parameters given above and with dsparse = 4 the total match time would be

tm = 2.7 + 0.4 + 2.7 ms = 5.8 ms
which is 27 times faster than NCC. Later we will see how the use of sparsely sampled bitsets affects the performance of template matching.
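Equation (6) can be evaluated the same way; the function below is an illustrative helper (not from the paper) that reproduces the 2.7 + 0.4 + 2.7 ms breakdown for dsparse = 4:

```c
#include <assert.h>

/* Equation (6): match time with image bitsets created only at every
   dsparse'th pixel.  Terms: sparse image-bitset creation, the dsparse^2
   template bitsets, and the unchanged comparison time.  Times in us. */
static double sparse_match_time(double n, double sx, double sy, double d,
                                double tcr, double tcmp) {
    return n * (sx * sy / (d * d)) * tcr   /* image bitsets, d^2 fewer */
         + n * (d * d) * tcr               /* d^2 template bitsets     */
         + n * sx * sy * tcmp;             /* comparisons              */
}
```

For n = 64, sx = sy = 41, d = 4 the result is about 5.8 ms, matching the text.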
A comparison between PAIRS and normalized cross-correlation
Test setup
In this section we describe a performance test of PAIRS matching and normalized cross-correlation (NCC). The PAIRS matching algorithm uses Algorithms 1 and 2 to create the bitsets, and Algorithm 3 without sort out to match the bitsets. The NCC method uses a similarity measurement s between the image neighborhood I and the template T as defined below.
s = Σ_{k=0}^{2} [ Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} I(i,j,k)·T(i,j,k) ] / sqrt( [ Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} I²(i,j,k) ] · [ Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} T²(i,j,k) ] )    (7)
Here i and j are spatial coordinates and k is the color index. Since RGB images are assumed, k runs from 0 to 2. The similarity measurement can be said to be standard gray-scale normalized cross-correlation done separately for all three color bands; the resulting similarity is the sum of the similarity measurements from each color band.
The approach in this test is to define a template as a neighborhood of an image. Then controlled distortions are applied to the template and the template is matched with the original image. The RGB test images are chosen as random parts of images from a database containing 20000 images. The database is Photo Library 1 and 2 in Corel's product Corel GALLERY Magic; see their homepage [1] for more information. 5000 test images of size 56×56 are chosen from the database and the templates are taken from the central neighborhoods of these images. The size of the templates is 16×16. This implies that the number of possible positions of the template in the image is 41². When other sizes of the templates are used the image sizes are adjusted so that the number of possible positions of the template is the same. Figure 2 shows 20 of these images and the region from where the templates are taken. On the order of 50 per cent of the images initially chosen were not used since the energy of the template area was too low. (An image was rejected if the average standard deviation of the three color bands in the template area was less than 20. The intensity values are between 0 and 255.)
The distortions that are applied to the templates are defined in the subsequent sections. After the template is distorted it is matched with the image using the PAIRS and the NCC method. If the result is within a distance of 2 pixels from the correct position it is considered a hit. All the 5000 images are used for each of the 26 different cases of distortions. A new list of pairs (as defined by Algorithm 1) is produced every 100'th image. The number of misses for NCC and PAIRS is recorded. The tests are performed for different numbers of bytes in the bitsets formed by PAIRS. These different methods will be denoted PAIRSX or just PX where X is the number of bytes used to form the bitsets. PAIRSX+ denotes PAIRS with a number of bytes greater than or equal to X.
Generation of image distortions
The template is always shifted a subpixel distance in both directions before any other distortion is applied. This is a natural thing to do since even integer shifts are not more common than others for real applications. The sizes of the x- and y-shifts are chosen randomly from a uniform distribution. The shift is performed using bicubic interpolation.

Figure 2: Examples of 20 images used in a comparative test between PAIRS and NCC. The template regions are marked. (These images are from Corel GALLERY Magic and are protected by copyright laws. Used under license.)
The distortions are applied only to the template. If more than one distortion is used they are applied in the same order as they appear below. The distortions are applied to a template larger than the final distorted template so that geometric distortions such as rotation and zoom can be performed without introducing undetermined intensity values in certain regions. The different types and strengths of the distortions are denoted by a LABEL (for example NOISE, or ROT) followed by the distortion strength parameter PLABEL. For example "ROT10" denotes a rotation distortion with a distortion strength parameter (PROT) of 10. (For this example, it means a rotation with a maximum angle of 10°.)
4.2.1 Gaussian noise, NOISE
The distortion called NOISE is generated by adding zero-mean Gaussian noise to the image. The signal energy of the image is computed as the sum of the squares of all pixel values for all colors. PNOISE is the noise to signal energy ratio⁴.
⁴ Unconventionally, noise to signal ratio is used instead of signal to noise ratio, since we want a greater parameter value to correspond to a stronger distortion.
Figure 3: Perspective change for the PERSP distortion.
4.2.2 Rotation of the image, ROT
The rotation of the template is done by an angle between −φmax and φmax, uniformly distributed. Bicubic interpolation is used. PROT equals φmax.
4.2.3 Scaling of the image, ZOOM
When this distortion is used the template is scaled with a factor drawn from a uniform distribution between 1 − PZOOM and 1 + PZOOM. Bicubic interpolation is used. When the scale factor is < 1 a suitable low-pass filter is applied before the interpolation to avoid aliasing.
4.2.4 Perspective change, PERSP
Have a look at Figure 3. We assume that the undistorted template comes from a flat surface represented by the rectangle in the figure. The camera is located at C on the z-axis, which is perpendicular to the surface. The distorted template is then the image that would be acquired if we move the camera to C' and still aim it at the origin O. The distance from the origin is not changed. The z'-axis is obtained by rotating the unprimed coordinate system an angle φ around the z-axis followed by an angle θ around the x-axis. The angle φ is picked from a uniform distribution between 0 and 2π, and θ from a uniform distribution between 0 and PPERSP.
4.2.5 Salt and pepper noise, SALT
This type of noise is created by randomly setting some intensity values to 0 or 255. The probability of setting the intensity to 0 is the same as the probability of setting it to 255. This
probability is denoted PSALT.
4.2.6 A gamma correction of the intensity values, GAMMA
This distortion is applied independently for all intensity values. In this case the input intensity values iin are integer values between 0 and 255 and the output values iout are

iout = round( (iin/255)^γ · 255 )    (8)

The value of γ is chosen as γ = PGAMMA^u, where u is drawn from a uniform distribution U(−1, 1) between −1 and 1.
4.2.7 NODIST and STD
The label NODIST denotes no distortion except for the subpixel shift. This case can be used as a reference as the minimum miss rate that is possible. STD (standard distortion) is a mixture of the distortions defined above and consists of NOISE0.01, ROT5, ZOOM0.05, PERSP10, SALT0.02 and GAMMA1.5. STD is a standard test case with distortions that are considered relevant for many applications including tracking and optical flow estimation.
Relevance of the distortions
NOISE Noise in imaging systems is often modeled well as additive Gaussian noise.

ROT, ZOOM, PERSP Template matching for tracking must be able to handle small rotations, image scaling and perspective changes due to object and camera motion.

SALT Salt-and-pepper noise occurs in some image measurement systems. Furthermore, this distortion can be used to model object occlusion.

GAMMA Cameras are often not calibrated. This distortion is relevant when the template and the image are acquired from different cameras. Also, it reflects how well the template matching algorithm can handle different lighting conditions for the template and the image.
The strength of the distortions is chosen quite high, since we need a large number of misses in the template matching tests in order to get statistically reliable data. However, the distortions are not so large that they become irrelevant for practical applications.
The results of the comparative test are shown in Figures 4 through 10 and the numerical figures are presented in Table 1. The miss rate is on the y-axis and the number of bytes for the PAIRS method is on the logarithmic x-axis of the graphs. The horizontal line in the figures corresponds to the miss rate of the NCC matching. If the line is missing it is out of the y-axis range. The other curve in the figures corresponds to the miss rate of the PAIRS method for different numbers of bytes in the bitsets.

Figure 4 shows the miss rate for NODIST and for the standard test case, STD. The only distortion applied for the NODIST case is the subpixel shift. Contrary to expectations, PAIRS32+ performs significantly better than NCC for the NODIST case. Also for the STD test case PAIRS32+ performs better. The miss rate for NCC is 7 times greater than for PAIRS256.
For the case of Gaussian noise PAIRS64 performs as well as NCC for a low amount of noise. For very high noise levels, NCC outperforms PAIRS. See Figure 5. A noise to signal energy ratio higher than 0.01 is not common for high quality video sequences from scenes with good lighting conditions.

The performance between NCC and PAIRS can be characterized the same way for all three types of geometrical distortions that have been tested: rotation (ROT),
Distortion   NCC    P16    P32    P64    P128   P256   P512   P1024
NODIST       0.008  0.009  0.005  0.003  0.004  0.002  0.002  0.002
STD          0.046  0.065  0.034  0.019  0.011  0.007  0.005  0.006
NOISE0.001   0.008  0.018  0.009  0.005  0.004  0.004  0.005  0.004
NOISE0.01    0.008  0.043  0.021  0.012  0.009  0.006  0.005  0.005
NOISE0.1     0.012  0.140  0.084  0.047  0.030  0.022  0.018  0.015
NOISE1       0.032  0.499  0.311  0.203  0.140  0.103  0.077  0.067
ROT5         0.011  0.014  0.006  0.005  0.003  0.003  0.002  0.002
ROT10        0.039  0.054  0.030  0.018  0.011  0.008  0.008  0.007
ROT15        0.102  0.151  0.097  0.066  0.056  0.047  0.043  0.041
ROT20        0.213  0.272  0.205  0.163  0.150  0.132  0.128  0.123
ZOOM0.05     0.010  0.015  0.006  0.004  0.003  0.003  0.003  0.003
ZOOM0.10     0.013  0.022  0.008  0.004  0.003  0.003  0.002  0.002
ZOOM0.15     0.028  0.055  0.028  0.018  0.013  0.012  0.012  0.011
ZOOM0.20     0.040  0.082  0.049  0.032  0.024  0.021  0.020  0.017
PERSP10      0.006  0.012  0.004  0.003  0.002  0.001  0.001  0.001
PERSP30      0.012  0.016  0.008  0.004  0.004  0.003  0.003  0.003
PERSP40      0.021  0.034  0.017  0.011  0.008  0.006  0.006  0.005
PERSP50      0.056  0.084  0.051  0.036  0.030  0.025  0.024  0.021
SALT0.02     0.019  0.014  0.005  0.003  0.002  0.002  0.001  0.001
SALT0.04     0.037  0.019  0.007  0.003  0.002  0.002  0.001  0.001
SALT0.06     0.062  0.026  0.007  0.004  0.002  0.003  0.002  0.002
SALT0.08     0.082  0.038  0.011  0.005  0.003  0.002  0.001  0.001
GAMMA1.5     0.016  0.009  0.005  0.002  0.002  0.002  0.002  0.002
GAMMA2.0     0.039  0.011  0.004  0.002  0.001  0.002  0.002  0.002
GAMMA3.0     0.144  0.012  0.007  0.004  0.002  0.002  0.002  0.002
GAMMA5.0     0.252  0.022  0.011  0.009  0.008  0.006  0.006  0.005

Table 1: The miss rate of template matching experiments with different types of distortions and similarity measurements. PX denotes the PAIRS algorithm with X bytes in the bitsets.
Figure 4: The miss rate for the NODIST and STD test cases.
Figure 5: The miss rate for different levels of Gaussian noise. The noise to signal ratios of 0.001, 0.01, 0.1 and 1 correspond to SNRs of 30 dB, 20 dB, 10 dB and 0 dB.
Figure 6: The miss rate for different amounts of rotation of the template. Note the different scales of the y-axis. The NCC miss rate is out of the y-axis range for the ROT20 distortion.

Figure 7: The miss rate for different amounts of rescaling (ZOOM) of the template.
Figure 8: The miss rate for different amounts of perspective changes applied to the template.
Figure 9: The miss rate after salt and pepper noise has distorted the template.
rescaling (ZOOM) and a perspective change (PERSP). See Figures 6 to 8. PAIRS32+ works better for all these distortion cases except for ZOOM0.20, for which PAIRS64+ works better.

PAIRS16+ clearly outperforms NCC when the template is distorted with salt and pepper noise. See Figure 9. This can be explained by the fact that if a pixel value is set to 255, corresponding to "white" or maximum intensity, it will have a larger effect on the total similarity measurement in NCC than in PAIRS, since large intensity values affect the NCC similarity measurement more than small intensity values do. This is not the case for PAIRS, where each pixel value (statistically) has the same amount of influence on the similarity measurement independent of the magnitude of its intensity value.
The results from the γ-distortion are shown in Figure 10. As expected, PAIRS is far superior to NCC for this type of distortion. This is due to the fact that only the information about which of two intensity values is greater is used when forming a bit in a bitset. This information stays intact for any strictly increasing intensity value transform, at least if we neglect quantization effects. Thus, the bitsets are (nearly)
Figure 10: The miss rate for different amounts of γ-distortion of the pixel values. The miss rate of NCC for GAMMA3.0 and GAMMA5.0 is out of the y-axis range and therefore not shown in the figure.
Figure 11: The distribution of distances between the match position and the correct position. The bar furthest to the right in each graph indicates the relative frequency of all distances greater than 3.0 pixels.
invariant to arbitrary intensity transforms. This fact is especially useful when the image and the template are acquired with different cameras.

In the above experiments a "miss" has been defined as a matching position greater than 2.0 pixels from the correct position. This threshold was set rather arbitrarily. Experiments with other thresholds seem to give the same qualitative results on how PAIRS and NCC compare. Figure 11 shows the distribution of distances from the match position to the correct position for NCC and PAIRS for the standard test case (STD). Most match positions are less than one pixel from the correct position. For PAIRS1024 the relative frequency of matches less than 1.0 pixels from the correct position is 0.984. The same figure for PAIRS16, PAIRS64, and NCC is 0.859, 0.949, and 0.911 respectively.
To summarize the results, PAIRS performs better for most types of distortions except for Gaussian noise with very high energy. PAIRS64 performs better than NCC for 23 out of the 26 different test cases. PAIRS clearly outperforms NCC for the SALT and GAMMA distortions even when few bytes are used in the bitsets. The distributions of distances between the match position and the correct position are similar for
Distortion  NCC    P16    P32    P64    P128   P256   P512   P1024
STD4        0.526  0.532  0.462  0.429  0.401  0.394  0.387  0.384
STD8        0.152  0.111  0.067  0.048  0.039  0.035  0.034  0.033
STD16       0.046  0.065  0.034  0.019  0.011  0.007  0.005  0.006
STD32       0.022  0.089  0.042  0.020  0.009  0.004  0.003  0.002
STD64       0.064  0.243  0.145  0.087  0.056  0.040  0.031  0.023

Table 2: The miss rate for NCC and PAIRS for different template sizes. The standard test case of distortions has been used. STDX denotes this distortion and the template size X.
PAIRS and NCC. Note that these results are valid for this type of images, templates and distortions. Other choices of test images could give different results.
Other template sizes
For the tests described above the template size was kept constant and equal to 16 since this is a reasonable size for the intended applications. Figure 12 and Table 2 show the results of the STD test case when different template sizes were used. The cases are denoted STDX where X equals the template size. The test images have been chosen from an image database as previously explained. Different sets of images are used for the five different tests since the images that are sorted out due to low variance depend on the size of the template. The number of possible positions of the template in the image is still 41².
The first thing to notice from the figure is that both NCC and PAIRS work best for template sizes 16 and 32. For smaller templates the uniqueness and the structure of the templates decrease, which increases the miss rate. The reason why a larger template size also increases the miss rate is that the geometrical distortions affect the outer pixels in a large template more than in a small template. Another interesting thing is to look at the number of bytes needed in the PAIRS algorithm compared to the total number of bytes in the template, which is used for the comparisons in the NCC algorithm. The PAIRS algorithm performs as well as NCC when 32, 64 and 128 bytes are used for the template sizes 16, 32 and 64. The ratio between the amount of data used in the bitsets and the total amount of data in the template is 0.04, 0.02, and 0.01 for these cases. That is, for larger templates we can reduce the data amount more than for small templates and still get the same performance as NCC. This is not surprising since the larger the template, the more redundant data we have.

The results show that for this test case PAIRS128+ performs better than NCC for all template sizes.
Performance using other images
How well PAIRS compares to NCC depends on the set of images used in the test. The images previously used were picked randomly from a large database of photographs. Only the images with a template region with low variance were discarded. The other set of images is taken from a video sequence. The sequence shows two cars driving through an intersection. The video was acquired for the WITAS project from a radio-controlled helicopter and is called REVINGE2D. The position of one of the cars has been tracked for 500 frames and the set of test templates consists of 20×20
Figure 12: The miss rate for NCC and PAIRS for different template sizes. The standard test case of distortions (STD) was used. Note the different scale for STD4.
Figure 13: Frame 0, 299 and 499 from sequence REVINGE2D. The template positions are marked.
neighborhoods around the tracked position. The test images are 60×60 neighborhoods around that position. Possibly these images are more relevant for the specific application of tracking cars than the randomly chosen images. Figure 13 shows three of these frames with the template positions marked. Altogether 5000 matchings are done for each type of applied distortion. Each image is reused 10 times. To limit the experiments only one distortion strength is used per type of distortion. The distortions used are: NODIST, STD, NOISE0.1, ROT15, ZOOM0.20, PERSP50, SALT0.15 and GAMMA5.0. The results are presented in Table 3, and in Figures 14 and 15.
The results from these tests are difficult to interpret since it is difficult to know how well they generalize to other sequences. They should have some relevance for the specific application of vehicle tracking. When we compare these results with the ones from the Corel image database, we see that PAIRS does not compare as well with NCC for a low number of bytes in the bitsets. However, PAIRS512+ performs as well as or better than NCC for all the distortions tested. Note also that many results are close to zero and therefore statistically uncertain. The reason why PAIRS with a low number of bytes does not compare as well with NCC for these images as it did for the previous images is not clear. Possibly PAIRS generally compares better with NCC for bad templates, but a definite conclusion cannot be drawn from these limited tests. Further testing where the templates are classified on a scale from "bad" to "good" would possibly provide interesting results.
Distortion  NCC    P16    P32    P64    P128   P256   P512   P1024
NODIST      0.000  0.005  0.001  0.000  0.000  0.000  0.000  0.000
STD         0.003  0.053  0.014  0.003  0.001  0.000  0.000  0.000
NOISE0.1    0.000  0.052  0.011  0.002  0.000  0.000  0.000  0.000
ROT15       0.012  0.121  0.050  0.032  0.020  0.014  0.010  0.009
ZOOM0.20    0.038  0.134  0.080  0.052  0.043  0.038  0.034  0.033
PERSP50     0.038  0.113  0.060  0.042  0.033  0.025  0.024  0.023
SALT0.15    0.017  0.219  0.048  0.005  0.000  0.000  0.000  0.000
GAMMA5.0    0.295  0.004  0.001  0.000  0.000  0.000  0.000  0.000

Table 3: The miss rate for NCC and PAIRS for different distortions. The set of test images is taken from the REVINGE2D sequence.
Figure 14: The miss rate for the NODIST, STD, NOISE0.1, and ROT15 distortions. The test images are taken from the sequence REVINGE2D.
Figure 15: The miss rate for the ZOOM0.20, PERSP50, SALT0.15, and GAMMA5.0 distortions. The test images are taken from the sequence REVINGE2D.
Figure 16: The probabilities for a zero of the first 64 bits in three different populations of bitsets. The population in Graph 1 is generated by Algorithm 1, the population corresponding to Graph 2
is created by a slightly modified version of Algorithm 1, and the last graph corresponds to a population created by a random number generator which optimally has maximum possible entropy.
Statistics of PAIRS bitsets
Statistics of acquired bitsets
The bitsets used in Maximum Entropy Matching should of course have maximum possible entropy, which is fulfilled when the probability of each bit being set to 1 is 0.50 and the bits bi are statistically independent. Given a specific template bitset btemp,i that we match with an image bitset bim,i, the number of equal bits is the number of zeros in

bmatch,i = XOR(bim,i, btemp,i)    (9)
If the image bitsets have maximum entropy the number of equal bits m follows a binomial distribution. We continue by studying a population of bitsets which are created randomly using the same database as in Section 4. Random 16×16 neighborhoods are chosen from which 64-byte bitsets are created.
First we investigate the probability of each bit being set to 0, which should be close to 0.50 to obtain high entropy. Figure 16, Graph 1 shows the relative frequency of the event that the bit is set to 0 for the first 64 bits in the bitset. 10000 bitsets were used. The average of the relative frequencies for the first 64 bits is 0.544 and the standard deviation is 0.034. The reason why these values are not closer to 0.50 is that pixel values in a pair are sometimes equal due to the limited resolution, and then the bit is set to 0 according to Algorithm 2. This problem is made worse by the fact that the images in the Corel image database are compressed, so that neighboring pixel values are more likely to be exactly equal. The problem can be solved by adding an if-statement to Algorithm 2. When the pixel values in a pair are equal, the bit can be set to 1 if the pixel value is even. This makes the probabilities closer to 0.50. The result when Algorithm 2 has been modified can be seen in Figure 16, Graph 2. For this case the average is 0.501 and the standard deviation is 0.007. We call these bitsets REAL. Graph 3 is included for reference. The bits in the bitsets used for the result in this graph have been created by a random number generator with the probability of a bit set to 0 equal to 0.50. The average and the standard deviation are 0.502 and 0.005 for this case. This set of bitsets is called OPTIMAL. 10000 bitsets have been used for all three cases. We do not expect much better results by modifying Algorithm 2 in the
Figure 17: The absolute value of the correlation matrices of the first 64 bits in the bitsets called REAL and OPTIMAL.
way described above, but it makes the bitsets follow our theory better. For all practical applications we can say that the probability of a bit set to 0 is exactly equal to 0.50 if we modify Algorithm 2; this is assumed in the rest of this section.
We know that the probability of a bit being 0 is 0.50. To investigate how close to maximum entropy the bitsets are, we now study how dependent the bits are. Since the bits are independent if they are uncorrelated, we study the correlation matrix ρij; see Equation 2. The absolute value of the correlation matrix of the first 64 bits for REAL and OPTIMAL can be seen in Figure 17. To be able to compare different populations of bitsets we need a measurement of how dependent the bits are. We can use the
dependence value

pdep = sqrt( ( Σ_{i=0}^{N−1} Σ_{j=0}^{N−1} ρij² − N ) / ( N² − N ) )    (10)

as such a measurement. N is the number of bits. pdep is simply the root mean square value of the off-diagonal elements of the matrix ρ (the N diagonal elements, which are 1 by definition, are subtracted in the numerator). We also define an independence value as

pind = 1 − pdep    (11)
To verify the correlation between entropy and the independence value pind bitsets
con-sisting of 8 bits are formed randomly the same way as the REAL set of bitsets were formed. This is done for a number of different template sizes and compared to the estimated relative entropy HR
defined as
HR = −( Σ_{i=0}^{N−1} pi log2 pi ) / log2 N    (12)

where pi is the relative frequency of the i'th possible bitset value.
pdep, pind and HR are limited to the interval [0, 1]. Since the list of pixel pairs formed by Algorithm 1 influences the relative entropy and the independence value significantly for small bitsets, we study the average of these values for many different lists of pairs. Figure 18 shows the result of the average relative entropy and the average independence value when 50 different lists are used. 10000 bitsets have been used for each list and template size. We can see that the estimated entropy and the independence value are closely related. It is difficult to say how well this result generalizes for bitsets with much more data than one byte. We make the assumption that when two populations of bitsets are compared, the one with the highest independence value also has the highest entropy. A proof of the validity of this assumption is not available.
Figure 18: Average relative entropy and average independence value as a function of template size.
It is interesting to study the distribution of the number of equal bits when a template bitset is compared to an image neighborhood corresponding to a miss...
This paper was never completed. Hopefully this part of the paper is still of some value. This paper lacks some important things such as the applications of tracking cars and estimating optical flow.
A section about possible improvements of the algorithm should also be added. However, the existing main sections are fairly complete except for Section 5.
The Fourfold Nature of Eclipses – Sacred Number Sciences
The previous post ended with a sacred geometrical diagram expressing the eclipse year (the 346.62 days taken for the sun to again sit on the same lunar node, which is when an eclipse can happen) as circumference and four anomalous months as its diameter. The circle itself showed an out-square of side length 4, a number which then divides the square into sixteen. If the diameter of the circle is 4 units then the circumference must be 4 times π (in ancient times approximated by rational values such as 22/7), implying that the eclipse year has fallen into a relationship with the anomalous month (the time taken for the Moon, in its orbit, to reach its nearest position to the Earth, equal on average to 27.554 days), a period defined by the moon's distance but visually manifest in the size of the moon's disc, from the point of view of the naked-eye astronomy of the megalithic.
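The claimed relationship can be checked numerically: four anomalous months taken as a diameter give a circumference within about 0.1% of the eclipse year. A quick sketch, using the two period values quoted above:

```python
import math

ECLIPSE_YEAR = 346.62      # days: sun returns to the same lunar node
ANOMALOUS_MONTH = 27.554   # days: moon returns to its nearest point to Earth

diameter = 4 * ANOMALOUS_MONTH        # 110.216 days
circumference = math.pi * diameter    # ~346.25 days

print(circumference, ECLIPSE_YEAR)    # the two differ by about 0.1%
```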
In this article I want to share an interesting and likely way in which this relationship could have been reconciled using the primary geometry of π, that is the equal-perimeter model of a square and a circle (a geometry in which a rectilinear figure, usually a square, has the same perimeter as a circle), in which an inner circle of diameter 11 units has an out-square whose perimeter, when π is taken as 22/7 (that is, 3 + 1/7, the simplest accurate rational approximation to π), is 44.
An equal perimeter of 44 belongs to a circle of diameter 14, since 14 times 22/7 equals 44. Since the units are common and arbitrary for any scheme, the equal-perimeter geometry was learnt early by the megalithic, and evidence of its use as a working building method exists in monuments from then onwards.
In its simplest form, two concentric circles of diameters 11 and 14 and a square of side length 11 lie as a natural “attractor” for resonance of celestial time periods. The next article will explore this fourfold nature of the diameter in terms of the eclipse phenomenon, in a form amenable to such megalithic methods.
Discounted Cash Flow Valuation Model
Discounted Cash Flow Valuation Model In 5 Easy Steps - Simple Spreadsheet Calculation Method
Discounted cash flow valuation (DCF) is a useful method of calculating a stock's fair value - a powerful indicator for smart long-term investment decisions.
Stock analysts use discounted cash flow valuation to get the fair value of a company based on projections of how much money the company will generate for investors in the future. There are several factors to consider when performing a DCF valuation.
Forecasting the Period and Revenue Growth of the Company
- Step 1 of Discounted cash flow valuation
The first thing to determine is how far into the future you will project and discount cash flows. The forecasting period depends on the company's competitive position and on how long the company will be able to generate excess returns for investors. Every company settles into a maturity phase eventually.
As a rule of thumb, the forecasting period should be 5 years for solid companies with strong marketing channels, a recognizable brand or some other advantage. For slow-growing companies operating in a highly competitive industry with low margins, the forecasting period should be shorter (1 year or so). For outstanding growth companies that dominate their industry, with high barriers to entry, the forecast period can be up to 10 years.
After you have defined the forecasting period, proceed to forecasting the revenue growth over this period. Although the future is uncertain, try to estimate what the market will look like in a few years' time: will it expand or contract, what market share will the company hold, do you expect any new products driving sales, will pricing change, and so on. The basis for estimated future revenue growth lies in the last two years' average growth rate and management forecasts. It is wise to prepare three possible outcomes: an optimistic, a pessimistic and a realistic one.
Forecasting Free Cash Flows
- Step 2 of Discounted cash flow valuation
Free cash flow is the cash left to the company after all operating cash expenses are paid. Free cash flow adds value for shareholders and can be used for R&D, paying out dividends or buying back shares. We have examined the calculation of free cash flow in cash flow statement analysis.
You should first forecast the future operating costs. The easiest way to do that is to look at the historic operating cost margin, expressed as a proportion of revenues, and then adjust the future operating cost margin if needed.
Taxation is another figure to forecast. Be aware that companies with high capital expenditures may pay little or no tax in the year of investment. It therefore makes sense to calculate the average taxation rate for the past few years and apply it to the future (average annual income tax paid divided by profit before income tax).
The next figure to forecast is net investment, the capital invested in property, plant and equipment to sustain future growth. Calculate net investment as capital expenditures (from the cash flow statement) minus non-cash depreciation (from the income statement). Compare this figure with income to get the investment ratio and then apply this ratio to the future. Check competitors; if they are investing more aggressively, you can expect the company will have to invest at a higher rate as well.
The last figure in calculating future free cash flows is the change in working capital, the cash needed for day-to-day operations to maintain current assets such as inventory. Calculate working capital as current assets minus current liabilities; you can find both figures in the balance sheet. Then calculate the net change in working capital by comparing the figures between two consecutive years. Take into consideration that an increase in sales normally requires higher working capital to finance the bigger investment in inventory and receivables.
At the end, calculate Free Cash Flow for every year of the forecasted period with the following formula:
Free Cash Flow (FCF) =
Sales Revenue - Operating Costs - Taxes - Net Investment - Change in Working Capital
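The formula above can be sketched as a small function; the figures in the example are hypothetical, not taken from any real company:

```python
def free_cash_flow(revenue, operating_costs, taxes, net_investment,
                   change_in_working_capital):
    """FCF = sales revenue - operating costs - taxes - net investment
    - change in working capital."""
    return (revenue - operating_costs - taxes - net_investment
            - change_in_working_capital)

# Hypothetical figures, e.g. in millions:
fcf = free_cash_flow(revenue=1000, operating_costs=700, taxes=60,
                     net_investment=80, change_in_working_capital=20)
print(fcf)  # 140
```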
Importance of Discount Rate in FCF Calculation
- Step 3 of Discounted cash flow valuation
At this stage, we know the predicted future free cash flows of the company for every year of the forecasting period. Now we have to calculate the net present value of these future free cash flows and
to do that, we need to determine the discount rate. This is one of the crucial factors in discounted cash flow valuation, because a very small difference in discount rate has a big impact on the
company's fair value. Different methods of determining the discount rate exist, but the weighted average cost of capital (WACC) is the most commonly used.
WACC is a function of the debt-to-equity mix and the cost of each. While the cost of debt is straightforward (the current market rate at which the company services its debt), there are more open questions regarding the cost of equity. The most common way of calculating the cost of equity is to sum the risk-free rate and the equity market premium, with the latter multiplied by beta, which represents the above- or below-average risk of the industry a company operates in. This method of calculating the cost of capital is called the CAPM (capital asset pricing model), for which its originators won a Nobel prize. Here are all the formulas needed for the calculation.
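The standard textbook forms of CAPM and WACC can be sketched as follows; all input figures are hypothetical, and the after-tax term on debt reflects the tax deductibility of interest:

```python
def cost_of_equity_capm(risk_free, beta, market_premium):
    """CAPM: Re = Rf + beta * (Rm - Rf)."""
    return risk_free + beta * market_premium

def wacc(equity, debt, cost_of_equity, cost_of_debt, tax_rate):
    """WACC = E/V * Re + D/V * Rd * (1 - Tc), with V = E + D."""
    v = equity + debt
    return (equity / v) * cost_of_equity + (debt / v) * cost_of_debt * (1 - tax_rate)

re = cost_of_equity_capm(risk_free=0.03, beta=1.2, market_premium=0.05)
rate = wacc(equity=600, debt=400, cost_of_equity=re,
            cost_of_debt=0.06, tax_rate=0.25)
print(round(rate, 4))  # 0.6 * 0.09 + 0.4 * 0.06 * 0.75 = approx. 0.072
```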
The WACC calculation looks simple, but in practice it rarely happens that two analysts derive the same WACC, because of all the variables in the formula.
Calculating Company's Terminal Value
- Step 4 of Discounted cash flow valuation
After we calculate free cash flow over the forecasted years, we have to figure out the total value of the free cash flows beyond that horizon. If we do not perform this step, we are implicitly assuming that the company will stop operating after the forecasted period, which is probably not the case.
As you have already seen, it is difficult to forecast cash flows for the next few years. You can imagine that forecasting cash flows for the entire remaining lifetime of the company is an even tougher task. One possible way of calculating the terminal value is to use the Gordon Growth Model, which simplifies the task by assuming that the company's cash flow will stabilize after the last projected year and will continue growing at the same rate forever. Here is the formula:
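The Gordon Growth formula referenced above is TV = FCF_n × (1 + g) / (r − g), valid only when the discount rate r exceeds the perpetual growth rate g. A minimal sketch with hypothetical numbers:

```python
def terminal_value(last_fcf, growth, discount_rate):
    """Gordon Growth: TV = FCF_n * (1 + g) / (r - g)."""
    if discount_rate <= growth:
        raise ValueError("discount rate must exceed the perpetual growth rate")
    return last_fcf * (1 + growth) / (discount_rate - growth)

# Hypothetical: last forecast FCF 140, perpetual growth 2%, discount rate 7.2%
tv = terminal_value(last_fcf=140, growth=0.02, discount_rate=0.072)
print(round(tv, 1))  # approx. 2746.2
```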
Another possibility for determining the terminal value of the company is to use multiples of income or cash flow measures (net income, net operating profit, EBITDA, operating cash flow or FCF), which are determined by comparable companies on the market.
Terminal value plays a big role in the final figure of the company's value. Therefore, we recommend being rather conservative with your estimates.
Calculating the Fair Value of the Company and Its Equity
- Step 5 of Discounted cash flow valuation
Now we can finally calculate the fair enterprise value (EV). Discount every year's cash flow with the appropriate discount factor.
The last step of calculating the fair value of the stock is to deduct net debt from the enterprise value. Then divide the fair equity value of the company by the number of outstanding shares, and there you have it: the fair price of one share.
The investment logic from here on is quite simple: if the stock is trading below its fair value, consider buying it; if it is trading above its fair value, consider selling it.
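Putting the five steps together, the whole calculation can be sketched in a few lines (all numbers hypothetical, continuing the figures used above):

```python
def fair_value_per_share(fcfs, terminal_value, discount_rate, net_debt, shares):
    """Discount each forecast-year FCF and the terminal value to get the
    enterprise value, deduct net debt, divide by the share count."""
    ev = sum(fcf / (1 + discount_rate) ** t for t, fcf in enumerate(fcfs, start=1))
    ev += terminal_value / (1 + discount_rate) ** len(fcfs)
    return (ev - net_debt) / shares

price = fair_value_per_share(
    fcfs=[140, 147, 154, 162, 170],  # hypothetical 5-year forecast
    terminal_value=2746,
    discount_rate=0.072,
    net_debt=400,
    shares=100,
)
print(round(price, 2))  # fair price per share
```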
Advantages and Disadvantages of DCF Valuation Model
• Useful method when use of multiples to compare stocks makes no sense (if the whole sector or industry is under- or overvalued for example).
• The method is based on free cash flow (FCF), which is a trustworthy measure compared to some other figures and ratios calculated out of income statement or balance sheet.
• The FCF method is only as good as its assumptions. The result can fluctuate widely with just a small change in your estimates of free cash flows, discount rate or growth rates. Use a conservative scenario alongside the realistic one.
• The method is not so useful when analysts have problems with visibility of the company's trends (sales, costs, prices, etc.).
• Finally, be aware that the DCF model is not suitable for short-term trading; it is only useful for long-term investments.
Written by: Goran Dolenc
Logic and Linguistics Research Paper
Although the founding fathers of modern logic showed some interest in the analysis of meaning in natural language, solid links between the two disciplines of logic and linguistics had not been
established before the middle of the twentieth century. In the late 1950s logical techniques were first applied to the analysis of grammars as theories of linguistic competence. These applications
rapidly developed into the field of mathematical linguistics and later became a central part of theoretical computer science. They will be briefly addressed in the second part of this research paper.
The tightest connections between logic and linguistics, however, grew out of the (re-) discovery of logical analysis as a tool of natural language semantics, starting with the advent of Montague
Grammar in the early Seventies. The role of logic in the analysis of meaning forms the subject of the first part of this research paper.
1. Logic and Natural Language Semantics
The influence of logic on natural language semantics starts with the key notion of the field: meaning. Following the tradition of analytic philosophy (cf. Ammerman 1965), many linguists take a
‘realistic’ approach to meaning that makes abundant use of logical apparatus. The basic ideas of the logical analysis of meaning will be explained in Sect. 1.1. Sect. 1.2 is concerned with some more specific relations between languages of formal logic and natural languages such as English. (See Barwise and Etchemendy 1990 and Gamut 1991 for introductions to modern logic from a semantic perspective.)
1.1 The Logical Approach to Meaning
A primary goal of semantic theory is to describe the sense relations that competent speakers perceive between expressions of a language (cf. Lyons 1968, Chap. 10; Larson and Segal 1995, Chap. 1).
Thus, e.g., knowledge of the following facts about English phrases is part of native speakers’ semantic competence: (a) brown paper bag is more specific than, or hyponymous to, paper bag; (b) blonde
eye-doctor and fair-haired ophthalmologist are synonymous in that they necessarily apply to the same persons; (c) Mary owns a dog and Nobody owns a dog are incompatible with each other. Many sense
relations can be expressed in terms of entailments between sentences: (a) holds because This is a brown paper bag entails This is a paper bag; (b) holds because This is a blonde eye-doctor and This
is a fair-haired ophthalmologist entail each other; (c) holds because Nobody owns a dog entails It is not the case that Mary owns a dog. A theory that correctly predicts the entailment relations
holding between the sentences of a particular language may therefore be expected to go a long way towards accurately describing its sense relations in general.
In modern logic, the notion of entailment has been successfully formalized by means of truth conditions: statement A entails statement B if, and only if, any way of making A true is also a way of
making B true. Ways of making statements true are called models, which is why the enterprise of describing meaning in terms of truth-conditions is known as model-theoretic semantics. More
specifically, a model consists of a set of objects, the model's Universe, plus an Interpretation assigning to every non-logical word a suitable denotation. For instance, the Universe of a specific
model might be the set of all numbers, past US presidents, plants, or whatever—any set would do, as long as it is not empty. Similarly, the Interpretation could assign any individual to the name
Mary—the number 20, Jimmy Carter, my little green cactus—as long as that individual is in the model's Universe; and it could assign any set (even the empty set) to the noun ophthalmologist, as long
as that set’s members are all in the model’s universe, and as long as it is the same set as that which the model assigns to eye-doctor. The idea is that any model in which the individual assigned to
Mary happens to be a member of the set assigned to ophthalmologist makes the sentence Mary is an ophthalmologist true; and any model in which the set assigned to ophthalmologist happens to be empty,
is one that makes the sentence Nobody is an ophthalmologist true.
In general, then, a model assigns each expression of the language a denotation, thereby deciding the truth and falsity of each (declarative) sentence. In fact, the denotation of a sentence is
identified with its truth value: the number 1 if the sentence is true, and the number 0 otherwise. Using this numerical convention and writing [X]^m for the denotation of an expression X according to a model m, a statement A thus turns out to entail B just in case, for any model m, [A]^m ≤ [B]^m: whenever A's truth value according to any model m is 1, then so is B's truth value according to the same m.
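The model-theoretic machinery just described is easy to mimic in code. The sketch below pairs a Universe with an Interpretation and computes the truth value of simple predications; the dictionary representation and the function name are my own illustration, not standard notation:

```python
# A model: a non-empty Universe plus an Interpretation assigning an
# individual to each name and a set of individuals to each noun.
model = {
    "universe": {"mary", "john", "fido"},
    "Mary": "mary",
    "ophthalmologist": {"mary"},
    "eye-doctor": {"mary"},   # same set as 'ophthalmologist', as required
    "dog": {"fido"},
}

def is_a(model, name, noun):
    """Truth value (1 or 0) of 'NAME is a NOUN' in the model."""
    return 1 if model[name] in model[noun] else 0

print(is_a(model, "Mary", "ophthalmologist"))  # 1
print(is_a(model, "Mary", "dog"))              # 0
```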
Traditionally, the statements made true by models are logical formulae. In linguistic semantics, there are two ways of adapting models to the interpretation of natural language: either natural
language expressions are translated into logical notation (indirect interpretation); or models directly assign denotations to them (direct interpretation). On either approach, complex expressions,
i.e., anything longer than one word, are usually interpreted following the Principle of Compositionality (Janssen 1997): the translation or the denotation of a complex expression consisting of more
than one word [like a sentence] is obtained by combining the translations or, respectively, the denotations of its immediate parts [like subject and predicate]. As a consequence, the denotations
models assign to (the translations of ) natural language expressions tend to be rather abstract. If, for instance, the denotation of the noun phrase my car, i.e., a concrete object, is to be obtained
by combining the denotations of the noun car, which is a set of concrete objects, and that of the possessive my, the latter would have to be some ‘selection mechanism’ that picks out the unique
object belonging to the speaker when combined with a set of objects. More abstract denotations turn out to be needed in a compositional treatment of determiners (like every and most), adverbs (like quickly) and prepositions (like between). This is one of the reasons why the most common systems of logic, propositional calculus and first-order logic, do not suffice as the basis of indirect interpretation.
The Principle of Compositionality has an important consequence, known as the Substitution Principle: if the denotation of a complex expression is completely determined by the denotations of its
parts, then one may replace any part by something with the same denotation without thereby changing the denotation of the whole. A case in point is the sentence (a) Mary is an ophthalmologist. If
Mary happens to be the speaker’s neighbor, then the name Mary and the noun phrase my neighbor have the same denotation. Hence, if (a) is true and thus denotes the truth value 1, then, by the
Substitution Principle, (b) My neighbor is an ophthalmologist has the same denotation as (a), i.e., it is also true. While the Substitution Principle appears to make the right prediction in this
case, this is not always so. Together with the assumption that sentences denote truth values, the Substitution Principle immediately leads to what, in the philosophy of language, is known as Frege’s
Problem (Frege 1892): replacing any sentence by another one with the same truth value ought to preserve denotation, which it does not always do. For instance, the denotation, i.e., the truth value,
of (b) Nobody doubts that Mary is an ophthalmologist ought to be preserved when (a) is replaced by a sentence with the same truth value. If Mary is an ophthalmologist, the truth value of (a) is 1 and
hence the same as that of (c) Paris is the capital of France. However, (b) might well be false while (b′) Nobody doubts that Paris is the capital of France is true. Clausal embedding under so-called attitude verbs like doubt, know, or tell thus turns out to be an intensional construction, i.e., one that defies the Substitution Principle.
Various solutions to Frege’s Problem have been proposed. Practically all of them employ the idea that, at least in certain constructions, the contribution that a sentence makes to the denotations of
larger expressions is not its truth value but something else. According to a widely accepted view (originating with Frege), it is its informational content: attitude verbs like doubt denote a
relation between the individual denoted by the subject and the information conveyed by the embedded clause. A common way of modelling informational content is by means of possible worlds—whence the
term possible worlds semantics. (See Barwise 1989 and Devlin 1991 for alternative approaches.) The basic idea is akin to that of measuring the expectance value of an event by the proportion between
favorable and unfavorable cases. The cases become possible worlds, the event is the information (as expressed by a sentence) being correct and instead of the quantitative relation between the sets of
cases expressed by a ratio, it is the qualitative difference that is emphasized, i.e., which worlds are favorable and which are not. Following this line of thought, the informational content of a
sentence, then, can be represented by a function (in the mathematical sense) pairing worlds with truth values. For example, the informational content of (a) above becomes a function assigning to any
possible world the truth value 1 if that world is such that, in it, Mary is an ophthalmologist; all other worlds will be assigned the truth value 0. Each possible world is thought of as representing
a specific way the world could be, or might have been—to the last detail. So the informational content of a sentence will typically assign 1 to an infinity of possible worlds differing in all sorts of
aspects that are irrelevant to the truth of that sentence; and it will typically assign 0 to innumerably many worlds differing in matters that do not affect the falsity of the sentence.
In order to give a full solution to Frege’s Puzzle within possible worlds semantics, the informational content of each sentence must be systematically determined, which is again done compositionally,
replacing denotations by intensions. The intensions of sentences are their informational contents, but the notion of intension is more general in that it applies to all kinds of expressions. The
intension of any expression is its denotation (or extension) as it varies from world to world, i.e., a function (in the mathematical sense) assigning to each possible world the extension of the
expression in that world. As a case in point, one may compare the descriptions (d) the present capital of France and (e) the largest city in France: in some worlds, including our actual reality, their extensions coincide, but given a different course of history, as represented by a possible world w, Reims, though smaller than Paris, would be the capital of France. So the intensions of (d)
and (e) cannot be the same, because they disagree on w (and on many other worlds). In general, then, expressions with the same denotations may still differ in their intensions and hence, given
compositionality, in the contribution they make to the informational contents of the sentences in which they occur.
The method of extension and intension complements, rather than replaces, the model-theoretic approach to meaning. Instead of directly assigning arbitrary denotations to expressions, the models of
possible worlds semantics couple each expression with an arbitrary intension, which then, being a function, uniquely determines an extension relative to a given world. Moreover, just like models come
with their own Universes, in possible worlds semantics each model is also equipped with its own Logical Space of worlds. This construction originates from modal logic (see below) and offers an
alternative to the model-theoretic reconstruction of entailment: a sentence A strictly implies a sentence B if B is true in all worlds in which A is true, i.e., if the intension of B yields the truth
value 1 for any possible world for which the intension of A yields 1.
1.2 Logic and Language
The basic tools of logically based linguistic semantics—models and possible worlds—had originally been developed for artificial languages designed for specific purposes, like the analysis of
mathematical proofs. These logical formulas can usually be paraphrased in colloquial language; the relation between formula and paraphrase is taught in introductory logic classes. Conversely, many
constructions in natural language (passive, relative clauses, …) have long been known to be systematically expressible in languages of logic (cf. Quine 1960); formalization, i.e., the art of
translating ordinary sentences into logical notation, is also something to be picked up in an introduction to logic. As a side-effect of the logical approach to natural language semantics, these
relations between natural language grammar and logical notation can be explored in a more rigorous way. We will briefly go through some pertinent examples; more detailed casestudies can be looked up
in any textbook or handbook on formal semantics (Dowty et al. 1981, Heim and Kratzer 1998, Gabbay and Guenthner 1984, Vol. 4, von Stechow and Wunderlich 1991, van Benthem and ter Meulen 1997).
The most obvious connections between logic and language concern the meaning of logical words, i.e., those lexical items that translate the basic inventory of logical formalism: the (‘Boolean’)
connectives ‘¬,’ ‘∧,’ ‘∨,’ … translated as [it is] not [the case that], and, or, … and the quantifiers ‘∀’ and ‘∃’ corresponding to everything and something. The logicians' versions of the former are combinations of truth-values. For example, just like a sentence of the form ‘A and B’ is true just in case both A and B are, ‘∧’ combines 1 and 1 into 1 and the other pairs of truth values into 0. Similarly, ‘¬’ (glossed as not) turns truth (1) into falsity and vice versa; ‘∨’ (or) combines 0 and 0 into 0 and everything else into 1, etc. These combinations of truth values can thus be given by means of the
truth tables in Table 1.
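The truth-table combinations of Table 1 can be reproduced in a few lines. The encoding of true as 1 and false as 0 follows the text; the function names are my own:

```python
def neg(a):
    return 1 - a        # 'not': swaps 1 and 0

def conj(a, b):
    return min(a, b)    # 'and': yields 1 only for the pair (1, 1)

def disj(a, b):
    return max(a, b)    # 'or': yields 0 only for the pair (0, 0)

# Print the truth tables row by row:
for a in (1, 0):
    for b in (1, 0):
        print(a, b, neg(a), conj(a, b), disj(a, b))
```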
Even though truth tables work fine as first approximations to a semantic analysis of the words not, and and or as they are used in mathematical jargon, there is more to the meaning of these words. To
begin with, and and or do not always stand between sentences but may be used to connect almost all kinds of expressions: every book and a pen; see and/or hear; true or false; etc. However, in many cases these non-sentential uses abbreviate coordinated sentences: John owns every book and a pen is short for John owns every book and John owns a pen, etc. In fact, non-sentential and and or can
frequently be systematically reduced to the relevant truth tables by a type shift, a semantic rule that systematically generalizes combinations of truth values to combinations of other denotations
(see Keenan and Faltz 1985, Hendriks 1993). However, even this generalization does not cover all nonsentential uses of these words. In particular, there is no obvious reduction of the group reading
of and, as in John and Mary are a married couple, to the truth table of Boolean conjunction. Neither does there seem to be a straightforward reduction of alternative or, as in questions like Shall we
leave or do you want us to stay?, to Boolean disjunction. Certain other semantic aspects that the truth table analysis seems to miss may be captured by pragmatic principles governing efficient use, as
opposed to literal meaning (Grice 1989, Levinson 1983). As a case in point, the difference between John lit a cigarette and Mary left the room and Mary left the room and John lit a cigarette may be
explained by a general principle of discourse organization, according to which the order of sentences should match the order of the events they describe. More involved pragmatic reasonings have been
given to account for the exclusive sense of or, according to which John missed the train, or he decided to sleep late implies that John did not both miss the train and decide to sleep late.
In addition to the Boolean connectives, standard formal languages of logic contain both a universal and an existential quantifier, usually translated as everything and something and expressing
properties of predicates: being universal, i.e., true of everything, and being non-empty, i.e., applying to something. Natural language statements are only expressible in standard (‘first-order’)
logic if they can be rephrased by combinations of Boolean connectives and the two quantifiers. A case in point is the sentence (N) Every boy loves a girl, which can be paraphrased by (L) For everything it holds that either it is not a boy, or for something it holds that it is a girl and the former loves the latter—which translates into predicate logic as (F) (∀x)[¬B(x) ∨ (∃y)[G(y) ∧ L(x,y)]].
As the logician’s paraphrase (L) indicates, the internal structure of quantified statements in natural language is quite different from that in logic. In particular, it usually involves determiners
like every and a instead of the natural language quantifiers everything and something. The difference is that a determiner relates two sets, whereas a quantifier makes a statement about one set. Thus,
in (N), every relates the set of boys with the set of girl-lovers, whereas the quantifier everything in (L) as well as the universal quantifier '∀' in (F) attribute universality to the set of
individuals that love girls if they are boys, i.e., the set of those that are either not boys or else love a girl.
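The set-theoretic contrast between determiners and quantifiers can be made concrete in code. The following Python sketch (a toy model with invented individuals, purely for illustration) evaluates (N) via the determiner every, which relates two sets, and the paraphrase (L) via the quantifier everything, which predicates universality of a single set:

```python
# Toy model: individuals, sets, and a loves-relation (all names invented).
individuals = {"bill", "bob", "greta", "gina"}
boys = {"bill", "bob"}
girls = {"greta", "gina"}
loves = {("bill", "greta"), ("bob", "gina")}

def every(restrictor, scope):
    """Determiner: relates two sets -- every A is B."""
    return restrictor <= scope

def everything(scope, domain):
    """Quantifier: predicates universality of a single set."""
    return scope >= domain

# The set of girl-lovers: those x that love some girl.
girl_lovers = {x for x in individuals if any((x, y) in loves for y in girls)}

# (N): the determiner every relates boys to girl-lovers.
n_reading = every(boys, girl_lovers)

# (L)/(F): everything is either not a boy or loves a girl.
l_reading = everything(
    {x for x in individuals if x not in boys or x in girl_lovers},
    individuals,
)

print(n_reading, l_reading)  # the two formulations agree on this model
```

Both readings come out true in this model, mirroring the logical equivalence of (N) and its first-order paraphrase.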
Another means of expressing quantification in natural language is by way of certain adverbs. In particular, in so-called donkey sentences like If a farmer owns a donkey, he never beats it, the adverb
never (on one reading) expresses that the set of donkey-owning farmers that beat their donkeys is empty. The very form of this adverbial quantification is rather remote from the usual logical
formalization; it is not even clear whether in this case, as is usually assumed (see above), the indefinites a farmer and a donkey correspond to existential quantifiers. Recent work in natural language
semantics has developed special logical systems to give a more adequate logical account of indefinites in donkey sentences and related constructions. (See Kamp and Reyle 1993, Chierchia 1995.)
Apart from these structural differences in the formulation of quantified statements, there are also differences in expressive power. In particular, not all natural language determiners can be
paraphrased by combining Boolean connectives and logical quantifiers. For instance, it can be shown that a statement like Most boys are asleep cannot be expressed in firstorder logic (Barwise and
Cooper 1981). Adequate formalizations of such sentences, and hence of natural language in general, again call for more powerful and complex languages of logic.
Logicians have devised various extensions of classical propositional and predicate logic as tools of formalizing reasoning involving specific locutions (cf. Gabbay and Guenthner 1984, Vol. 2). For
instance, in addition to Boolean connectives, languages of (propositional) modal logic (Chellas 1993, Bull and Segerberg 1984) contain a necessity operator, usually written as ‘□’; a formula ‘□ϕ ’
[read: it is necessarily so that ϕ] is true in a world w of a given model if the formula itself is true in all worlds that are possible for (or, technically: accessible from) w. Combining negation
and necessity, it is also possible to express a corresponding notion of possibility as ‘¬ □ ¬.’ Models of modal logic thus contain domains of possible worlds organized by an accessibility relation.
Depending on the precise structure of the latter, modal logic can be used to formalize various uses of modal verbs as in Someone must ha e been here (epistemic must) or You may now lea e the room
(deontic may). However, detailed studies of modality have revealed that these formalizations are at best approximations of the meanings of modal verbs in natural language (cf. Kratzer 1991). A
similar relation is that between tense logic (Burgess 1984) and the tenses and other means of reference to time in natural language.
Various phenomena have led semanticists to explore alternatives to classical logic with its model-theoretic interpretation assigning one of two truth values to each formula or sentence (cf. Gabbay
and Guenthner 1984, Vol. 3). A sentence like (1) John knows that there will be a party tonight seems to imply (P) There will be a party tonight; on the other hand, so does its negation (2) John
doesn’t know that there will be a party tonight. But if (1) and (2) both implied (P) in the sense of classical model-theory, then (P) would be true throughout the models in which (1) is true—as well
as in those models in which (2) is true, i.e., the ones in which (1) is false. But then (3) would have to be a tautology, i.e., a statement that is true in all models; for any model would have to
make (1) true or false. Giving up this last assumption, one enters the realm of (certain) non-classical logics, the simplest among which are three-valued logics with an additional truth value U (read:
undefined) which models assign to sentences like (1) if their presupposition (P) is not true in them.
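A minimal sketch of such a system fits in a few lines. The truth tables below are the strong Kleene ones — one standard choice for three-valued connectives, not something the text itself fixes:

```python
# Strong Kleene three-valued connectives; 'U' marks an undefined truth value.
T, F, U = "T", "F", "U"

def neg(p):
    """Negation leaves the undefined value undefined."""
    return {T: F, F: T, U: U}[p]

def conj(p, q):
    """Conjunction: a false conjunct settles the matter; otherwise U infects."""
    if p == F or q == F:
        return F
    if p == U or q == U:
        return U
    return T

# If (P) fails, sentence (1) gets the value U -- and so does its negation (2):
s1 = U
print(neg(s1))  # 'U': negating a presupposition failure does not rescue it
```

This illustrates why (2), the negation of (1), still fails to be true when (P) fails, exactly the behavior that motivated the third truth value.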
2. The Logic of Grammar
As theories of language structure, grammars themselves have been the objects of logical analysis. The earliest such investigations mark the beginning of generative grammar and were directed to
shedding light on the complexity of human language (see Sect. 2.1). Later approaches focused on the role of grammatical rules in the activity of parsing, which bears some resemblance to the process
of deduction in logic; and more recently, grammatical reasoning has been seen as a non-monotonic process, thus reaching beyond the limits of classical logic (see Sect. 2.2).
2.1 Syntactic Complexity
A syntactic theory of a particular language may be seen as a body of theoretical statements describing the structure of grammatically correct phrases. Formal reconstruction reveals that these
theories come in different formats and complexities. For instance, a context-free description of a language only contains grammatical rules of the form: ‘If a phrase of category B is immediately
followed by a phrase of category C, the result will be a phrase of category A.' The standard notation for a rule of this form is: 'A → B + C.' Thus the following rule of English grammar is context-free:
if a preposition (like about) is followed by a noun phrase (like corrupt politicians), they form a prepositional phrase (about corrupt politicians). In standard notation, this rule reads: 'PP → Prep +
NP.' On the other hand, the following would-be rule is not context-free: after a ditransitive verb (like give), two noun phrases (like every kid and a penny) form an object-phrase (every kid a penny).
Rather, the rule comes under the more complex scheme: ‘If a phrase of category B is immediately followed by a phrase of category C, the result will be a phrase of category A, provided that it follows
a phrase of category D’; one just has to put A = object-phrase, B = C = noun phrase, and D = ditransiti e erb. Rules of this form are called contextsensiti e. Using standard notation, they are
written as: ‘A →B+C/D_.’
The distinction between context-free and context-sensitive rules—and similar distinctions between other kinds of grammatical rules—can be used to define an abstract complexity ranking between
languages, the Chomsky Hierarchy (after Chomsky 1957, 1959). A language is context-free (or, as the case may be, context-sensitive) if it can in principle be described by a body of context-free (or,
respectively, context- sensitive) rules. Languages in the sense of this definition are arbitrary sets of sentences, which themselves are arbitrary strings of words. It turns out that, in this abstract
mathematical setting (known as mathematical linguistics or formal language theory), though all context-free languages are trivially context-sensitive, the converse is not the case: one can construct
languages that are context-sensitive without being describable by any body of context-free rules whatsoever. A famous example is the 'language' aⁿbⁿcⁿ, whose 'sentences' consist of arbitrarily many
occurrences of the 'word' a, followed by as many bs, which are again followed by the same number of cs. (A proof and further examples can be found in Martin 1997, Chap. 8.)
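A small program makes the example concrete: although no context-free grammar generates aⁿbⁿcⁿ, a membership test for the language is easy to write (this checker is an illustration, not part of the formal proof):

```python
import re

def in_anbncn(s: str) -> bool:
    """Membership test for the language {a^n b^n c^n : n >= 1}."""
    m = re.fullmatch(r"(a+)(b+)(c+)", s)  # right symbols in the right order
    return bool(m) and len(m.group(1)) == len(m.group(2)) == len(m.group(3))

print(in_anbncn("aabbcc"))  # True
print(in_anbncn("aabbc"))   # False: the counts differ
print(in_anbncn("abcabc"))  # False: wrong order of symbols
```

The gap between how easy the language is to recognize by a program and how hard it is to generate by context-free rules is precisely what the Chomsky Hierarchy measures.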
In applying these abstract concepts, one must identify a language like English with the set of its grammatical sentences and individuate the latter by their words only—and not, say, by their internal
grammatical structure. Given this identification, it is natural to ask for the exact location of human languages in the Chomsky Hierarchy. It may seem that an answer to this question requires complete
knowledge of the grammatical sentences of any given language—something that current linguistic research has not attained. However, at least some of these complexity issues can be settled on the basis
of partial information about the languages under investigation. In particular, it has been shown that, given the intricate structure of its verb complexes, the Swiss German dialect spoken in Zurich
cannot be captured by context-free means (Shieber 1985); a similar result concerning word formation has been obtained for the African language Bambara spoken in Mali (Culy 1985), thus proving that
natural languages in general are not context-free.
2.2 Further Issues
The fact that ordinary syntactic notation looks rather different from logical symbolism should not distract from the close connections between the two. For instance, as indicated above, a context-free
rule only abbreviates a lengthy statement about the internal structure of phrases; and this statement can be easily (though somewhat clumsily) expressed in standard logical symbolism (first-order
predicate logic). This fact becomes particularly important when it comes to the task of parsing phrases of a given language according to a given grammar. As a case in point, parsing the prepositional
phrase about Mary on the basis of the above context-free rule boils down to making a connection between the following three pieces of information: (I) about is a preposition; (II) Mary is a noun
phrase; and (III) if x is a preposition and y is a noun phrase, then x y (i.e., x followed by y) is a prepositional phrase. (I) and (II) are basic facts about English words to be recorded in a
lexicon (which may be thought of as part of a syntactic description); (III) is just a paraphrase of the above context-free rule: 'PP → Prep + NP.' (I)–(III) obviously entail that about Mary is a
prepositional phrase, and this is precisely the kind of entailment that can be formalized by means of elementary logic. Hence it should not come as a surprise that there are close relations between
parsing strategies and procedures for verifying logical entailments. This connection has been studied in computational linguistics and in categorical grammar under the heading Parsing as Deduction
(see Johnson 1991).
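The deduction from (I)–(III) can be mimicked in a few lines of Python — a toy forward-chaining step over one rule, not a general parser:

```python
# Facts (I) and (II): lexical category assignments.
lexicon = {"about": "Prep", "Mary": "NP"}

# Fact (III): the context-free rule PP -> Prep + NP, stored as (lhs, rhs).
rules = [("PP", ("Prep", "NP"))]

def parse_pair(word1, word2):
    """Derive the category of word1 followed by word2, if some rule applies."""
    cats = (lexicon.get(word1), lexicon.get(word2))
    for lhs, rhs in rules:
        if cats == rhs:
            return lhs  # the 'conclusion' of the deduction
    return None

print(parse_pair("about", "Mary"))  # 'PP'
```

Deriving 'PP' here is the same inferential step as the first-order entailment described in the text, which is the sense in which parsing is deduction.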
The role of logical deduction in grammatical theory is more complex than can be indicated in an article of this length. In particular, certain phenomena, like the markedness of grammatical features or
the interaction of phonological constraints, seem to call for non-classical logics that violate the Law of Monotonicity, according to which a conclusion once drawn remains correct when more premises
are added.
1. Ammerman R R (ed.) 1965 Classics of Analytic Philosophy. McGraw-Hill, New York
2. Barwise J 1989 The Situation in Logic. CSLI, Stanford, CA
3. Barwise J, Cooper R 1981 Generalized quantifiers and natural language. Linguistics and Philosophy 4: 159–219
4. Barwise J, Etchemendy J 1990 The Language of First-Order Logic. CSLI, Stanford, CA
5. van Benthem J, ter Meulen A (eds.) 1997 Handbook of Logic and Language. Elsevier, Amsterdam
6. Bull R A, Segerberg K 1984 Basic modal logic. In: Gabbay D, Guenthner F (eds.) Handbook of Philosophical Logic. Kluwer, Dordrecht, Vol. 2
7. Burgess J 1984 Basic tense logic. In: Gabbay D, Guenthner F (eds.) Handbook of Philosophical Logic. Kluwer, Dordrecht, Vol. 2
8. Chellas B 1993 Modal Logic. Cambridge University Press, Cambridge, UK
9. Chierchia G 1995 Dynamics of Meaning. University of Chicago Press, Chicago
10. Chomsky N 1957 Syntactic Structures. Mouton, The Hague, The Netherlands
11. Chomsky N 1959 On certain formal properties of grammars. Information and Control 2: 137–67
12. Culy C 1985 The complexity of the vocabulary of Bambara. Linguistics and Philosophy 8: 345–51
13. Devlin K 1991 Logic and Information. Cambridge University Press, Cambridge, UK
14. Dowty D, Wall R, Peters S 1981 Introduction to Montague Semantics. Kluwer, Dordrecht, The Netherlands
15. Frege G 1892 Über Sinn und Bedeutung. Zeitschrift für Philosophie und philosophische Kritik 100: 25–50
16. Gabbay D, Guenthner F (eds.) 1984 Handbook of Philosophical Logic, 4 vols. Kluwer, Dordrecht, The Netherlands
17. Gamut L T F 1991 Logic, Language, and Meaning, 2 vols. University of Chicago Press, Chicago
18. Grice P 1989 Studies in the Way of Words. Harvard University Press, Cambridge, MA
19. Heim I, Kratzer A 1998 Semantics in Generative Grammar. Blackwell, Oxford, UK
20. Hendriks H 1993 Studied flexibility: Categories and types in syntax and semantics. Ph.D. thesis, University of Amsterdam
21. Janssen T M V 1997 Compositionality (with an appendix by B H Partee). In: van Benthem J, ter Meulen A (eds.) Handbook of Logic and Language. Elsevier, Amsterdam
22. Johnson M 1991 Deductive parsing: The use of knowledge of language. In: Berwick R C et al. (ed.) Principle-based Parsing: Computation and Psycholinguistics. Kluwer, Dordrecht, The Netherlands
23. Kamp H, Reyle U 1993 From Discourse to Logic. Kluwer, Dordrecht, The Netherlands
24. Keenan E, Faltz L 1985 Boolean Semantics for Natural Language. Reidel, Dordrecht, The Netherlands
25. Kratzer A 1991 Modality. In: von Stechow A, Wunderlich D (eds.) Semantik. Ein internationales Handbuch zeitgenössischer Forschung. [Semantics. An International Handbook of Contemporary
Research.] De Gruyter, Berlin
26. Larson R, Segal G 1995 Knowledge of Meaning. MIT Press, Cambridge, MA
27. Levinson S C 1983 Pragmatics. Cambridge University Press, Cambridge, UK
28. Lyons J 1968 Introduction to Theoretical Linguistics. Cambridge University Press, Cambridge, UK
29. Martin J C 1997 Introduction to Languages and the Theory of Computation, 2nd ed. McGraw-Hill, Boston
30. Quine W V 1960 Word and Object. MIT Press, Cambridge, MA
31. Shieber S M 1985 Evidence against the context-freeness of natural language. Linguistics and Philosophy 8: 333–43
32. Stechow A von, Wunderlich D (eds.) 1991 Semantik. Ein internationales Handbuch zeitgenössischer Forschung. [Semantics. An International Handbook of Contemporary Research.] De Gruyter, Berlin
1.4. What Is Programming?
Programming is the process of taking an algorithm and encoding it into a notation, a programming language, so that it can be executed by a computer. Although many programming languages and many
different types of computers exist, the important first step is the need to have the solution. Without an algorithm there can be no program.
Computer science is not the study of programming. Programming, however, is an important part of what a computer scientist does. Programming is often the way that we create a representation for our
solutions. Therefore, this language representation and the process of creating it becomes a fundamental part of the discipline.
Algorithms describe the solution to a problem in terms of the data needed to represent the problem instance and the set of steps necessary to produce the intended result. Programming languages must
provide a notational way to represent both the process and the data. To this end, languages provide control constructs and data types.
Control constructs allow algorithmic steps to be represented in a convenient yet unambiguous way. At a minimum, algorithms require constructs that perform sequential processing, selection for
decision-making, and iteration for repetitive control. As long as the language provides these basic statements, it can be used for algorithm representation.
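All three constructs appear in even a very small Python function (the example is illustrative, not drawn from the text):

```python
def sum_of_evens(numbers):
    """Sequence, selection, and iteration in one small algorithm."""
    total = 0                # sequential processing: one step after another
    for n in numbers:        # iteration: repetitive control over the list
        if n % 2 == 0:       # selection: a decision about each item
            total = total + n
    return total

print(sum_of_evens([1, 2, 3, 4, 5]))  # 6
```

Any language providing these three kinds of statements — sequence, selection, and iteration — suffices to express the algorithm.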
All data items in the computer are represented as strings of binary digits. In order to give these strings meaning, we need to have data types. Data types provide an interpretation for this binary
data so that we can think about the data in terms that make sense with respect to the problem being solved. These low-level, built-in data types (sometimes called the primitive data types) provide
the building blocks for algorithm development.
For example, most programming languages provide a data type for integers. Strings of binary digits in the computer’s memory can be interpreted as integers and given the typical meanings that we
commonly associate with integers (e.g. 23, 654, and -19). In addition, a data type also provides a description of the operations that the data items can participate in. With integers, operations such
as addition, subtraction, and multiplication are common. We have come to expect that numeric types of data can participate in these arithmetic operations.
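A short Python example makes the point about interpretation and operations concrete, reusing the numeric values mentioned above:

```python
# The same string of binary digits means nothing until a data type
# provides an interpretation.
bits = "10111"
value = int(bits, 2)   # interpret the bits as an integer
print(value)           # 23

# Once typed as integers, the usual arithmetic operations apply.
print(value + 654)     # 677
print(value * -19)     # -437
```

The integer data type supplies both the interpretation of the bits and the set of operations (addition, subtraction, multiplication) those values can participate in.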
The difficulty that often arises for us is the fact that problems and their solutions are very complex. These simple, language-provided constructs and data types, although certainly sufficient to
represent complex solutions, are typically at a disadvantage as we work through the problem-solving process. We need ways to control this complexity and assist with the creation of solutions.
What is Log-Normal Distribution
Posted on December 23, 2020 In Uncategorized
The log-normal distribution is a concept from statistics and probability theory; another name for it is the Galton distribution. The log-normal distribution is a continuous distribution of a random
variable whose logarithm is normally distributed. In other words, instead of the original data being normally distributed, it is the logarithms of the data that follow a normal distribution.
A log-normal distribution is closely related to the normal distribution. In fact, data can be converted between the two by taking logarithms (or exponentials) of the data points. Even so, the
log-normal distribution differs from the normal distribution in several ways.
The biggest differentiating factor between the two is their shape. While the normal distribution is symmetrical, the log-normal distribution is not. The difference in shape comes from skewness:
because a log-normally distributed variable takes only positive values, its curve is skewed to the right. Another difference between the two lies in the values used in deriving each.
What are the parameters of Log-normal Distribution?
The log-normal distribution has three parameters: the median, the location, and the standard deviation. First, the median, also known as the scale parameter and represented by 'm', shrinks or
stretches the graph. Second, the location parameter, represented by 'Θ' or 'μ', fixes the position of the graph on the x-axis.

Last, the shape parameter or standard deviation, represented by 'σ', affects the overall shape of the log-normal distribution; it does not change the location or height of the graph. The parameters
can be obtained from historical data, but it is also possible to estimate them using current data.
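The role of the parameters can be checked numerically. The following Python sketch, with arbitrary illustrative values for μ and σ, draws log-normal samples and confirms that their logarithms are normally distributed with mean μ and standard deviation σ, and that the median m equals exp(μ):

```python
import numpy as np

# Illustrative (made-up) parameter values: mu is the mean of ln(X),
# sigma the standard deviation of ln(X).
mu, sigma = 0.5, 0.25
rng = np.random.default_rng(0)
x = rng.lognormal(mean=mu, sigma=sigma, size=100_000)

print(np.log(x).mean())  # close to mu
print(np.log(x).std())   # close to sigma
print(np.median(x))      # close to the scale parameter m = exp(mu)
print((x > 0).all())     # True: no negative values, by construction
```

The last line demonstrates the defining positivity property discussed below: exponentiating a normal variable can never produce a negative value.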
What are the characteristics of Log-normal distribution?
The log-normal distribution has several characteristics or features. First of all, it shows a positive skew towards the right, reflecting the low mean values and high variances of the random
variables under consideration. Secondly, its mean is usually higher than its mode because of this skew: a large number of small values combined with a few very large ones.

Lastly, the log-normal distribution does not include negative values. This feature differentiates it from the normal distribution and is, therefore, a defining characteristic.
What are the uses of Log-normal distribution?
The log-normal distribution has several use cases in the world of finance, most importantly because it fixes some problems with the normal distribution. For example, a normal distribution may produce
negative values, while a log-normal distribution consists of positive values only. Partly for this reason, the log-normal distribution is commonly used in stock price analysis.

The log-normal distribution can help investors identify the compound return they can expect from a stock over a period of time. Investors usually apply the normal distribution to analyze potential
returns; for analyzing the prices of stocks themselves, however, log-normal is the better choice.

In finance, the log-normal distribution is common for modeling an asset price over a period of time. The normal distribution may provide inconsistent prices, since it can take asset prices below
zero; the log-normal distribution does not have this problem and therefore produces better results.
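As a rough illustration (with made-up return parameters, not a calibrated pricing model), simulating prices from normally distributed log returns shows why log-normally distributed prices can never go negative:

```python
import numpy as np

# Illustrative parameters: starting price, daily log-return mean and
# standard deviation, and a one-year horizon of 252 trading days.
rng = np.random.default_rng(1)
s0, mu, sigma, days = 100.0, 0.0002, 0.01, 252

# If daily log returns are normal, the price ratio S_T / S_0 is log-normal.
log_returns = rng.normal(mu, sigma, size=(10_000, days))
prices = s0 * np.exp(log_returns.sum(axis=1))  # terminal prices

print(prices.min() > 0)            # True: exponentiated prices stay positive
print(np.log(prices / s0).mean())  # close to mu * days, the compound drift
```

Because the terminal price is an exponential of a normal sum, a negative simulated price is impossible — exactly the inconsistency the normal distribution cannot rule out.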
The log-normal distribution is the continuous distribution of a random variable whose logarithm is normally distributed. It differs from the normal distribution in several ways. There are three
parameters in the log-normal distribution: the median, the location, and the standard deviation.
Simulate Parameters from a Theta/Omega specification — rxSimThetaOmega
rxSimThetaOmega(
  params = NULL,
  omega = NULL,
  omegaDf = NULL,
  omegaLower = as.numeric(c(R_NegInf)),
  omegaUpper = as.numeric(c(R_PosInf)),
  omegaIsChol = FALSE,
  omegaSeparation = "auto",
  omegaXform = 1L,
  nSub = 1L,
  thetaMat = NULL,
  thetaLower = as.numeric(c(R_NegInf)),
  thetaUpper = as.numeric(c(R_PosInf)),
  thetaDf = NULL,
  thetaIsChol = FALSE,
  nStud = 1L,
  sigma = NULL,
  sigmaLower = as.numeric(c(R_NegInf)),
  sigmaUpper = as.numeric(c(R_PosInf)),
  sigmaDf = NULL,
  sigmaIsChol = FALSE,
  sigmaSeparation = "auto",
  sigmaXform = 1L,
  nCoresRV = 1L,
  nObs = 1L,
  dfSub = 0,
  dfObs = 0,
  simSubjects = TRUE,
  simVariability = as.logical(c(NA_LOGICAL))
)
params: Named vector of rxode2 model parameters.

omega: Estimate of the covariance matrix. When omega is a list, it is assumed to be a block matrix and is converted to a full matrix for simulations. When omega is NA and you are using it with a
rxode2 ui model, the between-subject variability described by the omega matrix is set to zero.

omegaDf: The degrees of freedom of a t-distribution for simulation. By default this is NULL, which is equivalent to Inf degrees, i.e., simulating from a normal distribution instead of a
t-distribution.

omegaLower: Lower bounds for simulated ETAs (by default -Inf).

omegaUpper: Upper bounds for simulated ETAs (by default Inf).

omegaIsChol: Indicates if the omega supplied is a Cholesky-decomposed matrix instead of the traditional symmetric matrix.

omegaSeparation: Omega separation strategy. Tells the type of separation strategy when simulating covariance with parameter uncertainty with standard deviations modeled in the thetaMat matrix:

- "lkj" simulates the correlation matrix from the rLKJ1 matrix with the distribution parameter eta equal to the degrees of freedom nu by (nu-1)/2
- "separation" simulates from the identity inverse Wishart covariance matrix with nu degrees of freedom. This is then converted to a covariance matrix and augmented with the modeled standard
  deviations. While computationally more complex than the "lkj" prior, it performs better when the covariance matrix size is greater than or equal to 10
- "auto" chooses "lkj" when the dimension of the matrix is less than 10 and "separation" when it is greater than or equal to 10

omegaXform: When taking omega values from the thetaMat simulations (using the separation strategy for covariance simulation), how the thetaMat values should be turned into standard deviation values:

- identity: standard deviation values are directly modeled by the params and thetaMat matrix
- variance: the params and thetaMat simulate the variance, which is directly modeled by the thetaMat matrix
- log: the params and thetaMat simulate log(sd)
- nlmixrSqrt: the params and thetaMat simulate the inverse Cholesky-decomposed matrix with x^2 modeled along the diagonal. This only works with a diagonal matrix
- nlmixrLog: the params and thetaMat simulate the inverse Cholesky-decomposed matrix with exp(x^2) along the diagonal. This only works with a diagonal matrix
- nlmixrIdentity: the params and thetaMat simulate the inverse Cholesky-decomposed matrix. This only works with a diagonal matrix

nSub: Number of between-subject variabilities (ETAs) simulated for every realization of the parameters.

thetaMat: Named theta matrix.

thetaLower: Lower bounds for simulated population parameter variability (by default -Inf).

thetaUpper: Upper bounds for simulated population unexplained variability (by default Inf).

thetaDf: The degrees of freedom of a t-distribution for simulation. By default this is NULL, which is equivalent to Inf degrees, i.e., simulating from a normal distribution instead of a
t-distribution.

thetaIsChol: Indicates if the theta supplied is a Cholesky-decomposed matrix instead of the traditional symmetric matrix.

nStud: Number of virtual studies to characterize uncertainty in estimated parameters.

sigma: Named sigma covariance or Cholesky decomposition of a covariance matrix. The names of the columns indicate parameters that are simulated. These are simulated for every observation in the
solved system. When sigma is NA and you are using it with a rxode2 ui model, the unexplained variability described by the sigma matrix is set to zero.

sigmaLower: Lower bounds for simulated unexplained variability (by default -Inf).

sigmaUpper: Upper bounds for simulated unexplained variability (by default Inf).

sigmaDf: Degrees of freedom of the sigma t-distribution. By default it is equivalent to Inf, or a normal distribution.

sigmaIsChol: Boolean indicating if the sigma is in the Cholesky decomposition instead of a symmetric covariance.

sigmaSeparation: Separation strategy for sigma. Tells the type of separation strategy when simulating covariance with parameter uncertainty with standard deviations modeled in the thetaMat matrix:

- "lkj" simulates the correlation matrix from the rLKJ1 matrix with the distribution parameter eta equal to the degrees of freedom nu by (nu-1)/2
- "separation" simulates from the identity inverse Wishart covariance matrix with nu degrees of freedom. This is then converted to a covariance matrix and augmented with the modeled standard
  deviations. While computationally more complex than the "lkj" prior, it performs better when the covariance matrix size is greater than or equal to 10
- "auto" chooses "lkj" when the dimension of the matrix is less than 10 and "separation" when it is greater than or equal to 10

sigmaXform: When taking sigma values from the thetaMat simulations (using the separation strategy for covariance simulation), how the thetaMat values should be turned into standard deviation values:

- identity: standard deviation values are directly modeled by the params and thetaMat matrix
- variance: the params and thetaMat simulate the variance, which is directly modeled by the thetaMat matrix
- log: the params and thetaMat simulate log(sd)
- nlmixrSqrt: the params and thetaMat simulate the inverse Cholesky-decomposed matrix with x^2 modeled along the diagonal. This only works with a diagonal matrix
- nlmixrLog: the params and thetaMat simulate the inverse Cholesky-decomposed matrix with exp(x^2) along the diagonal. This only works with a diagonal matrix
- nlmixrIdentity: the params and thetaMat simulate the inverse Cholesky-decomposed matrix. This only works with a diagonal matrix

nCoresRV: Number of cores used for the simulation of the sigma variables. By default this is 1. To reproduce the results you need to run on the same platform with the same number of cores. This is
the reason it is set to one, regardless of the number of cores used in threaded ODE solving.

nObs: Number of observations to simulate (with the sigma matrix).

dfSub: Degrees of freedom to sample the between-subject variability matrix from the inverse Wishart distribution (scaled) or scaled inverse chi-squared distribution.

dfObs: Degrees of freedom to sample the unexplained variability matrix from the inverse Wishart distribution (scaled) or scaled inverse chi-squared distribution.

simSubjects: Boolean indicating whether rxode2 should simulate subjects in studies (TRUE, default) or studies only (FALSE).

simVariability: Determines whether the variability is simulated. When NA (default), this is determined by the solver.
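For readers unfamiliar with the hierarchy these arguments describe, the following sketch — written in Python, not R, and in no way the rxode2 implementation — illustrates the two simulation levels: thetaMat adds study-level parameter uncertainty across nStud studies, and omega adds between-subject variability for nSub subjects within each study. All numbers below are invented:

```python
import numpy as np

rng = np.random.default_rng(42)

theta = np.array([1.0, 2.0])       # population parameter estimates
theta_mat = np.diag([0.01, 0.04])  # uncertainty in those estimates (thetaMat)
omega = np.diag([0.09, 0.09])      # between-subject covariance (omega)
n_stud, n_sub = 3, 100             # virtual studies and subjects per study

sims = []
for _ in range(n_stud):
    # One draw of the population parameters per virtual study.
    study_theta = rng.multivariate_normal(theta, theta_mat)
    # ETAs: one between-subject deviation per subject in this study.
    etas = rng.multivariate_normal(np.zeros(2), omega, size=n_sub)
    sims.append(study_theta + etas)  # per-subject parameter vectors
sims = np.concatenate(sims)

print(sims.shape)  # (300, 2): nStud * nSub simulated parameter vectors
```

The sigma level (residual unexplained variability, drawn once per observation) would add a third loop inside each subject, following the same pattern.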
Money and Inflation Are Still Related
“There is perhaps no empirical regularity among economic phenomena that is based on so much evidence, for so wide a range of circumstances,” Milton Friedman observed in 1989, “as the connection
between substantial changes in the quantity of money and in the level of prices.” And yet, despite the wide body of work alluded to by Friedman, most monetary policymakers and economists believe that
there is no information to be gained by looking at monetary aggregates. This widespread belief has resulted in governments’ not estimating monetary aggregates, or else estimating them for a much
narrower set of series than in prior years. Monetary aggregates make no appearance in many econometric models of the economy, and are rarely if ever brought up in the briefings for Federal Open
Market Committee meetings.
This widespread belief that monetary aggregates are uninformative is incorrect.
Figure 1. Excess Money Growth and Inflation in 108 countries, 2008-2022.
Figure 1 shows the excess growth rate of money measured by M2 and the inflation rate across 108 countries for 2008 to 2022. M2 is a monetary aggregate that estimates the funds available to buy goods
and services. It includes currency, checking accounts, and savings accounts that are close substitutes for currency and checking accounts. The excess growth rate of money is the growth rate of M2,
less the growth rate of real income, as measured by GDP. The growth rate of income subtracts the non-inflationary growth of the goods and services available. If the growth rate of money and the
growth rate of income were equal, then the inflation rate would be roughly zero. Growth of money in excess of income growth fuels inflation.
The positive relationship between the inflation rate and excess money growth over these 14 years is obvious. A linear relationship with a coefficient of one between inflation and excess money growth
is an implication of some theories relating inflation and excess money growth. The correlation of inflation and excess money growth is 0.92. The slope of a line relating inflation and excess money
growth is 0.95, when estimated by a regression of inflation on excess money growth.
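As a sketch of how those two summary numbers are computed — a Pearson correlation and an OLS slope of inflation regressed on excess money growth — on synthetic data, not the article's 108-country dataset:

```python
def corr_and_slope(x, y):
    """Pearson correlation of x and y, and OLS slope of y regressed on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return sxy / (sxx * syy) ** 0.5, sxy / sxx

# Synthetic country-level rates, illustrative only.
excess_money = [0.02, 0.05, 0.10, 0.30, 0.60]
inflation    = [0.01, 0.06, 0.09, 0.28, 0.58]
r, slope = corr_and_slope(excess_money, inflation)
```

On data that lies close to the one-for-one line, both the correlation and the slope come out near one, as in Figure 1.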
While not one, 0.95 is not all that far from one. Figure 1 shows a line with the one-for-one relationship and another with the slope of 0.95. They are not all that far apart. Most of the countries in
Figure 1 have lower inflation than implied by the excess money growth. This means that the demand for money in these countries increased even more than is implied by the growth rate of real income
alone. It does not mean there is no relationship between money growth and inflation.
Figure 2. Excess Money Growth and Inflation in Countries with less than 30 Percent Inflation, 2008-2022.
An often remarked aspect of Figure 1 is that the correlation may just reflect the high-inflation countries and the relationship for the low-inflation countries is far less evident. Figure 2 shows the
relationship between excess money growth and inflation for countries with average inflation less than 30 percentage points per year. The relation is not as clear, but the correlation between excess
money growth and inflation is 0.75. While this correlation of 0.75 is less than a correlation of 0.92 for all the countries, it is hardly trivial.
The difference in the regression slope is larger. Comparing the two figures, the regression line in Figure 2 deviates more from a slope of one: its slope is 0.85, which is farther from one than 0.95 but still far from zero. And zero is the number implied by the assertion that the information content of monetary aggregates is zero.
The data in Figure 2 are averaged over fourteen years of growth. The fourteen-year window is the result of data availability. But while the relationship between excess money growth and inflation is evident over longer periods of time, it is not particularly evident over short periods. For the data in Figure 1, the correlation of the annual growth rates of excess money and inflation is 0.69, quite a bit less than the 0.92 with the fourteen years of averaged data. For the countries with lower inflation in Figure 2, the annual correlation is 0.23, again quite a bit lower than the 0.75 with averaged data.
These correlations show two things:
1. The relationship is weaker for countries with lower inflation than higher inflation; and
2. The relationship is weaker over shorter time periods.
There is a common explanation for both of these observations. Mark Fisher and Gerald Dwyer have shown that inflation being more persistent than money growth can explain both the weaker short-run relationship and the weaker relationship in low-inflation countries. Inflation is quite persistent as a general rule, and money growth less so.
Is the information content of money growth zero? Unequivocally, the answer is no.
Why does this matter?
M2 in 2020 and 2021 increased by the largest percentages in the last 60 years. To the surprise of the Federal Reserve (although not everyone), inflation resulted. Not all countries have increased the
money stock to the same extent. Japan and Switzerland have not had outsized increases in the money stock and have not had higher inflation. Monetary policymakers and economists in the United States
and some other countries would have done better if they had not ignored money growth. | {"url":"https://aier.org/article/money-and-inflation-are-still-related/","timestamp":"2024-11-11T07:20:34Z","content_type":"text/html","content_length":"156511","record_id":"<urn:uuid:47996bd8-9190-4daa-b662-45685e8e7805>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00081.warc.gz"} |
ps_adjust: adjustment on propensity score in adapt4pv: Adaptive Approaches for Signal Detection in Pharmacovigilance
Implement the adjustment on propensity score for all the drug exposures of the input drug matrix x which have more than a given number of co-occurrences with the outcome. The binary outcome is regressed on a drug exposure and its estimated PS, for each drug exposure considered after filtering. With this approach, a p-value is obtained for each drug and a variable selection is performed over the p-values corrected for multiple comparisons.
ps_adjust(x, y, n_min = 3, betaPos = TRUE, est_type = "bic", threshold = 0.05, ncore = 1)
x Input matrix, of dimension nobs x nvars. Each row is an observation vector. Can be in sparse matrix format (inherit from class "sparseMatrix" as in package Matrix).
y Binary response variable, numeric.
n_min Numeric. Minimal number of co-occurrences between a drug covariate and the outcome y needed to estimate its score. See details below. Default is 3.
betaPos Should the covariates selected by the procedure be positively associated with the outcome ? Default is TRUE.
est_type Character, indicates which approach is used to estimate the PS. Could be either "bic", "hdps" or "xgb". Default is "bic".
threshold Threshold for the p-values. Default is 0.05.
ncore The number of cores used for parallel computing. Default is 1 (no parallelization).
The PS can be estimated in different ways: using the lasso-bic approach, the hdps algorithm or gradient tree boosting. The scores are estimated using the default parameter values of the est_ps_bic, est_ps_hdps and est_ps_xgb functions (see their documentation for details). We apply the same filter and the same multiple testing correction as in the paper UPCOMING REFERENCE: first, PS are estimated only for drug covariates which have more than n_min co-occurrences with the outcome y. Adjustment on the PS is performed for these covariates, and one-sided or two-sided (depending on the betaPos parameter) p-values are obtained. The p-values of the covariates not retained after filtering are set to 1. All these p-values are then adjusted for multiple comparison with the Benjamini-Yekutieli correction.
COULD BE VERY LONG. Since this approach (i) estimates a score for several drug covariates and (ii) performs an adjustment on these scores, parallelization is highly recommended.
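The Benjamini-Yekutieli step mentioned above can be sketched in plain Python. This is a hypothetical standalone helper illustrating the published correction, not the package's internal code:

```python
def benjamini_yekutieli(pvals):
    """BY step-up adjustment: Benjamini-Hochberg scaled by the harmonic factor c(m)."""
    m = len(pvals)
    c_m = sum(1.0 / i for i in range(1, m + 1))           # harmonic correction factor
    order = sorted(range(m), key=lambda i: pvals[i])       # indices by ascending p-value
    adj = [pvals[idx] * m * c_m / (rank + 1) for rank, idx in enumerate(order)]
    for i in range(m - 2, -1, -1):                         # enforce monotonicity (step-up)
        adj[i] = min(adj[i], adj[i + 1])
    out = [0.0] * m
    for rank, idx in enumerate(order):
        out[idx] = min(adj[rank], 1.0)                     # cap adjusted p-values at 1
    return out
```

Selection then amounts to keeping the covariates whose adjusted p-value falls below `threshold`.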
An object with S3 class "ps", "adjust", "*", where "*" is "bic", "hdps" or "xgb" depending on how the scores were estimated.
estimates Regression coefficients associated with the drug covariates. Numeric, length equal to the number of selected variables with this approach. Some elements could be NA if (i) the corresponding covariate was filtered out, or (ii) the adjustment model did not converge. Trying to estimate the score in a different way could help, but this is not guaranteed.
corrected_pvals One sided p-values if betaPos = TRUE, two-sided p-values if betaPos = FALSE adjusted for multiple testing. Numeric, length equal to nvars.
selected_variables Character vector, names of variable(s) selected with the ps-adjust approach. If betaPos = TRUE, this set is the covariates with a corrected one-sided p-value lower than threshold.
Else this set is the covariates with a corrected two-sided p-value lower than threshold. Covariates are ordering according to their corrected p-value.
Benjamini, Y., & Yekutieli, D. (2001). "The Control of the False Discovery Rate in Multiple Testing under Dependency". The Annals of Statistics, 29(4), 1165–1188.
set.seed(15)
drugs <- matrix(rbinom(100*20, 1, 0.2), nrow = 100, ncol = 20)
colnames(drugs) <- paste0("drugs", 1:ncol(drugs))
ae <- rbinom(100, 1, 0.3)
adjps <- ps_adjust(x = drugs, y = ae, n_min = 3)
| {"url":"https://rdrr.io/cran/adapt4pv/man/ps_adjust.html","timestamp":"2024-11-08T01:01:56Z","content_type":"text/html","content_length":"28499","record_id":"<urn:uuid:6e5b1d4c-73d1-4b16-ae21-80740ac0d91c>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00316.warc.gz"}
Descriptive Statistics Calculator
Enter the comma-separated values in the box to find descriptive statistics using the Descriptive Statistics Calculator.
How to Use a Descriptive Statistics Calculator?
Here are a few simple steps to calculate Descriptive statistics with the help of Descriptive Statistics Calculator:
• Enter your data into the input field. Make sure your data is separated by commas.
• Choose whether your data represents a sample or the entire population by selecting the option.
• Click the “Calculate” button to process your data.
• To clear the input field and reset the calculator, click the “Reset” button.
The calculator reports the following statistics (the result fields start at 0 until you calculate):
• Standard Deviation
• Mid Range
• Quartiles (Q1, Q2, Q3)
• Interquartile Range
• Sum of Squares
• Mean Absolute Deviation
• Root Mean Square
• Std Error of Mean
• Kurtosis Excess
• Coefficient of Variation
• Relative Standard Deviation
• Frequency Table
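The page doesn't show the formulas behind these statistics; here is a sketch of the usual sample definitions using Python's standard statistics module (the input list is illustrative):

```python
import statistics

data = [4, 8, 6, 5, 3, 2, 8, 9, 2, 5]          # example comma-separated input

mean = statistics.mean(data)
median = statistics.median(data)
sample_sd = statistics.stdev(data)              # sample (n-1) standard deviation
pop_sd = statistics.pstdev(data)                # population (n) standard deviation
q1, q2, q3 = statistics.quantiles(data, n=4)    # quartiles (default exclusive method)
iqr = q3 - q1                                   # interquartile range
mid_range = (min(data) + max(data)) / 2         # midpoint of min and max
cv = sample_sd / mean                           # coefficient of variation
rsd = 100 * cv                                  # relative standard deviation, in percent
```

Note the sample/population switch on the calculator corresponds to choosing `stdev` (n-1 denominator) versus `pstdev` (n denominator).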
| {"url":"https://www.standarddeviationcalculator.io/descriptive-statistics-calculator","timestamp":"2024-11-03T14:12:41Z","content_type":"text/html","content_length":"75272","record_id":"<urn:uuid:02873bbf-5891-4b35-b15c-83b1a9176fe7>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00109.warc.gz"}
How To Figure Amortization Payments
This calculator will figure a loan's payment amount at various payment intervals - based on the principal amount borrowed, the length of the loan and the annual. How to Calculate Mortgage Loan
Payments, Amortization Schedules (Tables) by Hand or Computer Programming · P = principal, the initial amount of the loan · I = the. Payments Formula · PMT = total payment each period · PV = present
value of loan (loan amount) · i = period interest rate expressed as a decimal · n = number of loan. An amortization schedule for a loan is a list of estimated monthly payments. At the top, you'll see
the total of all payments. For each payment, you'll see the. Amortization Formula · P = Principal · r= Rate of interest · t = Time in terms of year · n = Monthly payment in a year · I = Interest · ƥ
= Monthly Payment or EMI.
Calculate Principal and Interest Payments Over Time. This loan amortization calculator figures your loan payment and interest costs at various payment intervals. Enter your desired payment - and the
tool will calculate your loan amount. Or, enter the loan amount and the tool will calculate your monthly payment. This amortization calculator returns monthly payment amounts as well as displays a
schedule, graph, and pie chart breakdown of an amortized loan. Loan Calculator with Amortization Schedule. Print-Friendly, Mobile-Friendly. Calculate Mortgages, Car Loans, Small Business Loans, etc.
Amortization schedules use columns and rows to illustrate payment requirements over the entire life of a loan. Looking at the table allows borrowers to see. This amortization calculator shows the
schedule of paying extra principal on your mortgage over time. See how extra payments break down over your loan term. The amortization table shows how each payment is applied to the principal balance
and the interest owed. Payment Amount = Principal Amount + Interest Amount. Amortization takes into account the total amount you'll owe when all interest has been calculated, then creates a standard
monthly payment. How much of that. Use this Amortization Schedule Calculator to estimate your monthly loan or mortgage repayments, and check a free amortization chart. Amortization is the process of
paying off a debt with a known repayment term in regular installments over time. Mortgages, with fixed repayment terms of up to. Use the amortization functions (menu items 9, 0, and A) to calculate
balance, sum of principal, and sum of interest for an amortization schedule.
An amortization calculator helps you understand how fixed mortgage payments work. It shows how much of each payment reduces your loan balance and how much. How Do I Calculate Amortization? To
calculate amortization, first multiply your principal balance by your interest rate. Next, divide that by 12 months to know. Loans that amortize, such as your home mortgage or car loan, require a
monthly payment. · Convert the interest rate to a monthly rate. · Multiply the principal. Monthly loan payment is $ for 60 payments at %. *indicates required. Loan inputs. A mortgage amortization
schedule shows a breakdown of your monthly mortgage payment over time. Figure out how to calculate your mortgage amortization. Use this simple amortization calculator to see a monthly or yearly
schedule of mortgage payments. Compare how much you'll pay in principal and interest and. Use our amortization schedule calculator to estimate your monthly loan repayments, interest rate, and payoff
date on a mortgage or other type of loan. Or, enter in the loan amount and we will calculate your monthly payment. You can then examine your principal balances by payment, total of all payments made.
For more information about or to do calculations specifically for car payments, please use the Auto Loan Calculator. Amortization schedule. Year $0 $50K $K.
The following mathematical formula can also be used to calculate the loan payments and to construct an amortization schedule: instalment payment = PV × i × (1 + i)^n / ((1 + i)^n − 1). A is the monthly payment, P is the loan's initial amount, i is the monthly interest rate, and n is the total number of payments. Create a printable amortization schedule, with dates and subtotals, to see how much
principal and interest you'll pay over time. An amortization schedule shows how the proportions of your monthly mortgage payment that go to principal and interest change over the life of the loan.
Bret's mortgage/loan amortization schedule calculator: calculate loan payment, payoff time, balloon, interest rate, even negative amortizations.
Amortization Schedules. After figuring out the monthly payment using the amortization formula, the car loan amortization schedule is fairly easy to derive.
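Putting the pieces above together — the payment formula and the schedule derived from it — can be sketched as follows. This is a generic illustration; the function names and the $10,000 / 6% / 12-month figures are my own example, not from the site:

```python
def monthly_payment(principal, annual_rate, n_payments):
    # Standard amortization formula: A = P * i / (1 - (1 + i) ** -n),
    # algebraically the same as P * i * (1+i)^n / ((1+i)^n - 1).
    i = annual_rate / 12                 # monthly interest rate
    return principal * i / (1 - (1 + i) ** -n_payments)

def amortization_schedule(principal, annual_rate, n_payments):
    # Each row: (month, payment, interest portion, principal portion, balance).
    i = annual_rate / 12
    pmt = monthly_payment(principal, annual_rate, n_payments)
    balance, rows = principal, []
    for month in range(1, n_payments + 1):
        interest = balance * i           # interest accrued on current balance
        principal_paid = pmt - interest  # remainder of the payment reduces the balance
        balance -= principal_paid
        rows.append((month, round(pmt, 2), round(interest, 2),
                     round(principal_paid, 2), round(balance, 2)))
    return rows

schedule = amortization_schedule(10_000, 0.06, 12)
```

The schedule shows the familiar pattern: the interest portion shrinks and the principal portion grows with each payment, ending at a zero balance.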
| {"url":"https://muzhikyan.ru/overview/how-to-figure-amortization-payments.php","timestamp":"2024-11-06T08:04:49Z","content_type":"text/html","content_length":"11913","record_id":"<urn:uuid:2e0d85ab-40df-41eb-9566-82a5b721a801>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00215.warc.gz"}
Calendar Field Options
Calendar Options
• Calendar Picker
□ Clicking in the date field's textbox will open up the calendar picker.
This short video gives instructions on how to use the calendar picker.
□ The date can be manually entered in the textbox in the below formats:
Please type dates in the month/day/year format. Other formats could be interpreted, but our code will have to guess, and the resulting date might be converted incorrectly.
☆ MM/DD/YYYY
☆ MM/DD/YY
☆ M/DD/YYYY
☆ M/DD/YY
☆ MM/D/YYYY
☆ MM/D/YY
☆ M/D/YYYY
☆ M/D/YY
□ Text interpretations of dates - Text interpretations of dates are very strict and should remain inside the parameters mentioned below; for example, "two weeks" will not be converted but "2 weeks" will.
☆ # days|weeks|months|years, e.g.: “2 days”, “3 weeks”, “6 months”, “2 years”.
☆ this|next <weekday>|week|month|year, e.g.: “this wednesday”, “next monday”, “next week”, “next month”, “next year”.
☆ first|second|third|last|previous <weekday> of <month> <year>, e.g.: “first tuesday of june”, “last friday of may”, “second wednesday of july”, “third friday of august 2023”.
NOTE: A text interpretation of a date can be converted, but be mindful of the calendar context, as it might be converted to a date in the past. For example, the text "first Monday of March" will be converted to 03/07/2022 instead of 03/06/2023, because, at the time of writing, the calendar year is 2022 and our logic will convert the text to a date in that year. To solve this, try specifying the future year, e.g. "first Monday of March 2023".
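A sketch of how strict month/day/year parsing for the numeric patterns might work. This is a hypothetical helper, not Smartsheet's actual parser; note that Python's `%y` maps two-digit years into 1969-2068:

```python
from datetime import date, datetime

# Two strptime patterns cover all eight listed numeric formats,
# because %m and %d each accept one or two digits.
FORMATS = ["%m/%d/%Y", "%m/%d/%y"]

def parse_date(text):
    for fmt in FORMATS:
        try:
            return datetime.strptime(text.strip(), fmt).date()
        except ValueError:
            continue
    raise ValueError(f"could not interpret {text!r} as month/day/year")
```

For example, "3/5/23" and "03/05/2023" both parse to March 5, 2023, while an ISO-style "2023-03-05" is rejected, mirroring the month/day/year-only rule above.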
| {"url":"https://support.seedsiep.com/hc/en-us/articles/10933012762647-Calendar-Field-Options","timestamp":"2024-11-09T14:16:48Z","content_type":"text/html","content_length":"30469","record_id":"<urn:uuid:40072786-2cf9-4a66-8a85-328a5dd63be8>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00812.warc.gz"}
Can You Pass This Mental Calculation Math Quiz?
What is the next number in this sequence: 2, 4, 8, 16, 32 ...
Each number in this sequence is simply doubled, so the next number is double 32, which is 64. This number sequence goes on forever, and each number will always be even, since doubling a whole number always gives an even number.
A direct flight from NYC to LA is 2,500 miles. If a plane travels 500 miles per hour, how long does the flight take?
A plane that flies 500 miles per hour will need to travel for five hours to cover 2,500 miles. Most direct LA to NYC flights will take a bit more than five hours once you account for boarding and
landing times.
If an hour-long TV show has 16 minutes of commercials, what is the actual duration of the show?
If you're watching a 60-minute program on cable television, it will likely contain about 16 minutes of commercials, meaning the actual program is only 44 minutes long. If you're watching premium TV
or a streaming service, you might bypass commercials altogether.
There are only 22 chocolate bars in the 36-pack you bought yesterday. How many chocolate bars are missing?
This is simple subtraction. To answer the question you just have to solve 36 - 22, which equals 14. Regardless of size, 14 candy bars is a lot of candy bars and isn't a recommended amount of candy to
eat in one sitting.
If your cellphone battery burns 7% of power per hour, how long will a 35% battery level last?
If your cellphone battery burns 7% per hour, and you have 35% battery life left, you just need to divide 35 by 7 to figure out how long until your battery dies. In this situation, 35/7 = 5, so a 35%
battery life will last five hours.
An airplane has five columns of 30 seats in one cabin and 10 rows of three seats in another cabin. How many seats are in total?
There are 30 seats in each column and there are five columns, so that means you have (30 x 5) 150 seats in one cabin. There are three seats in each row in the other cabin and 10 rows total, so that
means there is (10 x 3) 30 seats in the other cabin. 150 + 30 = 180 total seats.
If you make $10 an hour and you work three days a week for four hours each day, how much money will you make after 10 weeks?
If you work three days a week at four hours per day, you log 12 hours of work per week. If you make $10 per hour, you can multiply 10 by 12 to equal 120. If you make $120 a week, you will make $1,200
in 10 weeks ($120 x 10).
If you make $60,000 per year and 25% goes to taxes, how much do you pay in taxes each year?
The easiest way to find any percent of any number is to first find 10% of the number. To find 25% of 60,000, just find 10%, which is 6,000. To find 20%, just double 10% (6,000 x 2 = 12,000). To find
5%, just split 10% in half (6,000/2 = 3,000). Then add the results together (12,000 + 3,000 = 15,000).
Monday started with 275 books in the library. A total of 125 were checked out and 75 were returned. How many books were in the library at the end of Monday?
This problem is simple addition and subtraction. If you start with 275 and subtract the 125 checked-out books, you're left with 150. And if the 75 returned books are added to 150, you're left with 225.
How do you write 4:20 p.m. in military time?
In order to convert the time to military time, you just have to add 12 (noon) to it. So 4:20 p.m. is 12 + 4:20, which equals 16:20. What Americans refer to as military time is known around the world as
the 24-hour clock and it makes a.m. and p.m. distinctions unnecessary.
Andrea shot four three-pointers and 12 two-pointers during the game. What percentage of her shots were three-pointers?
Andrea shot 16 shots in total during the basketball game and four of them were three-pointers. Half of 16 is eight, and half of eight is four, which is 25% of 16. The easiest way to mentally
calculate percentages is by finding 10% or 50% of the number and using it as a starting point.
If a box is filled with 10 rows of 10 candy bars that are stacked 10 bars high, how many candy bars are in the box?
The formula for volume is length x width x height, so for this equation all you have to do is solve 10 x 10 x 10, which equals 1,000. If a box can hold 10 rows of 10 candy bars stacked 10 bars high,
it can hold 1,000 candy bars.
If you're 6'2'' and your arm extends two feet above your head, how high do you have to jump to dunk a 10-foot basketball hoop?
If you stand 6'2'' and your arm extends two feet above your head, that means you can reach something that is eight feet and two inches high. A standard basketball hoop is 10 feet, so if you want to
reach the rim, you'd have to jump one foot and 10 inches, or 22 inches.
One out of every four people at the Halloween party was dressed as Dracula. There were 200 people at the party. How many people were dressed as Dracula?
All you have to do to figure this problem out is divide 200 by four to equal 50. If there are 50 people dressed as Dracula at the 200-person party, then one out of every four people is dressed as
If you sleep eight hours, work eight hours and eat for three hours every day, how much free time do you have left?
Eight plus eight is 16 and plus three is 19. There are 24 hours in a day, so 24 - 19 = 5. If you follow this routine, you only have five hours of free time per day. If you sleep or work more than
eight hours per day, your free time shrinks even more.
What is the next number in this sequence: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55...
This is the classic Fibonacci sequence, which is probably the most important number sequence in the history of number sequences, and each number is the sum of the previous two numbers. The next
number is 55 + 34 = 89.
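That rule — each term is the sum of the previous two — is easy to check in code (a throwaway sketch):

```python
def fibonacci(n):
    # Return the first n Fibonacci numbers; each term is the sum
    # of the previous two, starting from 0 and 1.
    seq = [0, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]
```

The twelfth term comes out to 89, confirming the answer above.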
How fast do you need to drive if you want to travel 50 miles in 30 minutes?
If you need to travel 50 miles in 30 minutes, you're likely not going to make it because you would have to drive 100 mph the entire way. At 100 mph you will travel 100 miles in an hour, and 50 miles
in a half hour.
How many different letters are in the word "Mississippi"?
Mississippi is one of the longest state names at 11 letters long, but it only has four different letters: m, i, s and p. The capital of Mississippi is Jackson and is named for the seventh President
of the United States, Andrew Jackson.
America declared its independence in 1776. When is America's 250th birthday?
In 1876, the United States held nationwide centennial celebrations. In 1976, the nation held bicentennial celebrations. In 2026, the country will hold semiquincentennial celebrations to observe the
nation's 250th anniversary.
If you walk at a pace of three miles per hour, how long will it take you to walk 10 miles?
If you walk three miles per hour, it will take you three hours to walk nine miles. And since you are walking at a pace of one mile every 20 minutes, you will walk 10 miles in three hours and 20
You brought $7.75 to the corner store and bought a soda for $1.25, a bag of chips for $0.50 and three pieces of candy at $0.10 each. How much money do you have left?
Nobody likes adding and subtracting decimals but it's a necessary function of life. You started with $7.75 then subtracted $1.25 to equal $6.50. Then a bag of chips costs $0.50 and drops you to
$6.00, and three pieces of 10-cent candy costs $0.30 in total, leaving you with $5.70.
James made $250 after selling cookies for $0.25 each. How many cookies did he sell?
In this scenario, James has to sell four cookies to make $1. He then has to do that 250 times in order to make $250. So if four cookies make a dollar, you can use 250 x 4 = 1,000. James sold 1,000
cookies at $0.25 a piece to make $250.
How many feet are in a meter?
A meter is 3.28 feet and is a common unit of measure around the world. It is more accurately equivalent to 100 centimeters, which is 39.37 inches. In order to be crowned the fastest person in the
world, you have to win Olympic gold in the 100-meter dash.
Every single baseball game last season, Mike got exactly one more hit than Jake. Which of the following equations can you use to find Mike's hits per game?
If Mike got exactly one more hit than Jake in every baseball game, you can accurately calculate the amount of hits he got per game by knowing how many hits Jake got and using the formula: Jake's hits
+ 1 = Mike's hits.
Three pizzas were ordered for the pizza party. Each pizza had eight slices. Seven people ate two slices each, and one person ate five slices. How many slices were left over?
If there are three pizzas with eight slices each, there are 24 slices of pizza in total. Seven people each ate two slices, which equals 14 slices in total, and one person ate five slices to bring the total to 19 slices. That leaves five uneaten slices of pizza.
If you begin taking a test at 2:32 p.m. and finish at 3:17 p.m., how long did the test take?
It's not fun to add and subtract time values, and everything is extra complicated because there are only 60 minutes in an hour instead of 100, but it's a skill everyone needs to know. If you start at 2:32 p.m., it takes 28 minutes to get to 3:00 p.m., and then 28 + 17 = 45.
What is the next number in this sequence: 1, 3, 7, 15, 31...
The numbers in this sequence follow the formula: (n x 2) + 1. So (1 x 2) + 1 = 3, (3 x 2) + 1 = 7, (7 x 2) + 1 = 15 and (15 x 2) + 1 = 31. The next number in the sequence is (31 x 2) + 1 = 63. There
is no easy trick to figuring out all number sequences since each one is so different.
What is the next number in this sequence: 2, 6, 14, 30, 62 ...
This sequence of numbers follows the following formula: (n + 1) x 2. So (2 + 1) x 2 = 6, (6 + 1) x 2 = 14 and (14 + 1) x 2 = 30. The next number in this sequence is (62 + 1) x 2 = 126. It would then
go to 254 and 510.
If you deposit $10,000 into a bank account that gains 2% interest per year. How much money in interest will you accrue after 5 years?
A normal savings account isn't the best way to multiply your money. If you put $10,000 in an account that gains 2% interest, you will gain $200 in interest each year. You will net $1,000 in interest
over the course of five years.
A jar of almond butter costs $6. If you buy 2 and get 1 free, how much does each jar cost?
Two jars of almond butter cost $12, and if you get a third one free, it equals three jars of almond butter for $12. Twelve divided by three equals four. With this discount, each jar of almond butter
will technically cost $4.
If you post 10 Instagram pictures and five of them receive 100 likes and five of them receive 50 likes, what is the average amount of likes each post receives?
To find the average you have to add up all the likes (100 x 5 = 500) + (50 x 5 = 250) = 750 likes, and then divide them by the number of Instagram posts, which is 10 (750/10 = 75). Each Instagram
post receives 75 likes on average.
If you have 180 tickets and a drink costs seven tickets, how many drinks can you get?
The number 180 isn't evenly divisible by seven, but you can use 7 x 20 to get to 140. Then you know 7 x 5 = 35, and 35 + 140 = 175. Twenty-five drinks will cost you 175 tickets, and you will have
five tickets left over, which isn't enough to get a 26th drink.
The formula to roughly convert Fahrenheit to Celsius is (F - 30)/2. What does 82 degrees Fahrenheit convert to in Celsius?
If you want to convert 82 degrees Fahrenheit to Celsius, you first subtract 30 (82 - 30 = 52) and then divide by two (52/2 = 26). The exact conversion is 27.8 degrees, and if you want the exact conversion, use 32 and 1.8 in place of 30 and 2.
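Both the rough mental rule and the exact formula, written out as one-liners (the function names are mine):

```python
def f_to_c_rough(f):
    # Quick mental approximation from the quiz: (F - 30) / 2
    return (f - 30) / 2

def f_to_c_exact(f):
    # Exact Fahrenheit-to-Celsius conversion: (F - 32) / 1.8
    return (f - 32) / 1.8
```

For 82 degrees Fahrenheit, the rough rule gives 26 and the exact formula gives about 27.8, so the shortcut lands within a couple of degrees.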
How many miles are in a 5K race?
One mile equates to about 1.6 kilometers, and a 5K race equates to 3.1 miles. A kilometer is 1,000 meters, and a meter is itself 100 centimeters. The meter runs the metric system and it might be the most popular unit of measure in the world.
What does 25,000 x 400,000 equal?
Whenever you're multiplying numbers that end in zeros, all you have to do is multiply the initial numbers and then place all the zeroes onto the end of the result. For this example just multiply 25 x
4 (100) and then add eight zeroes (10,000,000,000). The answer is 10 billion. | {"url":"https://lahore.zoo.com/quiz/can-you-pass-this-mental-calculation-math-quiz?rmalg=es&remorapos=8&remorasrc=f07879b0def0445a93007f24eb1df43f&remoraregion=bottom","timestamp":"2024-11-14T08:10:44Z","content_type":"text/html","content_length":"268736","record_id":"<urn:uuid:23ffcf99-47fb-4a2f-908d-13ea69012132>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00658.warc.gz"} |
2.6 Working With Formulas
Chapter 2: Linear Equations
2.6 Working With Formulas
In algebra, expressions often need to be simplified to make them easier to use. There are three basic forms of simplifying, which will be reviewed here. The first form of simplifying expressions is
used when the value of each variable in an expression is known. In this case, each variable can be replaced with the equivalent number, and the rest of the expression can be simplified using the
order of operations.
Evaluate [latex]p(q + 6)[/latex] when [latex]p = 3[/latex] and [latex]q = 5.[/latex]
[latex]\begin{array}{rl} (3)((5)+(6))&\text{Replace }p\text{ with 3 and }q\text{ with 5 and evaluate parentheses} \\ (3)(11)&\text{Multiply} \\ 33&\text{Solution} \end{array}[/latex]
Whenever a variable is replaced with something, the new number is written inside a set of parentheses. Notice the values of 3 and 5 in the previous example are in parentheses. This is to preserve
operations that are sometimes lost in a simple substitution. Sometimes, the parentheses won’t make a difference, but it is a good habit to always use them to prevent problems later.
Evaluate [latex]x + zx(3 - z)\left(\dfrac{x}{3}\right)[/latex] when [latex]x = -6[/latex] and [latex]z = -2.[/latex]
[latex]\begin{array}{rl} (-6)+(-2)(-6)\left[(3)-(-2)\right]\left(\dfrac{-6}{3}\right)&\text{Evaluate parentheses} \\ \\ -6+(-2)(-6)(5)(-2)&\text{Multiply left to right} \\ -6+12(5)(-2)&\text{Multiply
left to right} \\ -6+60(-2) &\text{Multiply} \\ -6-120 & \text{Subtract} \\ -126& \text{Solution}\end{array}[/latex]
Isolating variables in formulas is similar to solving general linear equations. The only difference is, with a formula, there will be several variables in the problem, and the goal is to solve for
one specific variable. For example, consider solving a formula such as [latex]A = \pi r^2+ \pi rs[/latex] (the formula for the surface area of a right circular cone) for the variable [latex]s.[/latex] This means isolating the [latex]s[/latex] so the equation has [latex]s[/latex] on one side. So a solution might look like [latex]s = \dfrac{A - \pi r^2}{\pi r}.[/latex] This second equation gives the same information as the first; they are algebraically equivalent. However, one is solved for the area [latex]A,[/latex] while the other is solved for the slant height of the cone [latex]s.[/latex]
When solving a formula for a variable, focus on the one variable that is being solved for; all the others are treated just like numbers. This is shown in the following example. Two parallel problems
are shown: the first is a normal one-step equation, and the second is a formula that you are solving for [latex]x.[/latex]
Isolate the variable [latex]x[/latex] in the following equations.
[latex]\begin{array}{ll} \begin{array}{rrl} 3x&=&12 \\ \\ \dfrac{3x}{3}&=&\dfrac{12}{3} \\ \\ x&=&4 \end{array} & \hspace{0.5in} \begin{array}{rrl} wx&=&z \\ \\ \dfrac{wx}{w}&=&\dfrac{z}{w} \\ \\ x&=
&\dfrac{z}{w} \end{array} \end{array}[/latex]
The same process is used to isolate [latex]x[/latex] in [latex]3x = 12[/latex] as in [latex]wx = z.[/latex] Because [latex]x[/latex] is being solved for, treat all other variables as numbers. For
these two equations, both sides were divided by 3 and [latex]w,[/latex] respectively. A similar idea is seen in the following example.
Isolate the variable [latex]n[/latex] in the equation [latex]m+n=p.[/latex]
To isolate [latex]n,[/latex] the variable [latex]m[/latex] must be removed, which is done by subtracting [latex]m[/latex] from both sides:
[latex]\begin{array}{rrrrl} m&+&n&=&p \\ -m&&&&\phantom{p}-m \\ \hline &&n&=&p-m \end{array}[/latex]
Since [latex]p[/latex] and [latex]m[/latex] are not like terms, they cannot be combined. For this reason, leave the expression as [latex]p - m.[/latex]
Isolate the variable [latex]a[/latex] in the equation [latex]a(x - y) = b.[/latex]
This means that [latex](x-y)[/latex] must be removed from the variable [latex]a,[/latex] which is done by dividing both sides by [latex](x-y)[/latex] as a single unit:

[latex]a=\dfrac{b}{x-y}[/latex]
If no individual term inside parentheses is being solved for, keep the terms inside them together and divide by them as a unit. However, if an individual term inside parentheses is being solved for,
it is necessary to distribute. The following example is the same formula as in Example 2.6.5, but this time, [latex]x[/latex] is being solved for.
Isolate the variable [latex]x[/latex] in the equation [latex]a(x - y) = b.[/latex]
First, distribute [latex]a[/latex] throughout [latex](x-y)[/latex]:
[latex]\begin{array}{rrrrr} a(x&-&y)&=&b \\ ax&-&ay&=&b \end{array}[/latex]
Remove the term [latex]ay[/latex] from both sides:
[latex]\begin{array}{rrrrl} ax&-&ay&=&b \\ &+&ay&&\phantom{b}+ay \\ \hline &&ax&=&b+ay \end{array}[/latex]
[latex]ax[/latex] is then divided by [latex]a[/latex]:

[latex]\dfrac{ax}{a}=\dfrac{b+ay}{a}[/latex]

The solution is [latex]x=\dfrac{b+ay}{a},[/latex] which can also be shown as [latex]x=\dfrac{b}{a}+y.[/latex]
Be very careful when isolating [latex]x[/latex] not to try and cancel the [latex]a[/latex] on the top and the bottom of the fraction. This is not allowed if there is any adding or subtracting in the
fraction. There is no reducing possible in this problem, so the final reduced answer remains [latex]x = \dfrac{b + ay}{a}.[/latex] The next example is another two-step problem.
Isolate the variable [latex]m[/latex] in the equation [latex]y=mx+b.[/latex]
First, subtract [latex]b[/latex] from both sides:
[latex]\begin{array}{lrrrr} y&=&mx&+&b \\ \phantom{y}-b&&&-&b \\ \hline y-b&=&mx&& \end{array}[/latex]
Now divide both sides by [latex]x[/latex]:

[latex]\dfrac{y-b}{x}=\dfrac{mx}{x}[/latex]

Therefore, the solution is [latex]m=\dfrac{y-b}{x}.[/latex]
It is important to note that a problem is complete when the variable being solved for is isolated or alone on one side of the equation and it does not appear anywhere on the other side of the equation.
The next example is also a two-step equation. It is a problem from earlier in the lesson.
Isolate the variable [latex]s[/latex] in the equation [latex]A= \pi r^2+\pi rs.[/latex]
Subtract [latex]\pi r^2[/latex] from both sides:
[latex]\begin{array}{rrrrr} A\phantom{- \pi r^2}&=&\pi r^2&+&\pi rs \\ \phantom{A}-\pi r^2&&-\pi r^2&& \\ \hline A- \pi r^2&=&\pi rs&& \end{array}[/latex]
Divide both sides by [latex]\pi r[/latex]:
[latex]\dfrac{A-\pi r^2}{\pi r}=\dfrac{\pi rs}{\pi r}[/latex]
The solution is:
[latex]s=\dfrac{A-\pi r^2}{\pi r}[/latex]
Formulas often have fractions in them and can be solved in much the same way as any fraction. First, identify the LCD, and then multiply each term by the LCD. After reducing, there will be no more
fractions in the problem.
Isolate the variable [latex]m[/latex] in the equation [latex]h=\dfrac{2m}{n}.[/latex]
To clear the fraction, multiply both sides by [latex]n[/latex]:

[latex]n\cdot h=\dfrac{2m}{n}\cdot n[/latex]

This leaves:

[latex]hn=2m[/latex]

Divide both sides by 2:

[latex]\dfrac{hn}{2}=\dfrac{2m}{2}[/latex]

Which reduces to:

[latex]m=\dfrac{hn}{2}[/latex]
Isolate the variable [latex]b[/latex] in the equation [latex]A=\dfrac{a}{2-b}.[/latex]
To clear the fraction, multiply both sides by [latex](2-b)[/latex]:

[latex]A(2-b)=\dfrac{a}{2-b}(2-b)[/latex]

Which reduces to:

[latex]A(2-b)=a[/latex]
Distribute [latex]A[/latex] throughout [latex](2-b),[/latex] then isolate:
[latex]\begin{array}{rrrrl} 2A&-&Ab&=&a \\ -2A&&&&\phantom{a}-2A \\ \hline &&-Ab&=&a-2A \end{array}[/latex]
Finally, divide both sides by [latex]-A[/latex]:
[latex]b=\dfrac{a-2A}{-A}\text{ or }b=\dfrac{2A-a}{A}[/latex]
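A quick numerical spot-check of the rearrangements above: substituting arbitrary test values (ours, chosen only for illustration) into each isolated form should reproduce the original equation.

```python
import math

# Arbitrary positive test values (not from the text).
A, r = 50.0, 2.0
s = (A - math.pi * r**2) / (math.pi * r)      # from A = pi r^2 + pi r s
assert math.isclose(math.pi * r**2 + math.pi * r * s, A)

y, x, b = 7.0, 3.0, 1.0
m = (y - b) / x                               # from y = m x + b
assert math.isclose(m * x + b, y)

h, n = 4.0, 5.0
m = h * n / 2                                 # from h = 2m / n
assert math.isclose(2 * m / n, h)

a, A2 = 6.0, 2.0
b = (2 * A2 - a) / A2                         # from A = a / (2 - b)
assert math.isclose(a / (2 - b), A2)

print("all rearranged formulas reproduce the originals")
```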
For questions 1 to 10, evaluate each expression using the values given.
1. [latex]p + 1 + q - m\text{ (}m = 1, p = 3, q = 4)[/latex]
2. [latex]y^2+y-z\text{ (}y=5, z=1)[/latex]
3. [latex]p- \left[pq \div 6\right]\text{ (}p=6, q=5)[/latex]
4. [latex]\left[6+z-y\right]\div 3\text{ (}y=1, z=4)[/latex]
5. [latex]c^2-(a-1)\text{ (}a=3, c=5)[/latex]
6. [latex]x+6z-4y\text{ (}x=6, y=4, z=4)[/latex]
7. [latex]5j+kh\div 2\text{ (}h=5, j=4, k=2)[/latex]
8. [latex]5(b+a)+1+c\text{ (}a=2, b=6, c=5)[/latex]
9. [latex]\left[4-(p-m)\right]\div 2+q\text{ (}m=4, p=6, q=6)[/latex]
10. [latex]z+x-(1^2)^3\text{ (}x=5, z=4)[/latex]
For questions 11 to 34, isolate the indicated variable from the equation.
11. [latex]b\text{ in }ab=c[/latex]
12. [latex]h\text{ in }g=\dfrac{h}{i}[/latex]
13. [latex]x\text{ in }\left(\dfrac{f}{g}\right)x=b[/latex]
14. [latex]y\text{ in }p=\dfrac{3y}{q}[/latex]
15. [latex]x\text{ in }3x=\dfrac{a}{b}[/latex]
16. [latex]y\text{ in }\dfrac{ym}{b}=\dfrac{c}{d}[/latex]
17. [latex]\pi\text{ in }V=\dfrac{4}{3}\pi r^3[/latex]
18. [latex]m\text{ in }E=mv^2[/latex]
19. [latex]y\text{ in }c=\dfrac{4y}{m+n}[/latex]
20. [latex]r\text{ in }\dfrac{rs}{a-3}=k[/latex]
21. [latex]D\text{ in }V=\dfrac{\pi Dn}{12}[/latex]
22. [latex]R\text{ in }F=k(R-L)[/latex]
23. [latex]c\text{ in }P=n(p-c)[/latex]
24. [latex]L\text{ in }S=L+2B[/latex]
25. [latex]D\text{ in }T=\dfrac{D-d}{L}[/latex]
26. [latex]E_a\text{ in }I=\dfrac{E_a-E_q}{R}[/latex]
27. [latex]L_o\text{ in }L=L_o(1+at)[/latex]
28. [latex]m\text{ in }2m+p=4m+q[/latex]
29. [latex]k\text{ in }\dfrac{k-m}{r}=q[/latex]
30. [latex]T\text{ in }R=aT+b[/latex]
31. [latex]Q_2\text{ in }Q_1=P(Q_2-Q_1)[/latex]
32. [latex]r_1\text{ in }L=\pi(r_1+r_2)+2d[/latex]
33. [latex]T_1\text{ in }R=\dfrac{kA(T+T_1)}{d}[/latex]
34. [latex]V_2\text{ in }P=\dfrac{V_1(V_2-V_1)}{g}[/latex]
Displaying data
Tips for teaching maths skills to our future chemists, by Paul Yates of Keele University. In this issue: displaying data
Information is redundant if not shared, so being able to communicate numerical data is important. Additionally students need to be able to express their experimental data in an appropriate way for
assessment purposes, and so they must be able to organise data in a meaningful way.^1 Generally, data are displayed using tabular or graphical techniques, or both.
What difficulties do students have?
I find that students are better at interpreting, rather than constructing, graphs.^2 Many are reluctant to tabulate data, preferring lengthy repetitive text which is occasionally punctuated by the
quantities they want to report. When it comes to drawing graphs, some students have problems choosing the right scale and the origin, and units pose problems whether they are displaying their data in
a table or on a graph.
How is this best taught?
Research distinguishes between qualitative (interpreting trends) and quantitative (reading values) activities, and stresses the importance of encouraging students to construct graphs.^2
Table or graph?
A table may be the final form in which data are published, or it may be a preliminary stage in the construction of a graph.^3 A table has the advantage that a required number of significant figures
may be quoted, whereas in a graph this is limited by the size of the graph and the range of the data. On the other hand, data in a graph can be interpolated and extrapolated rapidly, if accuracy is
not an immediate requirement. Trends can be detected in graphs that would be unlikely to be detected if the data were given in tabular form only.^4
Units in graphs and tables
Most physical chemists agree that the figures in the body of a table, or plotted on a graph, should be pure numbers, and that the units need to be stated on table headings or axis labels. This
approach also has the advantage that we can unambiguously include powers of 10. In the worked example we have a table heading of [NH3]/10^-7 mol dm^-3, and the third entry in this column is 5.84.
If we equate these we have

[NH3]/10^-7 mol dm^-3 = 5.84

and multiplying both sides by 10^-7 mol dm^-3 gives

([NH3]/10^-7 mol dm^-3) × 10^-7 mol dm^-3 = 5.84 × 10^-7 mol dm^-3

[NH3] = 5.84 × 10^-7 mol dm^-3

A similar argument applies when reading data from graphs. Alternatively, you could use the column heading [NH3](× 10^-7 mol dm^-3).^4 The question immediately arises as to whether the numbers in the table have been multiplied by 10^-7, or need to be multiplied by 10^-7. The previous approach, I believe, removes this ambiguity.
Guidelines for tables
Reference 5 contains some useful guidelines for presenting data in tables. For example:
• use headings at the top with data in columns, as opposed to side headings with data in rows;
• tabulate in ascending or descending order of the independent variable. For example, when measuring concentration as a function of time, time is the independent variable since the time at which
readings are taken is chosen by the experimenter;
• vertically align decimal points in each column. This advice can be extended to include values without decimal points, where powers of 10 should be vertically aligned, as in the worked example;
• box in all headings and tabulations by ruled horizontal and/or vertical lines.
Further suggestions include rounding all figures to two significant figures and including row and column averages.^5 However, it may not be wise to share this advice with students as general rules
because they might be tempted to apply them without exception. Figures to be compared need to be close, while at the same time incorporating gaps to guide the eye across the table. This applies to
tables with large amounts of data, so might be useful for extended student projects.
Guidelines for graphs
References 1 and 6 give some useful advice for presenting data on graphs, including:
• the title should describe the relationship being investigated;
• plot the independent variable on the x -axis and the dependent variable (ie not controlled by the experimenter) on the y -axis;
• label the axes with names of quantities being measured and their units;
• choose scales so that the page is filled;
• only include the origin if specifically needed. Usually this would be if the range of data included or went close to the origin. Intercepts of straight lines can be calculated much more
accurately than they can be read from a graph;
• choose simple scales: factors of two, five and 10 are much easier than three or seven. The latter scales make points very difficult to plot and are far more likely to lead to mistakes;
• plot data points using an appropriate symbol. Some authors^4 suggest that the symbol representing the data point should be too large rather than too small, but others^6 suggest × or ⊙, which I
think are preferable because they are visible without obscuring the actual position of the point.^6 Others suggest that different symbols should be used to represent data collected under
different conditions;^5
• join the points with the smoothest curve possible so that half the points lie above and half below the line. The term 'curve' is used here in its most general sense; this will be a straight line
in certain cases. No departure from a smooth curve should be accepted unless there are several neighbouring points supporting this course of action.
Final thoughts
Opinion is divided on whether graphs should be plotted as an experiment proceeds, or after the experimental work has been done. The advantage of the former is that any unusual points can immediately
be investigated, whereas in the latter the experimenter will know the range of the values to be plotted. A compromise is to plot the graph at the end of the experiment, but keep the apparatus
available to allow any dubious measurements to be repeated.
There are certain pitfalls when graphs are generated automatically using a computer. When judging the appearance of the final graph the considerations already outlined still apply, and appropriate
user intervention may be required to achieve this. In particular, axis labels, including units with subscripts, superscripts, or Greek letters may need to be generated by a word processor with 'cut
and paste' used as appropriate.
Worked example
These data^7 relate to the thermal decomposition of ammonia at 2000 K:
NH3(g) → NH2(g) + ½H2(g)
Tabular representation
Thermal decomposition of ammonia
t/h    [NH3]/10^-7 mol dm^-3
  0    8.00
 25    6.75
 50    5.84
 75    5.15
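The pure-numbers convention can be sketched in Python: the tabulated entries are plain numbers, the unit factor lives in the heading, and reading a value back out means multiplying by that factor. (The formatting choices here are ours, purely illustrative.)

```python
rows = [(0, 8.00), (25, 6.75), (50, 5.84), (75, 5.15)]

# Heading carries the units and the power of ten; body entries are pure numbers.
header = f"{'t/h':>4}  {'[NH3]/10^-7 mol dm^-3':>22}"
body = [f"{t:>4}  {c:>22.2f}" for t, c in rows]
table = "\n".join([header] + body)
print(table)

# Reading the third entry back out of the table:
conc = 5.84 * 1e-7   # mol dm^-3
```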
Graphical representation
This was produced using the program Microsoft Excel. The range of values on the y -axis starts at 5.0 (rather than zero) and all numbers on the axis are given to one decimal place. Axis labels and
the title were added using text boxes in Microsoft Word and pasting from the table. (Note that the symbols recommended in the main text are not available, but choosing a suitable size for the symbol
does make them sufficiently visible.)
1. E. A. Steele, K. A. Kelsey and J. Morita, Environ. and Ecolog. Stat., 2004, 11, 21.
2. H. H. Tairab and A. K. K. Al-Naqbi, J. Bio. Educ., 2004, 38, 127.
3. P. D. Lark, B. R. Craven and R. C. L. Bosworth, The handling of chemical data. Oxford: Pergamon, 1968.
4. L. Kirkup, Experimental methods. Brisbane: Wiley, 1994.
5. A. S. C. Ehrenberg, J. Roy. Stat. Soc., Series A (General), 1977, 140, 277.
6. M. Pentz and M. Shott, Handling experimental data. Milton Keynes: Open University, 1988.
7. J. C. Kotz and P. Treichel, Chemistry & chemical reactivity, 3rd edn. Fort Worth: Saunders College, 1996.
Length - Units, Conversion Table, Measurements and FAQs
Length can be defined as the measurement or extent of something from end to end. In other words, it is the larger of two dimensions, or the greatest of three dimensions, of an object. For example, a rectangle has two dimensions: its length and its breadth. In the International System of Quantities, length is defined as any quantity with the dimension of distance.
What is the Unit of Length?
As defined in the previous section, length is the measure of extent. The base unit for length in the International System of Units (SI) is the meter, abbreviated m. Length is expressed using suitable units — for example, the length of a table is 2 meters or 200 cm, the length of a string is 15 meters, and so on. Units of measurement thus express the given quantity in numerical form.
Units of Length Conversion
In the metric system, length or distance is expressed in terms of kilometers (km), meters (m), decimeter (dm), centimeters (cm), millimeters (mm). It is possible to convert units from km to m or from
m to cm or from cm to mm and so on.
For measuring large lengths the unit kilometer is used as the unit of length. The relationship between different units of length is given below:
Units for the Measurement of Length
10 millimeters (mm) --- 1 centimeter (cm)
10 centimeters (cm) --- 1 decimeter (dm)
10 decimeters (dm) --- 1 meter (m)
10 meters (m) --- 1 dekameter (dam)
10 dekameters (dam) --- 1 hectometer (hm)
10 hectometers (hm) --- 1 kilometer (km)
The conversion of units from one unit to another is essential when solving many problems. Below are a few basic conversions which will help:

Length Conversion Table

1 kilometer (km) = 1000 meters (m)
1 meter (m) = 100 centimeters (cm)
1 centimeter (cm) = 10 millimeters (mm)
Metric System and Customary System
With its consistent powers of ten, the metric system is quite logical compared to the customary system, and converting units within the metric system is much simpler than converting them within the customary system.

The United States is among the last remaining nations yet to adopt the metric system. However, it is easy to convert between metric and customary units by using the conversions given below.
1 meter/metre (m) = 39.4 inches = 1.09 yards;
1 yard = 0.91 m
1 centimeter (cm) = 0.39 inches;
1 inch = 2.54 cm
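These conversions can be wrapped in small helpers; since 1 inch is defined as exactly 2.54 cm, the other figures above follow by rounding (the function names are ours):

```python
CM_PER_INCH = 2.54  # exact by definition

def inches_to_cm(inches: float) -> float:
    return inches * CM_PER_INCH

def metres_to_inches(metres: float) -> float:
    return metres * 100.0 / CM_PER_INCH

print(round(metres_to_inches(1), 1))        # 39.4 inches in a metre
print(round(metres_to_inches(1) / 36, 2))   # 1.09 yards (36 inches per yard)
```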
What is the Difference Between Length and Height?
Length and height are easy to confuse: length is the horizontal measurement of an object, while height is the vertical one. The FAQs below expand on this difference.
Measurements of Length
Measurements of length and distance are done in various ways. Did you know that the average human body has long been used as a means to measure? The foot, for example, is around 25–30 cm, and this particular unit of measurement is still in use today. Units like yards and inches also remain in use, but they are not the standard units of length measurement.
What Is Distance?
The distance can be defined as the product of speed and time, and it can be represented as follows:

d = s × t

• d is equal to the distance travelled in m
• t is equal to the time taken to cover the distance in s
• s is equal to the speed in m/s (metres per second)
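The relation d = s × t in code, with the units as stated above (speed in m/s, time in s, distance in m):

```python
def distance(speed_m_per_s: float, time_s: float) -> float:
    """Distance d = s * t, in metres."""
    return speed_m_per_s * time_s

# A runner at 3 m/s for 10 s covers:
print(distance(3.0, 10.0))  # 30.0 metres
```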
Metric Units of Distance
The most commonly used units of measurement of length are as follows:
• Millimetre
• Centimetre
• Metre
• Kilometre
FAQs on Length
Question 1: Is Length Always the Longest Dimension?
Answer: Not necessarily, but if you choose to use the word length, it should refer to the longest dimension of the rectangle. Think of how you would describe the distance along a road: it is the long dimension, that is, the length of the road.
Question 2: What is Length Width?
Answer: Width or breadth usually refers to a shorter dimension when the length is the longest one. Length is defined as the measure of one spatial dimension, whereas area is a measure of two
dimensions (length squared) and volume is a measure of three dimensions (length cubed).
Question 3: What Comes First Length or Height?
Answer: For example, when referring to blueprints or the size of a room, the dimensions are listed with width first and length second. Likewise, when measuring windows, the width comes first then the
height of the window. Conversely, when expressing the measurements of a painting on canvas, the height comes first then the width of the canvas.
Question 4: What's the Difference Between Length and Height?
Answer: The difference between length and height is very precise, as length denotes how long the shape is and height denotes how tall it is. Length is the horizontal measurement in a plane whereas
height is the vertical measurement.
SciPost Submission Page
Quantum phases of hardcore bosons with repulsive dipolar density-density interactions on two-dimensional lattices
by J. A. Koziol, G. Morigi, K. P. Schmidt
This is not the latest submitted version.
This Submission thread is now published as
Submission summary
Authors (as registered SciPost users): Jan Alexander Koziol · Giovanna Morigi · Kai Phillip Schmidt
Submission information
Preprint Link: https://arxiv.org/abs/2311.10632v1 (pdf)
Data repository: https://zenodo.org/records/10126774
Date submitted: 2023-11-20 14:09
Submitted by: Koziol, Jan Alexander
Submitted to: SciPost Physics
Ontological classification
Academic field: Physics
• Atomic, Molecular and Optical Physics - Theory
Specialties: • Condensed Matter Physics - Theory
• Condensed Matter Physics - Computational
Approaches: Theoretical, Computational
We analyse the ground-state quantum phase diagram of hardcore bosons interacting with repulsive dipolar potentials. The boson dynamics is described by the extended Bose-Hubbard Hamiltonian on a
two-dimensional lattice. The ground state results from the interplay between the lattice geometry and the long-range interactions, which we account for by means of a classical spin mean-field
approach limited by the size of the considered unit cells. This extended classical spin mean-field theory accounts for the long-range density-density interaction without truncation. We consider three
different lattice geometries: square, honeycomb, and triangular. In the limit of zero hopping the ground state is always a devil's staircase of solid (gapped) phases. Such crystalline phases with
broken translational symmetry are robust with respect to finite hopping amplitudes. At intermediate hopping amplitudes, these gapped phases melt, giving rise to various lattice supersolid phases,
which can have exotic features with multiple sublattice densities. At sufficiently large hoppings the ground state is a superfluid. The stability of phases predicted by our approach is gauged by
comparison to the known quantum phase diagrams of the Bose-Hubbard model with nearest-neighbour interactions as well as quantum Monte Carlo simulations for the dipolar case on the square lattice. Our
results are of immediate relevance for experimental realisations of self-organised crystalline ordering patterns in analogue quantum simulators, e.g., with ultracold dipolar atoms in an optical
Current status:
Has been resubmitted
Reports on this Submission
Report #2 by Anonymous (Referee 2) on 2024-2-26 (Invited Report)
• Cite as: Anonymous, Report on arXiv:2311.10632v1, delivered 2024-02-26, doi: 10.21468/SciPost.Report.8617
1) The manuscript is clearly written. A brief review of the method used is provided. All relevant physics is neatly summarized in the first sections of the manuscript.
2) Citations are carefully chosen and very relevant to the topic.
3) Results are for the most part clearly described.
4) The results of this work are interesting and relevant to current experiments with ultracold gases.
5) The manuscript provides novel in depth analysis of the strongly interacting regime (where the method presented in most accurate). A plethora of solid phases are presented which had not been
observed before with approximation-free methods.
1) On the one hand, significant emphasis is given to QUANTUM phases, such as supersolid phases, in the introductory part of the manuscript. On the other hand, most of the description of results is
dedicated to solid phases. Only supersolid phases at 1/2 (for bipartite) and 1/3,1/4 (for non-bipartite) are marked in the phase diagrams. While the 1/4 supersolid in triangular lattice is new, the
others were already known. It would be better to further describe novel supersolids such as the ones mentioned for square lattice (for honeycomb lattice limitations of the method are discussed). For
example, what are supersolids "with more complex sub lattice structure"? Maybe a picture of density patterns of this supersolid would help. If it is already there, then it is not clear that it does
refer to a supersolid phase. Moreover, the authors should comment about how their results compare with QMC study of hard-core purely repulsive dipolar bosons in triangular lattice (PRL 104, 125302
Overall, this is an interesting, well-written manuscript which presents results on many-body hamiltonians relevant to current experimental efforts in ultracold gases. It meets the criteria for
publication of SciPost and I therefore recommend it for publication once the requested changes are addressed.
Requested changes
1) Do authors have an explanation on why their method does not compare as well with approximation-free numerical methods in the case of non-bipartite triangular lattice?
2) Why is the method so much more accurate at the Heisenberg point?
3) Can the authors address the point mentioned in the Weaknesses Section of this report?
We thank the referee for the report. We have modified the text accordingly in order to address the referees remarks and criticisms.
Below we address the points raised by the referee.
i) Regarding the nearest-neighbour case: the literature states that quantum fluctuations have a larger impact on systems with geometrical frustration. The model on the triangular lattice is subject
to geometrical frustration, therefore a larger impact of quantum fluctuation on the phase diagram is expected. We have added a clarification of this point in Sec. 5.
ii) The classical approximation captures the transition at the Heisenberg point correctly since the transition is driven by the change in symmetry of the Hamiltonian from an easy-axis to a
rotationally invariant interaction. This change in symmetry is present in the classical spin picture, as well as, the full quantum mechanical problem for the same parameter value $t/V=1/2$. We have
added a clarification of this point in Sec. 5.
iii) We have strengthened the emphasis on the quantum phases in the results section. We realised that we used the misleading name "order" for "supersolid" phases in Figs. 5, 6, 10. We correspondingly
modified it in the figure captions and in the text. Further, we added text passages discussing the different supersolid phases for the square and the triangular lattice.
In Fig. 7, we have added real and momentum space depictions of the complex supersolid phases on the square lattice. We have added a picture how to understand the supersolid phases on the square
lattice in terms of defective checkerboard patterns.
Regarding the triangular lattice, we are grateful for the reference to the quantum Monte Carlo study PRL 104, 125302 (2010). We have added a discussion of the reference in comparison to our results
in Sec. 7. In the conclusion we highlighted the application of the discussed method to determine relevant observables and unit cells for further numerical studies.
Report #1 by Anonymous (Referee 1) on 2024-2-2 (Invited Report)
• Cite as: Anonymous, Report on arXiv:2311.10632v1, delivered 2024-02-02, doi: 10.21468/SciPost.Report.8487
1) A detailed mean-field analysis of dipolar bosons particularly in a triangular lattice where QMC data does not exist.
1) The paper seems to be a possible and somewhat expected application of the formalism developed in Ref 34 of the work (by the same authors).
2) The mean field phase diagrams of dipolar bosons has been worked out by several authors in the phase predicting existence of supersolid phase and
devil staircase structures. See for example Phys. Rev. A 83, 013627 (2011), New J. Phys. 17 123014 (2015) or Europhys. Lett. 87 36002 (2009). In fact there are many other similar results in the
I believe, based on the weaknesses mentioned, that the paper in its present form is suitable for publication in SciPost Physics Core. While the analysis is detailed and deserves publication in some form, I do not see sufficiently new results to allow consideration in SciPost Physics.
We thank the referee for the report. We have modified the text accordingly in order to address the referees remarks and criticisms.
We acknowledge that the referee recommends the publication of our work in SciPost Phys. Core. However, we disagree and believe that this assessment is mostly due to the presentation of our results,
which did not sufficiently clarified their novelty. For this purpose, we have revised introduction and the text in order to emphasize the results we obtained.
Regarding the relation to SciPost Phys. 14, 136 (2023): we now include quantum fluctuations and study the interplay of quantum fluctuations, long-range interactions, and
frustration. The novelty of the present work lies thus not only on the methodology, but also on the results we obtain.
We summarize our results:
We identify a number of features that are not captured by the nearest-neighbor truncation and unveil the effect of the interplay between the long-range dipolar interactions with the lattice geometry.
In the limit of vanishing tunnelling, we show that the ground state is a devil's staircase of solid phases, which can be identified up to an arbitrary precision. This precision, in fact, is only
limited by the size of the considered unit cells and the optimization scheme we employ. To the best of our knowledge, we are not aware of studies mapping out quantitatively the complete devil's
staircase for two-dimensional systems.
For finite tunnelling, we find solid and supersolid phases, some of which have been reported by numerical studies based on advanced quantum Monte Carlo simulations. Differing from these works, we can
avoid the limitation imposed by the constraints on the unit cell and thereby unveil a plethora of solid and supersolid phases which have not been reported before. We characterize the corresponding
phases, and unveil a very structured phase diagram. Besides the fundamental interest, our results will be a guide for experimentalists working in the field of dipolar gases as well as frustrated
Finally, our results thus also provide an important benchmark and guidance for numerical programs, identifying the relevant unit cells, simulation geometries, and observables.
Let us finally state that we fully agree with the referee that "The mean field phase diagrams of dipolar bosons has been worked out by several authors in the phase predicting existence of supersolid
phase and devil staircase structures". Nevertheless, the majority of these works truncated the dipolar interactions to the nearest-neighbour, as is the case for some of the ones the referee mentions.
Indeed, a result of our paper is in line with advanced numerical studies, and show that the nearest-neighbor approximation for dipolar interactions misses to identify multi-lattice solid and
supersolid phases, that our resummation approach instead allows to capture.
We have added these references to the conclusions, in the paragraph where we discuss realizations with tilted dipoles.
Let us further comment on the references mentioned by the referee:
• Thieleman et al., PRA 83, 013627 (2011): The authors focus on nearest-neighbor interactions and a complex hopping as well as cold atoms in a staggered flux. In our work, we study a different
model because we have no staggered flux. At the same time, we concentrate on the effects of long-range interactions, thus no truncation in the power-law is performed.
• Zhang et al., New J. Phys. 17 123014 (2015): The authors apply QMC for dipolar hardcore bosons on the square lattice, but restrict to half filling. In our work, we investigate the whole phase
diagram at any filling for several 2d lattice geometries.
• Isakov et al., Europhys. Lett. 87 36002 (2009): The authors apply variational QMC and Schwinger-boson mean-field theory for half-filling and anisotropic nearest-neighbor interactions giving rise
to a staircase of phases. Again, we focus on the effects of the untruncated long-range interactions. | {"url":"https://scipost.org/submissions/2311.10632v1/","timestamp":"2024-11-10T02:43:05Z","content_type":"text/html","content_length":"48719","record_id":"<urn:uuid:96e07f56-4358-45be-a540-03cd5277425e>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00891.warc.gz"} |
LILAVATI (12TH CENTURY)
Lilavati was a mathematician and follower of her father Bhaskaracharya, the medieval mathematician, who codified Indian mathematics and described operations upon the zero, and whose work influenced
later Arab mathematicians who took the science to Europe. The first part of his Siddhanta Shiromani, the ‘Patiganita’ dealing with arithmetic, is often called the ‘Lilavati’ after his daughter. In
this, Bhaskara’s theories are presented in the form of a dialogue with his daughter, and it is clear from the text that she was an accomplished mathematician in her own right. Legend has it that she
became a widow very young and Bhaskara, having tried to prevent her widowhood by astrological means, thereafter consoled her by teaching her his skills. Bhaskara’s theories were far ahead of
corresponding European thought on the subject and were only surpassed in the eighteenth century in the West. | {"url":"https://www.streeshakti.com/bookL.aspx?author=12","timestamp":"2024-11-05T13:28:16Z","content_type":"application/xhtml+xml","content_length":"4748","record_id":"<urn:uuid:abe0f0b9-2b02-421c-a843-c011e057a856>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00479.warc.gz"} |
What are the 7 basic fundamental quantities?
The present SI has seven base quantities: time, length, mass, electric current, thermodynamic temperature, amount of substance, and luminous intensity.
What are fundamental quantities in physics 11?
There are seven fundamental quantities: length, mass, temperature, time, electric current, luminous intensity and amount of substance.
What are fundamental quantities?
A fundamental quantity is an independent physical quantity that cannot be expressed in terms of other physical quantities. Fundamental quantities serve as the building blocks for the other, derived quantities. In physics, length, mass, time, electric current and thermodynamic temperature are examples of fundamental quantities.
What are fundamental quantities and fundamental units?
The unit of a fundamental quantity is called a fundamental unit; it does not depend on any other unit. There are seven fundamental (base) physical quantities: length, mass, time, temperature,
electric current, luminous intensity and amount of substance. Their units are the fundamental units.
Is newton a fundamental unit?
We all know that the newton is the unit of force, which is defined in terms of mass, length and time. So we can clearly see that it is a derived unit, not a fundamental one.
What are fundamental units Class 11?
The fundamental units are the base units defined by the International System of Units. These units are not derived from any other unit; therefore they are called fundamental units. The seven base units
are: the meter (m) for length, the kilogram (kg) for mass, the second (s) for time, the ampere (A) for electric current, the kelvin (K) for temperature, the mole (mol) for amount of substance, and the candela (cd) for luminous intensity.
Why current is a fundamental quantity?
Current is a fundamental quantity because it can be measured more easily than charges can be counted: we can measure current with an instrument (an ammeter), but individual charges cannot be counted so easily. A
fundamental quantity should be easy to measure; therefore electric current, rather than charge, is used as a fundamental quantity.
What is fundamental and derived quantities?
Fundamental quantities are quantities that are independent of other physical quantities, e.g. length, mass, time, current, amount of substance, luminous intensity and thermodynamic temperature. Derived
quantities are quantities that depend on the fundamental quantities.
Is time a fundamental quantity?
Yes. Time is one of the seven fundamental (base) quantities of the SI; its unit, the second, does not depend on any other unit.
How many fundamental quantities are?
In physics, there are seven fundamental physical quantities that are measured in base (fundamental) units: length, mass, time, electric current, temperature, amount of substance, and
luminous intensity.
Is energy a fundamental quantity?
So, since energy is the product of a few variables (e.g., E = ½mv² or E = mgΔh or what have you), it cannot be a fundamental unit.
What are the 7 SI units in physics?
The units and their physical quantities are the second for time, the metre for length or distance, the kilogram for mass, the ampere for electric current, the kelvin for thermodynamic temperature,
the mole for amount of substance, and the candela for luminous intensity.
What are the 7 fundamental quantities and their symbols?
• Length (metre)
• Mass (kilogram)
• Time (second)
• Electric current (ampere)
• Thermodynamic temperature (kelvin)
• Amount of substance (mole)
• Luminous intensity (candela)
How do you find the fundamental quantity?
What is power SI unit?
The SI unit of power is the watt (W). When a body does work at the rate of 1 joule per second, its power is 1 watt.
Which is unit of force?
The SI unit of force is the newton, symbol N. The base units relevant to force are: the metre, unit of length – symbol m; the kilogram, unit of mass – symbol kg; and the second, unit of time – symbol s.
What is called derived unit?
derived unit. noun. a unit of measurement obtained by multiplication or division of the base units of a system without the introduction of numerical factors.
Which fundamental unit is joule?
One joule equals the work done (or energy expended) by a force of one newton (N) acting over a distance of one meter (m). One newton equals a force that produces an acceleration of one meter per
second (s) per second on a one kilogram (kg) mass. Therefore, one joule equals one newton·meter.
What is base unit of joule?
The SI unit of energy is the joule (J). The joule has base units of kg·m²/s² = N·m. A joule is defined as the work done, or energy required, to exert a force of one newton over a distance of one meter.
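The base-versus-derived distinction can be made mechanical by tracking the exponents of the base units. Here is a small illustrative sketch in Python (the function names are my own, not from any standard library):

```python
# A unit is represented as a mapping from base-unit symbol to its integer
# exponent; derived units are built by multiplying base units together.

def unit(**exponents):
    """Build a unit from base-unit exponents, dropping zero exponents."""
    return {k: v for k, v in exponents.items() if v != 0}

def mul(a, b):
    """Multiply two units by adding their exponents."""
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
        if out[k] == 0:
            del out[k]
    return out

kg, m, s = unit(kg=1), unit(m=1), unit(s=1)

# newton = kg * m / s^2 (force = mass * acceleration): a derived unit
newton = mul(kg, mul(m, unit(s=-2)))
# joule = N * m = kg * m^2 / s^2 (work = force * distance): also derived
joule = mul(newton, m)
```

Any unit whose exponent dictionary is not a single base symbol raised to the first power is derived; the newton (kg·m/s²) and the joule (kg·m²/s²) both fail that test.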
What is the SI unit of fundamental?
The fundamental quantities and their SI units are: the kilogram for mass, the metre for measurement of length, the second for time, the mole for amount of substance, the ampere for electric current,
and the candela for luminous intensity.
What is independent unit?
Independent living units, also sometimes referred to as villas, offer one, two or three bedroom accommodation, in a village environment, for older people who are actively independent and able to care
for themselves.
Why Litre is not a fundamental unit?
Litre is a unit of volume, which is a derived physical quantity.
Is charge a scalar or vector?
Charge has only magnitude but no direction. So, charge is a scalar quantity.
Is voltage a base quantity?
Units, such as the joule, newton, volt and ohm, are SI units, but they are not base SI units.
Is temperature a basic quantity?
In the SI, thermodynamic temperature is in fact one of the seven base quantities, although conceptually it can be viewed as an emergent, statistical property rather than a truly fundamental one. In the past, temperature was used for the measurement of "hotness"; for that, different temperature scales and laws like the Zeroth
Law of Thermodynamics were devised.
This paper reports on the fifth version of the Mixed Integer Programming Library. The MIPLIB 2010 is the first MIPLIB release that has been assembled by a large group from academia and from industry,
all of whom work in integer programming. There was mutual consent that the concept of the library had to be expanded in order to fulfill the needs of the community. The new version comprises 361
instances sorted into several groups. This includes the main benchmark test set of 87 instances, which are all solvable by today's codes, and also the challenge test set with 164 instances, many of
which are currently unsolved. For the first time, we include scripts to run automated tests in a predefined way. Further, there is a solution checker to test the accuracy of provided solutions using
exact arithmetic.
We report on the selection process leading to the sixth version of the Mixed Integer Programming Library. Selected from an initial pool of over 5,000 instances, the new MIPLIB 2017 collection
consists of 1,065 instances. A subset of 240 instances was specially selected for benchmarking solver performance. For the first time, the compilation of these sets was done using a data-driven
selection process supported by the solution of a sequence of mixed integer optimization problems, which encoded requirements on diversity and balancedness with respect to instance features and
performance data.
This paper describes a new instance library for Quadratic Programming (QP), i.e., the family of continuous and (mixed)-integer optimization problems where the objective function, the constrains, or
both are quadratic. QP is a very diverse class of problems, comprising sub-classes of problems ranging from trivial to undecidable. This diversity is reflected in the variety of solution methods for
QP, ranging from entirely combinatorial ones to completely continuous ones, including many for which both aspects are fundamental. Selecting a set of instances of QP that is at the same time not
overwhelmingly onerous but sufficiently challenging for the many different interested communities is therefore important. We propose a simple taxonomy for QP instances that leads to a systematic
problem selection mechanism. We then briefly survey the field of QP, giving an overview of theory, methods and solvers. Finally, we describe how the library was put together, and detail its final contents.
Tai256c is the largest unsolved quadratic assignment problem (QAP) instance in QAPLIB; a 1.48% gap remains between the best known feasible objective value and lower bound of the unknown optimal
value. This paper shows that the instance can be converted into a 256 dimensional binary quadratic optimization problem (BQOP) with a single cardinality constraint which requires the sum of the
binary variables to be 92. The converted BQOP is much simpler than the original QAP tai256c and it also inherits some of the symmetry properties. However, it is still very difficult to solve. We
present an efficient branch and bound method for improving the lower bound effectively. A new lower bound with 1.36% gap is also provided.
Tai256c is the largest unsolved quadratic assignment problem (QAP) instance in QAPLIB. It is known that QAP tai256c can be converted into a 256 dimensional binary quadratic optimization problem
(BQOP) with a single cardinality constraint which requires the sum of the binary variables to be 92. As the BQOP is much simpler than the original QAP, the conversion increases the possibility to
solve the QAP. Solving exactly the BQOP, however, is still very difficult. Indeed, a 1.48% gap remains between the best known upper bound (UB) and lower bound (LB) of the unknown optimal value. This
paper shows that the BQOP admits a nontrivial symmetry, a property that makes the BQOP very hard to solve. The symmetry induces equivalent subproblems in branch and bound (BB) methods. To effectively
improve the LB, we propose an efficient BB method that incorporates a doubly nonnegative relaxation, the standard orbit branching and a technique to prune equivalent subproblems. With this BB method,
a new LB with 1.25% gap is successfully obtained, and computing an LB with 1.0% gap is shown to be still quite difficult.
Tai256c is the largest unsolved quadratic assignment problem (QAP) instance in QAPLIB. It is known that QAP tai256c can be converted into a 256 dimensional binary quadratic optimization problem
(BQOP) with a single cardinality constraint which requires the sum of the binary variables to be 92. As the BQOP is much simpler than the original QAP, the conversion increases the possibility to
solve the QAP. Solving exactly the BQOP, however, is still very difficult. Indeed, a 1.48% gap remains between the best known upper bound (UB) and lower bound (LB) of the unknown optimal value. This
paper shows that the BQOP admits a nontrivial symmetry, a property that makes the BQOP very hard to solve. Despite this difficulty, it is imperative to decrease the gap in order to ultimately solve
the BQOP exactly. To effectively improve the LB, we propose an efficient BB method that incorporates a doubly nonnegative relaxation, the orbit branching and the isomorphism pruning. With this BB
method, a new LB with 1.25% gap is successfully obtained, and computing an LB with 1.0% gap is shown to be still quite difficult.
1.7: The Rainbow
I do not know the exact shape of a raindrop, but I doubt very much if it is drop-shaped! Most raindrops will be more or less spherical, especially small drops, because of surface tension. If large,
falling drops are distorted from an exact spherical shape, I imagine that they are more likely to be flattened to a sort of horizontal pancake shape rather than drop shaped. Regardless, in the
analysis in this section, I shall assume drops are spherical, as I am sure small drops will be.
We wish to follow a light ray as it enters a spherical drop, is internally reflected, and finally emerges. See Figure I.15. We shall refer to the distance \(b\) as the impact parameter.
We see a ray of light headed for the drop, which I take to have unit radius, at impact parameter \(b\). The deviation of the direction of the emergent ray from the direction of the incident ray is
\[D = \theta - \theta' + \pi -2\theta' +\theta - \theta' = \pi + 2\theta - 4\theta'. \label{eq:1.7.1} \]
However, we shall be more interested in the angle \(r = \pi − D\). A ray of light that has been deviated by \(D\) will approach the observer from a direction that makes an angle \(r\) from the centre
of the bow, which is at the anti-solar point (Figure I.16)
We would like to find the deviation \(D\) as a function of impact parameter. The angles of incidence and refraction are related to the impact parameter as follows:
\[\sin\theta=b,\label{eq:1.7.2} \]
\[\cos\theta=\sqrt{1-b^2},\label{eq:1.7.3} \]
\[\sin\theta' = b/n,\label{eq:1.7.4} \]
\[ \cos\theta' = \sqrt{1-b^2/n^2}. \label{eq:1.7.5} \]
These, together with Equation \(\ref{eq:1.7.1}\), give us the deviation as a function of impact parameter. The deviation goes through a minimum – and \(r\) goes through a maximum. The deviation for a
light ray of impact parameter \(b\) is
\[D = \pi + 2\sin^{-1}b - 4\sin^{-1}(b/n)\label{eq:1.7.6} \]
The angular distance \(r\) from the centre of the bow is \(r = \pi − D\), so that
\[r = 4 \sin^{-1}(b/n) - 2\sin^{-1}b.\label{eq:1.7.7} \]
This is shown in Figure I.17 for \(n\) = 1.3439 (blue - \(\lambda\) = 400 nm) and \(n\) = 1.3316 (red - \(\lambda\) = 650 nm).
Differentiation gives the maximum value, \(R\), of \(r\) - i.e. the radius of the bow – or the minimum deviation \(D_{\text{min}}\). We obtain for the radius of the bow
\[R = 4\sin^{-1}\sqrt{\frac{4-n^2}{3n^2}}- 2\sin^{-1}\sqrt{\frac{4-n^2}{3}}. \label{eq:1.7.8} \]
For \(n\) = 1.3439 (blue) this is 40° 31' and for \(n\) = 1.3316 (red) this is 42° 17'. Thus the blue is on the inside of the bow, and red on the outside.
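Equation 1.7.8 is straightforward to evaluate numerically; the short Python sketch below reproduces the two radii quoted above for the refractive indices used in the text.

```python
from math import asin, sqrt, degrees

def primary_bow_radius(n):
    """Angular radius of the primary rainbow, in degrees (Equation 1.7.8)."""
    return degrees(4 * asin(sqrt((4 - n**2) / (3 * n**2)))
                   - 2 * asin(sqrt((4 - n**2) / 3)))

r_blue = primary_bow_radius(1.3439)  # blue light
r_red = primary_bow_radius(1.3316)   # red light
```

Here r_blue comes out near 40.5° and r_red near 42.3°, i.e. 40° 31' and 42° 17', with the blue inside the red.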
For grazing incidence (impact parameter = 1), the deviation is \( 2 \pi -4 \sin^{-1}(1/n)\), or 167° 40' for blue or 165° 18' for red. This corresponds to a distance from the centre of the bow \( r =
4 \sin^{-1}(1/n)-\pi\), which is 12° 20' for blue and 14° 42' for red. It will be seen from Figure I.17 that for radii less than \(R\) (i.e. inside the rainbow) but greater than 12° 20' for blue
and 14° 42' for red there are two impact parameters that result in the same deviation, i.e. in the same position inside the bow. The paths of two rays with the same deviation are shown in Figure
I.18. One ray is drawn as a full line, the other as a dashed line. They start with different impact parameters, and take different paths through the drop, but finish in the same direction. The
drawing is done for a deviation of 145°, or 35° from the bow centre. The two impact parameters are 0.969 and 0.636. When these two rays are recombined by being brought to a focus on the retina of the
eye, they have satisfied all the conditions for interference, and the result will be brightness or darkness according as to whether the path difference is an even or an odd number of half-wavelengths.
If you look just inside the inner (blue) margin of the bow, you can often clearly see the interference fringes produced by two rays with the same deviation. I haven’t tried, but if you were to look
through a filter that transmits just one colour, these fringes (if they are bright enough to see) should be well defined. The optical path difference for a given deviation, or given \(r\), depends on
the radius of the drop (and on its refractive index). For a drop of radius \(a\) it is easy to see that the optical path difference is
\( 2a(\cos\theta_2 - \cos\theta_1) - 4na(\cos\theta'_2-\cos\theta'_1),\)
where \(\theta_1\) is the larger of the two angles of incidence. Presumably, if you were to measure the fringe spacing, you could determine the size of the drops. Or, if you were to conduct a Fourier
analysis of the visibility of the fringes, you could determine, at least in principle, the size distribution of the drops.
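To get a feel for the numbers, here is a sketch that evaluates that path difference for the two rays of the deviation-145° example (b = 0.9693 and 0.6366, n = 4/3). The 0.5 mm drop radius is an illustrative assumption, and I have taken the drop radius \(a\) to multiply both terms of the expression, as dimensional consistency requires.

```python
from math import asin, cos

N = 4 / 3  # refractive index of water

def optical_path_difference(a, b1, b2):
    """Optical path difference between two rays, of impact parameters
    b1 > b2, that emerge with the same deviation from a drop of radius a."""
    t1, t2 = asin(b1), asin(b2)            # angles of incidence
    tp1, tp2 = asin(b1 / N), asin(b2 / N)  # angles of refraction
    return 2 * a * (cos(t2) - cos(t1)) - 4 * N * a * (cos(tp2) - cos(tp1))

a = 0.5e-3                                        # assumed drop radius, metres
delta = optical_path_difference(a, 0.9693, 0.6366)
fringes = delta / 500e-9                          # in units of a 500 nm wavelength
```

For this drop the path difference is a few tens of wavelengths, and it shrinks to zero as the two impact parameters merge at minimum deviation, which is why the fringe pattern encodes the drop size.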
Some distance outside the primary rainbow, there is a secondary rainbow, with colours reversed – i.e. red on the inside, blue on the outside. This is formed by two internal reflections inside the
drop (Figure I.19). The deviation of the final emergent ray from the direction of the incident ray is \((\theta − \theta') + (π − 2\theta') + (π − 2\theta') + (\theta − \theta')\), or \(2π + 2\theta
− 6\theta'\) counterclockwise, which amounts to \(D = 6\theta' − 2\theta\) clockwise. That is,
\[ D = 6\sin^{-1}(b/n)-2\sin^{-1}b. \label{eq:1.7.9} \]
clockwise, and, as before, this corresponds to an angular distance from the centre of the bow \(r = \pi − D\). I show in Figure I.20 the angular distance from the centre of the bow versus the impact
parameter \(b\). Notice that \(D\) goes through a maximum and hence \(r\) has a minimum value. There is no light scattered outside the primary bow, and no light scattered inside the secondary bow.
When the full glory of a primary bow and a secondary bow is observed, it will be seen that the space between the two bows is relatively dark, whereas it is brighter inside the primary bow and outside
the secondary bow.
Differentiation shows that the least value of \(r\) (greatest deviation), corresponding to the radius of the secondary bow, is
\[R = \pi + 2\sin^{-1} \sqrt{\frac{9-n^2}{8}} - 6\sin^{-1}\sqrt{\frac{9-n^2}{8n^2}}\label{eq:1.7.10} \]
For \(n = 1.3439\) (blue) this is 53° 42' and for \(n = 1.3316\) (red) this is 50° 31'. Thus the red is on the inside of the bow, and blue on the outside.
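The primary and secondary calculations can be done in one go. For \(k\) internal reflections the deviation is \(D = k\pi + 2\theta - 2(k+1)\theta'\), and setting \(dD/db = 0\) gives the stationary impact parameter \(b^2 = ((k+1)^2 - n^2)/((k+1)^2 - 1)\) (a short calculation along the lines of the text, not spelled out there). The sketch below uses this to recover both bow radii.

```python
from math import asin, pi, sqrt, degrees

def deviation(b, n, k):
    """Deviation (radians) of a ray of impact parameter b after k
    internal reflections in a drop of refractive index n."""
    return k * pi + 2 * asin(b) - 2 * (k + 1) * asin(b / n)

def stationary_b(n, k):
    """Impact parameter at which the deviation is stationary."""
    return sqrt(((k + 1)**2 - n**2) / ((k + 1)**2 - 1))

def bow_radius(n, k):
    """Angular radius of the k-th order bow, in degrees, measured from
    the anti-solar point."""
    d = deviation(stationary_b(n, k), n, k) % (2 * pi)
    # fold the deviation back into [0, pi]
    return degrees(pi - d) if d <= pi else degrees(d - pi)
```

Calling bow_radius(n, 1) reproduces the primary radii of 40° 31' and 42° 17', bow_radius(n, 2) gives the secondary radii of 53° 42' and 50° 31', and k = 3 can be used to explore the tertiary bow discussed below.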
In principle a tertiary bow is possible, involving three internal reflections. I don’t know if anyone has observed a tertiary bow, but I am told that the primary bow is blue on the inside, the
secondary bow is red on the inside, and “therefore” the tertiary bow would be blue on the inside. On the contrary, I assert that the tertiary bow would be red on the inside. Why is this?
Let us return to the primary bow. The deviation is (Equation \(\ref{eq:1.7.1}\)) \(D = π + 2\theta − 4\theta'\). Let’s take \(n = 4/3\), which it will be for somewhere in the middle of the spectrum.
According to Equation \(\ref{eq:1.7.8}\), the radius of the bow \((R = \pi − D_{\text{min}})\) is then about 42° . That is, \(2\theta' − \theta\) = 21° . If we combine this with Snell’s law, \(3\sin
\theta = 4\sin \theta'\), we find that, at minimum deviation (i.e. where the primary bow is), \(\theta\) = 60°.6 and \(\theta'\) = 40°.8. Now, at the point of internal reflection, not all of the
light is reflected (because \(\theta'\) is less than the critical angle of 48°.6), and it will be seen that the angle between the reflected and refracted rays is (180 − 60.6 − 40.8) degrees = 78°.6.
Those readers who are familiar with Brewster’s law will understand that when the reflected and transmitted rays are at right angles to each other, the reflected ray is completely plane polarized. The
angle, as we have seen, is not 90°, but is 78°.6, but this is sufficiently close to the Brewster condition that the reflected light, while not completely plane polarized, is strongly polarized. Thus,
as can be verified with a polarizing filter, the rainbow is strongly plane polarized.
I now want to address the question as to how the brightness of the bow varies from centre to circumference. It is brightest where the slope of the deviation versus impact parameter curve is least –
i.e. at minimum deviation (for the primary bow) or maximum deviation (for the secondary bow). Indeed the radiance (surface brightness) at a given distance from the centre of the bow is (among other
things) inversely proportional to the slope of that curve. The situation is complicated a little in that, for deviations between \(D_{\text{min}}\) and \(2\pi - 4\sin^{-1}(1/n)\), (this latter being
the deviation for grazing incidence), there are two impact parameters giving rise to the same deviation, but for deviations greater than that (i.e. closer to the centre of the bow) only one impact
parameter corresponds to a given deviation.
Let us ask ourselves, for example, how bright is the bow at 35° from the centre (deviation 145°)? The deviation is related to impact parameter by Equation \(\ref{eq:1.7.6}\). For \(n = 4/3\), we find
that the impact parameters for deviations of 144, 145 and 146 degrees are as follows:
D° b
144 0.6583 and 0.9623
145 0.6366 and 0.9693
146 0.6157 and 0.9736
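Those impact parameters can be recovered numerically. Since the deviation falls to its minimum near \(b \approx 0.861\) and then rises again, a target deviation inside the bow is reached once on each monotonic branch, so bisection on each branch finds the two rays. A sketch in plain Python, with \(n = 4/3\):

```python
from math import asin, degrees

N = 4 / 3

def deviation_deg(b):
    """Primary-bow deviation in degrees (Equation 1.7.6, n = 4/3)."""
    return 180 + degrees(2 * asin(b) - 4 * asin(b / N))

def solve_b(target, lo, hi, tol=1e-9):
    """Bisect for the impact parameter giving the target deviation on an
    interval where the deviation is monotonic."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if (deviation_deg(lo) - target) * (deviation_deg(mid) - target) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

b_small = solve_b(145.0, 0.01, 0.861)   # descending branch
b_large = solve_b(145.0, 0.861, 0.999)  # ascending branch
```

These come out at about 0.6366 and 0.9693, matching the middle row of the table.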
Figure I.21 shows a raindrop seen from the direction of the approaching photons.
Any photons with impact parameters within the two dark annuli will be deviated between 144° and 146°, and will ultimately approach the observer at angular distances between 36° and 34° from the
centre. The radiance at a distance of 35° from the centre will be proportional, among other things, to the sum of the areas of these two annuli.
I have said “among other things”. Let us now think about other things. I have drawn Figure I.15 as if all of the light is transmitted as it enters the drop, and then all of it is internally reflected
within the drop, and finally all of it emerges when it leaves the drop. This is not so, of course. At entrance, at internal reflection and at emergence, some of the light is reflected and some is
transmitted. The fractions that are reflected or transmitted depend on the angle of incidence, but, for minimum deviation, about 94% is transmitted on entry to and again at exit from the drop, but
only about 6% is internally reflected. Also, after entry, internal reflection and exit, the percentage of polarization of the ray increases. The formulas for the reflection and transmission
coefficients (Fresnel’s equations) are somewhat complicated (Equations 1.5.1 and 1.5.2 are for unpolarized incident light), but I have followed them through as a function of impact parameter, and
have also taken account of the sizes of the one or two annuli involved for each impact parameter, and I have consequently calculated the variation of surface brightness for one color \((n = 4/3)\)
from the centre to the circumference of the bow. I omit the details of the calculations, since this chapter was originally planned as an elementary account of reflection and transmission, and we seem
to have gone a little beyond that, but I show the results of the calculation in Figure I.22. I have not, however, taken account of the interference phenomena, which can often be clearly seen just
within the primary bow. | {"url":"https://phys.libretexts.org/Bookshelves/Optics/Geometric_Optics_(Tatum)/01%3A_Reflection_and_Refraction/1.07%3A_The_Rainbow","timestamp":"2024-11-12T12:38:59Z","content_type":"text/html","content_length":"139051","record_id":"<urn:uuid:9782e940-3021-4c1c-8596-474ab7772452>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00715.warc.gz"} |
Determinants, Adjugates, and Inverses
(A new question of the week)
Looking for a new topic, I realized that a recent question involves determinants, and an older one provides the background for that. We’ll continue the series on determinants by seeing how they can
be used in finding the inverse of a matrix, and how something called the adjugate matrix might fit in (with side trips into Cramer’s Rule and row reduction).
Finding an inverse using determinants
This question came from Sarah, in February of last year:
I was studying matrices, and was thinking, is there some proof on finding the inverse of a matrix?
I know how to do it step by step by heart but I do not understand what I’m doing and why it is like that.
For example, the inverse uses the determinant of a matrix – how do you interpret it? For instance, if the determinant of a 3×3 matrix is 2, what is that telling you about the matrix?
We also find minors – if an element has a minor of -1, what does that really mean, please?
We’ve recently seen what a determinant means, algebraically and geometrically; but the “meaning” in this context is a little different. We haven’t yet looked at minors, which are determinants of submatrices.
Doctor Fenton answered:
Hi Sarah,
Yes, there are ways of proving that a given algorithm does produce an inverse to a matrix, and there is more than one way to compute the inverse, one of which is to use determinants.
It would help to know what you already know about matrices. Do you use matrices to solve systems of linear equations, to transform vectors (column matrices), or for some other application?
Sarah replied,
Thanks for your reply. I’m using it in a course about mathematical economics where it is mostly applied to finding inverses to solve a system of 3 equations. If you’re familiar with some economic
theory, there is also an application to find OLS estimators in a regression.
We had covered matrices before, but now I want to understand a bit deeper what I’m actually doing.
So I know if I’m using determinants, I can find the reciprocal of that and multiply by the adjoint, where the adjoint is the transpose of the cofactor matrix, but beyond that, I still don’t know
what the determinant is. I’ve always learnt it as “ad – bc”.
Even minors, I get the definition that you delete the ith row and jth column and find the determinant of the resultant matrix, but doing that by heart is a bit strange because I don’t understand why
I am doing that, in the sense I don’t know what the minor shows you and how it leads to the inverse matrix. I think that logic is why you can only apply inverses to square matrices, although to
solve systems of equations, number of equations = number of unknowns shouldn’t be a problem.
We had previously covered the row reduction technique, and I also know the Laplace expansion and the shorthand rule. And we have solved systems using Cramer’s rule.
We’ll touch on most of these topics: finding the inverse using what she calls the “adjoint“, more often today called the “adjugate“, and also by row reduction; “minors” in a determinant (used in finding the adjugate, and also in the Laplace expansion for evaluating a determinant); and Cramer’s rule for solving a system of equations.
Sometime we will look into what matrices are, why they are added and multiplied as they are, and so on. But we’ll see the basics of multiplication and inverses momentarily.
What is a matrix inverse?
Doctor Fenton responded, first stating what an inverse is:
Thank you for clarifying what you already know. Using the adjugate (previously called the adjoint) matrix to find the inverse is not the most efficient way to compute the inverse. I will
illustrate the ideas with 2×2 matrices, although the idea works for square matrices of any size (only square matrices can have an inverse).
When I multiply two 2×2 matrices AB, with
$$A=\begin{bmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{bmatrix}\text{ and }B=\begin{bmatrix}b_{11}&b_{12}\\b_{21}&b_{22}\end{bmatrix},$$
note that the product is
$$AB=\begin{bmatrix}a_{11}b_{11}+a_{12}b_{21}&a_{11}b_{12}+a_{12}b_{22}\\a_{21}b_{11}+a_{22}b_{21}&a_{21}b_{12}+a_{22}b_{22}\end{bmatrix}=\begin{bmatrix}A\begin{bmatrix}b_{11}\\b_{21}\end{bmatrix}&A\begin{bmatrix}b_{12}\\b_{22}\end{bmatrix}\end{bmatrix}=\begin{bmatrix}AB_1&AB_2\end{bmatrix},$$
where \(B_1\) and \(B_2\) are the first and second columns of B. That is, to multiply A by the matrix \(B=[B_1\;B_2]\) on the right, you just multiply each of the columns in B by A.
To help us follow this, I’ll make a simple 2×2 example: $$A=\begin{bmatrix}1&2\\3&4\end{bmatrix},\quad B=\begin{bmatrix}2&-1\\1&3\end{bmatrix},\quad AB=\begin{bmatrix}4&5\\10&9\end{bmatrix}.$$
The first column of the product is A times the first column of B: $$AB_1=\begin{bmatrix}1&2\\3&4\end{bmatrix}\begin{bmatrix}2\\1\end{bmatrix}=\begin{bmatrix}4\\10\end{bmatrix}.$$
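To make the column-by-column rule concrete, here is a quick NumPy check. This is my addition, not part of the original exchange; it simply verifies that each column of AB is A times the corresponding column of B:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[2, -1], [1, 3]])

AB = A @ B               # the full product, [[4, 5], [10, 9]]

# Multiplying A by each column of B separately gives the same columns:
col1 = A @ B[:, 0]       # A times the first column of B  -> [4, 10]
col2 = A @ B[:, 1]       # A times the second column of B -> [5, 9]

assert np.array_equal(AB[:, 0], col1)
assert np.array_equal(AB[:, 1], col2)
```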
That’s how we multiply. So what is the inverse?
The inverse of a matrix A (if it exists) is the matrix A^-1 such that
AA^-1 = A^-1A = I ,
where I is the identity matrix.
If A is invertible, and we want to solve the matrix equation AX=B, where
X is a 2×1 column matrix \(\begin{bmatrix}x_1\\x_2\end{bmatrix}\) and B is a column matrix \(\begin{bmatrix}b_1\\b_2\end{bmatrix}\),
we multiply AX=B by A^-1 and get X = A^-1B as the solution.
For our A, the inverse (which we’ll calculate below in two ways) turns out to be $$A^{-1}=\begin{bmatrix}-2&1\\\frac{3}{2}&-\frac{1}{2}\end{bmatrix},$$ which we can check by seeing that $$AA^{-1}=\begin{bmatrix}1&2\\3&4\end{bmatrix}\begin{bmatrix}-2&1\\\frac{3}{2}&-\frac{1}{2}\end{bmatrix}=\begin{bmatrix}1\cdot(-2)+2\cdot\frac{3}{2}&1\cdot1+2\cdot\left(-\frac{1}{2}\right)\\3\cdot(-2)+4\cdot\frac{3}{2}&3\cdot1+4\cdot\left(-\frac{1}{2}\right)\end{bmatrix}=\begin{bmatrix}1&0\\0&1\end{bmatrix}$$ and similarly \(A^{-1}A=I\).
If we wanted to solve the equation \(AX=Y\), $$\begin{bmatrix}1&2\\3&4\end{bmatrix}X=\begin{bmatrix}4&5\\10&9\end{bmatrix},$$ we could multiply both sides by \(A^{-1}\) to get $$X=A^{-1}Y=\begin{bmatrix}-2&1\\\frac{3}{2}&-\frac{1}{2}\end{bmatrix}\begin{bmatrix}4&5\\10&9\end{bmatrix}=\begin{bmatrix}-2\cdot4+1\cdot10&-2\cdot5+1\cdot9\\\frac{3}{2}\cdot4-\frac{1}{2}\cdot10&\frac{3}{2}\cdot5-\frac{1}{2}\cdot9\end{bmatrix}=\begin{bmatrix}2&-1\\1&3\end{bmatrix},$$ which is our B above.
Inverse by solving equations
So, how do we find that inverse matrix?
To simplify notation by reducing the number of super- and subscripts, let me denote the inverse matrix of A, A^-1, by C, so that C[1] is the first column of C and C[2] the second.
The equation AA^-1 = AC = I can be written as
AC = A[C[1] : C[2]] = [AC[1] : AC[2]] = [E[1] : E[2]] ,
where E[1] = \(\begin{bmatrix}1\\0\end{bmatrix}\) is the first column of I and E[2] = \(\begin{bmatrix}0\\1\end{bmatrix}\) is the second.
Then AC[1]=E[1] and AC[2]=E[2], which says that C[1] is the solution to AX=E[1], and C[2] is the solution to AX=E[2].
In our example, we find the two columns of the inverse by solving $$AC_1=E_1:\quad\begin{bmatrix}1&2\\3&4\end{bmatrix}C_1=\begin{bmatrix}1\\0\end{bmatrix}$$ and $$AC_2=E_2:\quad\begin{bmatrix}1&2\\3&4\end{bmatrix}C_2=\begin{bmatrix}0\\1\end{bmatrix}.$$
But you know how to solve AX=B by row reducing the augmented matrix [A:B] (the matrix A augmented with B as an extra column) to the form [I:X], so that the solution X is the last column of the
reduced augmented matrix.
Then, to find the inverse matrix, we augment the matrix A with the identity matrix [A:I] (a 2×4 matrix) and row reduce to the form [I:C], and the inverse matrix will be the right half of the reduced 2×4 matrix. (If the left half cannot be reduced to I, then the matrix A is not invertible.) That is the efficient way to find A^-1.
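For readers who want to see the [A:I] procedure as an algorithm, here is a sketch in Python with NumPy. This is my code, not from the original exchange; the function name and the partial-pivoting detail are my own choices:

```python
import numpy as np

def inverse_by_row_reduction(A):
    """Row-reduce the augmented matrix [A : I] to [I : A^-1]
    by Gauss-Jordan elimination."""
    n = len(A)
    M = np.hstack([np.array(A, dtype=float), np.eye(n)])  # [A : I]
    for col in range(n):
        # Partial pivoting: bring up the row with the largest pivot.
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0):
            raise ValueError("matrix is not invertible")
        M[[col, pivot]] = M[[pivot, col]]
        M[col] /= M[col, col]                # make the pivot 1
        for r in range(n):
            if r != col:
                M[r] -= M[r, col] * M[col]   # clear the rest of the column
    return M[:, n:]                          # right half is the inverse

inv = inverse_by_row_reduction([[1, 2], [3, 4]])  # [[-2, 1], [1.5, -0.5]]
```

In practice you would just call `np.linalg.inv`, but spelling out the elimination shows why the right half of the reduced matrix ends up being the inverse.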
This is the standard method that he referred to before, and which we’ll see below. But we can also use determinants to solve this equation, which will lead to the adjugate. For that, keep reading …
Cramer’s rule and the inverse
Finding \(C_1\) and \(C_2\) each amounts to solving a system of equations, which we can do with determinants:
If you solve
ax + by = u
cx + dy = v
with elimination, multiplying the first equation by d and the second equation by b, and then subtracting, you get
(ad – bc)x = du – bv,
x = (du – bv)/(ad – bc), or
$$x=\frac{\det\begin{bmatrix}u&b\\v&d\end{bmatrix}}{\det\begin{bmatrix}a&b\\c&d\end{bmatrix}},$$
and similarly y = (av – cu)/(ad – bc) is a quotient of determinants. This indicates where determinants can come from and can lead to Cramer’s Rule, but using determinants is not the best way to
find the inverse.
Here we have derived Cramer’s Rule by brute force in the 2×2 case. As Wikipedia puts it,
Consider a system of n linear equations for n unknowns, represented in matrix multiplication form as follows: $$A\mathbf{x}=\mathbf{b}$$
where the n × n matrix A has a nonzero determinant, and the vector \(\mathbf{x}=(x_1,\dots,x_n)^T\) is the column vector of the variables. Then the theorem states that in this case the system has a
unique solution, whose individual values for the unknowns are given by: $$x_i=\frac{\det(A_i)}{\det(A)}\; \; \; i=1,\dots n$$ where \(A_i\) is the matrix formed by replacing the i-th column of A by
the column vector \(\mathbf{b}\).
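Cramer’s rule as quoted is easy to turn into code. The sketch below is my addition (the helper name is mine, and `np.linalg.det` does the determinant work); it replaces the i-th column of A by b, exactly as the theorem states:

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i) / det(A),
    where A_i is A with its i-th column replaced by b."""
    A = np.array(A, dtype=float)
    b = np.array(b, dtype=float)
    d = np.linalg.det(A)
    if np.isclose(d, 0):
        raise ValueError("det(A) = 0: no unique solution")
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b                      # replace column i by b
        x[i] = np.linalg.det(Ai) / d
    return x

# Solving A C1 = E1 for our example gives the first column of A^-1:
c1 = cramer_solve([[1, 2], [3, 4]], [1, 0])   # (-2, 3/2)
```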
So let’s solve our system this way, in order to find the inverse of A:
To find the first column of our inverse, we need to solve $$\begin{bmatrix}1&2\\3&4\end{bmatrix}C_1=\begin{bmatrix}1\\0\end{bmatrix}.$$
Cramer’s rule gives this solution:
$$C_{11}=\frac{\begin{vmatrix}{\color{Green}1}&2\\ {\color{Green}0}&{\color{Red}4}\end{vmatrix}}{\begin{vmatrix}1&2\\3&4\end{vmatrix}}=\frac{1\cdot4-2\cdot0}{1\cdot4-2\cdot3}=\frac{{\color{Red}4}}{-2}=-2$$
$$C_{21}=\frac{\begin{vmatrix}1&{\color{Green}1}\\ {\color{Red}3}&{\color{Green}0}\end{vmatrix}}{\begin{vmatrix}1&2\\3&4\end{vmatrix}}=\frac{1\cdot0-1\cdot{\color{Red}3}}{1\cdot4-2\cdot3}=\frac{-{\color{Red}3}}{-2}=\frac{3}{2}$$
But observe that the determinant on the top, in each case, is just the element (4 or 3) opposite the 1, with an alternating sign; I’ve highlighted them. These, as we’ll see, are cofactors.
So the first column is $$C_{1}=\begin{bmatrix}-2\\\frac{3}{2}\end{bmatrix}$$
Similarly, to solve $$\begin{bmatrix}1&2\\3&4\end{bmatrix}C_2=\begin{bmatrix}0\\1\end{bmatrix}$$
we use
$$C_{12}=\frac{\begin{vmatrix}0&2\\1&4\end{vmatrix}}{\begin{vmatrix}1&2\\3&4\end{vmatrix}}=\frac{-2}{-2}=1\quad\text{and}\quad C_{22}=\frac{\begin{vmatrix}1&0\\3&1\end{vmatrix}}{\begin{vmatrix}1&2\\3&4\end{vmatrix}}=\frac{1}{-2}=-\frac{1}{2}.$$
So the second column of the inverse is $$C_{2}=\begin{bmatrix}1\\-\frac{1}{2}\end{bmatrix}$$
This gives us the inverse I showed before, $$A^{-1}=\begin{bmatrix}-2&1\\\frac{3}{2}&-\frac{1}{2}\end{bmatrix}.$$
We almost used the adjugate here, though we haven’t yet even talked about what it is. We’ll get there eventually, but first, he answered the side questions:
Determinants have a geometric interpretation. The determinant of
[a b]
[c d]
is the area of the parallelogram with sides given by the vectors (a,b) and (c,d) in the plane. I don’t know of any significance of this fact for solving linear systems, other than the fact that
if the determinant is 0, then the system either has no solution or infinitely many solutions, depending upon the right side B.
Does this help?
This is the subject of our last two posts.
Finding the inverse by row reduction
Sarah asked for a little more:
Thank you so much for that, Dr Fenton.
Just to make sure l understood, could you kindly illustrate through an example? I can then apply that myself to a 3×3, don’t worry 🙂
Why is there such an emphasis on determinants not being the most efficient way, please?
The part on deriving the determinant and how it can lead to Cramer’s Rule is very interesting, thank you.
What about the part on minors, particularly interpreting them – the idea behind WHY we delete the i^th row and j^th column and take the determinant of the resultant matrix.
Thank you!
Doctor Fenton replied with, first, a statement of what we did above with Cramer’s Rule:
By an example, I assume that you want an example of using row reduction to compute an inverse of a matrix. In the 2×2 case, the determinant approach gives the inverse matrix of
$$\begin{bmatrix}a&b\\c&d\end{bmatrix}^{-1}=\frac{1}{ad-bc}\begin{bmatrix}d&-b\\-c&a\end{bmatrix},$$
which doesn’t require much computation.
That matrix is, in fact, the adjugate.
Then he gave an example of the more efficient method of finding inverses, before getting back to minors:
For a 3×3 example, to find
[ 1 -1 0]^-1
[ 1 0 -1]
[-6 2 3] ,
we write
[ 1 -1 0 1 0 0]
[ 1 0 -1 0 1 0]
[-6 2 3 0 0 1]
and row reduce to
[ 1  0  0  -2 -3 -1]
[ 0  1  0  -3 -3 -1]
[ 0  0  1  -2 -4 -1] ,

so that

[ 1 -1  0]^-1   [-2 -3 -1]
[ 1  0 -1]    = [-3 -3 -1]
[-6  2  3]      [-2 -4 -1] .
We’ll see the adjugate method, for the same matrix, later.
The reason for preferring row operations is computational complexity. Even in the 3×3 case, the arithmetic work required is not onerous, but for larger matrices, there is a big difference. It’s
not hard to see that in general, computing an nxn determinant requires computing n! terms, while row-reducing an nxn matrix to upper triangular form takes roughly n^3/6 operations, so reducing
the left half of the augmented n x (2n) matrix to the identity will take about n^3/3 operations. For n=2 or 3, n! and n^3/3 are comparable, but for larger n, say n=10, 10! is over 3×10^6, while
10^3/3 is about 300. For n=100, the value of 100! is an integer with 158 digits, while 100^3/3 is in the hundreds of thousands.
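Those growth rates are easy to verify directly (a small check I’ve added, not part of the original answer):

```python
import math

# Cofactor expansion of an n x n determinant has n! terms;
# row reduction needs on the order of n^3/3 operations.
assert math.factorial(10) == 3_628_800          # over 3 x 10^6
assert len(str(math.factorial(100))) == 158     # 100! has 158 digits
assert 100 ** 3 // 3 == 333_333                 # hundreds of thousands
```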
To compute the value of large determinants, it is more efficient to use row operations to transform the matrix to upper triangular form, since the determinant of a triangular matrix is just the product of its diagonal elements, and the effects of the row operations on a determinant are easy to determine: interchanging rows changes the sign of the determinant; multiplying a row by a constant multiplies the determinant by the same constant; and replacing a row by the sum of itself and a multiple of another row doesn’t change the determinant.
This provides a way to find determinants that is quicker than doing it directly; but in the adjugate method we’re about to see, we’d need to calculate many determinants!
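Here is what “determinant by row reduction” looks like as code (again a sketch of my own, not the Doctor’s): reduce to upper triangular form, track the sign flips from row swaps, and multiply the diagonal:

```python
import numpy as np

def det_by_elimination(A):
    """Determinant via row reduction to upper triangular form.
    Row swaps flip the sign; row-replacement steps leave det unchanged."""
    M = np.array(A, dtype=float)
    n = len(M)
    sign = 1.0
    for col in range(n):
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0):
            return 0.0                      # no usable pivot: det = 0
        if pivot != col:
            M[[col, pivot]] = M[[pivot, col]]
            sign = -sign                    # each swap changes the sign
        for r in range(col + 1, n):
            M[r] -= (M[r, col] / M[col, col]) * M[col]  # det unchanged
    return sign * np.prod(np.diag(M))       # product of the diagonal

d = det_by_elimination([[1, -1, 0], [1, 0, -1], [-6, 2, 3]])  # about -1.0
```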
Minors and cofactors
The adjugate is defined in terms of minors, which arise in the Laplace expansion of a determinant; so he explained that first. Here is what it looks like for a 3×3 determinant, starting with the
algebraic definition we saw two weeks ago:
As for the Laplace expansion, I don’t know how Laplace discovered it, but if you look at the 3×3 case,
[a b c]
det [d e f] = aei + cdh + bfg - ceg - afh - bdi = a(ei-hf) + b(fg-di) + c(dh-eg)
[g h i]
= a det[e f] - b det [d f] + c det [d e]
[h i] [g i] [g h] .
You can pick any row (or column) and rewrite the determinant as a sum of the entries in that row (or column) times determinants which are the minors of the entries.
Each element of one row (here, the top) is multiplied by the determinant of the matrix formed by removing that element’s row and column. The minor of the bold entry here is the determinant of the
part in red, and the cofactor is the minor multiplied by \(\pm1\):
$$\begin{vmatrix}a&\mathbf{b}&c\\ {\color{Red}d}&e&{\color{Red}f}\\ {\color{Red} g}&h&{\color{Red}i}\end{vmatrix}\qquad\begin{vmatrix}a&b&\mathbf{c}\\ {\color{Red}d}&{\color{Red}e}&f\\ {\color{Red} g}&{\color{Red}h}&i\end{vmatrix}$$
The same pattern is true, almost trivially, of the 2×2 determinant: the minors are just the diagonally opposite entries, as I mentioned above.
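The Laplace expansion translates directly into a recursive function. This sketch is my addition; it expands along the first row, building each minor by deleting row 0 and column j:

```python
def det_laplace(M):
    """Determinant by Laplace (cofactor) expansion along the first row.
    Exponentially slow (about n! terms), but it mirrors the formula."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        cofactor = (-1) ** j * det_laplace(minor)  # sign alternates along the row
        total += M[0][j] * cofactor
    return total

print(det_laplace([[1, -1, 0], [1, 0, -1], [-6, 2, 3]]))  # prints -1
```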
Inverse by adjugate
Sarah now asked for the one missing piece:
Thank you Dr Fenton! This is why l love asking questions here – l always learn more than l ever thought l would before asking!
The part about number of operations isn’t as obvious to me, but l do get the gist why row operations are quicker.
Could you elaborate on the notion of minors, please? I’m still unsure what a minor of 4 would really be saying. I think there’s more to it that l just don’t know about.
And what about proving that 1/det multiplied by adjugate indeed gives you the inverse matrix, please?
Thank you 🙂
Doctor Fenton answered:
As I think I said earlier, I just regard minors as quantities which arise in evaluating determinants. As a determinant, it has a geometric interpretation as an area or volume in 2 or 3
dimensions, but I am not aware of any geometric significance to that fact. The Laplace expansion (or cofactor expansion) tells you that the absolute value of a 3×3 determinant is a volume of a
3-dimensional parallelepiped, which is a linear combination of some 2-dimensional areas (the areas corresponding to the minors of the determinant), but I don’t know that this interpretation helps
understand what a determinant is.
This could be interesting to think more about, but if there is a meaning, it is not obvious.
Now we finally get to the adjugate:
As for the inverse formula of an invertible matrix A, you form the cofactor matrix C of A, where the entry in the i^th row and j^th column is c[ij], the cofactor of the entry a[ij] in A (that is, (-1)^(i+j)M[ij], where the minor M[ij] is obtained by deleting the i^th row and j^th column of A). Next, you transpose the cofactor matrix to get C^T. This is the adjugate matrix.
Then the matrix product AC^T is
[a[11] a[12] ... a[1n]][c[11] c[21] ... c[n1]]
[a[21] a[22] ... a[2n]][c[12] c[22] ... c[n2]]
[ : : :][ : : : ]
[a[n1] a[n2] ... a[nn]][c[1n] c[2n] ... c[nn]] ,
so the 11 entry of the product is
a[11]c[11]+a[12]c[12] + … + a[1n]c[1n]
which is exactly the cofactor expansion of det(A). The 12 entry of the product is
a[11]c[21]+a[12]c[22] + … + a[1n]c[2n] ,
which is the cofactor expansion of the determinant of the matrix
[a[11] a[12] ... a[1n]]
[a[11] a[12] ... a[1n]]
[ : : : ]
[a[n1] a[n2] ... a[nn]] .
This matrix has a repeated row, so the determinant of this matrix is 0.
Then the product AC^T is
[det(A) 0 0 ... 0 ]
[ 0 det(A) 0 ... 0 ]
[ 0 0 det(A) ... 0 ]
[ : : : ... : ]
[ 0 0 0 ... det(A)] ,
which is det(A)I, where I is the nxn identity matrix.
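The whole derivation can be checked numerically. Here is a sketch (my own code; the helper name `adjugate` is mine) that builds the cofactor matrix, transposes it, and verifies the identity A·adj(A) = det(A)·I:

```python
import numpy as np

def adjugate(A):
    """Transpose of the cofactor matrix of A."""
    A = np.array(A, dtype=float)
    n = len(A)
    C = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            # Minor M[ij]: delete row i and column j, then take the determinant.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T

A = np.array([[1.0, -1, 0], [1, 0, -1], [-6, 2, 3]])
adj = adjugate(A)   # rows: [2, 3, 1], [3, 3, 1], [2, 4, 1]

# The key identity: A @ adj(A) = det(A) * I
assert np.allclose(A @ adj, np.linalg.det(A) * np.eye(3))
```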
2×2 example
We’ve already done this in our 2×2 example. With $$A=\begin{bmatrix}1&2\\3&4\end{bmatrix},$$ the cofactor matrix is $$C=\begin{bmatrix}4&-3\\-2&1\end{bmatrix},$$ swapping diagonally opposite entries
and changing the sign of every other one. Its transpose is $$C^T=\begin{bmatrix}4&-2\\-3&1\end{bmatrix},$$ which is the adjugate. Dividing this by the determinant, \(1\cdot4-2\cdot3=-2,\) we get $$A^
{-1}=\begin{bmatrix}\frac{4}{-2}&\frac{-2}{-2}\\\frac{-3}{-2}&\frac{1}{-2}\end{bmatrix}=\begin{bmatrix}-2&1\\\frac{3}{2}&-\frac{1}{2}\end{bmatrix}.$$ This is what we got before.
Can you see the connection between this and what we did with Cramer’s Rule?
3×3 example
Now let’s do a 3×3 example; using the example Doctor Fenton used above, I’ll take $$A=\begin{bmatrix}1&-1&0\\1&0&-1\\-6&2&3\end{bmatrix}.$$
The cofactor of the first entry, \(a_{11}\), is $$(-1)^{1+1}\begin{vmatrix}0&-1\\2&3\end{vmatrix}=2,$$ so that is the first entry. The cofactor of \(a_{12}\) is $$(-1)^{1+2}\begin{vmatrix}1&-1\\-6&3\end{vmatrix}=-(-3)=3.$$ Continuing, the cofactor matrix is $$C=\begin{bmatrix}2&3&2\\3&3&4\\1&1&1\end{bmatrix},$$ and the adjugate is $$C^T=\begin{bmatrix}2&3&1\\3&3&1\\2&4&1\end{bmatrix}.$$
Its determinant is (using cofactors in the first row) $$\det(A)=\begin{vmatrix}1&-1&0\\1&0&-1\\-6&2&3\end{vmatrix}=1\cdot2+(-1)\cdot3+0\cdot2=2-3+0=-1.$$
So the inverse is $$A^{-1}=\frac{C^T}{\det(A)}=\frac{1}{-1}\begin{bmatrix}2&3&1\\3&3&1\\2&4&1\end{bmatrix}=\begin{bmatrix}-2&-3&-1\\-3&-3&-1\\-2&-4&-1\end{bmatrix},$$ as we got by row reduction. We can check this by multiplying: $$AA^{-1}=\begin{bmatrix}1&-1&0\\1&0&-1\\-6&2&3\end{bmatrix}\begin{bmatrix}-2&-3&-1\\-3&-3&-1\\-2&-4&-1\end{bmatrix}=\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}.$$
1 thought on “Determinants, Adjugates, and Inverses”
Leave a Comment
This site uses Akismet to reduce spam. Learn how your comment data is processed. | {"url":"https://www.themathdoctors.org/determinants-adjugates-and-inverses/","timestamp":"2024-11-06T12:36:16Z","content_type":"text/html","content_length":"137561","record_id":"<urn:uuid:1a45ec5a-714f-4cb9-beb4-1e1030e2c899>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00311.warc.gz"} |
WARNING: A Hungry Lion, or a Dwindling Assortment of Animals (+ GIVEAWAY)
A Hungry Lion, or a Dwindling Assortment of Animals by Lucy Ruth Cummins is a book that I have been wanting to devour for months and months. It’ll be released into the wild from Atheneum/Simon &
Schuster on March 15, and Lucy is HERE to share some behind-the-scenes information.
Let me begin by telling you that the book’s description alone is delicious:
The very hungry lion is all set to enjoy an exciting day with his other animal pals. But all of a sudden his friends start disappearing at an alarming rate! Is someone stealing the hungry lion’s
friends, or is the culprit a little…closer to home?
This, by the way, is what Kirkus had to say about the book in a starred review:
“Gets even better with multiple readings…a good dose of subversively hearty laughter.”
And, this is what PW shared in yet another starred review:
“Cummin’s dizzy meta-tale has just enough wink and cheek to assure readers that it’s all in good fun, and her visual style—sketchbook playful, slyly spiking sweet-seeming scenes with moments of
menace and fear—should leave them hungering (in a nice way) for her next book.”
Personally, I give this book five gold stars, two thumbs up, and a standing ovation.
Welcome to Picture Book Builders, Lucy! I have to know…what sparked the wonderful idea for this book?
One day the thought crossed my mind that if you had a group of animals, and with a page turn, there were animals who seemed to disappear, that with just that much you’d have a mystery on your hands!
I did some thumbnails of this concept and I was sharing them with my writer friend Alexandra Penfold and I mentioned to her that I needed to think of an author and illustrator who could tackle the concept.
Without missing a beat, she suggested that I take a crack at it myself, which hadn’t even occurred to me! As an art director, I’m so used to matching projects to talented people and to smoothing and
polishing the stories of other folks that it didn’t instinctively occur to me that what I had was a solid story idea I could explore on my own.
Your animals are so fresh, fun, and just-plain-adorable. Some might even call them yummy. Did the cast members start out looking this way or did they evolve over time?
There was a bit of a casting call process for who made it into the pages, actually. I started by filling a page with individual animal doodles as they came to me, species by species stream of
consciousness style. Then I whittled them down bit by bit to a core group who met two criteria: very cute, and very edible. Honestly! Among the rejects was a very adorable porcupine who I didn’t
think would . . . go down quite so easily.
Did you face any particular challenges while writing and/or illustrating A Hungry Lion?
One of the challenges I had was that I did the initial roughs for the story straight through, working from beginning to end, thinking “these are roughs, and they will evolve” but when it came time to
add polish, there were a few spreads where the very first attempt I made had the most energy, and captured the emotion and expression I wanted more perfectly than later versions. I was honestly a
little nervous to say to my art director Sonia Chaghatzbanian that I wanted to keep some elements of the dummy. She was very supportive, though, and agreed with my preferences. I don’t know why I was
terrified to ask! I think I was worried I wasn’t working hard enough if some things came straight out of the pencil in a way that worked for the storytelling.
In terms of writing the story, my editor Justin Chanda asked during the acquisition process if I thought there was any way to add one last beat to sweeten the pot a little bit. Initially I balked—I
didn’t want to defang my lion! But in the end I think that prompt led to a VERY satisfying final spread that I wouldn’t have gotten to otherwise.
Text for this spread:
Once upon a time there was a hungry lion,
a penguin, a turtle, a little calico kitten,
a brown mouse, a bunny with floppy ears
and a bunny with un-floppy ears,
a frog, a bat, a pig, a slightly bigger pig,
a woolly sheep, a koala, and also a hen.
Please give us a tour of your studio.
My studio is really just an art cart with all my supplies, wheeled up to my dining room table when I need to work. And if I have a lot of work to do, I tilt that table so it has a view to the
television set (because I very much enjoy doodling and watching at once). I do a lot of drawing while I sit on the couch, too, just with a lap desk, when I’m just doodling.
My primary writing studio is the L train and then the F train, every morning and every evening on the way to and from work. I work on my stories on my iPhone in the Pages app, as often as I can, in
bursts while on the train. I also keep little lists of idea stubs in my phone’s notepad.
I had my son Nathaniel in August of 2015, and as a new mom, and newly back at work, my creative time often has to be carved out wherever it can be found these days!
***Click here for a 30-second time lapse video of Lucy doing her thing.***
If you had one key piece of advice for writers and/or illustrators, what would it be?
As an art director, one of my favorite things to do is look through the portfolios of illustrators, and identify a piece within their body of work that looks like it is situated in the center of an
existing story, and ask them to brainstorm back a few steps and forward a few steps to create a story around the piece.
This is also a good exercise for writers—just to look at the world around them, existing photography, things that show up in your Facebook feed, a fight between two birds over a bagel in the street.
Look to the world around you, and craft stories that answer the question of “how did they arrive at this place, and how will they move forward.”
Ooh. That’s a great exercise!
Scoop time! What’s next for you?
I am currently polishing up a dummy for a new picture book and hoping to start sharing it soon, and I have two other stories still in draft stages that I’m excited to keep honing. In the meantime,
I’m getting super excited for the launch of A Hungry Lion, which is March 16, and the launch party that will be at Books of Wonder in New York on March 19. I finally found a dress! And the cupcakes
will be spectacular.
Thanks for visiting Picture Book Builders, Lucy!
Lucy Ruth Cummins is a writer and an illustrator and also a full time art director of children’s books. She loves watching television, reading really long books about US Presidents, and Pomeranian
dogs. She was born in Canada, raised in upstate New York, and currently resides in Brooklyn, New York. Her favorite food is the french fry.
Bonus Fact: Lucy is also the mastermind designer behind ME WANT PET!, my book with Bob Shea.
Lucy is giving away a signed copy of her fun and fierce book. Leave a comment for your chance to win. I’ll notify the winner on April 1, no foolin’.
Also, pretty please WARN EVERYONE YOU KNOW ABOUT THIS BOOK! 🙂
226 Comments:
1. I can’t wait to read your book, Lucy! And I loved seeing your time lapse! Congrats on your debut!
2. This looks absolutely DELICIOUS! Congratulations, Lucy, and thank you, Tammi, for spreading the word about it!
□ It really is scrumptious. 🙂
3. Sounds like a great book! And I’m going to try that writing exercise.
□ Oh, it is! 🙂
I’m going to give that exercise I try, too.
4. Thanks for sharing this post and how the idea came to and the development of it. Very helpful.
5. Okay, I HAVE to go get this, pronto.
□ I KNEW you’d make this a must-have. 🙂
6. Definitely going on my must read list! I love the writing exercise as well.
7. Kids will love this. When I read this post I thought of how much kids love “There Was an Old Woman Who Swallowed a Fly.” This one leaves us guessing who did the action which makes it even more
fun. I am curious about that and the setting. Where would all these animals be gathered together? Looking forward to reading and rereading this one. Thanks for this write up and for the chance at
a copy of the book.
□ You’re welcome! Thanks for stopping by PBB. 🙂
□ You are the winner of A HUNGRY LION!
Please send your snail mail address to me at tksauer at aol dot com, and I will pass it along to Lucy. Congratulations!
P.S. Be very careful when reading that book. The lion is hungry, you know.
☆ Tammi,
I got your Facebook message and sent you my mailing address. Thank you!
8. Very nice job on the interview. Great questions. Lots of information.
□ Thank you!
9. Can’t wait to get my hands on this delicious looking book! Thanks for allowing us a view into the making of it 🙂
□ Be careful. There’s a hungry lion inside that book. 🙂
10. OMG = FUN !!!
□ YES!
11. Can’t wait to read it! I love the colorful characters and their loose, expressive lines.
□ It’s so, so good!
12. I’m so hungry for this PB my mouth is watering. Yum. Yum, YUM!
□ Enjoy the book with a side of fries. 🙂
13. I am very hungry to read this book. Yum.
14. Yummmm!
□ I know! 🙂
15. I have a very similar “studio”. Wishing you success–can’t wait to see the book!
16. This book looks sooo tasty. I can’t wait to read it! Congratulations, Lucy!
17. Can’t wait to read it! Congrats!
□ You are in for a delicious treat!
18. I enjoyed this interview so much. I especially love the tips for writers/illustrators. I’m going to try them out. Looking forward to reading your book. Congratulations!
19. Wonderful! Can’t wait to take a look at this book. Also inspiring how you work on the train- we all can find time for creativity if we try!
20. How fun! I’m so curious to find out how it resolves. ☺️
□ You can find out on Tuesday, March 15! 🙂
21. Can’t wait for this roaring new book. I hope they disappear off the shelves quickly. It’s a jungle out there!!!!
□ Yes, it is! 🙂
22. Looks great. I can’t wait to see it. Congratulations!
23. Thanks for sharing the story behind how the book came to be! I’m really excited to read it now!
□ You SHOULD be! 🙂
24. Oh my, with a new baby, you’ll be drafting and creating new roughs steadily. You’ll always have something new to write about! Wonderful interview
25. Looks awesome and darling! Thanks for the interview – and the giveaway! 🙂
□ It is and it is!
You’re welcome and you’re welcome!
26. I would love to DEVOUR this book page by page! The bunny with un-floppy ears would be the yummiest I think.
□ Ha! 🙂
27. I can’t wait to read it. Thanks for giving us a glimpse into the mind of of a person who wears so many hats!
□ Lucy is so stylish, she can pull off just about any hat! 🙂
28. I can’t wait to read this book! And the time-lapse was wonderful!
□ Thanks for stopping by PBB, Gaye!
29. Great interview, Tammi! I’m already seeing hilarious page turns. Can’t wait to read it!!
□ You are going to LOVE it, my friend. 🙂
30. Love the writing exercise. Thanks for taking the time to share with us. Looking forward to A Hungry Lion.
31. Putting this book on my to-eat list. Er, make that to-read. Thanks for the great interview and writing exercise!
□ Yes. Please try not to be hungry when reading this book.
32. What a playful and awesome idea! Sincere congratulations on the new baby and thanks for the give away.
□ I agree!!! Thanks for stopping by PBB!
33. Can’t wait to sink my teeth into it! And glad to know I’m not the only one with a lap desk and/or dining room for a studio.
□ Ha! I was thinking about you and your fancy-shmancy studio when I first read Lucy’s response.
34. I’m looking forward to the Hungry Lion. Thank you so much for sharing your exercise ideas Lucy!
35. Loved the interview! Can’t wait to read the book.
36. Must order my copy before they all start disappearing!
□ Excellent plan, Janee.
37. I can’t wait to read this book!
□ Be brave! 🙂
38. Must, must, must, must have this book!
□ Seriously!
39. Any book that Tammi Sauer recommends must be wonderful. She has an eye and heart for making PBs sing and I know, if she loves this Lion a lot, it has to be wonderful.
□ You are too kind. 🙂
40. I love the heads up about a book with great page turns! Thank you for the posting. I look forward to this read.
41. Can’t wait to read this one. And maybe make it to the launch.
□ Oh, that would be so great if you could make the launch. I bet it will be fierce. 🙂
42. I can’t wait to read this! Thanks for a great interview, Tammi!
□ Yay! You’re welcome. 🙂
43. What a great interview and that cast of characters is so sweet! Looking forward to reading it!
44. What a fun story! Can’t wait to read it. And I loved the writing suggestion – will definitely be using that!
□ Happy reading and writing over there. 🙂
45. Oh my gosh. How fun! I can’t wait to read this book!
46. Sounds like this one is going to be joining my bookshelves soon. I love books about ani-mules.
□ I think everyone needs this one! 🙂
47. On my reading list! Inspirational how you carve out your time…lesson to all! Thanks!
48. Thank you so much! From someone who collects images and doesn’t know what to do with them, I am now going to ask the questions- How did they arrive at this place and how will they move forward?
□ Oh, you are going to be BUSY. 🙂
49. What fun…and a great interview! Thanks for sharing!
50. It’s on order. Can’t wait to read it. Love your idea of working back and forward from a sketch. Hmmm.
□ Wonderful! Happy reading!
51. What a great interview! I loved the time lapse video! Thanks so much for giving us a heads up about this book, Tammi! I can’t wait to pick it up!
□ Yay!!!
52. I had the yummy thrill of “devouring” this book after unpacking my library’s Baker & Taylor order. I would LOVE a signed copy to add to my sacred Storytime shelf (as long as all my other
“treasures” don’t mysteriously disappear from the shelf)! Congratulations! I can’t wait to share at library Storytime!
□ Yes! You can’t be too careful with hungry lions. 🙂
53. I can’t wait to see how she let this idea become a story! Sounds like a lot of fun.
□ Enjoy!
54. WOW, at the comments. I loved this interview. Thank Tammi and Lucy. I’m dying to read this book er, wait a sec, I can’t wait to read this book. LOL
□ Yes. Probably best to rephrase. 🙂
55. I have been dying to get her to an Oklahoma conference, knowing what an amazing wealth of knowledge and creativity she would be, and this interview proves it. Well done, Tammi, in giving us a
taste of that!
□ I always try to serve up the good stuff. 🙂
56. This looks like such a great book! I can’t wait to read it!
57. Have a feeling I’ll be rootin’ for Lion. Carnivores are so misunderstood! Can’t wait to read it. 🙂
□ They really are. 🙂
58. This book makes me hungry to read it! Love the organic, informal process.
□ Yes!
59. Loved the time lapse video–so cool! This is my kind of story–can’t wait to read it!
Congrats to you, Lucy & thanks for sharing, Tammi :).
60. My son will LOVE this book, he’s the master of reacting to great page turns (and I might just like it quite a lot too!). Very excited to see it in the flesh 🙂 Thanks for the great time-lapse and
writing exercise too.
61. Congratulations, Lucy, on the book! It looks wonderful! And thanks for the chance to win a copy.
62. Your book sounds like a definite read! Thank you for telling us of your process and congrats on your new reader (Nathaniel)!
□ Oh, it is!
63. ” A Hungry Lion” is on my list! Thanks, Lucy, for sharing your invaluable insights.
64. Love lions AND page turns! Wonderful article as well.
□ It’s the perfect combo. 🙂
65. Love it in every dimension, from the formal and whimsical title, to the doodl-ish art, to the wry concept. I’m going to ping the library right now and put a copy on hold. Tammi, great job on this
interview, the level of detail, the time-lapse video. It’s all good stuff!
□ Glad you enjoyed the post!
66. This sounds like a book you can really sink your teeth into. I’m looking forward to reading it. Thanks, Lucy!
□ I agree!
67. Delightful tour of the creative process. I especially relate to the first effort having energy the revised version doesn’t. Sometime that’s just the way it is.
68. What a clever idea for a page turn–missing animals that were next to a lion! Between a new baby and a full-time job as art director at a large house, I’m in awe you have not just the time, but
the brain power left to create what seems like such a great book. (Also congrats on your son, Lucy, we met when you were pregnant and you presented at our Spring Spirit 2015 conference.)
□ Lucy is amazing.
69. You’ve completely blown all of my “why I’m not writing” excuses (justifications, really) out of the water. Slinking back to my desk now.
□ Chop-chop! 🙂
70. I can’t wait to read the book and find out what happens!
□ The drama! The suspense!
71. I love that someone had to suggest to Lucy that she create this book herself. 🙂 Congratulations, Lucy, and thanks for sharing, Tammi. Can’t wait to check it out–it looks super tasty indeed!
□ It really is a masterful example of making the most of the page turn.
72. This is a terrific post. I can’t wait to read this adorable book.
73. I’m already laughing at the line-up of characters. Congratulations! On the book AND Nathaniel!
□ This book has SO MANY great qualities, that lineup being one of them.
74. This looks amazing! I can’t wait to read it with my kids.
□ Enjoy! 🙂
75. Lucy, This sounds like a fun book, definitely going on the to-read pile. And I loved the time lapse video, so cool! Thanks, Tammi, for sharing!
□ Thanks for visiting PBB, Jennifer!
76. TOTALLY need this book! Like, now 😉 Thanks for highlighting it. You’re the best, Tammi (and Lucy!)
□ I’m so glad you enjoyed this post! 🙂
77. Looks like a delightful treat! Looking forward to reading A HUNGRY LION!
78. Looking forward to reading this!
□ Wonderful. 🙂
79. Okay, you’ve got me hooked! Looking forward to March 15, hungrily. 🙂
□ Ha! Yes. The suspense!
80. What a fun idea!
81. This story sounds grrrrrrreat, but I especially love the loose style of the art. I know what you mean about the first sketch sometimes being the best. The list of animal friends is very
interesting. Thanks for the opportunity to win a copy.
82. I love the l-o-n-g sentence that mentions the 15 animals. That sure got my attention!
□ I know! Lucy really knows how to break the rules in great and unexpected ways.
83. Fun interview! Thanks for sharing. I want to read this book.
84. Such a long list, but somebody has to win that delicious book.
□ Yes! Thanks for stopping by PBB, Sharon.
85. This book looks thoroughly scrumptious! And I love the prompt to focus on a picture & then think what came before/what comes after. Congratulations on your book birthday & all of the starred reviews!
86. This book looks so great! Thanks for the great interview (and giveaway!)
Happy Friday! =)
87. This post and interview has piqued my interest! I’ll be looking for this one!
88. Love the video!
89. I forget how I heard about this book but when Tara Lazar posted the link to this post/giveaway I couldn’t click it fast enough! I definitely need to find out what is happening to all those animals!
□ Oh, the suspense!
90. What an interesting way this book came about. Thanks for sharing.
91. This has been on my “to read” list since i heard about it awhile ago. I’m looking forward to reading it now that it’s been released!
92. Congrats to you!! Can’t wait to read your book!
93. Thanks Tammi for the brilliant post. And Lucy I cannot wait for your book’s debut!! I adore subversive books and your illustrations are divine. Super congrats and thanks for letting us peek in at
your process.
94. Can’t wait to read this! Thanks for the interview.
95. I’ve been waiting for this book too and loved the short video that came around a few days ago. So winsome. Brava to Alexandra Penfold for planting that seed to do it yourself, Lucy. She’s a big
smartie. Congrats!
96. So excited to get this one!
97. Tammi, Enjoyed hearing you speak in Miami in Jan.! Thanks for introducing us to Lucy! Can’t wait to see the book!
□ Thanks, Mary! I loved being a part of that conference. 🙂
98. The suspense is painful! I can’t wait to read this book!
□ Ha! Tomorrow is the day of the big reveal. Get yourself to the bookstore! 🙂
99. What a cute story idea. I’m glad the book is coming out soon because I can’t wait to see how it evolves. Thanks for the interview, Tammi. I love the little exercise idea!
100. The writing exercise is brilliant. The criteria for animals that made it into the book with lion: “very cute and very edible” – made me laugh out loud in an empty room. Can’t wait to read this
book!! Congratulations on the book release! We’ll be on the look out for more of your books. Please share pictures of the dress AND the cupcakes! 🙂
□ Lucy is such a superstar.
101. Thanks, Tammi, for the introduction to Lucy and her mouth watering book! Love the brilliant writing exercise, too.
□ You are so welcome!
102. Thanks for putting this one on my radar, Tammi. Such a clever idea–can’t wait to read it!
103. I always love learning the backstory. Can’t wait to read this book. Thanks for the writing exercise, too!
104. So excited to read your debut, Lucy! Congrats!
105. This book looks like so much fun! And the illustrations are so charming!
106. Congratulations Lucy! I can’t wait to read your book! Thanks for sharing your studio world.
107. This sounds like a fun read. Thanks for the heads-up!
108. This book sounds like such a fun read! I think it’s getting released today … I’m going to try and find it. Thanks for sharing.
109. This book sounds wonderful and I love the style of illustrations.
110. Thank you for this post, Lucy. I like hearing how you collect ideas in all sorts of ways and keep them! Seems like you always have an active growing folder of ideas that start to blossom on
their own. And the notion of observing and asking how did the situation arrive at what it is and what do the people (or animals or make-believe creatures) need to do to move on seems so relevant
to story building. Thank you and I am eager to read your new book!!
111. What a great premise and lovely illustrations. Looking forward to solving the mystery! Thanks for sharing the writing exercise — I’ve jotted it down next to a list of potential titles for a PB
I’m hoping my daughter will write with me.
112. I love Lucy 🙂 . . . and her work— on all fronts (Henny and Peddles say, “Bwwaaak!” and “Howdy!” to their talented AD). We all enjoyed this interview, Tammi 🙂
□ Thanks! 🙂 | {"url":"https://picturebookbuilders.com/2016/03/warning-a-hungry-lion-or-a-dwindling-assortment-of-animals-giveaway/","timestamp":"2024-11-03T09:16:36Z","content_type":"text/html","content_length":"738170","record_id":"<urn:uuid:630fb02c-b587-4a3b-b520-102af70c5b6d>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00369.warc.gz"} |
On This Day
Unfortunately, the events there weren't related to ringing. This web page is a ringing version of On This Day.
Bill Butler has agreed to supply his collection of events that happened on this day. These complement the automated extracts from PealBase.
Bill Butler's Blog for 11th November
Year Event Reference
1799 The Stamford Mercury for November 8, 1799 has this entry: On Monday, November 11, 1799, will be opened at Glinton near Peterborough, a peal of six new bells, the tenor, in the key nv
of F, cast and hung by Thomas Osborn of Downham in Norfolk. Six hats will be given to the company who ring the best round peal for the space of 30 minutes.
1934 The first twelve-bell peal to be rung outside the British Isles was rung at Melbourne Cathedral, conducted by J. S. Goldsmith as part of the Great Adventure I celebrations. Six English AGA118
visitors and six native Australians made up the band that was successful at the third attempt.
1980 Lincoln Cathedral bells were the first sounds to be broadcast live by the new BBC Radio Lincolnshire. The ringers had to be in the tower by 6.30 a.m. nv
First pealers for 11th November
1920 First peal rung by Catherine E Willers. Recorded peal total is 5.
1922 First peal rung by Fred Dunkerley. Recorded peal total is 353.
1922 First peal rung by Harold J Shuck. Recorded peal total is 51.
1926 First peal rung by J Frederick Milner. Recorded peal total is 342.
1930 First peal rung by Elsie K Hart. Recorded peal total is 302.
1931 First peal rung by Dorothy Constance. Recorded peal total is 2.
1933 First peal rung by Percy Stone. Recorded peal total is 135.
1944 First peal rung by Frank C Lister. Recorded peal total is 1.
1944 First peal rung by G Richard Hessey. Recorded peal total is 2.
1944 First peal rung by Philip H Speck. Recorded peal total is 37.
1946 First peal rung by Alan G Foster. Recorded peal total is 181.
1946 First peal rung by Noreen Brain. Recorded peal total is 18.
1949 First peal rung by Geoffrey R Parker. Recorded peal total is 1888.
1949 First peal rung by Herbert A Bradbury. Recorded peal total is 22.
1950 First peal rung by Beryl Lawrey. Recorded peal total is 1.
1950 First peal rung by Ernest W Thurlow. Recorded peal total is 2.
1950 First peal rung by Godfrey Matthews. Recorded peal total is 1.
1950 First peal rung by John Triggs. Recorded peal total is 1.
1950 First peal rung by John Williams (Gulval). Recorded peal total is 2.
1950 First peal rung by Joseph F V Bellars. Recorded peal total is 2.
1950 First peal rung by Mary C Poyner. Recorded peal total is 834.
1951 First peal rung by Anne M Hollingdale. Recorded peal total is 1.
1951 First peal rung by George Thompson. Recorded peal total is 5.
1952 First peal rung by Richard H Corderoy. Recorded peal total is 3.
1953 First peal rung by John R Norman. Recorded peal total is 2.
1959 First peal rung by Alan G Payne. Recorded peal total is 297.
1960 First peal rung by Keith J Triplow. Recorded peal total is 54.
1961 First peal rung by Anthony Talbot. Recorded peal total is 5.
1961 First peal rung by M June Porter. Recorded peal total is 7.
1961 First peal rung by Michael H Gregory. Recorded peal total is 12.
1961 First peal rung by Michael R T Lambert. Recorded peal total is 11.
1961 First peal rung by Paul A Brand. Recorded peal total is 38.
1961 First peal rung by Stephen A Turner. Recorded peal total is 4.
1964 First peal rung by David J Puttick. Recorded peal total is 1.
1964 First peal rung by Gerard Summergood. Recorded peal total is 1.
1964 First peal rung by Helen C Summergood. Recorded peal total is 1.
1964 First peal rung by Norah D McCarthy. Recorded peal total is 1.
1964 First peal rung by Richard R Kennard. Recorded peal total is 105.
1966 First peal rung by Andrew Ellis (Saxmundham). Recorded peal total is 5.
1966 First peal rung by Kathleen Davies. Recorded peal total is 1.
1967 First peal rung by Alan J Fletcher. Recorded peal total is 11.
1967 First peal rung by Christopher J Bennett. Recorded peal total is 23.
1967 First peal rung by Colin Bottomley. Recorded peal total is 6.
1967 First peal rung by David R Cox. Recorded peal total is 453.
1967 First peal rung by Denzil G Kerly. Recorded peal total is 2.
1967 First peal rung by Ian Coombs. Recorded peal total is 1.
1967 First peal rung by Ivan A Sheffield. Recorded peal total is 10.
1967 First peal rung by Jane A Briggs. Recorded peal total is 1.
1967 First peal rung by John Rampley. Recorded peal total is 1.
1967 First peal rung by Malcolm Murphy. Recorded peal total is 558.
1967 First peal rung by Neil McCormick. Recorded peal total is 7.
1967 First peal rung by Philip Mason. Recorded peal total is 2.
1967 First peal rung by Stephanie M A Willcocks. Recorded peal total is 9.
1967 First peal rung by Stephen T Yates. Recorded peal total is 239.
1970 First peal rung by Christopher B Wacher. Recorded peal total is 1.
1972 First peal rung by Ann U Herbert. Recorded peal total is 74.
1972 First peal rung by Carol L Cronin. Recorded peal total is 7.
1972 First peal rung by Carroll Welland. Recorded peal total is 2.
1972 First peal rung by Christopher W Wiggins. Recorded peal total is 2.
1972 First peal rung by Diane Lewis. Recorded peal total is 1.
1972 First peal rung by Jaqui M Clark. Recorded peal total is 1.
1972 First peal rung by Pauline P Seymour. Recorded peal total is 4.
1972 First peal rung by Richard B Samways. Recorded peal total is 3.
1972 First peal rung by Richard Drakes. Recorded peal total is 1.
1972 First peal rung by Richard G M Smith. Recorded peal total is 1.
1972 First peal rung by Robin J Mears. Recorded peal total is 12.
1972 First peal rung by Roy L Wareham. Recorded peal total is 5.
1972 First peal rung by Stephanie Rollings. Recorded peal total is 1.
1973 First peal rung by Elizabeth I Murray. Recorded peal total is 37.
1973 First peal rung by Janet Slaney. Recorded peal total is 1.
1973 First peal rung by John A Hutchinson. Recorded peal total is 3.
1973 First peal rung by Robert G Radford. Recorded peal total is 40.
1973 First peal rung by Stanley G Harpham. Recorded peal total is 13.
1974 First peal rung by Audrey Purnell. Recorded peal total is 1.
1976 First peal rung by Philip C Barron. Recorded peal total is 109.
1978 First peal rung by Andrew J Patterson. Recorded peal total is 2.
1978 First peal rung by Anne D Thorne. Recorded peal total is 34.
1978 First peal rung by Edmund Bergstrom. Recorded peal total is 3.
1978 First peal rung by Janet E Hartley. Recorded peal total is 2.
1978 First peal rung by Joe G Cadman. Recorded peal total is 1.
1978 First peal rung by Judy A Barker. Recorded peal total is 77.
1978 First peal rung by Lynne Wilkinson. Recorded peal total is 1.
1978 First peal rung by Malcolm Steward. Recorded peal total is 1.
1978 First peal rung by Margaret J Barber. Recorded peal total is 4.
1978 First peal rung by Margaret J Sanderson. Recorded peal total is 7.
1978 First peal rung by Maureen A Sparling. Recorded peal total is 8.
1978 First peal rung by Phyllis B Andrews. Recorded peal total is 2.
1978 First peal rung by Roy T R Knight. Recorded peal total is 2.
1978 First peal rung by Sarah Forrester. Recorded peal total is 1.
1979 First peal rung by Jane M Leathart. Recorded peal total is 1.
1981 First peal rung by Lindsey M Douglas-Orr. Recorded peal total is 4.
1984 First peal rung by Anthony K Booer. Recorded peal total is 14.
1984 First peal rung by Helen M Armeson. Recorded peal total is 67.
1985 First peal rung by Alison L Edmonds. Recorded peal total is 39.
1988 First peal rung by Catherine M Gladwin. Recorded peal total is 2.
1988 First peal rung by Graham P Redman. Recorded peal total is 6.
1989 First peal rung by Melvyn Potts. Recorded peal total is 13.
1989 First peal rung by Samuel R Nankervis. Recorded peal total is 48.
1989 First peal rung by Susan J Bocking. Recorded peal total is 1.
1989 First peal rung by Wendy M Leng. Recorded peal total is 11.
1993 First peal rung by Peter Bradnam. Recorded peal total is 2.
1993 First peal rung by Trudie E Bradnam. Recorded peal total is 2.
1995 First peal rung by David C Morrow. Recorded peal total is 1.
1995 First peal rung by David M Ellis (Barford). Recorded peal total is 1.
1995 First peal rung by Michael J Ashton (Barford). Recorded peal total is 1.
1995 First peal rung by Tina Reed. Recorded peal total is 1.
1996 First peal rung by Jonathan Francis. Recorded peal total is 3.
2000 First peal rung by Barbara Arkley. Recorded peal total is 1.
2000 First peal rung by Claire Penny. Recorded peal total is 6.
2000 First peal rung by Deirdre R Watson. Recorded peal total is 18.
2000 First peal rung by John R Harris (Watlington). Recorded peal total is 2.
2000 First peal rung by Philip S Taylor. Recorded peal total is 10.
2000 First peal rung by Steven Merriott. Recorded peal total is 1.
2001 First peal rung by Martin A Heaslip. Recorded peal total is 9.
2001 First peal rung by Rosemary L Maclaine. Recorded peal total is 1.
2001 First peal rung by Steve Woodjetts. Recorded peal total is 2.
2003 First peal rung by Christine M Dobinson. Recorded peal total is 1.
2004 First peal rung by Susan P Sturdy. Recorded peal total is 4.
2009 First peal rung by Helen M Beaumont. Recorded peal total is 28.
2012 First peal rung by Eleanor Cooper. Recorded peal total is 1.
2012 First peal rung by Harry J Andrews. Recorded peal total is 7.
2013 First peal rung by Thomas W Mitchell. Recorded peal total is 1.
2014 First peal rung by Ben Kellett. Recorded peal total is 29. | {"url":"https://www.pealbase.co.uk/onthisday/index.php?d=-5","timestamp":"2024-11-06T14:36:31Z","content_type":"application/xhtml+xml","content_length":"28933","record_id":"<urn:uuid:017197a2-5e33-4192-baed-fbf606f1fe63>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00417.warc.gz"} |
Forensic Science Archives
20 April 2022
Are you eagerly waiting for the Forensic Medicine MCQ Question? Don’t feel bad! Here, we have attached the last 5-year Forensic Medicine MCQs Question Paper Pdf. Refer to the Forensic Medicine MCQ
Question Papers before starting the preparation. Did you know? Reading the Forensic Medicine MCQ Question not only helps you to be aware of the
Are you eagerly waiting for the Forensic Medicine Quiz Question? Don’t feel bad! Here, we have attached the last 5-year Forensic Medicine Quiz Question Paper Pdf. Refer to the Forensic Medicine Quiz
Question Papers before starting the preparation. Did you know? Reading the Forensic Medicine Quiz Question not only helps you to be aware of the
Are you eagerly waiting for the Forensic Medicine Sample Question? Don’t feel bad! Here, we have attached the last 5-year Forensic Medicine Sample Question Paper Pdf. Refer to the Forensic Medicine
Sample Question Papers before starting the preparation. Did you know? Reading the Forensic Medicine Sample Question not only helps you to be aware of the
Are you eagerly waiting for the Forensic Medicine Model Question? Don’t feel bad! Here, we have attached the last 5-year Forensic Medicine Model Question Paper Pdf. Refer to the Forensic Medicine Model
Question Papers before starting the preparation. Did you know? Reading the Forensic Medicine Model Question not only helps you to be aware of the
Are you eagerly waiting for the Forensic Medicine Questions and Answers? Don’t feel bad! Here, we have attached the last 5-year Forensic Medicine Questions and Answers Paper Pdf. Refer to the Forensic
Medicine Questions and Answers Papers before starting the preparation. Did you know? Reading the Forensic Medicine Questions and Answers not only helps you to
Looking for NEET PG Forensic Medicine Previous Year Question Papers? Download just by clicking on the NEET PG Forensic Medicine Papers with Answers Pdf links. NEET PG Forensic Medicine Model Exam
Paper gives candidates an overview of the online assessment test. NEET PG Forensic Medicine Syllabus and Exam Pattern is available here for the reference
Forensic Science GK question paper is helpful for the applicants in the preparation. Hence, to help the candidates we have given the Forensic Science GK question paper in the section below. Hence,
download the Forensic Science GK Papers and start your preparation. The direct links enclosed below to get the PDFs of Forensic Science GK Papers
Forensic Science MCQ question paper is helpful for the applicants in the preparation. Hence, to help the candidates we have given the Forensic Science MCQ question paper in the section below. Hence,
download the Forensic Science MCQ Papers and start your preparation. The direct links enclosed below to get the PDFs of Forensic Science MCQ Papers
Forensic Science Quiz question paper is helpful for the applicants in the preparation. Hence, to help the candidates we have given the Forensic Science Quiz question paper in the section below. Hence,
download the Forensic Science Quiz Papers and start your preparation. The direct links enclosed below to get the PDFs of Forensic Science Quiz Papers
Forensic Science Practice Set question paper is helpful for the applicants in the preparation. Hence, to help the candidates we have given the Forensic Science Practice Set question paper in the section
below. Hence, download the Forensic Science Practice Set Papers and start your preparation. The direct links enclosed below to get the PDFs of Forensic | {"url":"https://www.examyear.com/forensic-science/","timestamp":"2024-11-09T09:35:06Z","content_type":"text/html","content_length":"123400","record_id":"<urn:uuid:4fc3a8e2-5918-4190-b7ab-b57f1cc50d6d>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00380.warc.gz"} |
RE: st: Dummy Variables vs. Subgroup Models in Logistic Regression
RE: st: Dummy Variables vs. Subgroup Models in Logistic Regression
From "Hoetker, Glenn" <[email protected]>
To <[email protected]>
Subject RE: st: Dummy Variables vs. Subgroup Models in Logistic Regression
Date Fri, 22 Oct 2004 10:30:31 -0500
At 01:45 PM 10/22/2004 +0000, [email protected] wrote:
>Dear Stata Users,
> I'm creating a logistic regression model with many dichotomous
> variables along with one term that has 8 categories coded 1,2,..8. I
> create 7 dummy variables and have a very large model. Would it be
> legitimate if my sample sizes are large enough to create 8 separate
> models with each model representing one subgroup? Can anyone comment on
> the pros and cons of using dummy variables versus creating separate
> "subgroup" models based on the remaining independent variables?
Comparing logit/probit coefficients across groups is actually
considerably more difficult than doing so in OLS. This reflects the
fact that the betas are not identified in a logit model without imposing
a restriction by setting the variance of the error term to pi^2/3. As a
result, the estimated coefficients are the underlying "true" effect
scaled by the amount of unobserved heterogeneity (a.k.a. residual
variation). If the unobserved heterogeneity varies across groups, as it
often will, then the estimated betas will vary too, even if the "true"
effect is the same. Allison (1999) discusses this and proposes a test
for detecting differences in unobserved heterogeneity and differences in
underlying coefficients. Other discussions of the scale issue include
Maddala (1983:23), Long (1997:47), and Train (2004).
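This scale non-identification is easy to demonstrate numerically (a minimal illustrative sketch, not Stata output; the parameter values below are made up). In the latent-variable formulation y* = b*x + s*e with a standard logistic error e (fixing s = 1 gives the usual error variance pi^2/3), every choice probability depends only on the ratio b/s:

```python
import math

def logit_prob(x, beta, scale):
    """P(y = 1 | x) when y* = beta*x + scale*e, e ~ standard logistic,
    and y = 1 whenever the latent y* exceeds 0."""
    return 1.0 / (1.0 + math.exp(-beta * x / scale))

# (beta=2, scale=2) and (beta=1, scale=1) are observationally identical,
# so the data can never tell them apart -- only beta/scale is identified.
for x in (-2.0, -0.5, 0.0, 1.0, 3.0):
    assert abs(logit_prob(x, 2.0, 2.0) - logit_prob(x, 1.0, 1.0)) < 1e-12

# Consequence for group comparisons: if one group has the same "true"
# beta = 1 but twice the unobserved heterogeneity (scale 2), its
# identified logit coefficient is beta/scale = 0.5, not 1, so a naive
# cross-group comparison finds a "difference" that is not there.
```

This is exactly why estimated betas can differ across groups even when the underlying effects do not.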
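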
Hoetker (2004) uses Monte Carlo simulations to show that (a) the problem
Allison identified isn't just theoretical--it leads to misleading
inferences in common situations and (b) Allison's tests are a
significant improvement over current practice, but are not a panacea. It
also offers some alternative analytical approaches, including code in
Stata (of course) to implement them. One finding in particular is that
the use of interaction terms to detect inter-group differences in logit
equations if likely to yield misleading results if unobserved
heterogeneity differs across groups. In some circumstances, it's
actually more likely to find significant results in the OPPOSITE
direction than in the right direction.
For cross-group comparisons in general, Liao (2002) is a helpful
Sorry to actually muddy the waters rather than providing a simple
solution. Best wishes.
Glenn Hoetker
Assistant Professor of Strategy
College of Business
University of Illinois at Urbana-Champaign
[email protected]
Allison, P.D. 1999. Comparing logit and probit coefficients across
groups. Sociological Methods & Research 28(2): 186-208.
Hoetker, Glenn (2004). Confounded coefficients: Extending recent
advances in the accurate comparison of logit and probit coefficients
across groups. Working paper
Liao, T.F. 2002. Statistical group comparison. Wiley Series in
Probability and Statistics. New York : Wiley-Interscience.
Long, J.S. 1997. Regression models for categorical and limited dependent
variables. Advanced Quantitative Techniques in the Social Sciences.
Thousand Oaks, CA: Sage Publications.
Maddala, G.S. 1983. Limited-dependent and qualitative variables in
econometrics. New York: Cambridge University Press.
Train, K.E. 2004. Discrete choice methods with simulation. Cambridge :
Cambridge University Press.
-----Original Message-----
From: [email protected]
[mailto:[email protected]] On Behalf Of Richard
Sent: Friday, October 22, 2004 9:42 AM
To: [email protected]; [email protected]
Subject: Re: st: Dummy Variables vs. Subgroup Models in Logistic Regression
If you estimate separate models, you are allowing ALL parameters to vary
across groups, e.g. the effect of education could be different in each
group. If you just add dummies, you are allowing the intercept to vary
in each group, but the effects of the other variables stay the same.
If you estimate separate models for each group, your models will
be much less parsimonious, i.e. you'll have a lot more parameters
floating around. But the real question is, what is most appropriate given your
theory and the empirical reality? If the effects of everything really are
different across every group, then you should estimate separate
models. But, if the effects do not differ across groups, then you are
producing unnecessarily complicated models, and you are also reducing
statistical power, e.g. by not pooling groups when you should be pooling
them, you'll be more likely to conclude that effects do not differ from
each other when they really do.
These sorts of issues are discussed in
Richard Williams, Notre Dame Dept of Sociology
OFFICE: (574)631-6668, (574)631-6463
FAX: (574)288-4373
HOME: (574)289-5227
EMAIL: [email protected]
WWW (personal): http://www.nd.edu/~rwilliam
WWW (department): http://www.nd.edu/~soc
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
CUSTPRIM - Editorial
PROBLEM LINK:
Author: Kevin Atienza
Testers: Jingbo Shang and Pushkar Mishra
Editorialist: Kevin Atienza
Special thanks to the author of the problem for providing a detailed explanation.
Payton numbers are defined, and prime Payton numbers are defined. Given a Payton number, is it prime or not? (Please see the problem statement for more details)
First, the term “prime” defined in the problem statement actually refers to the abstract-algebraic term “irreducible”, and the term “prime” itself is defined differently. Thankfully, as we shall see
below, the primes and irreducibles in Payton numbers coincide.
Let \omega be a symbol such that \omega^2 = \omega - 3. Then there is an isomorphism between the Payton numbers and the Euclidean domain \mathbb{Z}[\omega] given by \phi(a,b,c) = (33-2a-c)+(b-a)\
omega. This means that the Payton numbers also form a Euclidean domain (thus primes are the same as irreducibles), and that we have reduced the problem to a primality check in \mathbb{Z}[\omega].
In \mathbb{Z}[\omega], all elements x are of the form x = a + b\omega where a, b \in \mathbb{Z}. If we define the norm of x (denoted as Nx) as a^2 + ab + 3b^2, then we can show the following:
• If x is an integer (i.e. b = 0), then x is prime if and only if x is a prime integer, and either x = \pm 2 or x \not= \pm 11 and -11 is not a quadratic residue modulo x.
• If x is not an integer (i.e. b \not= 0), then x is prime if and only if Nx is a prime integer.
This yields an efficient solution to the problem as long as we can quickly check whether an ordinary integer is prime, and we can quickly check whether -11 is a quadratic residue modulo a prime.
Primality checking is a standard problem which can be solved using the Miller-Rabin test for example, and checking whether -11 is a quadratic residue modulo a prime can be done with Euler's criterion.
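A sketch of how the pieces fit together, following the two bullet criteria above (the code and names are mine, not the author's reference solution; the Miller-Rabin witness set used below is known to be deterministic for all n < 3.3*10^24, which comfortably covers the norms that arise here):

```python
def is_prime_int(n):
    """Deterministic Miller-Rabin, valid for all n < 3.3 * 10**24."""
    if n < 2:
        return False
    witnesses = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in witnesses:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in witnesses:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def is_qr(a, p):
    """Euler's criterion: is a a quadratic residue modulo an odd prime p?"""
    return pow(a % p, (p - 1) // 2, p) == 1

def is_prime_zw(a, b):
    """Is a + b*omega prime in Z[omega], per the two criteria above?"""
    if b == 0:
        x = abs(a)
        if not is_prime_int(x):
            return False
        return x == 2 or (x != 11 and not is_qr(-11, x))
    return is_prime_int(a * a + a * b + 3 * b * b)
```

For instance, is_prime_zw(3, 0) is False (indeed N(omega) = 3, so 3 = omega * (1 - omega) factors), while is_prime_zw(2, 0), is_prime_zw(7, 0) and is_prime_zw(0, 1) are all True.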
Denote S as the set of Payton numbers.
Primes in S
Let \omega be a symbol such that \omega^2 = \omega - 3. Then we can define addition and multiplication in the set \mathbb{Z}[\omega] = \{a + b\omega : a, b \in \mathbb{Z}\} naturally, and the
resulting structure is actually a principal ideal domain (we’ll discuss this below).
In fact, S is isomorphic to \mathbb{Z}[\omega]! This means that the structure of S and \mathbb{Z}[\omega] are identical. In other words, S is just \mathbb{Z}[\omega] with its elements simply
relabeled or renamed.
Define \phi as a map from S to \mathbb{Z}[\omega] where \phi(a,b,c) = (33-2a-c)+(b-a)\omega. Then \phi is an isomorphism, i.e. a bijective map that satisfies:
\phi(a+b) = \phi(a) + \phi(b)
\phi(ab) = \phi(a)\phi(b)
The inverse of \phi can be written as the following:
\phi^{-1}(a+b\omega) = \begin{cases} (11-k,11-k+b,11) & \text{if $a$ is even and $a = 2k$} \\ (4-k,4-k+b,24) & \text{if $a$ is odd and $a = 2k+1$} \end{cases}
This is great, because this means that the properties of S are exactly the same with \mathbb{Z}[\omega]. For instance, an element x of S is prime in S if and only if \phi(x) is prime in \mathbb{Z}[\
omega]. Therefore, we can study the primes in \mathbb{Z}[\omega] instead.
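A quick brute-force sanity check that \phi and \phi^{-1} are mutual inverses (a sketch; note that, as the piecewise formula implies, the third coordinate c of a Payton number is always 11 or 24):

```python
def phi(a, b, c):
    """Map the Payton number (a, b, c) to (x, y) representing x + y*omega."""
    return (33 - 2 * a - c, b - a)

def phi_inv(x, y):
    """Inverse map, following the piecewise formula in the text.
    Python's floor division handles negative x correctly here."""
    if x % 2 == 0:
        k = x // 2
        return (11 - k, 11 - k + y, 11)
    k = (x - 1) // 2
    return (4 - k, 4 - k + y, 24)

# Round-trip check on a small grid, in both directions.
for a in range(-10, 11):
    for b in range(-10, 11):
        for c in (11, 24):
            assert phi_inv(*phi(a, b, c)) == (a, b, c)
for x in range(-10, 11):
    for y in range(-10, 11):
        assert phi(*phi_inv(x, y)) == (x, y)
```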
How to find this isomorphism
Now, you might be wondering: how does one discover this isomorphism? To be honest, I don’t know of any surefire way to do this. Even inspecting the accepted solutions doesn’t help much, because we
can’t see the process of discovery there. But a possible line of attack might be:
• Play around with multiplication and realize it is commutative, and also associative if you’re lucky. This suggests that it might be isomorphic to some well-known ring.
• Realize that multiplication is symmetric in a and b, and that the Payton “zero” and “unity” both satisfy a = b. This suggests studying the Payton numbers where a = b would be fruitful. If you’re
lucky enough, you might notice that this subring is isomorphic to the ring of ordinary integers ((a,a,c) pairs with the integer 33-2a-c).
• Realize that Payton numbers are really just two-dimensional, and (using the knowledge of a subring isomorphic to the integers) guess that they must be isomorphic to a quadratic integer ring.
Discovering which ring it is might take some experimentation, but I would guess that the worst is behind at this point.
Primes in \mathbb{Z}[\omega]
Notice that one can write \omega = \frac{1+\sqrt{-11}}{2}. First, we show that \mathbb{Z}[\omega] is a Euclidean domain. A Euclidean domain R is an integral domain endowed with a Euclidean function,
which is a function f from R to \mathbb{N} satisfying the following property:
If a and b are in R and b \not= 0, then there exists q and r in R such that a = bq + r and either r = 0 or f(r) < f(b).
In the case of \mathbb{Z}[\omega], the Euclidean function we will use is f(a+b\omega) = a^2+ab+3b^2. Notice that if one writes \omega = \frac{1+\sqrt{-11}}{2} = \frac{1}{2}+\frac{\sqrt{11}}{2}i, then
f(x+yi) is just x^2+y^2, which is just the square of the distance to the origin 0+0i of the complex plane.
Our proof requires a property of the field \mathbb{Q}[\omega] = \{a + b\omega : a, b \in \mathbb{Q}\}. Notice that f can be naturally generalized to include \mathbb{Q}[\omega] in its domain, though
the codomain of f becomes \mathbb{Q}.
For every element x of \mathbb{Q}[\omega], there exists an element n in \mathbb{Z}[\omega] such that f(x-n) < 1.
Let’s call x good if there exists an n in \mathbb{Z}[\omega] such that f(x-n) < 1, so we are trying to prove that all elements of \mathbb{Q}[\omega] are good. The following are facts about goodness
(simple to show):
• Let m\in \mathbb{Z}[\omega]. Then x\in \mathbb{Q}[\omega] is good if and only if x-m is good.
• x\in \mathbb{Q}[\omega] is good if and only if -x is good.
Therefore, if a+b\omega is an element of \mathbb{Q}[\omega], then the element \lfloor a \rfloor + \lfloor b \rfloor \omega is an element of \mathbb{Z}[\omega]. So a+b\omega is good if and only if
(a+b\omega) - (\lfloor a \rfloor + \lfloor b \rfloor \omega) = (a-\lfloor a \rfloor) + (b - \lfloor b \rfloor)\omega is good. Thus we have reduced the general case to the case where 0 \le a < 1 and
0 \le b < 1. Also, note that if a + b > 1, then a+b\omega is good if and only if -a-b\omega is good, if and only if -a-b\omega+(1+\omega) = (1-a)+(1-b)\omega is good. Thus we have further reduced the
case to the case where a \ge 0, b \ge 0 and a + b \le 1.
Finally, assume a \ge 0, b \ge 0 and a + b \le 1. Then, considering a+b\omega as a point in the complex plane with \omega = \frac{1+\sqrt{11}i}{2}, we have that a + b\omega is inside the triangle
bounded by vertices 0+0\omega = 0+0i, 1+0\omega = 1+0i and 0+1\omega = \frac{1}{2} + \frac{\sqrt{11}}{2}i. The vertices are elements of \mathbb{Z}[\omega]. Inside this triangle, the point farthest
from any vertex is the circumcenter of the triangle, which can be easily calculated as \frac{1}{2}+\frac{5}{2\sqrt{11}}i. The norm of this number minus each of the vertices is 9/11 < 1, so the claim
is proven for a+b\omega (take the nearest vertex).
End proof
We can now show that \mathbb{Z}[\omega] is a Euclidean domain. Note that division is defined in \mathbb{Q}[\omega], and that f(ab) = f(a)f(b) for any a and b (Hint: consider a and b as complex numbers).
\mathbb{Z}[\omega] is a Euclidean domain with the Euclidean function f(a+b\omega) = a^2+ab+3b^2.
Let a and b be in \mathbb{Z}[\omega], and b \not= 0. Note that a/b is an element of \mathbb{Q}[\omega]. By the claim above, there exists a q in \mathbb{Z}[\omega] such that f(a/b-q) < 1. Let r = a -
qb. Then f(r/b) < 1, so
f(r) = f(r/b\cdot b) = f(r/b)f(b) < 1f(b) = f(b)
and the Euclidean property is satisfied, QED.
End proof
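To make the division step concrete, here is a small Python sketch (my own illustration, not from the editorial). It represents a + b\omega as an integer pair, using the multiplication rule that follows from \omega^2 = \omega - 3 (a consequence of \omega = \frac{1+\sqrt{-11}}{2}), and tries the four lattice points surrounding the exact quotient — the triangle argument above guarantees one of them gives a remainder of smaller norm:

```python
from fractions import Fraction
from math import floor

def mul(x, y):
    # (a + b*w) * (c + d*w), using w^2 = w - 3
    a, b = x
    c, d = y
    return (a * c - 3 * b * d, a * d + b * c + b * d)

def norm(x):
    # N(a + b*w) = a^2 + a*b + 3*b^2
    a, b = x
    return a * a + a * b + 3 * b * b

def divmod_zw(x, y):
    # exact quotient x/y in Q[w]: multiply by the conjugate (c + d) - d*w
    c, d = y
    n = norm(y)
    num = mul(x, (c + d, -d))
    qa, qb = Fraction(num[0], n), Fraction(num[1], n)
    # try the four lattice points around the exact quotient and keep the one
    # with the smallest remainder; the triangle argument guarantees that the
    # best remainder has norm strictly less than norm(y)
    best = None
    for da in (0, 1):
        for db in (0, 1):
            q = (floor(qa) + da, floor(qb) + db)
            qy = mul(q, y)
            r = (x[0] - qy[0], x[1] - qy[1])
            if best is None or norm(r) < norm(best[1]):
                best = (q, r)
    return best

q, r = divmod_zw((7, 5), (2, 1))   # divide 7 + 5w by 2 + w
assert norm(r) < norm((2, 1))      # the Euclidean property holds
```

Running this example gives q = (4, 0) and r = (-1, 1), whose norm 3 is smaller than N(2 + \omega) = 9.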
Being a Euclidean domain is very nice. For instance, greatest common divisors are well-defined and always exist. Also, every Euclidean domain is a principal ideal domain (PID). It is a well-known
fact that every PID is a unique factorization domain (UFD), which means that every number can be written uniquely as a product of primes and a unit.
It is also true in UFDs that irreducible elements are the same as prime elements. Note that irreducibility and primality are in general distinct notions, although we are used to the idea that they coincide, because they do coincide in the ring of integers. Also, note that the definition given in the problem statement is actually that of irreducible elements, not prime elements, but since these are
the same in a UFD, there’s no problem.
In number theory, we say that a divides b in R, denoted as a \mid b, if there is a c in R such that ac = b. We say that x \in R is a unit if x \mid 1 in R (in the ring of integers, the units are
precisely -1 and 1, and in the Payton numbers, (4,4,24) and (5,5,24)). A non-unit p is irreducible if p cannot be factorized into non-units (i.e. if p = ab, then either a or b is a unit), and a
non-unit p is prime in R if p \mid ab implies p \mid a or p \mid b.
We now define greatest common divisors. We say that g is a greatest common divisor, or gcd, of a and b if every common divisor of a and b divides g. Note that we say a gcd instead of the gcd because
there can be many greatest common divisors of a and b. For example, if g is a gcd of a and b, then -g is also a gcd. However, it is also true that if g_1 and g_2 are gcd’s of a and b, then g_1 = ug_2
for some unit u (this follows from the easily provable fact that x \mid y and y \mid x implies x = uy for some unit u).
We now show that gcd is well-defined:
Every pair (a,b) of values in a Euclidean domain R has a gcd.
If b is zero, then a is a gcd, because every element r in R divides 0 (because r0 = 0). Therefore, the common divisors of a and 0 are simply the divisors of a. If b is nonzero, then we prove the
claim by induction on f(b) (which makes sense because \mathbb{N} is well-ordered). Let a = bq + r with r = 0 or f(r) < f(b).
If r = 0, then b is a gcd of a and b, because every divisor of b is also a divisor of bq = a. If f(r) < f(b), then by induction, b and r have a gcd, say g. We claim that g is also a gcd of a and b.
This is because every common divisor of a and b also divides a-bq = r, so by definition of gcd, it also divides g, thus proving that g is a gcd of a and b. QED.
End proof
Note that this proof is analogous to the Euclidean gcd algorithm. In fact, this is why R is called a Euclidean domain.
We now prove Bezout’s identity in Euclidean domains.
If g is a gcd of a and b in R, then there exist x and y in R such that ax + by = g.
If b = 0, then a is a common divisor of a and b, and since g is a gcd, a \mid g, therefore, g = ac for some c. This means that ca + 0b = g (i.e. (x,y) = (c,0)). If b is nonzero, then we prove the
claim by induction on f(b). Let a = bq + r with r = 0 or f(r) < f(b).
If r = 0, then b is a common divisor of a and b, and since g is a gcd, b \mid g, therefore g = bc for some c. This means that 0a + cb = g (i.e. (x,y) = (0,c)). If f(r) < f(b), then g is also a gcd of
b and r, so by induction there exist x' and y' such that x'b + y'r = g. Then y'a + (x' - qy')b = g (i.e. (x,y) = (y', x' - qy')). QED.
End proof
Note that this is analogous to the extended Euclidean gcd algorithm.
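The inductive proof mirrors the extended Euclidean algorithm line by line. As an illustration in the simplest Euclidean domain, the integers, a direct Python transcription of the induction (my own sketch, not from the editorial):

```python
def ext_gcd(a, b):
    # returns (g, x, y) with a*x + b*y = g, following the induction above
    if b == 0:
        return (a, 1, 0)          # base case: a*1 + b*0 = a
    q, r = divmod(a, b)           # a = b*q + r
    g, x1, y1 = ext_gcd(b, r)     # inductive step: x1*b + y1*r = g
    # substitute r = a - q*b:  g = y1*a + (x1 - q*y1)*b
    return (g, y1, x1 - q * y1)

g, x, y = ext_gcd(240, 46)
assert g == 2 and 240 * x + 46 * y == 2
```

The recursive return `(g, y1, x1 - q*y1)` is exactly the step (x, y) = (y', x' - qy') in the proof.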
Define the conjugate of a + b\omega as a + b - b\omega, and denote it as (a + b\omega)'. The conjugate has the following nice properties (which are easy to prove and are left as exercises):
• x'' = x
• (x + y)' = x' + y'
• (xy)' = x'y'
• x is prime if and only if x' is prime.
• If x \mid y, then x' \mid y'
• If g is a gcd of a and b, then g' is a gcd of a' and b'
• x is an integer if and only if x' = x
• x + x' is an integer. This quantity is called the trace of x. The trace of a + b\omega is 2a + b.
• xx' is an integer. This quantity is called the norm of x.
The norm is actually important for our purposes. It is denoted as Nx (i.e. Nx = xx'), and has the following properties (which are also easy to prove and are left as exercises):
• N(a+b\omega) = a^2+ab+3b^2. Note that this coincides with the Euclidean function f we used above.
• Nx \ge 0.
• Nx = 0 if and only if x = 0.
• Nx = N(x')
• x \mid Nx
• If x \mid y, then Nx \mid Ny.
• N(xy) = Nx\cdot Ny. It follows that if x is a unit, then Nx = 1.
• Nx = 1 if and only if x = 1 or x = -1. It follows that the only units in \mathbb{Z}[\omega] are 1 and -1.
We now begin to uncover exactly which elements of \mathbb{Z}[\omega] are prime. We begin with the following:
If x is a non-unit in \mathbb{Z}[\omega] and Nx = p for some prime integer p, then x is prime.
If yz = x, then Ny\cdot Nz = N(yz) = Nx = p. Therefore, either Ny or Nz is 1, so either y or z is a unit. Therefore, x is irreducible, so x is prime, QED.
End proof
Next, we show that the norms of prime elements are actually very limited in form:
If x is prime in \mathbb{Z}[\omega], then Nx is either a prime integer or a square of a prime integer.
Factorize the integer Nx as Nx = p_1\cdot p_2\cdots p_k. Now x divides Nx, and since x is prime, x therefore divides some p_i. Therefore, Nx \mid Np_i = p_i^2. So Nx can be 1, p_i or p_i^2. But x is
not a unit, so Nx is a prime or a square of a prime, QED.
End proof
Now, a normal composite integer is also composite in \mathbb{Z}[\omega], because its integer factorization is also valid in \mathbb{Z}[\omega]. However, not all prime integers are primes in \mathbb{Z}[\omega]. The following describes a family of prime integers which are not prime in \mathbb{Z}[\omega].
If p is an odd prime, p \not= \pm 11, and if -11 has a square root mod p, then p is factorable as p = xx', where x and x' are primes in \mathbb{Z}[\omega].
Let a be a square root of -11 mod p (i.e. a^2 \equiv -11 \pmod{p}). Note that p divides a^2+11.
Now, let x be a gcd of p and a+1-2\omega. By Bezout, there exist A and B such that x = Ap+B(a+1-2\omega). By taking conjugates, and noting that p' = p and (a+1-2\omega)' = (a-1+2\omega), we see that
x' is a gcd of p and a-1+2\omega, and x' = Ap + B(a-1+2\omega).
xx' = (Ap+B(a+1-2\omega))(Ap+B(a-1+2\omega))
= A^2p^2+ABp(2a)+B^2(a^2+11)
= p[A^2p+AB(2a)+B^2(a^2+11)/p]
So Nx = xx' is divisible by p, and x is not a unit. Thus x' is also not a unit.
Now, let g be a gcd of x and a-1+2\omega. Thus there exist C and D such that Cx + D(a-1+2\omega) = g. Now, since x \mid (a+1-2\omega), we have g \mid (a+1-2\omega). Thus, g \mid (a+1-2\omega)+(a-1+2\omega) =
2a. Thus, g is a common factor of 2a and p. But 2a and p are coprime, so g \mid 1, and gh = 1 for some h.
Cx + D(a-1+2\omega) = g
Chx + Dh(a-1+2\omega) = gh = 1
Ch(xp) + Dhp(a-1+2\omega) = p
Now, x' divides p, so xx' divides xp. Also, x divides p and x' divides a-1+2\omega, so xx' divides p(a-1+2\omega). Therefore, xx' divides Ch(xp) + Dhp(a-1+2\omega) = p.
Now, Nx = xx' divides p and is divisible by p, therefore xx' = p. Thus, p is product of non-units x and x'. Furthermore, Nx = N(x') = p, so x and x' are prime, QED.
End proof
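For small primes the splitting can also be found by brute force over the norm form: completing the square gives N(a + b\omega) = (a + b/2)^2 + \frac{11}{4}b^2, which bounds both coordinates. A quick Python sketch (illustrative, not part of the editorial):

```python
from math import isqrt

def split_prime(p):
    # search for a + b*w with a^2 + a*b + 3*b^2 = p; one exists iff p splits
    # bounds: norm = (a + b/2)^2 + (11/4)*b^2, so |b| <= sqrt(4p/11)
    bmax = isqrt(4 * p // 11) + 1
    amax = isqrt(p) + bmax
    for b in range(-bmax, bmax + 1):
        for a in range(-amax, amax + 1):
            if a * a + a * b + 3 * b * b == p:
                return (a, b)
    return None          # p stays prime (is inert) in Z[w]

# 5 splits (-11 is a square mod 5), while 7 stays prime
assert split_prime(5) is not None
assert split_prime(7) is None
```

For example, split_prime(5) finds an element of norm 5, giving 5 = xx' as in the claim.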
The following shows that the converse is also true:
If p is a prime and p \not= \pm 11, and -11 doesn’t have a square root mod p, then p is irreducible in \mathbb{Z}[\omega].
We prove the contrapositive.
If p is reducible as xy = p with non-units x and y, then Nx\cdot Ny = Np = p^2. Since x and y are non-units, the only possibility is Nx = Ny = p. Now, let x = a+b\omega. Then:
p = Nx
p = a^2+ab+3b^2
4p = 4a^2+4ab+12b^2
4p = (2a+b)^2+11b^2
0 \equiv (2a+b)^2+11b^2 \pmod{p}
(2a+b)^2 \equiv -11b^2 \pmod{p}
[(2a+b)b^{-1}]^2 \equiv -11 \pmod{p}
This means that (2a+b)b^{-1} \bmod p is a square root of -11 mod p. Note that b is invertible mod p because if p \mid b, then p also divides p - ab - 3b^2 = a^2, so p \mid a. This means that p \mid a
+ b\omega = x, and p^2 = Np \mid Nx = p, a contradiction, QED.
End proof
These two claims cover most of the primes. The only ones not covered are \pm 2 and \pm 11. Luckily, they can be easily checked by hand: 11 and -11 are not prime because -11 = (1-2\omega)^2 and 11 = -
(1-2\omega)^2 = (1-2\omega)(1-2\omega)', and it can easily be shown that 2 and -2 are prime (Hint: Show that there doesn’t exist x such that Nx = 2).
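The hint for \pm 2 can be checked mechanically: completing the square gives N(a + b\omega) = (a + b/2)^2 + \frac{11}{4}b^2, so b \not= 0 forces the norm up to at least 11/4 > 2, and b = 0 would require a^2 = 2, which is impossible. A one-line sanity check in Python over a generous range:

```python
# no a + b*w has norm 2: b = 0 forces a^2 = 2, and |b| >= 1 forces norm >= 11/4
assert all(a * a + a * b + 3 * b * b != 2
           for a in range(-10, 11) for b in range(-10, 11))
```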
Next, we describe the rest of the prime elements of \mathbb{Z}[\omega]. Let x be a prime. We have shown above that Nx should be a prime or a square of a prime. If Nx is prime, we have shown
above that x is prime. Now, what if Nx is the square of a prime? The following shows that x is an integer:
If x is prime and Nx = p^2, then x = \pm p.
First, if p cannot be factored in \mathbb{Z}[\omega], then the only nontrivial factorizations of p^2 are p\cdot p and -p\cdot -p. Since xx' = p^2, then x = \pm p.
If p can be factorized in \mathbb{Z}[\omega] into two primes p = yy', then p^2 = yy'yy' = y^2(y')^2. Thus, the only factors of p^2 whose norm is p^2 are \pm y^2, \pm yy' and \pm (y')^2, and x can
only be one of those. But none of those are prime, so this case is impossible.
End proof
We can now combine all of the above to characterize the primes in \mathbb{Z}[\omega]. Let x be an element of \mathbb{Z}[\omega]. Then:
• If x is not an integer, then x is prime if and only if Nx is a prime integer.
• If x is an integer, then x is prime if and only if x is a prime integer and either x = \pm 2, or x \not= \pm 11 and -11 is not a quadratic residue modulo |x|.
Checking whether an integer p is prime can be done with Miller-Rabin.
Checking whether -11 is a quadratic residue modulo a prime p can be done using the Legendre symbol \left(\frac{a}{p}\right), which is defined as:
\left(\frac{a}{p}\right) = \begin{cases} 0 & \text{if $p \mid a$} \\ 1 & \text{if $p \nmid a$ and $a$ is a quadratic residue modulo $p$} \\ -1 & \text{if $p \nmid a$ and $a$ is not a quadratic residue modulo $p$} \end{cases}
By Euler’s criterion, this can be calculated in O(\log p) time as:
\left(\frac{a}{p}\right) \equiv a^{\frac{p-1}{2}} \pmod{p}
Note that you could also use the law of quadratic reciprocity to more quickly check whether -11 is a quadratic residue: -11 is a quadratic residue modulo an odd prime p if and only if p \equiv 0, 1, 3, 4, 5, \text{ or } 9 \pmod{11} (i.e. those odd p which are quadratic residues modulo 11).
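Both tests are easy to compare numerically. A small Python check (my own illustration) confirming that Euler's criterion and the residue-class shortcut agree for odd primes other than 11:

```python
def qr_euler(p):
    # Euler's criterion: (-11)^((p-1)/2) is 1 mod p exactly for residues
    return pow(-11 % p, (p - 1) // 2, p) == 1

def qr_shortcut(p):
    # quadratic reciprocity reduces the test to p mod 11
    return p % 11 in {1, 3, 4, 5, 9}

# the two tests agree on odd primes other than 11
for p in [3, 5, 7, 13, 17, 19, 23, 29, 31, 37]:
    assert qr_euler(p) == qr_shortcut(p)
```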
We now summarize the algorithm.
def is_prime_integer(k) {
    // returns whether k is a prime integer.
    // you can use something like Miller-Rabin for this.
}

def is_prime(a, b, c) {
    A := 33 - 2*a - c
    B := b - a
    if (B = 0) {
        return |A| = 2 or (
            is_prime_integer(|A|) and
            modpow(-11, (|A|-1)/2, |A|) = |A|-1
        )
    } else {
        return is_prime_integer(A*A + A*B + 3*B*B)
    }
}
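The same algorithm in runnable Python, with a naive trial-division primality test standing in for Miller–Rabin (this sketch is my own; for contest limits you would substitute a proper Miller–Rabin, and the sample triples below follow the A, B mapping from the pseudocode):

```python
def is_prime_integer(k):
    # stand-in for Miller-Rabin: naive trial division (fine for small k)
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

def is_payton_prime(a, b, c):
    A = 33 - 2 * a - c
    B = b - a
    if B == 0:
        # integer case: prime iff |A| = 2, or |A| is a prime integer with
        # -11 a non-residue mod |A| (Euler's criterion yields |A| - 1);
        # |A| = 11 fails automatically since -11 is 0 mod 11
        m = abs(A)
        return m == 2 or (
            is_prime_integer(m) and pow(-11 % m, (m - 1) // 2, m) == m - 1
        )
    # non-integer case: prime iff the norm is a prime integer
    return is_prime_integer(A * A + A * B + 3 * B * B)

assert is_payton_prime(1, 1, 24)        # maps to 7; -11 is not a square mod 7
assert not is_payton_prime(2, 2, 24)    # maps to 5, which splits as x*x'
assert is_payton_prime(4, 2, 24)        # norm 1 - 2 + 12 = 11, a prime
assert not is_payton_prime(4, 4, 24)    # the unit 1
```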
7 Likes
It looks like an amazing problem, but the problem statement is repulsive. I(and probably others too) didn’t care to try this problem because of that reason. It would have been much better if the
problem had been cleaner.
2 Likes
Oh man. In high school I was very passionate about mathematics and number theory, now I’m on a third year of bachelor mathematical studies and I attended Algebra I and Algebra II, so as you can see,
I’m also a huge fan of stuff like quadratic residues, quadratic fields and algebrical thingies like PID’s, UFD’s and so on, so everything from “Primes in Z[w]” on looks amazing for me, but you have
just made a terrible thing. From a problem that is beautiful from a mathematical point of view you created literally the biggest shit I have ever seen. If I had been told to check whether something is prime
in some quadratic ring I would have probably delved into it and had much fun, but those rules of multiplication were so incredibly ugly that I haven’t even read them properly. What was your goal
when expressing the problem in such a repulsive way!? The main goal of problem creators should be to provide fun for competitors (because what are other goals?) and problems should be adjusted to achieve
this. I learned that the hard way when I gave one nice problem for PolishOI. It contained some tricky divide-and-conquer algorithm with sweeping line, but nobody even approached it, because I
included a little bit of geometry there. That geometry was really not a big obstacle, but people just got scared and nobody even wondered how to solve the main part of the problem (you can view it
here: Parked | Free Dynamic DNS | Dynu Systems, Inc.). Here is even more severe case, because that geometry wasn’t that bad either way and here we are given something incredibly tedious and we can’t
see the beauty of Z[w] behind those ugly formulas.
I’ll let you know that I registered on CodeChef a while ago just to express my sorrow that such a nice problem was wasted in such a stupid way… I like your math problems very much, but, but… Never do
that again. Don’t hide the core of the problem behind something which might scare people away.
6 Likes
It rarely happens that authors provide detailed explanations for a problem. And when they do, they are met with such criticism as ‘Question being repulsive’. I believe there is a better way to
criticize the work of someone. And while we do it, let’s not forget that the author took his precious time to actually explain the solution in detail. So while we criticize the problem, we
shouldn’t forget to commend the author on putting such an amazing editorial.
Honestly, this will be the same author who will probably create a different problem in some other contest and don’t even bother to write a single line of explanation for the solution, especially when
people treat someone’s hard work this way.
To the author: You did an amazing job on the editorial. Thank you. Please understand, it is these kind of editorials that help beginners like us to learn and progress. I and I am sure many others owe
you for the hard work on the editorial.
6 Likes
In my opinion, “the problem statement is repulsive” kinds of arguments can be considered valid if the contest is of very short duration. To me, the problem statement was quite concise and clear. Some
people have said that it would have been great if he had directly asked us to check primes in Z[w] without making us find what Z[w] is for the current case (i.e. the isomorphism from S to Z[w]). I
humbly disagree with them; cracking the actual representation was a crucial part of the problem. It might differ from person to person, but I think figuring out the actual isomorphism for the current
case is certainly interesting. The problem was quite non-standard for CodeChef, but CodeChef has had a lot of amazingly nice number theory problems which deal with very advanced subjects. In my view, this is
one of my favorite problems on CodeChef. It clarifies a lot of fundamental concepts in number theory.
It would be really interesting to see opinions of people who have solved it.
4 Likes
The problem seems to become approachable once one makes the reduction from the given multiplication algorithm to Z[w]. I looked at most of the accepted solutions and it seems that they all noticed
this reduction (to something similar to Z[w]). I, for one, didn’t notice any such reduction. Not that I didn’t look for one, but the fact that the multiplication algorithm made use of floor(s/2)
discouraged me (I didn’t think that the effects of the floor function could be achieved by normal multiplication). Thus, my solution is very different from anything written in this editorial (but it
took me way longer to solve the problem than I would have liked). One important difference between my solution and the one in the editorial is that in my solution I actually factorize each given
Payton number (i.e. I decide that (A,B,C) is not prime if I find a pair of Payton numbers (X,Y,Z) and (U,V,W) such that (X,Y,Z) x (U,V,W) = (A,B,C)). The trick, of course, was to limit the amount of
candidates I had to test.
It turns out that these candidates (which are very few in number) could be derived from the solutions of two separate two-variable quadratic equations: F(x,y)=0 and G(v,w)=0. One of them has an easy
form: f1 * x * y + f2 * x + f3 * y + f4 = 0 (the solutions can be found by considering the divisors of some combination of the coefficients). The other one is tougher: w^2 + 11 * v^2 = N, where N is
a 64-bit composite number (and depends on the given Payton number (A,B,C)). I used the results from a paper to solve the second equation. It requires factorizing N into prime powers, finding
quadratic residues modulo those powers and combining the results using Chinese remainder theorem (CRT). For each of the numbers obtained through CRT (there are around 2^r such numbers, where r is the
number of different prime divisors of N) one can use something similar to Euclid’s algorithm to find a valid solution (v,w) (in general there are very few such solutions, which is what helped in
having a very small number of candidates to test). Out of all these steps, factorizing N was the slowest part, because there could be cases where N had two large prime factors. In general, not all
the test cases lead to Ns which were hard to decompose. In the end I remained with 2 test files which had many such Ns and for which I was getting TLEs (I was about 5 times slower than needed on
these 2 files, while all the others tests passed within the time limit). I needed between half a day and a day (including sleep) to optimize my solution enough to squeeze it into the time limit (in
the end I had a test which ran in 9.9 seconds out of the 10 seconds time limit for Java).
By the way, I have a question for the author. Why was the time limit set to 5 seconds (for C/C++) ? I think that the solution described in the editorial takes much less time. Not that I’m complaining
about this
12 Likes
First of all, I really enjoyed the problem, and I think it is totally OK to obfuscate the problem in a 10-day long contest.
Here is how I solved the problem.
I first used a brute force way to verify the test cases given in the problem, and then I found out that
(4, 5, 24) = (5, 5, 24) x (5, 4, 24)
Which means that one of (5, 5, 24) and (5, 4, 24) must be a unit, and indeed it can be verified that (5, 5, 24) divides unity. This gave me the impression that probably (4, 4, 24) corresponds to 1,
and (5, 5, 24) corresponds to -1. Also, it seems that 1 and -1 have the center (9/2, 9/2, 24) between them, which could probably be another zero, had we allowed a and b to be half-integers.
Anyway, I decided to explore the multiplication of the numbers (a, b, c), where a = b.
(a, a, 11) x (b, b, 11) → (11 - 2 (a - 11) (b - 11), 11 - 2 (a - 11) (b - 11), 11)
(a, a, 11) x (b, b, 24) → (11 - 2 (a - 11) (b - 9/2), 11 - 2 (a - 11) (b - 9/2), 11)
(a, a, 24) x (b, b, 24) → (9/2 - 2 (a - 9/2) (b - 9/2), 9/2 - 2 (a - 9/2) (b - 9/2), 24)
From the above expressions it seems that if we do the following transformation
(a, b, 11) → (11 - a, 11 - b)
(a, b, 24) → (9/2 - a, 9/2 - b)
then probably the multiplication could be expressed in a more uniform way, i.e., after the above transformation, we should be able to find nice functions F() and G() such that
(A1, B1) x (A2, B2) = (F(A1, B1, A2, B2), G(A1, B1, A2, B2))
and sure enough, we can find such functions F() and G(), except that they are not that pretty either.
If (A, B) = (A1, B1) x (A2, B2), then
2A = 3 (A1 B2 + A2 B1 - B1 B2) + A1 A2
2B = 9 (A1 B2 + A2 B1 - A1 A2) - 5 B1 B2
I thought that maybe we can take a linear combination of the two equations in such a way that the RHS can be factored nicely, i.e.,
x (2A) + y (2B) = (x1 A1 + y1 B1) x (x2 A2 + y2 B2)
because, then we could find all divisors of LHS, and match them with the RHS, and solve the two linear diophantine equations (using euclidean gcd algorithm) independently.
Let us say z = y/x, then
2A + z (2B) = A1 ((3 + 9z) B2 + (1 - 9z) A2) + B1 ((-3 - 5z) B2 + (3 + 9z) A2)
In order for the RHS to be factorable, we need
(3 + 9z) / (1 - 9z) = (-3 - 5z) / (3 + 9z)
This gives us a quadratic equation in z, whose roots are imaginary numbers,
z = (-4 + i sqrt(11)) / 9, and its conjugate
which means we won’t get nice factors on the RHS, but we will get some factors. After using this value of z to compute the linear combination, we get the following.
2 (A + zB) = (1 - 9z) (A1 + zB1) (A2 + zB2)
we can take the norm of both sides, so we get
9 ((9A - 4B)^2 + 11 B^2) = ((9A1 - 4B1)^2 + 11B1^2) ((9A2 - 4B2)^2 + 11B2^2)
So, if we can find all factors of LHS, we can match them with the two terms on RHS, and try to compute the possible values of A1, B1, A2, B2, i.e., if P is a factor of LHS, then find all integer
solutions of
P = x^2 + 11y^2, and then solve
9A1 - 4B1 = x,
B1 = y,
and check if this is a valid factor. Unfortunately, 11 is not an idoneal number, so we don’t have any nice characterization of numbers of the form (x^2 + 11y^2). This is the point where my number
theory skills gave up, and I decided to just factorize the LHS, and solve the diophantine (x^2 + 11 y^2 = P) using a combination of precomputed solutions and an O(sqrt(P)) time brute force method.
However, solving the quadratic diophantine was not the bottleneck in my approach. The factorization of the LHS was taking 85% of the total time (as mugurelionut@ already noted). Luckily, I found Shanks’
factorization method which is quite fast, if you implement it properly (I didn’t, I copied the code from somewhere in web, which I cannot find again, so I cannot acknowledge the person who wrote it).
It takes around 100 usec to factor a number as large as (2 * 10^15) on a moderate machine.
Anyway, looking at author’s solution I think time limit of 5s was very liberal to allow me to fit the factorization of large numbers into time limit, of course, during the contest I had different
opinion about the time limit.
6 Likes
It’s really a nice problem! I took the form ϕ(a,b,c)=(33-a-b-c)+(a−b)ω and ω^2+3ω+5=0. The other parts were similar. Most properties I used in this problem were reached by analogy with Gaussian
integers and Eisenstein integers without any proof. While solving the problem, I also read something about Euclidean domains and UFDs, which helped a lot in discovering or guessing some conclusions.
3 Likes
Criticism of a problem (a specific, but relevant part of the problem) is not criticism of its editorial.
There’s no better way to criticise than by directly saying what’s wrong. It was repulsive because of the lengthy way multiplication was defined: true. It’s not good: true. Are you saying that’s
wrong, or is this just an Appeal to Emotion?
1 Like
who will probably create a different problem in some other contest and don’t even bother to write a single line of explanation for the solution
In other words, there’s no way the author could actually listen to others and try to improve the part that’s being (validly) complained about. Right?
Have you ever heard the term “constructive criticism”?
3 Likes
Something I have learned on the job: good leaders who have to criticize the employees reporting to them manage to do it in an encouraging way. I am sure what’s being said here is constructive
criticism. But where is the appreciation for the editorial? At the end of the day, whether it is constructive criticism or pure bashing, if you deliver it in a way that discourages the person
concerned, then you have failed at your attempt and simply vented your frustration, that’s all.
Personal experience: I work for Microsoft, and in one project I was working with a veteran (23 years). I worked hard to come up with a pattern that was usable for intrusion detection using ML, but this
person started challenging me. He made me realize the pattern generates a lot of false positives. Now he could have bashed me, saying I didn’t validate. Instead he thanked me for my hard work but asked
me to write emails using softer words like ‘I propose’ rather than ‘must be’. That is also constructive criticism my friend, delivered well. Not discouraging, but making me learn and self-reflect!
1 Like
If it discourages the author from adding extremely technical parts to his problems, then it’s good.
There’s a huge difference between “on the job” and “programming contests over the Internet” talks: the former needs to build a good relationship between people who meet each other daily, the latter
needs being heard and remembered. Saying absolutely nothing in a nice tone isn’t a good way to have your point remembered.
You should get used to some answers to editorial posts to be on the problems themselves, not the editorials. There’s just no other place to post them. And it’s not like an answer posted here has to
talk about the editorial.
These can be considered “editorial appreciation”, though: “It looks like an amazing problem” and “everything from “Primes in Z[w]” on looks amazing for me”. If the statement is repulsive, where else
would that come from than from the editorial?
@dpraveen: I think the problem was fine as it was - though much more difficult than it would have been if Z[w] had been explicitly mentioned in the problem statement. Cracking the isomorphism was one
of the toughest parts of this problem. I actually solved the problem without noticing the isomorphism
4 Likes
@djdolls: I will look for Shanks’ factorization method. It seems like it could have also made my life easier
In Shanks’ method almost all operations are done on n-bit integers when factoring a 2n-bit integer, which makes it significantly faster.
I set the time limit to 5 seconds because I thought the most important part of this problem is to find the isomorphism and fast primality testing is only secondary. I even had my Python code
Plus I wasn’t really aware that it could be cracked with advanced factorization, but to me, your solutions are still good as accepted, because of the huge amount of effort involved
1 Like
Sorry for the delayed comment, but it has to be a consensus opinion given the topic.
I’m a huge fan of swistakk’s comments and agree that he is partly right, but on the other hand I participated in that contest (and later was a problem setter on CodeChef, was scared of such comments,
and followed the logic “if you did not criticise that problem, then you should not criticise this bayan”) and did not solve that problem during the contest (which means the problem had something
non-trivial), and here it looked more like a nice captcha-problem, so… that is weird. Making one-liners out of real-world objects is not difficult, but try to make something real out of a one-liner
(Algebra?!); briefly, if you have trouble with that, then those are not the troubles you should be looking into…
What is the essence of having a beautiful math core in problems? You solve that core and forget about it forever (and would not even try to come up with something different, because that is part of
the modern scientific method), making that core consensus-like; imagine having those ~10^100 different chess games reduced to O(1) by the latest versions of Stockfish/LeelaChess.
Generate to Defend (G2D) - Chainbase Docs
Generate to Defend (G2D)
For any AI model, adversarial attacks occur because attackers exploit vulnerabilities in the model’s parameters by crafting inputs that cause the model to make incorrect predictions. These attacks
are particularly effective when the model parameters remain static, allowing attackers ample time to probe and understand the model’s weaknesses. The necessity of the Generate to Defend (G2D)
algorithm arises from the need to counteract these attacks by periodically regenerating model parameters, thereby preventing attackers from gaining prolonged access to a consistent set of parameters.
By continually altering the model parameters while maintaining performance, G2D disrupts the attackers’ ability to reliably exploit the model, thus enhancing its robustness against adversarial
threats, similar to frequently changing passwords to increase security.
Generate to Defend (G2D) Algorithm
The G2D algorithm periodically generates new model parameters to prevent adversarial attacks by ensuring that the model parameters are not exposed for extended periods. The key idea is to tailor a
diffusion model to create new parameters that maintain the model’s performance while being sufficiently different from the original parameters. This is achieved by adding a regularizer to the
diffusion process that maximizes the L1 distance between the generated parameters and the original parameters.
1. Preliminaries of Diffusion Models:
Diffusion models consist of forward and reverse processes indexed by timesteps. We summarize these processes below:
Forward Process: Given a sample $x_0 \sim q(x)$, Gaussian noise is progressively added for $T$ steps to obtain $x_1, x_2, \ldots, x_T$. This process is described by:
$q(x_t | x_{t-1}) = \mathcal{N}(x_t; \sqrt{1 - \beta_t} x_{t-1}, \beta_t I)$
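As a minimal sketch (plain Python, all names illustrative), one forward step draws Gaussian noise and blends it with the previous sample according to the formula above; iterating many steps destroys the original signal:

```python
import math
import random

def forward_step(x_prev, beta, rng):
    # q(x_t | x_{t-1}) = N(sqrt(1 - beta) * x_{t-1}, beta * I), coordinate-wise
    return [math.sqrt(1.0 - beta) * v + math.sqrt(beta) * rng.gauss(0.0, 1.0)
            for v in x_prev]

rng = random.Random(0)
x = [1.0] * 4                     # x_0: the original sample
for t in range(1000):
    x = forward_step(x, beta=0.02, rng=rng)
# the surviving signal is scaled by sqrt(1 - beta)^1000 (about 4e-5),
# so x is now essentially a draw from N(0, I)
```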
Reverse Process: The reverse process aims to train a denoising network to remove the noise from $x_t$, moving backward from $x_T$ to $x_0$:
$p_\theta(x_{t-1} | x_t) = \mathcal{N}(x_{t-1}; \mu_\theta(x_t, t), \Sigma_\theta(x_t, t))$
The denoising network is optimized using the negative log-likelihood:
$L_{dm} = \text{KL}(q(x_{t-1} | x_t, x_0) \parallel p_\theta(x_{t-1} | x_t))$
2. Embedding Model into Compact Space:
To prepare the data, we train an autoencoder to extract latent representations of the model parameters. The encoding and decoding processes are formulated as:
$Z = f_{\text{encoder}}(V + \xi_V, \sigma), \quad V' = f_{\text{decoder}}(Z + \xi_Z, \rho)$
where $V$ is the set of model parameters, $Z$ is the latent representation, $\xi_V$ and $\xi_Z$ are added Gaussian noise, and $\sigma$ and $\rho$ are parameters of the encoder and decoder,
respectively. The autoencoder is trained by minimizing the mean square error (MSE) loss:
$L_{MSE} = \frac{1}{K} \sum_{k=1}^K \| v_k - v'_k \|^2$
3. Training Diffusion Models with Regularizer:
We modify the diffusion process to include a regularizer that maximizes the L1 distance between the generated parameters and the original parameters. The training objective for the diffusion model
with the additional regularizer is given by:
$\theta \leftarrow \theta - \nabla_\theta \left( \| \epsilon - \epsilon_\theta (\sqrt{\bar{\alpha}_t} z_k^0 + \sqrt{1 - \bar{\alpha}_t} \epsilon, t) \|^2 - \lambda \| \theta_{\text{gen}} - \theta_{\text{orig}} \|_1 \right)$
where $\lambda$ is a hyperparameter that controls the importance of the regularizer, $\theta_{\text{gen}}$ are the generated parameters, and $\theta_{\text{orig}}$ are the original parameters.
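As a concrete illustration, the combined objective is the usual denoising loss plus a $\lambda$-weighted L1 term. Since the prose says the regularizer should maximize the L1 distance while the overall loss is minimized, the penalty enters with a negative sign in the sketch below; that sign convention is our reading of the text rather than something it states explicitly, and the arrays simply stand in for the real networks and noise:

```python
import numpy as np

rng = np.random.default_rng(1)

def regularized_loss(eps_true, eps_pred, theta_gen, theta_orig, lam):
    """Denoising MSE minus a lambda-weighted L1 term, so that minimizing the
    loss also pushes the generated parameters away from the original ones."""
    denoise = np.mean((eps_true - eps_pred) ** 2)
    l1_distance = np.sum(np.abs(theta_gen - theta_orig))
    return denoise - lam * l1_distance

eps_true = rng.standard_normal(8)
eps_pred = eps_true + 0.1 * rng.standard_normal(8)  # imperfect denoiser output
theta_orig = rng.standard_normal(32)
theta_near = theta_orig + 0.1   # barely perturbed generated parameters
theta_far = theta_orig + 1.0    # strongly perturbed generated parameters

# Moving the generated parameters farther from the originals lowers the loss.
loss_near = regularized_loss(eps_true, eps_pred, theta_near, theta_orig, lam=0.01)
loss_far = regularized_loss(eps_true, eps_pred, theta_far, theta_orig, lam=0.01)
```

In practice $\lambda$ trades off denoising fidelity against how different the regenerated parameters are allowed to be.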
4. Model Generation:
During inference, random noise is fed into the trained diffusion model and decoder to generate new sets of model parameters:
$\theta_{\text{gen}} = f_{\text{decoder}}( \text{ReverseProcess}(\text{Random Noise}))$
These generated parameters then replace the existing model parameters, so that the parameters are refreshed frequently and the model's resilience against adversarial attacks is maintained. The modified diffusion process ensures that the new parameters differ significantly from the original ones, thereby strengthening the model's defense against adversarial attacks while preserving its performance.
After the model is trained, Step 4 is executed periodically to randomize the AI model's parameters, so that the cost of an attack is significantly increased. The more frequently new parameters are generated, the safer the model becomes. Generating a new parameter set requires only one inference pass through the diffusion process, which keeps the cost low.
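A minimal sketch of the periodic refresh loop described in Step 4. The decoder here is a fixed random map standing in for the trained reverse diffusion process plus decoder, and all names and shapes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

D_LATENT, D_PARAM = 8, 32
W = rng.standard_normal((D_LATENT, D_PARAM)) / np.sqrt(D_LATENT)

def generate_parameters():
    """One refresh: draw latent noise and decode it into a parameter vector.
    (Stand-in for ReverseProcess + f_decoder; a real system would run both.)"""
    z = rng.standard_normal(D_LATENT)
    return np.tanh(z @ W)

theta = generate_parameters()
history = [theta]
for _ in range(5):               # periodic refresh, e.g. once per interval
    theta = generate_parameters()
    history.append(theta)

# Each refresh lands at a nonzero L1 distance from the previous parameter set,
# which is what keeps raising the attacker's cost of tracking the model.
l1_gaps = [np.sum(np.abs(a - b)) for a, b in zip(history, history[1:])]
```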
The goal of ggvolcano is to provide a flexible and customizable solution for generating publication-ready volcano plots in R. It simplifies the process of visualizing differential expression results
from analyses like RNA-seq, making it easier to communicate key findings.
You can install the development version of ggvolcano from GitHub.
This is a basic example showing how to create a volcano plot using the ggvolcano package (the call below assumes the package exports a `ggvolcano()` function taking the data frame as its first argument; check the package documentation for the exact name):

```r
# Load the package
library(ggvolcano)

# Load example data
data <- read.csv(system.file("extdata", "example.csv", package = "ggvolcano"))

# Create a volcano plot
ggvolcano(
  data,
  logFC_col = "log2FoldChange",                 # Column containing log2 fold changes
  pval_col = "pvalue",                          # Column containing p-values
  pval_cutoff = 10e-4,                          # Cutoff for significance in p-values
  logFC_cutoff = 1.5,                           # Cutoff for significance in fold change
  x_limits = c(-5.5, 5.5),                      # X-axis limits
  y_limits = c(0, -log10(10e-12)),              # Y-axis limits
  title = "Example Volcano plot",               # Title of the plot
  caption = "FC cutoff, 1.5; p-value cutoff, 10e-4",  # Caption of the plot
  legend_aes = list(position = "right"),        # Position legend on the right
  gridlines = list(major = TRUE, minor = TRUE), # Show major and minor gridlines
  horizontal_line = 10e-8,                      # Draw a horizontal line at this p-value cutoff
  horizontal_line_aes = list(type = "dashed", color = "red", width = 0.5)  # Aesthetics for horizontal line
)
```
vcov.coxph: Variance-covariance matrix
Extract and return the variance-covariance matrix.
# S3 method for coxph
vcov(object, complete=TRUE, ...)
# S3 method for survreg
vcov(object, complete=TRUE, ...)
logical indicating if the full variance-covariance matrix should be returned. This has an effect only for an over-determined fit where some of the coefficients are undefined, and coef(object)
contains corresponding NA values. If complete=TRUE the returned matrix will have row/column for each coefficient, if FALSE it will contain rows/columns corresponding to the non-missing coefficients.
The coef() function has a similar complete argument.
additional arguments for method functions
For the coxph and survreg functions the returned matrix is a particular generalized inverse: the row and column corresponding to any NA coefficients will be zero. This is a side effect of the generalized Cholesky decomposition used in the underlying computation.
Spelling Checker Ignores Words That Contain Numbers - SpellingNumbers.com
Learning to spell numbers can be difficult, but the right resources make the process easier. Many sources can help you improve your spelling skills, whether you are at school or at work, including books, tips, and games that you can play online.
The Associated Press format
If you write for a newspaper or other printed media, you need to be able to write numbers in the AP style. The AP style gives instructions for how to type numbers and other things to make your work
more succinct.
The Associated Press Stylebook is an extensive collection of updates released since its debut in 1953, and it has now reached its 55th anniversary. It is used by the majority of American newspapers, periodicals, and online news outlets.
A set of guidelines regarding punctuation and the use of language, referred to as AP Style are used frequently in journalism. AP Style’s top practices include capitalization, use of dates and time,
and citations.
Ordinal numbers
An ordinal number uniquely represents a position in a sequence or list. Ordinals are frequently used to indicate size, significance, or the passing of time, and they show the order in which things occur.
Depending on the circumstances, ordinal numbers can be expressed either in words or in figures. What distinguishes them most from other numbers is their distinct suffix.
To make a number ordinal, add the appropriate suffix to the end, such as "th" or "st". The ordinal form of 31, for example, is written as 31st.
An ordinal can be used for many purposes, such as dates or names. It is important to distinguish between an ordinal and a cardinal number.
Millions and billions
Large numbers appear in many contexts, from the stock market to geology, and millions and billions are two common examples. A million is the natural number that comes before 1,000,001, while a billion comes after 999,999,999.
A corporation's annual revenue is often expressed in millions. Millions can also be used to state what a fund, stock, or sum of money is worth. In addition, billions serve as a unit of measurement for a company's stock market capitalization. You can verify your estimates by converting millions to billions with a unit conversion tool.
Fractions
Fractions are used in English to denote particular items or portions of numbers. A fraction is divided into two parts: the numerator and the denominator. The numerator shows how many equal-sized pieces were taken, while the denominator shows how many pieces the whole was divided into.
Fractions can be expressed mathematically or written in words. Be cautious when spelling fractions. This could be difficult in the event that you must use many hyphens, especially when it comes to
larger fractions.
There are a few basic rules to follow when you intend to write fractions in words. The first is to spell out a fraction that appears at the start of a sentence. You can also write fractions in decimal format.
Many Years
Whether you are writing a thesis, a research paper, or an email, you will need to spell numbers, including years, correctly. A few tricks can save you from typing the same number out repeatedly and help you maintain proper formatting.
In formal writing, numbers should follow a consistent style, and numerous style guides offer different guidelines. The Chicago Manual of Style, for example, recommends spelling out numbers from one through one hundred and using numerals for larger values.
Of course, exceptions exist. One example is the American Psychological Association (APA) style guide, a manual widely used in scientific writing.
Time and date
The Associated Press stylebook provides general guidelines on how to style numbers: numerals are used for numbers 10 and greater, while smaller numbers are generally spelled out, though there are exceptions.
The Chicago Manual of Style and the AP stylebook mentioned earlier both offer detailed guidance on numbers, but that does not mean you cannot write well without memorizing them. A stylebook is still the best way to find out what you are not noticing; for instance, you need to make sure you do not drop a letter, such as the "t" in "time."
Math Colloquia - On the resolution of the Gibbs phenomenon
Since Fourier introduced the Fourier series to solve the heat equation, the Fourier or polynomial approximation has served as a useful tool in solving various problems arising in industrial
applications. If the function to approximate with the finite Fourier series is smooth enough, the error between the function and the approximation decays uniformly. If, however, the function is
nonperiodic or has a jump discontinuity, the approximation becomes oscillatory near the jump discontinuity and the error does not decay uniformly anymore. This is known as the Gibbs-Wilbraham
phenomenon. The Gibbs phenomenon is a theoretically well-understood simple phenomenon, but its resolution is not and thus has continuously inspired researchers to develop theories on its resolution.
Resolving the Gibbs phenomenon involves recovering the uniform convergence of the error while keeping the Gibbs oscillations well suppressed. This talk surveys recent progress on the resolution of the Gibbs phenomenon, focusing on how uniform convergence can be recovered from the Fourier partial sum and on numerical implementation. There is no single best methodology for resolving the Gibbs phenomenon; each has its own merits, and their differences become apparent in implementation. The talk also discusses possible issues that arise when these methodologies are implemented numerically.
The talk is intended for a general audience. | {"url":"http://www.math.snu.ac.kr/board/index.php?mid=colloquia&l=en&sort_index=speaker&order_type=asc&page=9&document_srl=765295","timestamp":"2024-11-09T14:07:36Z","content_type":"text/html","content_length":"44825","record_id":"<urn:uuid:474fe7cd-5276-4531-a433-2d4c440cd97b>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00543.warc.gz"} |
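The phenomenon described in the abstract is easy to reproduce numerically: the Fourier partial sums of a unit square wave overshoot near the jump, with a peak approaching $(2/\pi)\,\mathrm{Si}(\pi) \approx 1.179$, and the overshoot does not shrink as more terms are added. A short NumPy illustration (grid and truncation levels are arbitrary choices):

```python
import numpy as np

def square_wave_partial_sum(x, n_max):
    """Fourier partial sum of the square wave sign(sin x):
    S_N(x) = (4/pi) * sum over odd n <= n_max of sin(n x) / n."""
    s = np.zeros_like(x)
    for n in range(1, n_max + 1, 2):   # odd harmonics only
        s += np.sin(n * x) / n
    return 4.0 / np.pi * s

# Fine grid just to the right of the jump at x = 0.
x = np.linspace(1e-4, np.pi / 2, 20000)
peaks = [square_wave_partial_sum(x, n_max).max() for n_max in (51, 201, 801)]
# The peak stays near 1.179 for every truncation level: the error does not
# converge uniformly, which is exactly the Gibbs phenomenon.
```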
Why this Blog?
The aim of this blog is to explain to people (or at least try to explain) what applied mathematics is. We do not claim to be exhaustive, but we will certainly try to capture your attention.
We think that schools teach mathematics in an old-fashioned way, and we try to fill the gap the school system leaves open. Indeed, we try to post articles telling what real mathematics in the real world is.
Moreover, we think that popular science books are not doing a good job. Very often they give a confusing idea of mathematics, comparing it with strange people who spend their whole lives trying to solve problems and prove theorems, or with logic puzzles. These things are obviously fascinating and interesting, but they have little in common with the world of applied mathematicians.
Guaranteed cost control of mobile sensor networks with Markov switching topologies
ISA Transactions ∎ (∎∎∎∎) ∎∎∎–∎∎∎
Yuan Zhao, Ge Guo*, Lei Ding
School of Information Sciences and Technology, Dalian Maritime University, Linghai Road 1#, Dalian 116026, China
Article info

Article history: Received 18 October 2014; received in revised form 29 March 2015; accepted 26 May 2015. This paper was recommended for publication by Y. Chen.

Abstract
This paper investigates the consensus seeking problem of mobile sensor networks (MSNs) with random switching topologies. The network communication topologies are composed of a set of directed graphs
(or digraph) with a spanning tree. The switching of topologies is governed by a Markov chain. The consensus seeking problem is addressed by introducing a global topology-aware linear quadratic (LQ)
cost as the performance measure. By state transformation, the consensus problem is transformed to the stabilization of a Markovian jump system with guaranteed cost. A sufficient condition for global
mean-square consensus is derived in the context of stochastic stability analysis of Markovian jump systems. A computational algorithm is given to synchronously calculate both the sub-optimal consensus controller gains and the sub-minimum upper bound of the cost. The effectiveness of the proposed design method is illustrated by three numerical examples. © 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Keywords: Mobile sensor networks; Consensus; Markov switching topologies; Mean-square stability; Guaranteed cost control
1. Introduction

In the past decade, wireless sensor networks have received a great deal of research attention due to their diverse applications in industrial automation, health monitoring,
environment and climate monitoring, intruder detection, etc., [1]. In a dangerous or hostile environment, sensors cannot be manually deployed and fixed. It is necessary to deploy sensors mounted on
mobile platforms such as unmanned vehicles, mobile robots, and spacecraft or man-made satellites. These sensors can collaborate among themselves to set up a sensing/actuating network, which is called
a mobile sensor network (MSN). A typical MSN consists of hundreds or thousands of mobile sensor nodes distributed over a spatial region. Each sensor node has some level of capability for sensing,
communication, signal processing and movement. MSNs that operate in a distributed manner using small, low-power mobile devices may have a revolutionary impact on many civil and military applications in exploration and monitoring. Because resources are limited, MSNs have tight budgets for their communication, computation and motion sub-capabilities. As a result, power-aware
algorithms have recently been the subjects of extensive research [2–6] regarding various key issues such as localization, deployment, environment estimation and coverage control, rendezvous and
consensus. For example, energy-efficient
localization algorithms were proposed to reposition sensors in desired locations in order to recover or enhance network coverage or to maximize the covered area in [7,8] and [9]. In [8–10], the
power-constrained deployment and coverage control issues were addressed by modeling energy consumption by the total traveling distance of the sensors. In [11] and [12], the vehicle speed management
and the optimization problem of the number of agents for adequate coverage were addressed. In [13], a new algorithm for the maximum distance which an agent could travel by a dynamically changing
energy radius was presented to solve the distributed deployment problem. An energy aware protocol which can prevent the agents from depleting their energy in achieving rendezvous was proposed in
[14]. Consensus seeking, which means a group of mobile sensors achieve agreement upon a common state (i.e., position, velocity and direction), is another interesting problem in cooperative control of
MSNs. There have been many papers studying consensus problems with cost optimization. To mention a few, an optimal consensus control method was proposed in [15] to minimize energy cost of sensors
deployed in intelligent buildings for resource allocation. In [16], by introducing the cost functions to weigh both the consensus regulation performance and the control effort, an LQR consensus
method was derived for multivehicle systems with single integrator dynamics. In [17], an optimal consensus seeking problem was studied in a network of general linear multi-agents. In [18], a two-step
sub-optimal consensus control algorithm guaranteeing minimum energy cost for mobility and communication sub-tasks were derived.
http://dx.doi.org/10.1016/j.isatra.2015.05.013 0019-0578/& 2015 ISA. Published by Elsevier Ltd. All rights reserved.
Please cite this article as: Zhao Y, et al. Guaranteed cost control of mobile sensor networks with Markov switching topologies. ISA Transactions (2015), http://dx.doi.org/10.1016/
However, most of the above studies assume a static communication topology of MSNs. In practice, MSNs may have a dynamic network topology caused by link failure, packet dropout or environmental
disturbances. [19] proposed a theoretical framework to study the consensus problem of multi-agent systems with switching topology. Based on nonnegative matrix theory, [20] and [21] investigated
consensus control of multi-agent systems under dynamic topology. [22] considered a tradeoff between system performance and control effort of multi-agent systems with switching topologies. In some
cases, due to random network conditions or environmental factors (e.g., sea wave, the wind and weather condition, [23]), an MSN may experience a randomly switching communication topology. Recently,
increasing research attention has been paid to multi-agent systems with randomly switching network topologies, especially those with Markov switching topologies [24–27]. For example, [28] studied the
almost sure convergence to consensus for agent network with Markovian switching topologies. By using the pth moment exponential stability theory and M-matrix approach, [29] considered the average
consensus for the wireless sensor networks with Markovian switching topology and stochastic noise. In these results, it is required that Markov chains are ergodic, which implies that the multi-agent
systems experience switching topologies in infinite time horizon. In other words, the systems cannot stay in a certain topology. In many practical applications, it is however more reasonable that
systems may move from switching topologies to a certain fixed topology. An example occurs when the systems pass from an unsteady environment to a settled one. In fact, the control cost of a mobile sensor network depends on the communication and mobility behaviors of the sensors as well as on the network topology. Therefore, for MSNs with Markov switching topologies, it is of great importance to carefully incorporate the network topology into the control cost when setting up a low-cost consensus protocol. However, there are few results available on guaranteed cost
control for consensus of multi-agent systems with Markov switching topologies. In this paper, we aim to investigate the problem of guaranteed cost consensus seeking of MSNs with Markov switching
topologies. We consider a collection of mobile sensors whose dynamics is described by a discrete-time state-space equation. The communication topologies are assumed to be a set of directed graphs
with a spanning tree. The switching of network topology is modeled as a Markov chain. A topology-dependent consensus protocol without local feedback is proposed, where the subtle structural dynamics
of the switching topology is involved. A global LQ cost function depending on the control input and the state errors of neighboring sensors is introduced. Then, using graph theory and model
transformation, the consensus problem with guaranteed cost is transformed to the problem of guaranteed cost stabilization of a reduced order Markov jumping system. A sufficient condition which
guarantees global exponential consensus of the MSN in the mean square sense is derived based on stochastic Lyapunov functional method. A computational algorithm by which the consensus controller
gains and a minimum upper bound of the cost can be calculated is given. The effectiveness of the consensus control method is illustrated by three numerical examples. The remainder of this paper is
organized as follows. Section 2 gives some preliminaries of graph theory and the problem formulation. Section 3 contains the main results on the sufficient condition of consensus and controller design
for MSNs with Markov switching topology. Numerical examples are given in Section 4, which is followed by the conclusion in Section 5.

Notations: $\mathbb{R}^n$ denotes the n-dimensional Euclidean space, and $\mathbb{R}^{n \times m}$ the family of $n \times m$ real matrices. $I_n$ is the identity matrix of dimension n. For a given vector or matrix X, $X^T$ and $\|X\|$ denote its transpose and its Euclidean norm, and $\rho(X)$ denotes the eigenvalues of X. For a square nonsingular matrix X, $X^{-1}$ denotes its inverse. diag{...} stands for a block-diagonal matrix. For symmetric matrices P and Q, $P > Q$ (respectively, $P \ge Q$) means that P − Q is positive definite (positive semi-definite). The sign $\otimes$ represents the matrix Kronecker product. $\mathbf{1}$ denotes a column vector whose entries all equal one, and similar notation is adopted for $\mathbf{0}$. The symmetric elements of a symmetric matrix are denoted by $*$. E(y) and Pro(y) are the mathematical expectation and probability of a stochastic variable y, and $\mathbb{N}^+$ stands for the non-negative integers.
2. Preliminaries and problem formulation

2.1. Preliminaries of graph theory

We use a directed graph (digraph) $G(\upsilon, \varepsilon, \Lambda)$ to model the interactions among sensors, where $\upsilon = \{\upsilon_1, \dots, \upsilon_N\}$ is the set of N nodes, $\varepsilon \subseteq \upsilon \times \upsilon$ is the set of edges, and $\Lambda = [a_{ij}]$ is the adjacency matrix whose elements are associated with the edges: $a_{ij} > 0$ if $(\upsilon_i, \upsilon_j) \in \varepsilon$, and $a_{ij} = 0$ otherwise. In this paper we consider graphs without self-edges, i.e., $a_{ii} = 0$. Each edge $(\upsilon_i, \upsilon_j) \in \varepsilon$ implies that node $\upsilon_i$ can receive information from node $\upsilon_j$. A sequence of edges $(\upsilon_i, \upsilon_k), (\upsilon_k, \upsilon_l), \dots, (\upsilon_m, \upsilon_j)$ is called a directed path from node $\upsilon_j$ to node $\upsilon_i$. A digraph is said to have a spanning tree if there is a root (a node with only children and no parent) such that there is a directed path from the root to every other node in the graph. The set of neighbors of node $\upsilon_i$ is denoted by $N_i = \{\upsilon_j \in \upsilon : (\upsilon_i, \upsilon_j) \in \varepsilon\}$. Define the in-degree of node $\upsilon_i$ as $d_i = \sum_{j=1}^{N} a_{ij}$ and the in-degree matrix as $\Delta = \mathrm{diag}\{d_1, \dots, d_N\}$. The Laplacian matrix of the directed graph G is defined as $L = \Delta - \Lambda$. Accordingly, define the out-degree of node $\upsilon_i$ as $d_i^o = \sum_{j=1}^{N} a_{ji}$ and the out-degree matrix as $\Delta^o = \mathrm{diag}\{d_1^o, \dots, d_N^o\}$. The column Laplacian matrix of the directed graph G is defined as $L^o = \Delta^o - \Lambda^T$. An important property of L is that all of its row sums are zero; thus $\mathbf{1}$ is an eigenvector of L associated with the eigenvalue zero. Zero is a simple eigenvalue of L if and only if the directed graph has a spanning tree, and the other eigenvalues have positive real parts.

2.2. Markov switching topology

Consider a mobile sensor network with N identical sensors. At every instant k, the interconnection of these sensors can be regarded as a directed graph with a spanning tree. The communication topology is not fixed but switches due to certain random events. Assume that the topology switches within a given set of graphs, $G_{\theta(k)} \in G(k)$, $G(k) = \{G_1, G_2, \dots, G_q\}$, where $\{\theta(k), k \in \mathbb{N}^+\}$ is the switching signal. Here $\theta(k) \in S = \{1, \dots, q\}$ is assumed to be a Markov chain taking values in a finite set. Its transition probability is given by

$\mathrm{Pro}\{\theta(k+1) = v \mid \theta(k) = l\} = \pi_{lv}$, with $\mathrm{Pro}(\theta(k) = l) = \pi_l(k)$,

where $l, v \in S$, $\pi_l(k)$ is the probability of topology $G_l$ at time k with initial probability $\mathrm{Pro}(\theta(0) = l) = \pi_l^0$, and $\pi_{lv}$ is the single-step transition probability from mode l to mode v, which satisfies $\sum_{v=1}^{q} \pi_{lv} = 1$. The adjacency matrix $\Lambda(\theta(k))$ and the Laplacian matrix $L(\theta(k))$ of graph $G_{\theta(k)}$ take values in $\{\Lambda_1, \Lambda_2, \dots, \Lambda_q\}$ and $\{L_1, L_2, \dots, L_q\}$, respectively. Denote the topology modal probability distribution by the vector $\Pi(k) = [\pi_1(k), \dots, \pi_q(k)]^T$, with initial probability distribution $\Pi_0 = [\pi_1^0, \dots, \pi_q^0]^T$, and let $\pi = [\pi_{lv}]_{q \times q}$ be the transition probability matrix. Then $\Pi(k+1) = \pi^T \Pi(k)$.
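The recursion $\Pi(k+1) = \pi^T \Pi(k)$ can be checked numerically by simulating $\theta(k)$ from the transition matrix and comparing empirical mode frequencies against the recursion. The two-mode transition matrix below is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative transition matrix over q = 2 topology modes (rows sum to 1).
pi = np.array([[0.9, 0.1],
               [0.3, 0.7]])
Pi0 = np.array([1.0, 0.0])     # start in mode 1 with probability one

def mode_distribution(k):
    """Pi(k) obtained from the recursion Pi(k+1) = pi^T Pi(k)."""
    Pi = Pi0.copy()
    for _ in range(k):
        Pi = pi.T @ Pi
    return Pi

# Monte Carlo estimate of Pro(theta(k) = l) at k = 20 for comparison.
k, runs = 20, 5000
hits = np.zeros(2)
for _ in range(runs):
    mode = 0
    for _ in range(k):
        mode = rng.choice(2, p=pi[mode])
    hits[mode] += 1
empirical = hits / runs
exact = mode_distribution(k)   # close to the stationary distribution here
```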
Remark 1. Since $\theta(k) \in S = \{1, \dots, q\}$ is a finite-state Markov chain, a stationary distribution always exists: such a Markov chain has at least one positive recurrent closed set. Because each topology has a spanning tree, it is not necessary for the Markov chain to be ergodic, which is required in related papers such as [24,26] and [27].

2.3. Problem formulation

The dynamics of sensor i in the MSN is described by

$x_i(k+1) = A x_i(k) + B u_i(k), \quad i = 1, \dots, N, \qquad (1)$

where $u_i(k) \in \mathbb{R}^m$ and $x_i(k) \in \mathbb{R}^n$ are the control input and the state of sensor i, with $x_i(0)$ being the initial state, and $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times m}$ are constant matrices. To make our study nontrivial, we assume that matrix A is not Schur stable, i.e., not all eigenvalues of A lie inside the open unit disk. It is obvious that if A is Schur, the MSN converges to zero and consensus is achieved under a zero consensus gain. As in many other studies [30], it is assumed that a number of mobile base stations are involved in the system to detect the topology information of the MSN and broadcast the knowledge about the topology to the sensors. In other words, this is a practical way for the sensors to know the present topology of the MSN at any time. Therefore, we use the following mode-dependent control protocol for sensor i:

$u_i(k) = K(\theta(k)) \sum_{j \in N_i} a_{ij}(\theta(k)) \big( x_i(k) - x_j(k) \big), \quad \theta(k) \in S, \qquad (2)$

where $K(\theta(k))$ is the controller gain matrix to be determined. Furthermore, the control protocols should guarantee a limited overall cost, defined as

$J = \sum_{k=0}^{\infty} E \Bigg( \sum_{i=1}^{N} \Big( \sum_{j \in N_i(k)} a_{ij}(\theta(k)) \big( x_i(k) - x_j(k) \big)^T Q \big( x_i(k) - x_j(k) \big) + u_i^T(k) R u_i(k) \Big) \Bigg), \qquad (3)$

where $Q \in \mathbb{R}^{n \times n} \ge 0$ and $R \in \mathbb{R}^{m \times m} > 0$ are constant matrices. Before stating the objective of this paper, we first give the following definition.

Definition 1. The MSN is said to achieve mean-square consensus (MS-consensus) under protocol (2) with Markov switching topologies in set G(k) if, for any finite $x_i(0)$ and $\theta(k) \in S = \{1, \dots, q\}$, the following holds for any $i, j = 1, \dots, N$:

$\lim_{k \to \infty} E\big[ \| x_i(k) - x_j(k) \|^2 \big] = 0. \qquad (4)$

Thus, our objective is to design controllers of the form (2) such that the collection of sensors reaches mean-square consensus with $J \le \tilde{J}$, where $\tilde{J}$ is a given cost constraint.

Remark 2. Note that the cost defined in (3) covers both the mobility energy cost and the communication energy cost. To be more specific, let the mobile sensors have mass $m_i$ and dynamics $\dot{x}_i(t) = u_i(t)$, where $x_i(t)$ and $u_i(t)$ denote the position and the velocity of the sensors. The mobility energy of sensor i amounts to $E_M(t) = m_i u_i^2(t)/2$, and the wireless transmission energy is $E_C(t) = l b_i d_{ij}^2(t)$, where $d_{ij}(t) = | x_i(t) - x_j(t) |$ is the distance between sensors i and j [5], l denotes the bits of data transmitted, and $b_i$ is the channel constant. Clearly, the overall energy cost of agent i can be given by $J_i = \sum_{k=0}^{\infty} m_i u_i^2(k)/2 + l b_i \big( x_i(k) - x_j(k) \big)^2$, which is a special case of the cost defined in (3).

Remark 3. Different from the one introduced in [18], the topology-dependent consensus protocol
given in (2) has no local feedback part. Here, we aim to design controllers guaranteeing consensus of MSNs under Markov switching topologies with a
sub-minimum cost bound. In [18], the local feedback part is used in the consensus protocol to prevent the mobility energy from growing infinitely large as time goes to infinity. Remark 4. The consensus
seeking method in [21] based on nonnegative matrix theory requires the joint graph of the communication topologies to have a spanning tree. However, the method is not applicable to the guaranteed
cost consensus control problem with stochastically switching topologies. Besides the entirely different problem setup, the protocol studied in [21] is independent of the topology mode, while the
protocol in this paper is mode-dependent. Furthermore, the method in [21] requires that matrix B be of full rank, while in our method B can be any matrix of compatible dimension.

3. Main results

In this section, we first derive a sufficient MS-consensus condition for MSN (1) with guaranteed cost. Based on this condition, we then give a consensus controller design algorithm. Let $I_k = \{x(t), \theta(t), t = 0, 1, 2, \dots, k\}$ be the admissible information set; clearly $I_k \subseteq I_{k+1} \subseteq I_\infty$. System (1) with protocol (2) can then be rewritten at the network level as

$X(k+1) = \big( I_N \otimes A + L_\theta \otimes B K_\theta \big) X(k), \qquad (5)$

where $X(k) = [x_1^T(k), x_2^T(k), \dots, x_N^T(k)]^T$, the Laplacian matrix $L_\theta = L(\theta(k))$, and the controller gain $K_\theta = K(\theta(k))$. The total energy cost in (3) can accordingly be written as

$J = \sum_{k=0}^{\infty} E\Big[ X^T(k) \big( (L_\theta + L_\theta^o) \otimes Q + L_\theta^T L_\theta \otimes K_\theta^T R K_\theta \big) X(k) \Big], \qquad (6)$

where the column Laplacian matrix $L_\theta^o = L^o(\theta(k))$. The following lemma, a minor extension of the result in [31], will be useful for obtaining the main results.

Lemma 1. Given

$T = \Big[ \tfrac{1}{\sqrt{N}} \mathbf{1}_N \;\; T_o \Big] \in \mathbb{R}^{N \times N}, \qquad (7)$

where $T_o$ is the orthogonal complement of $\mathbf{1}_N$ satisfying $T_o^T T_o = I_{N-1}$, then for any Laplacian matrix $L \in \mathbb{R}^{N \times N}$ of a directed graph, the similarity transformations of L, $L^o$, and $L^T L$ are

$T^T L T = \begin{bmatrix} 0 & \tfrac{1}{\sqrt{N}} \mathbf{1}_N^T L T_o \\ \mathbf{0}_{N-1} & T_o^T L T_o \end{bmatrix}, \quad T^T L^o T = \begin{bmatrix} 0 & \tfrac{1}{\sqrt{N}} \mathbf{1}_N^T L^o T_o \\ \mathbf{0}_{N-1} & T_o^T L^o T_o \end{bmatrix}, \quad T^T L^T L T = \begin{bmatrix} 0 & \mathbf{0}_{N-1}^T \\ \mathbf{0}_{N-1} & T_o^T L^T L T_o \end{bmatrix}. \qquad (8)$

According to Lemma 1, a state transformation can be conducted as

$\tilde{X}(k) = (T \otimes I_n)^T X(k). \qquad (9)$

Partitioning $\tilde{X}(k) \in \mathbb{R}^{nN}$ into two parts, $\tilde{X}(k) = [\tilde{X}_1^T(k), \tilde{X}_2^T(k)]^T$, where $\tilde{X}_1(k) \in \mathbb{R}^n$ is the vector consisting of the first n elements of $\tilde{X}(k)$, the dynamics of system (5) can be written as

$\tilde{X}_1(k+1) = A \tilde{X}_1(k) + (\Theta_\theta \otimes B K_\theta) \tilde{X}_2(k), \qquad (10)$

$\tilde{X}_2(k+1) = \big( I_{N-1} \otimes A + \Phi_\theta \otimes B K_\theta \big) \tilde{X}_2(k), \qquad (11)$

where $\Theta_\theta = \tfrac{1}{\sqrt{N}} \mathbf{1}^T L_\theta T_o$ and $\Phi_\theta = T_o^T L_\theta T_o$. By [26], MS-consensus of the MSN is achieved if system (11) is mean-square stable, which will be defined later. The energy cost (6) can also be written as

$J = \sum_{k=0}^{\infty} E\Big( \tilde{X}_2^T(k) \big( (\Phi_\theta + \Phi_\theta^o) \otimes Q + \Phi_\theta^T \Phi_\theta \otimes K_\theta^T R K_\theta \big) \tilde{X}_2(k) \Big), \qquad (12)$
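Lemma 1 and the reduced-order transformation can be sanity-checked numerically: build $T = [\tfrac{1}{\sqrt{N}}\mathbf{1} \;\; T_o]$ from an orthonormal basis of the complement of $\mathbf{1}$ and verify that the first column of $T^T L T$ vanishes for a digraph Laplacian. The 3-node digraph below is an illustrative assumption:

```python
import numpy as np

# Illustrative 3-node digraph with a spanning tree rooted at node 1:
# node 2 listens to node 1, node 3 listens to node 2 (a_ij > 0 iff i hears j).
Adj = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0]])
N = Adj.shape[0]
L = np.diag(Adj.sum(axis=1)) - Adj   # Laplacian: in-degree matrix minus adjacency

# Build T = [1/sqrt(N) * 1 | T_o]: QR on [1/sqrt(N) * 1 | I] keeps the first
# column (up to sign) and fills the remaining columns with an orthonormal
# basis of span{1}^perp.
ones = np.ones((N, 1)) / np.sqrt(N)
Q, _ = np.linalg.qr(np.hstack([ones, np.eye(N)]))
T = Q[:, :N]
To = T[:, 1:]

M = T.T @ L @ T                      # should have a zero first column
Phi = To.T @ L @ To                  # reduced-order block driving consensus
```

Because the row sums of L are zero, the first column of M vanishes, and for a graph with a spanning tree the eigenvalues of Phi all have positive real parts.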
where $\Phi_\theta^o = T_o^T L_\theta^o T_o$. Note that, by Lemma 1, it is easy to verify that T is an orthogonal matrix, and an orthogonal transformation does not change the norm. Therefore, the energy cost J is not altered by the state transformation. Based on the above discussion, the consensus problem of system (5) is transformed into the stability problem of the reduced-order system (11) with cost function (12). Here we are interested in exponential mean-square stability, defined below.

Definition 2 [32]. System (11) is said to be exponentially mean-square stable (MSS) with Markovian topology set G(k) if

$E\big[ \| \tilde{X}_2(k) \|^2 \big] \le \beta \zeta^k \| \tilde{X}_2(0) \|_2^2, \quad k = 0, 1, \dots, \qquad (13)$

for any finite $\tilde{X}_2(0)$, with $\beta \ge 1$ and $0 < \zeta < 1$.

We are now in a position to establish a sufficient MSS condition for system (11) with a suboptimal guaranteed energy cost $J^*$.

Theorem 1. MSN (1) with controllers (2) achieves mean-square consensus under the Markovian switching topology set G(k) if there exist matrices $P_l > 0$, $P_l \in \mathbb{R}^{(N-1)n \times (N-1)n}$, $Q > 0$, $R > 0$, $l \in S$, such that the following condition holds:

$\Psi_l^T \Big( \sum_{v \in S} \pi_{lv} P_v \Big) \Psi_l - P_l + Q_l + Q_l^o + (I_{N-1} \otimes K_l)^T R_l (I_{N-1} \otimes K_l) < 0, \quad \forall l \in S, \qquad (14)$

where $\Psi_l = I_{N-1} \otimes A + \Phi_l \otimes B K_l$, $\Phi_l = T_o^T L_l T_o$, $L_l = L(\theta(k) = l)$, $Q_l = \Phi_l \otimes Q$, $Q_l^o = \Phi_l^o \otimes Q$, $\Phi_l^o = T_o^T L_l^o T_o$, $R_l = (\Phi_l^T \Phi_l) \otimes R$, and $T_o$ is the orthogonal basis for the null space of $\mathbf{1}$. Moreover, the guaranteed cost upper bound is

$\tilde{J} = \sup J = \tilde{X}_2^T(0) \Big( \sum_{v \in S} \pi_v^0 P_v \Big) \tilde{X}_2(0).$
(18), we can get EðVðkÞj I k 2 Þ ¼ EðEðV ðkÞj I k 1 Þj I k 2 Þ r ζ EðVðk 1Þj I k 2 Þ r ζ Vðk 2Þ: 2
ð19Þ By recursion like (17)–(19), we can have that T k k T EðVðkÞÞ ¼ EðX~ 2 ðkÞP θðkÞ X~ 2 ðkÞÞ r ζ V ð0Þ ¼ ζ X~ 2 ð0ÞP θð0Þ X~ 2 ðkÞ;
from which it is straightforward that T k T EðX~ 2 ðkÞP θðkÞ X~ 2 ðkÞÞ r βζ X~ 2 ð0ÞX~ 2 ð0Þ;
where β ¼ λmax ðP θð0Þ Þ=λmin ðP θð0Þ Þ 41, 0 o ζ o 1. Therefore, by Definition 2, the reduced system in (11) is exponentially mean square stable, namely, the MSNs under Markov switching topologies
GðkÞ can achieve mean square consensus under controller (2) We then proceed to prove the guaranteed cost. By accumulating both sides of inequality (15) from k ¼0 to infinite, we can have J¼
1 X k¼0
T T EðX~ 2 ðkÞF l X~ 2 ðkÞÞ r E Vð0; θð0ÞÞ ¼ EðX~ 2 ð0ÞP θð0Þ X~ 2 ð0ÞÞ; 8 l A S: ð22Þ
By (22), we can achieve the following performance bound X T π 0v P v X~ 2 ð0Þ; 8 l A S; J~ ¼ sup J ¼ X~ 2 ð0Þ S
where X~ 2 ð0Þ ¼ ðT o I n ÞT X 2 ð0Þ. This completes the proof. Remark 5. From Theorem 1, it is easily seen that T Q l þ Q ol ¼ ðT To ðLl þ Lol ÞT o Þ Q , and Q l þ Q ol ¼ ðT To ðLl þ Lol ÞT T o Þ Q
T . Since, by the definitions of the Laplacian and column Laplacian matrices of directed graphs, Q is a symmetric matrix, we know that T Q l þ Q ol ¼ Q l þ Q ol , 8 l A S. Furthermore, as each
topology has a spanning tree, zero is a simple eigenvalue of Ll and hence Φl ¼ T o T Ll T o has no zero eigenvalue. By the properties of Kronecker T product, ρðRl Þ ¼ ρðΦl Φl ÞρðRÞ. As R40, it is
clear that Rl is reversible, 8 l A S.
Let θðkÞ ¼ l, θðk þ 1Þ ¼ v, l; v A S. Then, the backward difference of (14) is obtained as
Remark 6. This paper considers that each topology has a spanning tree. The results can be easily reduced to the case with connected undirected topologies. For a connected undirected graph, the
formulations will be simplified since Lθ ¼ LTθ ¼ Loθ , 2 N i T T Lθ T ¼ diagð 0 Λθ Þ, Λθ ¼ diagð λθ ; ⋯; λθ Þ, λθ 4 0, i¼2, …, N, o Φl ¼ Λl , Ll ¼ Ll . In this case, the cost function in (6) will
become 1 X E X T ðkÞ 2Lθ Q þ LTθ Lθ K Tθ RK θ XðkÞ ; J¼
ΔVðkÞ ¼ EðV ðk þ1; j I k Þ VðkÞ
and system (11) will be reduced to
where X~ 2 ð0Þ ¼ ðT o I n ÞT X 2 ð0Þ. Proof. We first prove the stability of system (11). Define the following stochastic Lyapunov functional: T Vðk; θðkÞÞ ¼ X~ 2 ðkÞP θðkÞ X~ 2 ðkÞ; θðkÞ A S ¼ f1; ⋯;
T T ¼ EðX~ 2 ðk þ 1ÞP v X~ 2 ðk þ1Þj I k Þ X~ 2 ðkÞP l X~ 2 ðkÞ ! X T T ¼ X~ ðkÞ Ψ π P v Ψ P X~ 2 ðkÞ; 2
X~ 2 ðk þ 1Þ ¼ ðI N 1 A þ Λθ BK θ ÞX~ 2 ðkÞ: ð15Þ
where Ψ l ¼ I N 1 A þ Φl BK l . If (13) holds, we can have
ΔV r X~ 2 ðkÞF l X~ 2 ðkÞ; T
whereF l ¼ Q l þ Q ol þ ðI N 1
ð16Þ T
K l Þ Rl ðI N 1 K l Þ. Therefore, by (15) and (16), we can obtain that
T λ ðF Þ EðV ðk þ1j I k Þ o VðkÞ X~ 2 ðkÞF l X~ 2 ðkÞ r 1 min l VðkÞ ¼ ζ VðkÞ: λmax ðP l Þ ð17Þ
If inequality (13) holds, it is clearly seen that λmax ðF l Þ o λmax ðP l Þ, 0 o ζ o 1. To get the MSS condition in Definition 2, similarly, we can have the following equality EðV ðkÞj I k 1 Þ ¼ ζ Vðk
According to smooth characteristics of conditional mean [33], by
The presentation of communication cost in this case will be much less complex than our case. In what follows, we will give a controller design method. For this purpose, the following sufficient
condition for the existence of the stochastic controller gain is derived based on stability criterion in Theorem 1. Theorem 2. For MSN (1) under Markovian switching topologies set GðkÞ, the protocols
in (2) can drive the system to reach MS-consensus, if there exist matrices P l , M l A RðN 1ÞnðN 1Þn , P l 4 0, M l 40, such that the following LMIs 2 3 T P l þ Q l þ Q lo ðI N 1 K l ÞT π Tl Ψ l 6 7
6 IN 1 K l ð24Þ Rl 1 0 7 4 5 o 0: ~ πlΨ l 0 M h 1=2 i 1=2 1=2 T for all l A S, where π l ¼ π l1 π l2 ⋯ π lq , n o 1 ~ M M ⋯ M q with the constraint M l ¼ P l , l A S. M ¼ diag 1 2
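Definition 2 can be probed empirically: the sketch below simulates a toy Markov jump linear system (illustrative mode matrices and transition probabilities, not the paper's) and estimates E‖x(k)‖², which should decay geometrically when a condition like (13) holds:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Markov jump linear system x(k+1) = A_theta x(k): two modes, both
# chosen stable here so the mean square decay of Definition 2 is visible.
A = [np.array([[0.9, 0.1], [0.0, 0.8]]),
     np.array([[0.7, 0.0], [0.2, 0.6]])]
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])  # transition probability matrix of the chain

def mean_square_norm(k_max=30, trials=2000):
    """Monte Carlo estimate of E||x(k)||^2 over sample paths of theta."""
    acc = np.zeros(k_max + 1)
    for _ in range(trials):
        x = np.array([1.0, -1.0])
        mode = 0
        for k in range(k_max + 1):
            acc[k] += x @ x
            x = A[mode] @ x
            mode = rng.choice(2, p=P[mode])
    return acc / trials

ms = mean_square_norm()
# E||x(k)||^2 should be bounded by beta * zeta^k * ||x(0)||^2 for some
# beta >= 1, 0 < zeta < 1; check substantial decay by k = 30.
print(ms[0], ms[30])
assert ms[30] < 1e-2 * ms[0]
```

Here both mode matrices are individually stable, so decay is guaranteed; the interesting regime covered by Theorem 1 is when stability must hold only in the mean square sense across the switching.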
Proof. Theorem 2 can be easily proved by using Theorem 1 and the Schur complement lemma, so it is omitted.

As for the cost bound of the MSNs, a feasible one was obtained in Theorem 1. Since the battery capacity of the MSNs is limited, a minimum cost bound is preferable. Thus, we give the minimum guaranteed cost by minimizing the upper bound in (23) over the feasibility set Ω = {(P_l, M_l, K_l), l ∈ S} determined by Theorem 2. Namely, we aim to find the infimum of J̃:

J* = inf_Ω J̃ = inf_Ω X̃_2^T(0) ( Σ_{v∈S} π_0v P_v ) X̃_2(0).   (25)

The minimization problem can be formulated as

min δ  s.t.  δ ≥ X̃_2^T(0) ( Σ_{v∈S} π_0v P_v ) X̃_2(0),   (26)

which, by the Schur complement, is equivalent to

[ δ               (π_0 ⊗ X̃_2(0))^T ;
  π_0 ⊗ X̃_2(0)   M̃                 ] ≥ 0,   (28)

where π_0 = [π_01^{1/2}, π_02^{1/2}, …, π_0q^{1/2}]^T. The conditions in Theorem 2 contain matrix-inversion constraints, which are equivalent to the rank-constrained LMI

[ P_l  * ;  I  M_l ] ≥ 0,  rank[ P_l  * ;  I  M_l ] ≤ (N−1)n,  l ∈ S.   (29)

Therefore, the above minimization problem can be reduced to

min δ  s.t. (24), (28) and (29).   (30)

To solve the rank-constrained LMI in (29), we can use the LMIRank solver [34], called through the YALMIP interface with the underlying SeDuMi solver. However, LMIRank does not support objective functions; it only solves feasibility problems. To simultaneously determine the controller gain matrices and the sub-minimum energy cost, we propose Algorithm 1.

Algorithm 1.
Step 1: Set the initial states x_i(0) (i = 1, …, N), the weight matrices Q and R, and the computational accuracy ε. Let a = 0.
Step 2: Find a feasible value δ̄ of δ by solving LMIs (24), (28) and the rank-constrained LMI (29); set b = δ̄.
Step 3: Let c = (a + b)/2 and δ = c. Solve LMIs (24), (28) and the rank-constrained LMI (29); if feasible solutions P_l and K_l (l ∈ S) exist, set b = c; otherwise, set a = c.
Step 4: If |a − b| < ε, output δ, P_l and K_l; otherwise, go to Step 3.

Remark 7. In this paper, ergodicity (0 < π_lv < 1, l, v ∈ S) of the Markov chain is no longer required, since each topology is assumed to have a spanning tree. If π_0l = π_ll = 1, the condition reduces to the case of a fixed topology G_l (namely, the sensors keep the initial topology all the time). If π_0l = 0 and π_ll = 1 (i.e., G_l is an absorbing state), the system goes from a switching topology to a fixed topology. This means that the method presented in this paper is more general than existing results, since it covers all the cases of fixed topology, switching topology, and switching-to-fixed topology. In other words, using Theorem 2 we can design mode-dependent consensus controllers for all these topologies, while the existing results apply only to the cases of a fixed topology or a switching topology [24–27].

4. Numerical examples

In this section, we present three numerical examples to illustrate the effectiveness of the proposed consensus protocol.

Example 1. Consider a team of three identical sensors whose dynamics can be described in the form of (1) with the following parameters [35]:

r_i(k+1) = r_i(k) + h v_i(k) + (h²/2) u_i(k),
v_i(k+1) = v_i(k) + h u_i(k),   (31)

where r_i(k), v_i(k) and u_i(k) are, respectively, the position, velocity and control input of sensor i at time t = kh, and h is the sampling interval. When h = 0.6, we have for the system in (1)

A = [ 1  0.6 ;  0  1 ],  B = [ 0.18 ;  0.6 ].

Choose the initial states as x_1(0) = [4, 3]^T, x_2(0) = [3.6, 5]^T, and x_3(0) = [1.7, 2]^T. In the forthcoming simulation we use Q = 2I_2 and R = 0.5. All the possible information transmission relationships among the sensors are given as a group of three directed graphs, each containing a spanning tree (shown in Fig. 1). For simplicity, assuming that all the weights are equal to 1, the three Laplacian matrices L_1, L_2, L_3 and the corresponding column Laplacian matrices L_1^o, L_2^o, L_3^o of the graphs in Fig. 1 follow directly from the edge sets. We assume that the communication topology of the MSN switches according to a Markov chain.

Fig. 1. Directed graph set G(k) = {G_1, G_2, G_3}.
Fig. 2. Markov switching sequences of communication topologies in G(k).
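The bisection at the heart of Algorithm 1 can be sketched as follows; `lmi_feasible` is a placeholder for the LMIRank/YALMIP feasibility check of (24), (28) and (29), and the toy oracle below merely mimics the cost bound reported in Example 1 (it is not an LMI solve):

```python
def bisect_cost_bound(lmi_feasible, b0, eps=1e-4):
    """Bisection of Algorithm 1: shrink [a, b] around the smallest delta
    for which the LMIs (24), (28), (29) remain feasible.

    lmi_feasible(delta) -> bool stands in for the LMI solver call.
    b0 is an initial feasible delta found in Step 2.
    """
    a, b = 0.0, b0
    while abs(a - b) >= eps:
        c = 0.5 * (a + b)
        if lmi_feasible(c):
            b = c          # feasible: tighten the upper end (Step 3)
        else:
            a = c          # infeasible: raise the lower end
    return b               # sub-minimum guaranteed cost bound (Step 4)

# Toy oracle: pretend the LMIs are feasible iff delta >= 741.0021,
# mimicking the bound of Example 1 (illustrative only).
delta_star = bisect_cost_bound(lambda d: d >= 741.0021, b0=2000.0)
print(round(delta_star, 3))  # -> 741.002
```

Each bisection step halves the interval, so the number of LMI solves is logarithmic in b0/ε; this is why a feasibility-only solver such as LMIRank suffices.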
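The A and B used in Example 1 follow directly from the sampled double-integrator model (31); a quick check with h = 0.6 reproduces A = [[1, 0.6], [0, 1]] and B = [[0.18], [0.6]]:

```python
import numpy as np

def double_integrator(h):
    """Discretized double integrator from (31): state [r_i, v_i]."""
    A = np.array([[1.0, h],
                  [0.0, 1.0]])
    B = np.array([[h * h / 2.0],   # position picks up h^2/2 * u
                  [h]])            # velocity picks up h * u
    return A, B

A, B = double_integrator(0.6)
print(A)  # [[1.  0.6] [0.  1. ]]
print(B)  # [[0.18] [0.6 ]]
```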
The Markov chain has transition probability matrix π, given in (32), and initial probability distribution Π_0 = [0.5, 0.4, 0.1]. Using the presented design algorithm, we get the controller gains K_1 = [0.6422, 0.8469], K_2 = [0.9422, 1.1842], and K_3 = [1.7489, 2.1834]. The mean energy cost in (4) is calculated as J = 220.6958, while the cost upper bound is J* = min δ = 741.0021. The Markov switching sequence of the topologies is shown in Fig. 2, where modes 1, 2 and 3 on the ordinate denote topologies G_1, G_2 and G_3, respectively. The simulation results of the three sensors with the obtained controllers are shown in Figs. 3 and 4. It can be clearly seen that the three sensors' states asymptotically reach agreement, which illustrates the effectiveness of the proposed method.

Fig. 3. Position trajectories of MSNs.
Fig. 4. Velocity trajectories of MSNs.

A comparison with the controller design method in [31] is given to show the advantage of Algorithm 1 proposed in this paper. We consider the same systems (31) as in Example 1. Assume that the communication topology set is the same as in Fig. 1 and that the transition probability matrix of the Markov chain is the same as in (32). According to the controller design algorithm proposed in [31], we get the controller gains K_1^c = [0.1272, 0.5001], K_2^c = [0.124, 0.4878], and K_3^c = [0.1088, 0.4281]. The mean energy cost defined in (4) is calculated as J_c = 438.5833, while the energy upper bound is J_c* = 2805.4. The simulation results of the three sensors with these controllers are shown in Figs. 5–7. Obviously, by comparison with Example 1, we have J < J_c and J* < J_c*, which shows that our controller design method guarantees that the MSNs achieve consensus at lower cost consumption under Markov switching topologies.

Fig. 5. Markov switching sequences of communication topologies in G(k).
Fig. 6. Position trajectories of MSNs.
Fig. 7. Velocity trajectories of MSNs.
Fig. 8. Markov switching sequences of communication topologies in G(k).
Fig. 9. Position trajectories of MSNs.

Example 2. The aim of this example is to show that the control method applies to MSNs with a switching-to-fixed topology. We consider the same systems as in Example 1. Assume that the communication topology set is the same as in Fig. 1 but is
switching according to a Markov chain with the following transition probability matrix:

π̂ = [ 0   0.8  0.2 ;
      0   0.5  0.5 ;
      0   0    1   ].   (33)

The initial probability distribution is Π̂_0 = [0.7, 0.3, 0]. Note that this Markov chain is no longer ergodic: graph G_1 is a transient state and G_3 is an absorbing state. The Markov switching sequences of the topologies are shown in Fig. 8. Using Theorem 2, we get the controller gains K̂_1 = [0.7398, 0.9074], K̂_2 = [1.0377, 1.2909], and K̂_3 = [1.6759, 2.1437]. The mean energy cost is calculated as Ĵ = 219.3023, while the energy upper bound is Ĵ* = min δ̂ = 730.8635. The simulation results of the three sensors are shown in Figs. 9 and 10. Obviously, the MSN can achieve consensus with guaranteed cost when the switching topology finally settles down to a fixed topology.

Fig. 10. Velocity trajectories of MSNs.

Acknowledgments

The authors acknowledge the financial support of the Natural Science Foundation of China under Grants 61273107 and 61174060, the Dalian Leading Talents program, Dalian, China, and the Fundamental Research Funds for Central Universities under Grant 3132013334, China.
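Returning to the chain (33) of Example 2: because G_3 is absorbing and G_1 is transient, every sample path eventually settles in G_3, which is what makes this a switching-to-fixed case. A short simulation sketch (state indices 0–2 stand for G_1–G_3; the horizon and trial count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Transition matrix (33): G1 transient, G3 absorbing.
P = np.array([[0.0, 0.8, 0.2],
              [0.0, 0.5, 0.5],
              [0.0, 0.0, 1.0]])
pi0 = np.array([0.7, 0.3, 0.0])  # initial distribution from Example 2

absorbed = 0
trials, horizon = 1000, 100
for _ in range(trials):
    s = rng.choice(3, p=pi0)
    for _ in range(horizon):
        s = rng.choice(3, p=P[s])
    absorbed += (s == 2)

# After 100 steps essentially every path sits in the absorbing state G3:
# the chance of still lingering in state G2 is 0.5^100.
print(absorbed / trials)
assert absorbed == trials
```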
5. Conclusion

We have studied the consensus seeking problem for MSNs with Markov switching topologies. A sufficient condition for achieving exponential mean square consensus with guaranteed cost has been obtained. Complementing this condition with rank-constrained LMIs, we have derived a numerical algorithm to calculate the controller gains and the sub-minimum cost bound. The method and the results are based on Markov jump system theory and a state transformation. This paper investigates the feasibility and means of achieving cooperative control of MSNs with a guaranteed control cost. We have only given an upper bound of the control cost, not the minimum cost that could be guaranteed. Clearly, there remains the problem of further decreasing the cost toward the minimum, especially for energy-critical applications. In addition, circuit energy consumption cannot be overlooked in WSNs (unlike in cellular networks) compared with the actual communication power. Thus, the usual energy optimization techniques that minimize only communication energy may not be effective in wireless sensor networks. Besides accounting for circuit energy consumption, the multiple-input multiple-output (MIMO) technique may be another promising solution for energy-limited wireless sensor networks due to its large spectral efficiency. However, direct application of MIMO techniques is not practical for two reasons: (1) it requires complex transceiver circuitry and signal processing, implying large power consumption at the circuit level; (2) physical implementation of multiple antennas at a small node may not be realistic. Instead, one can consider cooperative MIMO [36] to achieve MIMO capability in a network of single-antenna (single-input/single-output, SISO) sensors, as it has been shown that in some cases cooperative-MIMO-based sensor networks may lead to better energy optimization and smaller end-to-end delay.
References

[1] He JP, Li H, Chen JM, Cheng P. Study of consensus-based time synchronization in wireless sensor networks. ISA Trans 2014;53(2):347–57.
[2] Raghunathan V, Pereira C, Srivastava M, Gupta R. Energy-aware wireless systems with adaptive power-fidelity tradeoffs. IEEE Trans Very Large Scale Integr Syst 2005;13(2):211–25.
[3] Mohapatra S, Dutt N, Nicolau A, Venkatasubramanian N. DYNAMO: a cross-layer framework for end-to-end QoS and energy optimization in mobile handheld devices. IEEE J Sel Areas Commun 2007;25(4):722–37.
[4] Jayaweera SK. Virtual MIMO-based cooperative communication for energy-constrained wireless sensor networks. IEEE Trans Wirel Commun 2006;5(5):984–9.
[5] Tang CP, Mckinley PK. Energy optimization under informed mobility. IEEE Trans Parallel Distrib Syst 2006;17(9):947–62.
[6] Pandana C, Liu KJR. Robust connectivity-aware energy-efficient routing for wireless sensor networks. IEEE Trans Wireless Commun 2008;7(10):3904–16.
[7] Srinidhi T, Sridhar G, Sridhar V. Topology management in ad hoc mobile wireless networks. In: Proceedings of the real-time systems symposium, work-in-progress session; December 2003.
[8] Wang G, Cao G, Porta TL. Movement-assisted sensor deployment. IEEE Trans Mobile Comput 2006;5(6):640–52.
[9] Zhang SG, Cao JN, Chen L-J, Chen DX. Accurate and energy-efficient range-free localization for mobile sensor networks. IEEE Trans Mobile Comput 2010;9(6):897–910.
[10] Heo N, Varshney PK. Energy-efficient deployment of intelligent mobile sensor networks. IEEE Trans Syst Man Cybern A 2005;35(1):78–92.
[11] Mei Y, Lu Y, Hu Y, Lee C. Determining the fleet size of mobile robots with energy constraints. In: Proceedings of the IEEE/RSJ international conference on intelligent robots and systems; September 28–October 2, 2004. p. 1420–1425.
[12] Mei Y, Lu Y, Hu Y, Lee C. Deployment of mobile robots with energy and timing constraints. IEEE Trans Robot 2006;22(3):507–22.
[13] Kwok A, Martínez S. Deployment algorithms for a power-constrained mobile sensor network. In: Proceedings of the IEEE international conference on robotics and automation, Pasadena, CA, USA; May 2008. p. 140–145.
[14] Song C, Feng G, Wang Y, Fan Y. Rendezvous of mobile agents with constrained energy and intermittent communication. IET Control Theory Appl 2012;6(10):1557–63.
[15] Coogan S, Ratliff JL, Calderone D, Tomlin C, Sastry SS. Energy management via pricing in LQ dynamic games. In: ACC, Washington, DC, USA; June 2013. p. 443–448.
[16] Cao YC, Ren W. Optimal linear-consensus algorithms: an LQR perspective. IEEE Trans Syst Man Cybern B 2010;40(3):819–29.
[17] Kazerooni ES, Khorasani K. Optimal consensus seeking in a network of multiagent systems: an LMI approach. IEEE Trans Syst Man Cybern B 2010;40(2):540–7.
[18] Guo G, Zhao Y, Yang GQ. Cooperation of multiple mobile sensors with minimum energy cost for mobility and communication. Inf Sci 2014;254(1):69–82.
[19] Olfati-Saber R, Murray R. Consensus problems in networks of agents with switching topology and time delays. IEEE Trans Autom Control 2004;49(9):1520–33.
[20] Atrianfar H, Haeri M. Average consensus in networks of dynamic multi-agents with switching topology: infinite matrix products. ISA Trans 2012;51(4):522–30.
[21] Qin JH, Gao H, Yu CB. On discrete-time convergence for general linear multi-agent systems under dynamic topology. IEEE Trans Autom Control 2014;59(4):1054–9.
[22] Wang Z, Xi J, Yao Z, Liu G. Guaranteed cost consensus for multi-agent systems with switching topologies. Int J Robust Nonlinear Control 2014. http://dx.doi.org/10.1002/rnc.3252. Published online.
[23] Guo G. Linear systems with medium-access constraint and Markov actuator assignment. IEEE Trans Circuits Syst I 2010;57(11):2999–3010.
[24] Zhang Y, Tian Y-P. Consentability and protocol design of multi-agent systems with stochastic switching topology. Automatica 2009;45(5):1195–201.
[25] Liu J, Zhang HT, Liu XZ, Xie WC. Distributed stochastic consensus of multi-agent systems with noisy and delayed measurements. IET Control Theory Appl 2013;7(10):1359–69.
[26] You KY, Li ZK, Xie LH. Consensus condition for linear multi-agent systems over randomly switching topologies. Automatica 2013;49(10):3125–32.
[27] Tahbaz-Salehi A, Jadbabaie A. A necessary and sufficient condition for consensus over random networks. IEEE Trans Autom Control 2008;53(3):791–5.
[28] Matei I, Martins N, Baras JS. Consensus problems with directed Markovian communication patterns. In: ACC, St. Louis; June 2009. p. 1298–1303.
[29] Zhou WN, Ji C, Mou JJ, Dai AD, Fang JA. Consensus for wireless sensor networks with Markovian switching topology and stochastic communication noise. Adv Differ Equ 2013;2013:346.
[30] Zhu CS, Shu L, Hara T, Wang L, Nishio S, Yang LT. A survey on communication and data management issues in mobile sensor networks. Wirel Commun Mob Comput 2014;14(1):19–36.
[31] Hu YB, Lam J, Liang JL. Consensus control of multi-agent systems with missing data in actuators and Markovian communication failure. Int J Syst Sci 2013;44(10):1867–78.
[32] Costa OLV, Fragoso MD, Marques RP. Discrete-time Markovian jump linear systems. London: Springer-Verlag; 2005.
[33] Yaz E. Control of randomly varying systems with prescribed degree of stability. IEEE Trans Autom Control 1988;33(4):407–11.
[34] Orsi R. LMIRank: software for rank constrained LMI problems. 2005. 〈http://users.cecs.anu.edu.au/~robert/lmirank/〉.
[35] Qin JH, Gao HJ. A sufficient condition for convergence of sampled-data consensus for double-integrator dynamics with nonuniform and time-varying communication delays. IEEE Trans Autom Control 2012;57(9):2417–22.
[36] Cui S, Goldsmith AJ, Bahai A. Energy-efficiency of MIMO and cooperative MIMO techniques in sensor networks. IEEE J Sel Areas Commun 2003;22(6):1089–98.
| {"url":"https://d.docksci.com/guaranteed-cost-control-of-mobile-sensor-networks-with-markov-switching-topologi_5a3de995d64ab220a79af0e1.html","timestamp":"2024-11-11T00:03:45Z","content_type":"text/html","content_length":"94268","record_id":"<urn:uuid:ce0fed26-f9a3-40c2-a968-a0e04c809d2b>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00501.warc.gz"}
Janelle Wang NBC Bay Area, Bio, Age, Husband, Salary, Married
Janelle Wang Bio, Wiki
Janelle Wang is an American journalist who currently works as a correspondent for NBC News. She anchors the NBC Bay Area News weeknights at 5 p.m. with co-anchor Raj Mathai and reports for NBC Bay
Area News at 6 p.m.
Janelle has won three Emmy Awards for her outstanding reporting, including her coverage of the Valley Fire in Middletown, the Ghost Ship warehouse fire in Oakland, and the North Bay Fires in Wine Country.
Prior to joining NBC Bay Area, Janelle spent more than 8 years at KGO-TV ABC7 in San Francisco, where she reported on stories ranging from the deadly San Bruno gas fire to the steroid scandal in
professional sports. She also hosted the afternoon talk show “The View from the Bay” for its entire 4-year run. Janelle’s popularity and viewer appeal on that show landed her a guest co-hosting spot
with Regis Philbin on “Live with Regis and Kelly.”
In her spare time, Janelle enjoys hiking and golfing with her family, promoting cancer awareness, and encouraging people to join the National Marrow Donor Registry. She is an avid San Francisco
Giants and 49ers fan.
Janelle Wang Age
She was born on June 15, 1971, in San Francisco, United States. Janelle is 52 years old as of 2024. Janelle Wang belongs to the Asian ethnicity and is an American citizen. Her zodiac sign is Gemini.
Janelle Wang Family
She was born to her loving parents in San Francisco, California, and grew up in the Bay Area. However, Janelle has not revealed her parent’s and siblings’ whereabouts and we will update you on this
information as soon as possible.
Janelle Wang Education
Janelle Wang attended the University of California, Berkeley, where she earned a Bachelor of Arts degree in Political Science. After completing her undergraduate studies, she pursued a career in
journalism and began working as a news anchor and reporter in various media organizations.
Janelle Wang NBC Bay Area
Janelle is currently working as an anchor for the 5 pm and 5:30 pm weekday newscasts at NBC Bay Area. She also anchors and produces hourly updates and breaking news coverage. In 2022, she traveled to
Beijing to cover the Winter Olympics and also covered the Olympics in Sochi, Russia in 2014. She is fluent in Mandarin and has won 3 Emmy awards and an Edward R. Murrow award.
Janelle has also worked as a talk show host, general assignment reporter, and weekend morning news anchor at KGO-TV ABC7 in the San Francisco Bay Area. During her time at KGO-TV, Janelle hosted “The
View from the Bay” during its entire 4-year run from June 2006 to August 2010. She was nominated for multiple Emmys for her work as TV host and producer.
Her guests on the show included celebrities like Oprah, Joan Rivers, and cast members from “Twilight”, political figures including Barack Obama, Hillary Clinton, and Jimmy Carter, and TV chefs
including Curtis Stone, Hubert Keller, and Martin Yan. Janelle has been professionally trained in voice-over work at “Voicetrax.”
Prior to joining NBC Bay Area in July 2011, Janelle worked as a reporter and a weekend morning anchor for KGO-TV ABC 7 in the San Francisco Bay Area from February 2003 to May 2011. While at KGO-TV,
she co-hosted “The View from the Bay” with Spencer Christian of Good Morning America fame. The 1-hour daily live talk show was filmed in front of a studio audience.
Additionally, Janelle interviewed high-profile American personalities including Barack Obama, Secretary of State Hillary Clinton, Oprah Winfrey, Joan Rivers, and Jimmy Carter. She also featured some of the world's most famous celebrity chefs, such as Curtis Stone, Martin Yan, and Hubert Keller.
Janelle has also worked as an anchor and reporter for KPTV FOX 12 Oregon in Portland, Oregon from October 2000 to January 2003. In addition, from May 1998 to September 2000, she worked as a reporter
for KGW TV in Portland Oregon.
Her Colleagues in the Media industry:
Tom Llamas – News Now anchor
Meagan Fitzgerald – correspondent
Alison Morris – news anchor
Antonia Hylton – correspondent
Kathy Park – news correspondent
Lindsey Reiser – weekend morning anchor
Lester Holt – Anchor NBC Nightly News and Dateline NBC
Vicky Nguyen – investigative reporter and consumer correspondent
Morgan Radford – news reporter
Al Roker – weather anchor & co-host
Savannah Sellers – anchor and correspondent
Janelle Wang Height
Janelle appears to be a woman of average height. She stands at a height of 5 feet 3 inches (approximately 1.60 meters).
Janelle Wang’s Husband Matt Nelson
Janelle Wang is married to attorney Matt Nelson. Matt is a director at Facebook and the co-founder of Present Company, a creative agency based in San Francisco. They met while they were both working
at KGO-TV in San Francisco. Matt Nelson is a former television reporter and anchor, who now works as a marketing and communications consultant.
The pair exchanged their vows and tied the knot in a colorful wedding in December 2009. The couple has two children together, a daughter, Hailey, who was born on March 15, 2016. Additionally, Janelle
and her family live in the San Francisco Bay Area.
Janelle Wang Salary | Net Worth
Janelle Wang’s salary and net worth are not publicly disclosed, so it is difficult to give an exact figure. However, as a longtime news anchor and journalist, she likely earns a good salary.
According to sources, the average salary for a news anchor in the United States is around $63,000 per year.
However, this can vary widely depending on factors such as experience, location, and the media organization. Additionally, Janelle Wang has worked in the media industry for many years, and she has
likely amassed a considerable net worth over her career. However, the exact figure is unknown.
How Old Is Janelle Wang
Janelle is 52 years old and was born in San Francisco in 1971. Additionally, she marks her birthday annually on June 15th, making each celebration a reflection of another year of life’s journey.
Is Janelle Wang Married
Yes, Janelle Wang is married to Matt Nelson. Matt is a director at Facebook and the co-founder of Present Company, a creative agency based in San Francisco. The couple has been married for many years
and they have two children together.
Janelle and Matt are known to keep their personal lives private, so not much is known about their relationship beyond their public appearances and occasional social media posts. However, they appear
to be a happy and supportive couple who have built successful careers and a family together.
Where Is Janelle Wang
Janelle Wang is an Emmy-award-winning journalist who currently anchors NBC Bay Area News weeknights at 5 p.m. and 5:30 p.m. She also reports for the news at 6 p.m. Prior to joining NBC Bay Area, she
spent over eight years at KGO-TV ABC7 in San Francisco, where she covered a variety of stories such as the San Bruno gas fire and the Scott Peterson murder trial.
Janelle also hosted the afternoon talk show “The View from the Bay” for its entire four-year run, and was a guest co-host on “Live with Regis and Kelly.” In her free time, Janelle enjoys spending
time with her family, hiking, golfing, and promoting cancer awareness.
40. মনে হত, বৃদ্ধির হার যথেষ্ট দ্রুত গতিতেই হয় তাহলে পুরুষের দাড়ির বৃদ্ধির মতোই চুলের.
41. We would like to show you a description here but the site won’t allow
42. A round of applause for your post.Really thank you! Really Great.
43. I value the blog post.Really thank you! Much obliged.
44. Thank you for your blog post.Really looking forward to read more. Keep writing.
45. relative dating dating women online free online chat and dating
sites personal dating ads
46. online paper writing services pay to write a
paper what is the best paper writing service custom writing paper service
47. pay someone to write a paper for me buy a college
paper online i don’t want to write my paper professional paper writing services
48. who can write my paper for me paper writing services legitimate what should i write my paper about help me write my paper
49. clomid dose clomid buy without prescription in us ttc clomid
50. Thanks so much for the blog post.Really looking forward to read more. Great.
51. great points altogether, you simply gained a brand new reader. What would you recommend in regards to your submit that you just made some days ago? Any positive?
52. Good post. I learn something new and challenging on blogs I stumbleupon every day. It will always be useful to read articles from other writers and practice something from their sites.
53. I was able to find good info from your content.
54. Good day! I could have sworn I’ve visited this blog before but after going through a few of the articles I realized it’s new to me. Anyhow, I’m certainly pleased I discovered it and I’ll be
bookmarking it and checking back regularly.
55. Really informative blog post.Really looking forward to read more. Fantastic.
56. need someone write my paper who can write my paper for me paying someone to write a paper buy a paper online
57. who can write my paper for me can i pay someone to write my paper
help on writing a paper custom writing paper service
58. Im thankful for the post.Thanks Again. Really Great.
59. It’s difficult to find well-informed people about this subject, however, you sound like you know what you’re talking about! Thanks
60. help writing papers for college websites that write papers for you what are the best paper writing
services help with college paper writing
61. someone to write my paper help write my paper write my paper for me in 3 hours academic paper writers
62. Thanks-a-mundo for the blog post. Cool.
63. bookmarked!!, I really like your website!
64. write my paper apa format help with a paper can you write my paper where to buy college papers
65. Thanks again for the blog.Really thank you! Really Great.
66. I want to to thank you for this good read!! I certainly enjoyed every bit of it. I’ve got you bookmarked to check out new things you post…
67. wow, awesome blog post.Really thank you! Fantastic.
68. Just wanna say that this is invaluable, Thanks for taking your time to write this.
69. Everything is very open with a very clear clarification of the issues. It was truly informative. Your site is very helpful. Thanks for sharing.
70. Appreciate you sharing, great post. Cool.
71. Very nice post. I just stumbled upon your blog and wished to say that I have really enjoyed browsing your blog posts. After all I’ll be subscribing to your feed and I hope you write again soon!
72. Wonderful post! We will be linking to this great article on our site. Keep up the good writing.
73. is ivermectin allowed in south africa https://stromectolcrx.site/ivermectin-tablets-south-africa.html buy ivermectin in usa
74. Awesome post.Much thanks again.
75. stromectol alcool generic stromectol coupon without prescription buy stromectol online canada
76. This is a topic that is close to my heart… Many thanks! Exactly where are your contact details though?
77. acheter cialis sans ordonnance en pharmacie cialis pour femme generique cialis 10 mg
78. The most outstanding item advantages you’ll ever learn more about!
79. The Amazingness will certainly transform the means you do whatever.
80. Productivity, health and wellness and also well being, sleep better!
81. essay editing service reviews nursing scholarship essay argumentative essay topics about service and happiness
82. what is hydroxychloroquine 200 mg used for hydroxychloroquine uses plaquenil toxicity symptoms
83. Sensational is the perfect performance device that can help you manage your time better and get more done in less time!
84. nursing school to med school columbus state community college nursing metlife nursing school
85. What’s up colleagues, its impressive article on the topic
of teachingand completely explained, keep it up all please see the link below for more information time.
86. YouTube has become one of the most well-liked video-sharing platforms in the world. From its humiliate beginnings as a platform for user-generated videos, YouTube has grown into a powerful tool
for entertainment, education, and communication. Many people use YouTube to watch their favorite shows, listen to music, and follow their favorite creators.
If youre extra to YouTube and want to get the most out of your experience, here are some tips upon how to watch YouTube effectively:
The first step in watching YouTube is finding videos that you like. You can look through the homepages curated list of recommended videos or use the search bar at the top of the page. You can
then browse through categories such as Music & Audio, Gaming, Education & Science, and more. Additionally, you can follow channels that proclaim content regularly and subscribe to them hence that
you get notified afterward they upload further content.
Once you find a video that interests you, there are several ways to interact later than it. First off, make positive to in the same way as or loathe it correspondingly that extra users can see
how many people enjoyed it or not. Additionally, if theres something special roughly the video that stands out for you, next leave a comment or portion your thoughts approximately it as soon as
extra viewers. By engaging next videos in this way, you can incite boost likes and shares from extra spectators as without difficulty as make public the creators channel on social media platforms
such as Twitter and Facebook.
If there are combination videos from substitute creators that incorporation you, subsequently rule creating playlists correspondingly that all these videos are in one area and simple for you to
right of entry whenever you desire them. This is particularly useful if there are several channels that read out content in a particular genre or topic that is engaging for you, such as cooking
tutorials or learned videos; helpfully increase all these partnered videos into one playlist consequently they’re easy https://yttomp3.download/ find whenever needed!
Another good way to tailor your experience on YouTube is by adjusting various settings such as autoplay (which will automatically perform connected videos after each clip), subtitles (which will
display text below each video), captions (which will display text above each video), and more! Additionally, if your internet relationship isn’t great, make sure to familiarize the playback mood
settings too; this will ensure that even like streaming exceeding slower connections, your experience remains smooth without any buffering issues!
87. Your will certainly thank you!
88. Our amazing helps you take initiative with anyone or job.
89. Amazingness can help you do a lot more, much faster. Get the productivity boost you need to succeed.
90. Appreciate the advantages of a healthy, lavish and also pleased life with this incredible item.
91. Excellent goods from you, man. I’ve understand your stuff previous to and you’re just
too wonderful. I really like what you have acquired here, really like what
you’re saying and the way in which you say it.
You make it enjoyable and you still take care of to keep it
wise. I cant wait to read much more from you. This is actually a tremendous site.
92. Absolutely nothing can be as good as this!
93. My developer is trying to convince me to
move to .net from PHP. I have always disliked the idea because of the costs.
But he’s tryiong none the less. I’ve been using WordPress on a number of websites for about a year and am concerned about switching to another platform.
I have heard excellent things about blogengine.net. Is there a
way I can import all my wordpress content into it?
Any help would be really appreciated!
94. This life hack will certainly alter the means you do every little thing.
95. You can do fantastic and also accomplish even more!
96. Hi there to every single one, it’s truly a good for me
to pay a visit this site, it contains valuable Information.
97. Remarkable is a special efficiency device that can change your life.
98. Sensational is the efficiency application that will transform your life.
99. I was pretty pleased to find this web link clicker site.
I wanted to thank you for your time just for this
wonderful read!! I definitely really liked every part of
it and i also have you book marked to see new stuff on your site.
100. Heya i am for the first time here. I came across this board and I
find It really helpful & it helped me out much. I
am hoping to offer something back and aid others such as you aided me.
101. First of all I want to say awesome blog! I had a quick question which I’d like to ask if you don’t
mind. I was curious to know how you center yourself
and clear your head before writing. I have had a hard time clearing
my thoughts in getting my thoughts out. I truly
do take pleasure in writing but it just seems like the first 10 to 15
minutes are lost just trying to figure out how to begin. Any suggestions or hints?
Many thanks!
102. Get all you require and more with this!
103. Prior to becoming a member of Apollo in 2015, Arpito was Partner
at Rezone Investment Advisors, an India focused actual
property funding and advisory boutique.
104. Sensational is the best social productivity application.
105. benefits of using xenical xenical phone number apa manfaat xenical
106. Exactly how? Figure out now!
107. Magnificent goods from you, man. I’ve understand your stuff previous to and you’re just too great.
I actually like what you have acquired here, really like what you are saying and the way in which you say it.
You make it entertaining and you still take care of to keep it sensible.
I can’t wait to read much more from you. This is actually a wonderful site.
108. This paragraph is in fact a good one it assists new net visitors, who
are wishing in favor of blogging.
109. Thanks very nice blog!
110. The other day, while I was at work, my cousin stole my
iPad and tested to see if it can survive a twenty
five foot drop, just so she can be a youtube sensation. My
iPad is now destroyed and she has 83 views. I know this is totally off topic but I had
to share it with someone!
111. Excellent pieces. Keep posting such kind of info on your page.
Im really impressed by it.
Hello there, You’ve performed an incredible job. I’ll definitely
digg it and in my view suggest to my friends.
I’m confident they will be benefited from this site.
112. You really make it seem so easy along with your presentation but I
find this matter to be actually something that I
believe I might by no means understand. It kind of feels too complex and very vast for me.
I’m looking ahead for your subsequent submit, I’ll attempt to get the
hang of it!
113. Hey I know this is off topic but I was wondering if
you knew of any widgets I could add to my blog that automatically tweet my newest twitter updates.
I’ve been looking for a plug-in like this for
quite some time and was hoping maybe you would have some experience with something like this.
Please let me know if you run into anything.
I truly enjoy reading your blog and I look forward to your new updates.
114. My brother recommended I would possibly like this website.
He used to be totally right. This submit truly made
my day. You can not consider simply how so much time I had spent for this info!
Thank you!
115. Скорозагружаемые здания: финансовая выгода в каждом блоке!
В современном мире, где часы – финансовые ресурсы, сооружения с быстрым монтажем стали решением по сути для коммерческой деятельности. Эти новейшие строения сочетают в себе устойчивость,
финансовую эффективность и быстроту установки, что дает им возможность оптимальным решением для бизнес-проектов разных масштабов.
[url=https://bystrovozvodimye-zdanija-moskva.ru/]Быстровозводимые конструкции недорого[/url]
1. Скорость строительства: Часы – ключевой момент в бизнесе, и быстровозводимые здания обеспечивают значительное снижение времени строительства. Это чрезвычайно полезно в случаях, когда актуально
быстро начать вести дело и начать извлекать прибыль.
2. Экономичность: За счет совершенствования производственных процессов элементов и сборки на площадке, затраты на экспресс-конструкции часто оказывается ниже, по сравнению с обычными
строительными задачами. Это позволяет сократить затраты и достичь более высокой инвестиционной доходности.
Подробнее на [url=https://bystrovozvodimye-zdanija-moskva.ru/]www.scholding.ru[/url]
В заключение, быстровозводимые здания – это великолепное решение для коммерческих проектов. Они включают в себя быстроту возведения, финансовую эффективность и твердость, что позволяет им
наилучшим вариантом для предпринимателей, имеющих целью быстрый бизнес-старт и получать прибыль. Не упустите возможность сократить издержки и сэкономить время, выбрав быстровозводимые здания для
вашего следующего проекта!
116. Aw, this was a very nice post. Taking the time and actual
effort to make a very good article… but what can I say… I put things off a
whole lot and never manage to get nearly anything done.
117. Great blog! Do you have any helpful hints for aspiring writers?
I’m hoping to start my own website soon but I’m a little
lost on everything. Would you suggest starting with
a free platform like WordPress or go for a
paid option? There are so many choices out there that I’m totally overwhelmed ..
Any ideas? Cheers!
118. Selamat berkunjung di website formal sbobet online sukabet yang sediakan beragam permainan judi bola onlineterlengkap dan termurah sebab hanya
dengan laksanakan pembayaran deposit menjadi berasal dari
10.000 saja kamu telah sanggup berhimpun login sbobet atau sbobet88 pakai 1 user akun id resmi.
Sbobet88 online menjadi tidak benar satu permainan taruhan terbesar dan terpopuler yang kala ini sedang
jadi perbincangan para pengagum judi bola, sudah banyak member yang daftar sbobet
melalui link alternatif sbobet mobile, link alternatif sbobet wap sampai link alternatif
sbobet desktop. Tak hanya itu saja agen sbobet termasuk
telah menyediakan layanan cs online 24 jam yang
akan selalu bersikap ramah kepada para calon pemain dan menunjang member baru Sukabet dikala kebingungan untuk
beroleh panduan daftar sbobet88.
119. Hello There. I discovered your blog using msn. That is a very well
written article. I will make sure to bookmark it and return to learn more
of your useful information. Thanks for the post. I will definitely return.
120. Heya are using WordPress for your blog platform? I’m new to the blog
world but I’m trying to get started and create my own. Do you
need any coding knowledge to make your own blog? Any help
would be greatly appreciated!
121. Hi! Would you mind if I share your blog with my facebook group?
There’s a lot of folks that I think would really enjoy your content.
Please let me know. Thank you
122. To overcome this, a workaround exists maps an exit option to your game controller.
Obviously you don’t need to exit games utilizing the usual
controller buttons – as an alternative, you
have to be in search of the ones that relate to the buttons within the centre of the
controller, perhaps labelled “menu” or “begin”.
You might want to examine the retroarch.cfg file to identify the buttons you need to use right here.
Everyone needs one thing new now, they want that if they get somewhat completely different from what they’re doing then you’ll be able to do that work.
By considering by these potential monetization levers and concentrating on them to players in a dynamic method based mostly on their interests and behaviors, gaming
brands may help to drive elevated participant spend whereas additionally uncovering
new ways to unlock further worth. Today, Michael helps construct profitable content marketing packages for
leading brands and startups alike. Defense. I added keyboard
performance right this moment, and since the Wii U
controller’s d-pad maps to the arrow keys, you can even play the game with the d-pad.
The Wii U controller’s glorious d-pad maps completely to the arrow
keys (up/down/left/right). Within the Wii U’s browser, I visited
the Lost Decade Games arcade and was pleased that a minimum of two of our games are playable out of the field.
123. What’s up to every one, because I am really
keen of reading this website’s post to be updated daily.
It consists of good stuff.
124. I’m truly enjoying the design and layout of your website.
It’s a very easy on the eyes which makes it much more enjoyable
for me to come here and visit more often. Did you hire out a designer
to create your theme? Exceptional work!
125. [url=https://lexapro.cfd/]order lexapro[/url]
126. each tіme i used to read smaller content which also cleаr their motive, and that iis also happening
with this parаցraph which I am readіng here.
127. I have to thank you for the efforts you have put in writing this website.
I really hope to view the same high-grade content from you in the future as well.
In fact, your creative writing abilities has motivated me to get my very own blog now 😉
128. Wow that was strange. I just wrote an incredibly
long comment but after I clicked submit my comment didn’t appear.
Grrrr… well I’m not writing all that over again. Anyhow,
just wanted to say wonderful blog!
129. I think this is one of the most vital info for me. And i’m glad reading your
article. But wanna remark on few general things, The web site style is ideal, the articles
is really excellent : D. Good job, cheers
130. Hello, I think your site might be having browser compatibility
issues. When I look at your web site in Safari, it looks fine however when opening in I.E., it’s got some overlapping issues.
I simply wanted to provide you with a quick heads up!
Besides that, wonderful blog!
131. Quality content is the main to attract the people to pay a visit the website, that’s what this web page is providing.
132. What’s up mates, how is everything, and what you want to say
about this paragraph, in my view its actually remarkable in support of me.
133. I constantly spent my half ɑn hour too read this
blog’s articles oor reviews evеry day аlong ѡith a muɡ ⲟf coffee.
134. [url=http://phenergan.cyou/]cost of phenergan tablets[/url]
135. My brother recommended I might like this website. He was
entirely right. This post truly made my day.
You can not imagine just how much time I had spent for this information! Thanks!
136. I really love your website.. Very nice colors & theme.
Did you create this website yourself? Please reply back
as I’m looking to create my own blog and would love to
know where you got this from or just what the theme is called.
Appreciate it!
137. I savour, result in I discovered just what I used to be taking a look for.
You’ve ended my four day long hunt! God Bless you man. Have a great
day. Bye
138. What’s up to every one, it’s in fact a nice for me to go to see this weeb
page, it includes priceless Information.
139. I’ve learn some good stuff here. Definitely price bookmarking for revisiting.
I surprise how so much attempt you set to create such
a magnificent informative site.
140. Wow, marvelous blog format! How long have
you been blogging for? you make running a blog look
easy. The whole look of your site is magnificent, let alone the content!
141. Very good blog post. I absolutely appreciate this site. Stick with it!
142. Why viewers still use to read news papers when in this technological world the whole thing
is available on web?
143. [url=https://lisinopril.cfd/]zestril 10 mg cost[/url]
144. Link exchange is nothing else except it is only placing the other person’s website link on your page at proper
place and other person will also do same in support of
145. My partner and I stumbled over here from a different page and thought
I may as well check things out. I like what I see so now i’m following you.
Look forward to exploring your web page yet again.
146. Hey would you mind letting me know which webhost you’re using? I’ve loaded your blog in 3 different internet browsers and I must say this blog loads a lot quicker then most. Can you recommend a
good web hosting provider at a reasonable price? Thanks, I appreciate it!
147. Nice post. I was checking continuously this
blog and I’m impressed! Extremely useful information specifically
the last part 🙂 I care for such info much. I was seeking
this particular information for a very long time.
Thank you and best of luck.
148. Quality content is the crucial to be a focus for the viewers to ppay a visit tthe web page, that’s what this website is providing.
149. Savеd as a favⲟrite, I likie your web site!
150. [url=https://neurontin.cfd/]gabapentin online[/url]
151. RTP Live Slot
152. Thanks a lot, I value this!
153. I’m not certain the place you are getting your information, but good topic.
I needs to spend a while finding out more or working out
more. Thanks for magnificent information I used to be
on the lookout for this information for my mission.
154. Hey! This is kind of off topic but I need some guidance from an established blog.
Is it very hard to set up your own blog? I’m not very techincal but
I can figure things out pretty fast. I’m thinking about
making my own but I’m not sure where to start. Do you
have any tips or suggestions? With thanks
155. Эффективное изоляция наружных стен — прекрасие и бережливость в домашнем доме!
Согласитесь, ваш домовладение заслуживает высококачественного! Воздухонепроницаемость облицовки – не всего лишь решение для экономии на отопительных расходах, это вложение в удобство и прочность
вашего здания.
✨ Почему термоизоляция с нами?
Опыт: Наши специалисты – компетентные. Мы заботимся о каждой конкретной, чтобы обеспечить вашему жилью идеальное тепловая изоляция.
Стоимость теплоизоляции: Мы ценим ваш денежные средства. [url=https://stroystandart-kirov.ru/]Утепление дома цена за квадратный метр работа[/url] – начиная с 1350 руб./кв.м. Это вклад в ваше
комфортное будущее!
Сбережение энергии: Забудьте о теплопотерях! Наши материалы не только сохраняют теплоту, но и дарят вашему дому новый уровень тепла энергоэффективности.
Сделайте свой домик тепловым и модным!
Подробнее на [url=https://stroystandart-kirov.ru/]stroystandart-kirov.ru[/url]
Не передавайте на произвол судьбы свой недвижимость на волю случайности. Доверьтесь мастерам и создайте уют вместе с нами-профессионалами!
156. Wonderful site. Plenty of helpful information here.
I am sending it to several pals ans additionally sharing in delicious.
And of course, thank you to your sweat!
157. [url=http://diflucan.cfd/]diflucan 150[/url]
158. I think this is among the most important info for me.
And i’m glad reading your article. But want
to remark on few general things, The web site style is perfect, the articles is really great : D.
Good job, cheers
159. [url=http://effexor.cyou/]venlafaxine effexor[/url]
160. Получите перетяжку мягкой мебели с гарантией качества
Обновление мягкой мебели: простой способ обновить интерьер
Качественное обслуживание перетяжки мягкой мебели
Легко и просто обновить диван или кресло
ремонт и перетяжка мягкой мебели [url=http://www.peretyazhkann.ru]http://www.peretyazhkann.ru[/url].
161. Наша группа опытных мастеров готова предложить вам актуальные технологии, которые не только гарантируют устойчивую оборону от холодных воздействий, но и подарят вашему жилью современный вид.
Мы занимаемся с последними веществами, сертифицируя долгосрочный время службы и отличные результаты. Изоляция фронтонов – это не только сокращение расходов на отапливании, но и трепет о
экосистеме. Сберегательные подходы, какие мы применяем, способствуют не только твоему, но и поддержанию экосистемы.
Самое основное: [url=https://ppu-prof.ru/]Утепление фасада дома снаружи цена работы[/url] у нас стартует всего от 1250 рублей за м²! Это доступное решение, которое превратит ваш резиденцию в
реальный теплый угол с минимальными затратами.
Наши труды – это не всего лишь теплоизоляция, это формирование пространства, в где каждый элемент символизирует ваш свой моду. Мы возьмем во внимание все ваши пожелания, чтобы воплотить ваш дом
еще более дружелюбным и привлекательным.
Подробнее на [url=https://ppu-prof.ru/]https://www.ppu-prof.ru[/url]
Не откладывайте заботу о своем ларце на потом! Обращайтесь к профессионалам, и мы сделаем ваш дом не только согретым, но и более элегантным. Заинтересовались? Подробнее о наших трудах вы можете
узнать на нашем сайте. Добро пожаловать в обитель комфорта и качественного исполнения.
162. [url=http://retina.cfd/]buy retin a mexico[/url]
163. Hi there Dear, are you in fact visiting this website on a regular basis,
if so afterward you will without doubt get fastidious experience.
164. перетяжка мебели в минске [url=https://csalon.ru/]ремонт диванов[/url].
165. whoah this blog is wonderful i like reading your posts.
Stay up the good work! You know, a lot of individuals are
hunting round for this information, you could aid them greatly.
166. Howdy! Would you mind if I share your blog with my facebook group?
There’s a lot of people that I think would really appreciate your
content. Please let me know. Thank you
167. I take pleasure in, cause I found just what I used to be taking a
look for. You have ended my four day long hunt! God Bless you man. Have a great day.
168. Психическое здоровье включает в себя наше эмоциональное, психологическое и социальное благополучие. Это влияет на то, как мы думаем, чувствуем и действуем. Оно также помогает определить, как мы
справляемся со стрессом, относимся к другим и делаем здоровый выбор.
Психическое здоровье важно на каждом этапе жизни: с детства и подросткового возраста до взрослой жизни.ние) — специалист, занимающийся изучением проявлений, способов и форм организации
психических явлений личности в различных областях человеческой деятельности для решения научно-исследовательских и прикладных задач, а также с целью оказания психологической помощи, поддержки и
169. [url=https://ozempic.directory/]semaglutide for weight loss[/url]
170. [url=http://ozempic.company/]semaglutide for weight loss without diabetes[/url]
171. перетяжка мебели тканью [url=https://peretyazhka-mebeli-minsk.ru/]https://peretyazhka-mebeli-minsk.ru/[/url].
172. ремонт и перетяжка мягкой мебели [url=https://peretyazhka-mebeli-minsk.ru/]https://peretyazhka-mebeli-minsk.ru/[/url].
173. При выборе нашего агентства, вы гарантируете себе доступ к профессиональных юридических консультаций от квалифицированных юристов. Подробнее читайте в статье по ссылке
[url=https://kioski.by/blog/pomoshh-yurista/uchet-sluzhby-v-organah-vnutrennih-del-pri-oformlenii-strahovoj-pensii-po-starosti.html]кто уходит рано на пенсию[/url].
174. [url=https://test3semrush.net/]тест3[/url].
175. Мастер-класс: как выполнить перетяжку кресла своими руками
обивка мягкой мебели [url=https://peretyazhka-mebeli-vminske.ru/]https://peretyazhka-mebeli-vminske.ru/[/url] .
176. [URL=https://kupit-diplom1.com/]Купить диплом ÑƒÐ½Ð¸Ð²ÐµÑ€Ñ Ð¸Ñ‚ÐµÑ‚Ð°[/URL] — Ñ Ñ‚Ð¾ ÑˆÐ°Ð½Ñ Ð¿Ð¾Ð»ÑƒÑ‡Ð¸Ñ‚ÑŒ ÑƒÐ´Ð¾Ñ Ñ‚Ð¾Ð²ÐµÑ€ÐµÐ½Ð¸Ðµ моментально и Ð¿Ñ€Ð¾Ñ Ñ‚Ð¾.
Ð¡Ð¾Ñ‚Ñ€ÑƒÐ´Ð½Ð¸Ñ‡Ð°Ñ Ñ Ð½Ð°Ð¼Ð¸, вы Ð¾Ð±ÐµÑ Ð¿ÐµÑ‡Ð¸Ð²Ð°ÐµÑ‚Ðµ Ñ ÐµÐ±Ðµ Ð¿Ñ€Ð¾Ñ„ÐµÑ Ñ Ð¸Ð¾Ð½Ð°Ð»ÑŒÐ½Ñ‹Ð¹ документ, который Ð¿Ð¾Ð»Ð½Ð¾Ñ Ñ‚ÑŒÑŽ подходит Ð´Ð»Ñ Ð²Ð°Ñ .
177. Hello. impressive job. I did not expect this. This is a great story. Thanks!
differences between OR/XOR in Heuristic Miner
Hi guys,
I have some doubts about the semantics of the relationships in the causal net generated by the Heuristic Miner. In particular, I don't understand the semantics of the OR/XOR relationships. Reading some papers about C-nets, e.g. the "Flexible Heuristic Miner" paper (available in the TU/e repository), I have found several examples of interpretations of input/output bindings, but it seems that there is no difference between OR and XOR. For instance, in Section 3 of that paper the authors discuss the inputs/outputs of some example tasks; let's take K, for which Input = {{J,H}, {J,D}}. When they describe K's inputs, they say that there is an "or" relation between the subsets (J and H) and (J and D). However, an OR relation should mean that I can execute the first set, the second, or both, so I could actually execute J, H and D together. But that is not the same thing as declaring that I have to execute only one of the two paths, which should be the actual interpretation of the binding sets, namely a XOR. Moreover, when I translate into a Petri net, I obtain an OR, thus missing the original semantics.
Can you help me in understanding this interpretation? Thank you all!
Best Answers
• Hi!
{{J,H}, {J,D}} means that K follows one of those input sets. Remark that neither OR nor XOR is correct here. The intuitive XOR semantics only holds when we have two cases. For instance, the XOR of {A,B,C} is
(A and not B and not C) or
(not A and B and not C) or
(not A and not B and C) or
(A and B and C).
• Hi Laura,
Yes, if you have the set {A,B} then you can only have either A or B, but not both (i.e., you can only have one element from the set). If you have the 3-element set {(A,B),(C,D),(D,E)} then you can only have (A,B), or (C,D), or (D,E), or all three elements together (so, no, it does not become an OR). The most intuitive way to understand this is to consider that you can only have an odd number of elements from the set.
• Hi,
Thank you for the answer. Can you explain in more detail what you mean by "two cases"? If I understand correctly: if I have (as input) the set {A,B}, this is a real XOR, i.e. I can only have (A) XOR (B). But if I have more than two alternatives, does it become an OR? And what if I have {(A,B),(C,D),(D,E)}, for example?
Thank you very much,
have a nice day.
• Hi Joel,
Ok, I see. But I still have some theoretical doubts. It seems that a special construct is introduced here that is neither a XOR nor an OR at all, right? But in that case, how can I translate it into another representation such as BPMN, where (to the best of my knowledge) this kind of construct doesn't exist? I've found a plugin for this in ProM, but it actually seems to suffer from this problem (i.e., during translation some relationships seem to be "lost", i.e. not represented, when there are input/output bindings with more than two elements). Am I wrong?
Thank you very much,
2018 Rookie League Baseball Highlights
6/16/18 Pirates 14 Royals 9
The Pirates scored five times in the fourth inning as they beat the Royals 14-9 for the championship. For the Pirates, Levi Dhein had two triples. Keaton Neuner, Matt Withrow, Jacob Roberts and Nolan Brown had doubles. For the Royals, Wiley Potts, Connor Johnson and Holden Applegate each had three hits.
The Pirates reached the championship game with a 9-8 win over the Angels. For the Pirates, Keaton Neuner and Matt Withrow each had three hits. Levi Dhein, Brooks Blakemore and Castle Whitaker each had two hits. For the Angels, Sam McCall and Liam Videll each had three hits. Reese Salyer had two hits.
6/14/18 Rookie Tournament Game 7 Royals 13 A's 10
The Royals' Wiley Potts had the go-ahead two-run double in the seventh inning as the Royals topped the A's 13-10. For the Royals, Holden Applegate, Emmett Hill, Conner Johnson and Porter Salisbury all had doubles. Conner Johnson had three hits. Aiden Monroe had two hits. Landon Booher had a base hit. For the A's, Weston Rice had two homeruns. Eli DePugh had a double and two singles. Michael Fidler and Leland Robertson each had two hits. Vincent Young had three hits.
6/13/18 Rookie Tournament Game 6 Pirates 10 Rays 4
The Pirates advanced to the semi-finals with a 10-4 win over the Rays. For the Pirates, Keaton Neuner, Embry Stivers, Matt Withrow (2) and Jacob Roberts had doubles. Victor Gatlin and Levi Dhein each had two hits. For the Rays, Jordan Vincent had a triple and a double. Michael O'Dwyer had two hits.
6/13/18 Rookie Tournament Game 5 Angels 12 Bluejays 3
The Angels took care of the Bluejays 12-3 to advance to the semi-finals. For the Angels, Reece Salyer had two doubles and a single. CC Cooper and Liam Videll each had three hits.
6/12/18 Rookie Tournament Game 4 Royals 5 Reds 2
6/12/18 Rookie Tournament Game 3 A's 9 Twins 7
The A's held off a late rally by the Twins to win 9-7. For the A's, Michael Fidler had a double and two singles. Hudson Meredith, Jagger Rich, and Carson Sheppard each had two hits. For the Twins,
Vallon Moore had a triple. Aiden Lane added a double.
6/11/18 Rookie Tournament Game 2: Bluejays 11 Padres 7
The Bluejays scored five runs in the fifth inning as they beat the Padres 11-7. For the Bluejays, Ryan Cox, Nate Kessinger and Jack Sabens each had three hits. Charlie Sabens had two hits. For the Padres, Chace Downey, Leo Quinones, Conner Brown, Nolan Stethen and Brooks Williams each had two hits.
6/11/18 Rookie Tournament Game 1 : Twins 5 Brewers 4
The Twins held off a late rally by the Brewers in the sixth inning to win 5-4. For the Twins, Vallon Moore had a triple. Kellan Harper added a double. Charlie Esterly had two hits. For the Brewers, Ty Bryant had a double and two singles. Nolan Favorite and Luke Davidson each had two hits.
6/9/18 A's 15 Angels 4
6/9/18 Reds 9 Bluejays 4
6/9/18 Rays 5 Pirates 3
The Rays knocked off the Pirates with a 5-3 score. For the Rays, Brandon Fuentes had two doubles and a single. Isaac Burnett, Luke Collier and Casey Mais had doubles. For the Pirates, Keaton Neuner had three hits. Levi Dhein added a double.
6/9/18 Padres 6 Twins 8
The Twins came back on the Padres, scoring three runs in the fifth inning to win 8-6. For the Twins, Aiden Lane, Vallon Moore (2) and Owen Matthews had doubles. For the Padres, Chace Downey had two doubles and Carter Hounshell added a double. Brooks Williams and Conner Brown each had two hits.
6/8/18 Brewers 5 Royals 12
A five run fourth inning helped the Royals beat the Brewers 12-5. For the Royals, Aiden Monroe had a triple. Wiley Potts and Holden Applegate had homeruns. Porter Salisbury had a double. For the
Brewers, Nolan Favorite, Ty Bryant, and Luke Davidson had doubles.
6/8/18 Reds 10 Pirates 9
The Reds' Waylon Higdon had the game-winning hit as they topped the Pirates 10-9.
6/7/18 Bluejays 6 Royals 15
The Royals defeated the Bluejays 15-6. For the Royals, Connor Johnson had a homerun. For the Bluejays, Jack Sabens and Nate Kessinger had homeruns.
6/6/18 A's 12 Brewers 6
The A's defeated the Brewers 12-6. For the A's, Carson Sheppard, Michael Fidler and Vincent Young had doubles. For the Brewers, Ty Bryant had a double and two singles.
6/5/18 Angels 13 Padres 2
The Angels took care of the Padres, winning 13-2. For the Angels, Sam McCall had a homerun, a triple and a single. CC Cooper had two doubles. Logan Quillen had a triple and a single. For the Padres, Leo Quinones had two hits.
The Rays topped the Twins 5-3. For the Rays, Graham Vilt and Brandon Fuentes each had a double and two singles. Kaleb Simpson had two triples. Jordan Vincent had a triple and two hits. Carter Koehler
(3), Isaac Burnett, Luke Collier, Michael O'Dwyer, Nikol Samudio had multiple hits. For the Twins, Robert Shearin had a triple and a single. Aiden Lane, Vallon Moore, Lucas Young and Wyatt Hines each
had two hits.
6/3/18 Angels 16 Reds 2
6/2/18 Brewers 2 Bluejays 9
A five run second inning led the Bluejays to a 9-2 victory over the Brewers. For the Bluejays, Ryan Cox had a homerun and a single. Ty Klingberg had a double. Charlie Sabens, Dean Williams, Colton Collins, Eagon Spicer and Lukas Herman each had two hits. For the Brewers, Ty Bryant had a double.
6/2/18 Royals 1 Pirates 5
The Pirates downed the Royals 5-1. For the Pirates, Matt Withrow had a triple. Jack Lindauer had a double. Levi Dhein and Jacob Roberts each had two hits. For the Royals, Porter Salisbury had a
6/2/18 Rays 1 Angels 5
The Angels stopped the Rays 5-1 for the win. For the Angels, Liam Videll and Billy Keck had doubles. For the Rays, Jordan Vincent had two hits.
6/2/18 Padres 3 A's 13
The A's took care of the Padres with a 13-3 win. For the A's, Michael Fidler had a homerun. Carson Sheppard had a triple. Hudson Meredith, Eli DePugh, Vincent Young and Braxton Taylor each had a double. For the Padres, Chace Downey had a homerun. Carter Hounshell had a triple. Conner Brown, Eli Miller and Quinn Fiegel had base hits.
6/2/18 Reds 9 Twins 5
The Reds stopped the Twins to win 9-5. For the Reds, Colin Bauckman had a double and two singles. John Henry Tucker (3), Aiden Newkirk (2), Josh Ruffin (3), Aaron Farmer and TJ Konerman (2) had base hits. For the Twins, Lucas Young had a double and two singles. Cory Campbell, Vallon Moore, Charlie Esterly and Aiden Lane each had two hits. Nathan Lackner had a base hit.
6/1/18 Padres 5 Brewers
5/30/18 Pirates 10 Bluejays 6
The Pirates jumped out to an early 4-0 lead and beat the Bluejays 10-6. For the Pirates, Matt Withrow had a homerun, a triple and a single. Levi Dhein had a triple and a double. Embry Stivers had four hits. Keaton Neuner and Jacob Roberts each had three hits. For the Bluejays, Ryan Cox had a double and a single. Dean Williams added a double. Lukas Herman, Grayson Willey, Eagon Spicer and Radley Webb each had two hits.
5/29/18 A's 6 Rays 3
The A's battled back against the Rays for a 6-3 win. For the A's, Braxton Taylor had a double and a single. Weston Rice had a double. Vincent Young added a double. Carson Sheppard, Eli DePugh and Leland Robertson each had a base hit. For the Rays, Carter Koehler had two doubles. Isaac Burnett had a double and a single. Tanner Simpson, Michael O'Dwyer, Brady Mattingly, Luke Collier, Graham Vilt and Sam Lange had base hits.
5/25/18 Rays 2 Royals 14
Under the lights on the big field, the Royals scored in every inning to beat the Rays 14-2. For the Royals, Porter Salisbury had two doubles and a single. Holden Applegate had a double and a single.
Colton Smith and Landon Brown each had three hits. Wiley Potts and Josh Hutchens each had two hits. For the Rays, Carter Koehler had a triple. Graham Vilt added a double and a single. Kaleb Simpson
had a single and made a diving catch at shortstop.
5/25/18 Angels 16 Twins 14
5/25/18 A's 13 Pirates 8
The A's knocked off the Pirates as they won 13-8. For the A's, Hudson Meredith had a triple, a double and two singles. Braxton Taylor had a double and a single. Eli DePugh had a double and two singles. Carson Sheppard, Vincent Young and Jagger Rich each had two hits. Michael Fidler added a double. For the Pirates, Jack Lindauer had a homerun. Matt Withrow had a triple and a double. Levi Dhein (2) and Jacob Roberts had doubles. Victor Gatlin had two hits.
5/25/18 Brewers 4 Reds 14
The Reds handed the Brewers a 14-4 loss. For the Reds, Colin Bauckman had two doubles and Skylar Samuels had one. John Henry Tucker (3), Aiden Newkirk (2), Joshua Ruffin (2), Waylon Higdon (2), Anderson Parrott (2), Owen Thompson (2) and Abel Guerra had multiple hits. For the Brewers, Ty Bryant had three hits.
5/24/18 Twins 11 Royals 13
The Royals scored in every inning and held off a late rally by the Twins to win 13-11. For the Royals, Colton Smith had a double and two singles. Porter Salisbury had a double and a single. Holden
Applegate had three hits. Wiley Potts, Gabriel Post, Conner Johnson and Ashton Cancel each had two hits. For the Twins, Vallon Moore had two doubles. Nathan Lackner and Cory Campbell had three hits.
Aiden Lane, Kellan Harper and Robert Shearin each had two hits.
5/23/18 Padres 8 Bluejays 9
The Bluejays' Conner Castle knocked in the winning run in the bottom of the sixth inning as they beat the Padres 9-8. For the Bluejays, Eagon Spicer had three hits. For the Padres, Chace Downey, Conner Brown and Quinn Fiegel each had three hits.
5/22/18 A's 12 Twins 10
The A's scored five runs in the top of the sixth inning, helped by Michael Fidler's three-run double, to beat the Twins 12-10. For the A's, Braxton Taylor and Hudson Meredith had doubles. Eli DePugh and Vincent Young each had three hits. For the Twins, Lucas Young had a triple. Vallon Moore and Nathan Lackner each had three hits. Robert Shearin had a double.
5/21/18 Brewers 2 Angels 14
The Angels took care of the Brewers with a 14-2 win. For the Angels, Sam McCall had a homerun and two singles. Reese Salyer had a triple and two singles. CC Cooper and Billy Keck each added a triple and a single. Isaac Kiesewetter had a homerun. Dalton Miller and Davis Martin each had two hits. Liam Videll had two doubles. Myles Knizner had a double. For the Brewers, Tony Sutton had a double. Nathan Roling, Javy Tucker, Ty Bryant, Luke Davidson and Hardy Deaves had base hits.
5/20/18 Twins 6 Pirates 8
A five run third inning helped the Pirates beat the Twins 8-6. For the Pirates, Levi Dhein had a triple and two hits. Matt Withrow (2), Jack Lindauer, Keaton Neuner and Nolan Brown had doubles. Brooks Blakemore had two hits. For the Twins, Charlie Esterly had four hits. Lucas Young had two doubles. Aiden Lane and Kellan Harper each had two hits. Vallon Moore had three hits and made a couple of great plays in the sixth inning.
5/20/18 Rays 13 Padres 2
The Rays defeated the Padres 13-2 to start the Sunday afternoon games. For the Rays, Kaleb Simpson led the charge with a homerun and a double. Brandon Fuentes and Carter Koehler each had a triple and a single. Casey Mathis had a double. Michael O'Dwyer and Luke Collier each had two hits. For the Padres, Chace Downey had a triple and a single. Carter Hounshell and Conner Brown had base hits.
5/19/18 Bluejays 5 Twins 4
5/19/18 Royals 8 Angels 9
A three run fourth inning carried the Angels over the Royals 9-8. For the Angels, CC Cooper (3), Logan Quillen (2) and Billy Keck (2) had multiple hits. For the Royals, Wiley Potts had a triple. Colton Smith added a double. Holden Applegate (3), Emmett Hill (2) and Porter Salisbury (2) had multiple hits.
5/19/18 Reds 3 A's 4
The A's survived the Reds, winning 4-3. For the A's, Eli DePugh and Weston Rice each had two hits. For the Reds, Colin Bauckman had a triple and a single.
5/19/18 Brewers 5 Pirates 13
The Pirates had two five run innings, in the second and the fourth, as they defeated the Brewers 13-5. For the Pirates, Jack Lindauer led the charge with a homerun, a triple and a double. Nolan Brown (2) and Matt Withrow had doubles. Embry Stivers and Jacob Roberts each had three hits. Carter Brown had two hits. For the Brewers, Tanner Shipp had a triple. Ty Bryant had two doubles. Hardy Deaves had two hits.
5/17/18 Angels 17 Bluejays 10
The Angels took care of the Bluejays to win 17-10. For the Angels, Sam McCall had a homerun. CC Cooper had a triple and two doubles. Billy Keck, Liam Videll (2) and Dalton Miller had doubles. Reece Salyer had three hits. For the Bluejays, Ty Klingberg had three hits. Eagon Spicer and Greyson Willey each had two hits. Ryan Cox and Conner Castle had doubles.
5/16/18 Rays 16 Brewers 4
The Rays scored in every inning as they defeated the Brewers 16-4. For the Rays, Kaleb Simpson had a triple and two singles. Graham Vilt (2), Carter Koehler, Luke Collier and Brady Mattingly had doubles. For the Brewers, Tony Sutton, Ty Bryant and Mikey Tolle each had two hits. Nolan Favorite had a double.
5/15/18 A's 12 Royals 13
The Royals handed the A's their first loss of the season, 13-12, as Gabriel Pfost scored the game-winning run in the bottom of the sixth inning. For the Royals, Colton Smith and Emmett Hill had doubles. Wiley Potts (2), Porter Salisbury (2), Holden Applegate (2), Conner Johnson (2), Gabriel Pfost (3), Landon Booher and Ashton Cancel had base hits.
For the A's, Hudson Meredith (2), Carson Sheppard, Vincent Young and Braxton Taylor had doubles. Eli DePugh, Michael Fidler and Leland Robertson each had three hits. Noah Anastasio had two hits.
5/14/18 Padres 3 Reds 9
The Reds scored four times in the second inning as they defeated the Padres 9-3. For the Reds, Skylar Samuels (2) and Colin Bauckman had triples. Abel Guerra (2), Joshua Ruffin (2) and Aiden Newkirk had doubles. Anderson Parrott had two hits. For the Padres, Chace Downey had a double and a single. Leo Quinones, Eli Miller and Quinn Fiegel had base hits.
5/12/18 Brewers 5 Twins 20
The Twins scored in every inning to beat the Brewers 20-5. For the Twins, Aiden Lane (2) and Lucas Young had doubles. For the Brewers, Mickey Tolle, Hardy Deaves and Ty Bryant each had two hits.
5/12/18 Pirates 9 Angels 14
The Angels broke a 9-9 tie in the bottom of the fifth inning with a five run inning to beat the Pirates 14-9. For the Angels, Liam Videll and Sam McCall had triples. Reece Salyer and Beckett Murphy each had two doubles. CC Cooper and Isaac Kiesewetter each had three hits. Billy Keck turned a double play and had two hits. For the Pirates, Keaton Neuner hit a two-run homerun out of the park. Levi Dhein (2) and Embry Stivers had doubles. Matt Withrow had a triple and a double. Castle Whitaker had two hits.
5/12/18 Bluejays 6 A's 9
5/12/18 Royals 13 Padres 7
The Royals started the Saturday games with a 13-7 win over the Padres. For the Royals, Colton Smith and Wiley Potts had triples. Porter Salisbury and Landon Booher each added a double. Holden Applegate and Landon Brown each had three hits. For the Padres, Chace Downey had a double and two singles. Conner Brown (3), Quinn Fiegel (2) and Eli Miller had base hits.
5/11/18 Twins 4 Angels 5
The game needed an extra inning as the Angels' Sam McCall had the game-winning hit, knocking in Billy Keck in the seventh inning for a 5-4 win over the Twins. For the Angels, Sam McCall had three hits.
Billy Keck had two hits. For the Twins, Lucas Young had three hits. Aiden Lane had two hits.
5/11/18 Reds 11 Rays 9
No game highlights available due to the scorekeepers
5/10/18 Pirates 3 A's 5
The A's remained undefeated by picking up a 5-3 win over the Pirates. For the A's, Hudson Meredith had a triple and a single. Eli DePugh had two doubles. Weston Rice and Michael Fidler each had one double.
Braxton Taylor had two hits. For the Pirates, Levi Dhein and Matt Withrow each had a double and a single.
5/9/18 Reds 2 Brewers 6
The Brewers picked up their second win of the season with 6-2 score over the Reds. For the Brewers, Ty Bryant had a homerun, double and a single. Luke Davidson, Dustin Knorr, Javy Tucker, Tanner
Shipp, Ryder Cantrell and Michael Tolle each had two hits. For the Reds, Skylar Samuels had a double. Aiden Newkirk, Anderson Parrott, Mikey Erskine and TJ Konerman each had two hits.
5/8/18 Royals 8 Rays 3
The Royals defeated the Rays 8-3. For the Royals, Colton Smith, Emmett Hill and Porter Salisbury each had a double and a single. Wiley Potts and Aiden Monroe each had two hits. For the Rays, Kaleb Simpson had
an inside-the-park homerun and two hits. Carter Koehler had two hits. Brady Mattingly had a double and a single.
5/7/18 Bluejays 9 Padres 2
5/4/18 Reds 7 Royals 9
The Royals stopped a late rally by the Reds to win 9-7. For the Royals, Porter Salisbury had a triple and a double among his three hits. Holden Applegate had a double. Landon Brown(2), Colton Smith(3),
Gabriel Prost(2), Connor Johnson, Emmett Hill and Landon Booher had base hits. For the Reds, Skylar Samuels had a triple. Colin Bauckman had two doubles. Aiden Newkirk added a double. Abel Guerra
had two hits.
5/4/18 Rays 13 Bluejays 4
The Rays had a big 5-run second inning as they beat the Bluejays 13-4. For the Rays, Michael O'Dwyer, Luke Collier, and Graham Vilt had doubles. Brandon Fuentes(4), Kaleb Simpson(3), Carter Koehler
(2), and Isaac Burnett(2) had base hits. For the Bluejays, Nate Kissinger and Lukas Herman each had two hits.
5/4/18 Padres 2 Pirates 13
The Pirates took care of the Padres by a score of 13-2. For the Pirates, Matt Withrow had two homeruns and a double. Keaton Neuner added a homerun and a triple. Evan Hulsman(2), Jack Lindauer, Castle
Whitaker and Brooks Blakemore had doubles. Carter Brown had two hits. For the Padres, Chace Downey had a double. Eli Miller had two hits.
5/3/18 Angels 6 A's 7
The A's scored two runs in the bottom of the sixth inning as Jagger Rich knocked in the winning run with a double to beat the Angels 7-6. For the A's, Eli DuPugh, Hudson Meredith, and Michael Fidler
had doubles. Vincent Young(3), Carson Sheppard(2) and Braxton Taylor(2) had base hits. For the Angels, Reece Salyer had a triple and a double. Logan Quillen had a triple. Sam McCall added a double.
5/3/18 Pirates 6 Rays 3
The Pirates knocked off the Rays 6-3. For the Pirates, Embry Stivers had a triple and a single. Levi Dhein and Victor Gatlin each had two hits. For the Rays, Kaleb Simpson had a triple and a
double among his three hits. Luke Collier and Graham Vilt had doubles.
5/2/18 Royals 13 Brewers 7
The Royals scored five runs in two different innings to beat the Brewers 13-7. For the Royals, Porter Salisbury had a homerun and a double. Emmett Hill and Wiley Potts had triples. Colton Smith had three
hits. Gabriel Prost, Holden Applegate, Landon Booher and Aiden Monroe each had two hits. For the Brewers, Tanner Shipp had a double. Hardy Deaves, Dustin Knorr, and Ty Bryant each had two hits.
5/1/18 Twins 17 Padres 1
After a tough weekend the Twins bounced back with a 17-1 win over the Padres. For the Twins, Vallon Moore, Kellan Harper, Charlie Esterly, and Lucas Young had doubles. For the Padres, Eli Miller had
two hits.
4/30/18 Bluejays 3 Reds 5
The Reds knocked off the Bluejays 5-3. For the Reds, Skylar Samuels and Colin Bauckman each had a triple and a double. Aiden Newkirk had a double and a single. For the Bluejays, Ty Klingenberg and
Dean Williams each had two hits.
4/29/18 Reds 14 Twins 13
The Reds scored three runs in the sixth inning to beat the Twins 14-13. For the Reds, John Henry Tucker and Colin Bauckman had doubles. For the Twins, Lucas Young had a triple.
4/29/18 Angels 18 Royals 16
An extra inning was needed as the Angels topped the Royals 18-16 in seven innings. For the Angels, Billy Keck, Sam McCall, Liam Videll(2), and Logan Quillen(3) had doubles. Logan Quillen added a
triple. Myles Knizner(2), Isaac Kiesewetter(3), Beckett Murphy, Gabe Turner(2) and Dalton Miller had base hits. For the Royals, Colton Smith(2), Wiley Potts(2), Porter Salisbury, and Holden
Applegate had doubles. Emmett Hill and Porter Salisbury had triples.
4/29/18 Royals 8 A's 10
The A's held off a late rally by the Royals to win 10-8. For the A's, Eli DuPugh(2), Braxton Taylor, Carson Sheppard, and Reese Simpson had doubles. Jagger Rich(3), Vincent Young(2), Noah
Anastasio(3) and Leland Robertson(2) had multiple hits. For the Royals, Porter Salisbury and Holden Applegate had doubles. Landon Brown had a triple. Gabriel Prost(2), Colton Smith(3), Wiley
Potts(2), Aiden Monroe and Emmett Hill had base hits.
4/28/18 Rays 14 Twins 12
The Rays finished the busy day with a 14-12 win over the Twins. For the Rays, Jordan Vincent had two triples. Kaleb Simpson, Brandon Fuentes, Carter Koehler and Brady Mattingly each had two hits.
For the Twins, Lucas Young(2), Aiden Lane, Kellan Harper(2), and Nathan Lackner had doubles. Aiden Lane added a triple. Charlie Esterly had three hits and Vallon Moore had two hits.
4/28/18 Padres 5 Angels 7
The Angels bounced back from their first loss with a 7-5 win over the Padres. For the Angels, Logan Quillen had a homerun and a double. Billy Keck and Sam McCall had homeruns. Myles Knizner had a
triple. For the Padres, Landon Walling and Nolan Stethen each had two hits.
4/28/18 Royals 5 Bluejays 6
The Bluejays edged the Royals 6-5. For the Bluejays, Ryan Cox had a homerun. Dean Williams had a double. Jack Sabens had a triple. Colton Collins and Lukas Hermann each had two hits. For the Royals,
Holden Applegate, Conner Johnson, and Emmitt Hill each had a double.
4/28/18 Reds 3 Pirates 5
The Pirates started the day with a 5-3 win over the Reds. For the Pirates, Levi Dhein had a triple and a double. Jacob Roberts, Embry Stivers, and Matt Withrow each had a double. For the Reds, Colin
Bauckman had a triple and Anderson Parrott had a double.
4/27/18 Brewers 1 A's 11
The A's defeated the Brewers 11-1. For the A's, Carson Sheppard celebrated his birthday with a homerun and a single. Eli DuPugh had a homerun and a double. Weston Rice had a triple. Hudson Meredith
added a double. For the Brewers, Luke Davidson(2), Ty Bryant, Hardy Deaves had base hits.
4/27/18 Angels 1 Rays 5
The Rays handed the Angels their first loss with a 5-1 win. For the Rays, Kaleb Simpson had two doubles and Brady Mattingly added one. Luke Collier had two hits. For the Angels, Reece Salyer had a
triple and a single. Beckett Murphy had two hits.
4/26/18 Pirates 8 Royals 2
A four-run sixth inning by the Pirates sealed an 8-2 win over the Royals. For the Pirates, Matt Withrow had a homerun. Levi Dhein(3), Jacob Roberts(2), Embry Stivers, Nolan Brown, Keaton Neuner, and
Brooks Blakemore had base hits. For the Royals, Holden Applegate(2), Wiley Potts, Porter Salisbury, Colton Smith, Emmett Hill, Connor Johnson, and Aiden Monroe
had base hits.
4/26/18 A's 13 Padres 5
The A's jumped out to an early 4-0 lead and never looked back as they beat the Padres 13-5. For the A's, Carson Sheppard had two triples. Eli DuPugh, Weston Rice, and Vincent Young(2) had doubles.
Michael Fidler and Hudson Meredith each had three hits. For the Padres, Carter Hounshell had two doubles. Chace Downey had a homerun. Conner Brown and Kole Bridwell each had two hits.
4/25/18 Bluejays 6 Brewers 5
The Bluejays held off a late rally by the Brewers to win 6-5. For the Bluejays, Conner Castle had a homerun. Dean Williams, Charlie Sabens, and Radley Webb had doubles. Eagan Spicer and Ty Kessinger each
had two hits. For the Brewers, Mickey Tolle had a triple. Hardy Deaves and Ty Bryant added doubles. Luke Davidson and Nolan Favorite each had two hits.
4/22/18 Brewers 8 Rays 13
The Rays knocked off the Brewers 13-8. For the Rays, Jordan Vincent and Luke Collier had doubles. Michael O'Dwyer, Kaleb Simpson, and Graham Vilt each had three hits.
4/22/18 Twins 12 Pirates 16
The Pirates defeated the Twins 16-12. For the Pirates, Levi Dhein had a double and four singles. Matt Withrow had a triple. Embry Stivers and Keaton Neuner each had a double. For the Twins, Vallon Moore,
Robert Shearin, Kellan Harper, and Lucas Young had doubles. Wyatt Himes and Nathan Lackner had homeruns.
4/22/18 Reds 14 Padres 7
The Reds picked up a 14-7 win over the Padres. For the Reds, Arron Farmer had a homerun. Colin Bauckman had a homerun and a triple. Aiden Newkirk had a triple and a double. Mickey Erskine, John Henry
Tucker, and Abel Guerra had doubles.
4/21/18 Brewers 15 Padres 10
4/21/18 Angels 12 Reds 10
The Angels remained undefeated as they beat the Reds 12-10. For the Angels, CC Cooper had two triples and a double. Sam McCall added a double. For the Reds, John Henry Tucker, Waylon Higdon and
Anderson Parrott added doubles.
4/21/18 Rays 10 A's 12
The A's continued their winning ways by defeating the Rays 12-10. For the A's, Hudson Meredith had a homerun. Eli DePugh had a triple and two doubles. Weston Rice(2) and Reese Simpson had doubles. For
the Rays, Michael O'Dwyer and Jordan Vincent had doubles.
4/21/18 Bluejays 9 Pirates 10
The Pirates Keaton Neuner had two doubles and the game winning hit as the Pirates defeated the Bluejays 10-9 in seven innings. For the Pirates, Levi Dhein and Castle Whitaker each had a double. Nolan
Brown had three hits. For the Bluejays, Jack Sabens and Dean Williams each had a double.
4/20/18 Royals 11 Twins 10
The Royals edged the Twins 11-10. For the Royals, Porter Salisbury had a homerun and a triple. Emmett Hill and Wiley Potts each had two hits. For the Twins, Charlie Esterly and Cory Campbell each had a
double. Lucas Young added a triple. Kellan Harper had a homerun and a triple.
4/20/18 A's 13 Reds 3
Two five-run innings pushed the A's over the Reds 13-3. For the A's, Carson Sheppard had a homerun and two singles. Eli DuPugh and Michael Fidler each had two triples. Hudson Meredith added a double.
For the Reds, John Henry Tucker had a base hit.
4/19/18 Pirates 11 Brewers 6
The Pirates knocked off the Brewers by a score of 11-6. For the Pirates, Levi Dhein had two triples and a double. Jacob Roberts had three hits. Matt Withrow and Embry Stivers each had a homerun.
Castle Whitaker had two hits. For the Brewers, Liam Fleischer and Ty Bryant each had two hits. Luke Davidson had a double.
4/17/18 Padres 1 Rays 4
The Rays defeated the Padres by a score of 4-1. For the Rays, Michael O'Dwyer had a two-run homerun in the third inning. Carter Koehler and Jordan Vincent each had a double. For the Padres, Carter
Hounshell had a single and scored a run. Chace Downey and Brandon Morgan had base hits.
4/13/18 Bluejays 9 Angels 10
Scoring five runs in two different innings was enough as the Angels held off the Bluejays for a 10-9 win. For the Angels, Reece Salyer, Jaelin Cooper, and Sam McCall each had a double. Liam Videll
had two hits. For the Bluejays, Ryan Cox and Jack Sabens each had a double. Charlie Sabens had two hits.
4/13/18 Rays 2 Reds 5
A three-run fifth inning propelled the Reds past the Rays to win 5-2. For the Reds, Skylar Samuels had a homerun. Colin Bauckman had a double. Mickey Erskine and Aiden Newkirk each had a base hit.
For the Rays, Kaleb Simpson finally getting to play got a base hit and Luke Collier had a base hit.
4/12/18 Royals 8 Padres 1
4/11/18 A's 12 Bluejays 1
4/10/18 Pirates 7 Angels 11
4/9/18 Brewers 0 Twins 2
The Twins topped the Brewers 2-0 in the first game of the season. For the Twins, Aiden Lane had two hits and scored two runs. Lucas Young, Owen Matthews, and Charlie Esterly each had two hits. For
the Brewers, Luke Davidson had two hits. Ty Bryant and Tucker Wallace had base hits.
Categories and infinity-categories
Time: Thu 16:00-18:00
First meeting: 18.10.2018
Room: KöLu24-26/SR 006 Neuro/Mathe (Königin-Luise-Str. 24 / 26)
Infinity category theory lies in the intersection of two major developments of 20th century mathematics: topology and category theory. Category theory is a very powerful framework to organize and
unify mathematical theories. Infinity category theory extends this framework to settings where the morphisms between two objects form not a set but a topological space (or a related object like a
chain complex). This situation arises naturally in homological algebra, algebraic topology and sheaf theory.
This reading seminar will recall the foundational ideas of usual category theory and then make the transition to homotopical algebra and infinity categories. By the end of the seminar, the student
will be familiar enough with infinity categories that they can navigate texts written in this new language.
For ordinary category theory:
• An Introduction to Homological Algebra, C. Weibel
For infinity-category theory:
• Higher Topos Theory, J. Lurie
• Higher Algebra, J. Lurie
How to compare fractions
How do you determine which fraction is the largest?
In this video I show you three short cuts that help make comparing fractions very easy, and almost fun.
The three shortcuts are,
• If the denominators are the same, then the fraction with the largest numerator is the largest fraction.
• If the difference between the denominator and the numerator is the same for both fractions, then the fraction with the largest numerator is the largest.
• Finally, you can cross multiply, and the fraction whose numerator produced the larger product is the larger fraction.
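A minimal sketch of the third shortcut in Python (this is my own illustration, not code from the video; the helper name `larger` is made up):

```python
from fractions import Fraction

def larger(a, b, c, d):
    """Return the larger of a/b and c/d using the cross-multiplication
    shortcut: for positive denominators, a/b > c/d exactly when a*d > c*b."""
    return Fraction(a, b) if a * d > c * b else Fraction(c, d)

# Shortcut 1: same denominators, so the larger numerator wins.
assert larger(3, 8, 5, 8) == Fraction(5, 8)

# Shortcut 2: same gap between numerator and denominator (proper fractions),
# so the larger numerator wins: 3/4 vs 7/8 both have a gap of 1, and 7/8 wins.
assert larger(3, 4, 7, 8) == Fraction(7, 8)

# Shortcut 3: cross multiply. For 2/3 vs 3/5: 2*5 = 10 > 3*3 = 9, so 2/3 wins.
assert larger(2, 3, 3, 5) == Fraction(2, 3)
```

Cross-multiplication agrees with the first two shortcuts, so it is the safe fallback whenever neither special case applies.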
internal category
fine. so, as long as the right functorial setting is not clear, let us call a certain way of going from an n-group to an (n+1)-group just a "rule" R. for abelian groups this rule could be , whereas for an
arbitrary group this could be (unpleasantly, the two do not coincide on abelian groups, but this is classical and maybe unavoidable). so for each rule there is a higher cohomology with
coefficients in a group : it is just cohomology with coefficient object . such a general nonsense accounts for cohomology with coefficients in an abelian group and for nonabelian group cohomology,
but it is evidently something too general to be meaningful. so, the question is: what is reasonable to ask of R in order to have a good theory? clearly there could be classical cohomology theories
where is not related to the n-th iteration of a rule R; it could be a good idea to try to look at them from the nPOV to see what's really happening there.
The AUT construction is not a functor, as Aut is not an endofunctor on groups (Aside: this is why Giraud's original formulation of nonabelian cohomology from 1970 (or so) wasn't functorial in the
1-group K, but as we now know (following Debremecker, Aldrovandi-Noohi), it is functorial in the 2-group.) Certainly the autoequivalences of the delooping of an n-group form the objects of an (n+1)
-group, but I don't know if the relation between the cohomologies with coefficients BK and Eq(BK) = AUT(K) has been studied for K a general n-group.
a classical example is $H^n(X;\mathbb{Z})=\pi_0 \mathbf{H}(X;\mathbb{B}^n\mathbb{Z})$. this suggests that the natural setting for cohomology in different degrees is a stable $(\infty,1)$
It is true that only if the coefficient object has arbitrary delooping that the above definition of $H^n$ makes sense for all $n$. But there is a priori no requirement that such a definition needs to
make sense. For general nonabelian coefficients degree $n$ cohomology is defined only up to some finite $n$. This is a standard thing in the literature on nonabelian cohomology.
But you can always form loops to go down to lower degrees.
There are ways to define homotopy groups $\pi_n$ for topological groupoids
There is, by the way, something even more general: the notion of homotopy group (of an infinity-stack)
@Domenico, re nonabelian group cohomology
The point of nonabelian cohomology is to _not_ use stable theory, which is, at least in my view, a sort of minimal abelianisation of homotopy theory. But you are correct about using only 0-th
cohomology. Actually I would say that there are, for n-group coefficients, two one would use: 0-th and 1-st. For example, take a 2-group G. We can then form the group H^0(X,G), and the pointed set H^
1(X,BG). The first, for the case G = AUT(K) classifies K-bibundles, the second classifies K-bundle gerbes. This is in general, given an n-group, how much one can do - just H^0 and H^1. If H is an
n-groupoid that can be delooped m times, we can form H^0, H^1,...,H^m by taking successive deloopings of the coefficients.
The most that Schreier theory can do (that I am aware of) for a group K is look at H^3, which we would now call cohomology with coefficients in BAUT(K). This was done in papers by Dedecker in the
50s. After this, I'm not sure what can be done.
@Domenico, re 1-connected model
R//Z is not 1-connected _as a 2-group_, only its component spaces are (if we say that a disconnected space is 1-connected if its path-components are, which is a little bit of a fudge). This is still
a desirable thing, though, as the topological complexity has been reduced (at the cost of introducing more algebraic structure). There are ways to define homotopy groups pi_n for topological
groupoids (at least in print for n<3, and in my notes for all n) such that the map R//Z --> U(1) we have been discussing induces isomorphisms (this is contained in proposition 2.130 in my thesis,
using U(1) = BZ). There is a bit of this in the work by Urs and Konrad Waldorf, but they are looking at the special case of a Cech groupoid.
unfortunately, I too will be unable to seriously work on the Lab for a couple of weeks. I'll just write something here for later use.. or immediate use if someone is interested in it and wants to
pick it up :-)
in the nPOV approach to cohomology one basically defines only 0-th cohomology. cohomology in other degrees is then defined by shifting the second variable. a classical example is $H^n(X;\mathbb{Z})=\pi_0 \mathbf{H}(X;\mathbb{B}^n\mathbb{Z})$. this suggests that the natural setting for cohomology in different degrees is a stable (infinity,1)-category. for instance in nonabelian group cohomology with coefficients in the group , the natural object to work with seems to be the of . it would be interesting to compare this to Schreier theory.
Schreier theory is included in the group extension entry.
It is instructive the way Faro et al. phrase it in their article, placing the role of AUT K via interpreting the Grothendieck construction. I will write about it sometime.
I find the use of AUT(K) something which works, but which is chosen in quite an arbitrary way. maybe it's worth creating a Schreier theory entry to discuss it
Yes, I agree with all you say here. As far as I am concerned: I don't have the energy and time for this particular project right now, though. Probably later sometime.
but on second thought, I think a plausible perspective could be
Yes, precisely: very different looking objects may all be equivalent to the same 2-group, (or the same oo-groupoid). The highest degree of nondegenerate cells is not an invariant under this. Instead,
the main invariants are the intrinsically defined homotopy groups
the above should be somehow relevant to nonabelian group cohomology. by the way, as it is stated at the beginning of that entry, nonabelian group cohomology largely discusses Schreier theory of nonabelian group extensions – but from the nPOV. this is a bit unsatisfactory: nonabelian group cohomology should be something more intrinsic from the nPOV, which is then seen to be equivalent to Schreier theory. namely, I find the use of AUT(K) something which works, but which is chosen in quite an arbitrary way. maybe it's worth creating a Schreier theory entry to discuss it (there's also a lot of material on this at group extension), so to make the other entries related to Schreier theory lighter and more confined to their title subject (clearly adding links to Schreier theory wherever needed). in particular nonabelian group cohomology could benefit from such a reorganization, since is one of the strong points of the nPOV slogan. my feeling is that nonabelian group cohomology could be rewritten making the fibration sequences the main tool, but even if this should turn out not to be incorrect, I'm not expert enough to attempt a reorganization of nonabelian group cohomology along these lines.
at first sight, the "equivalence of 2-groups" point of view confused me: if and are equivalent, why should I prefer ? but on second thought, I think a plausible perspective could be the following: both and are models for the same infinity-group; is a 1-connected model. one could go further and look for 2-connected models, 3-connected models.. of as an infinity-group, and it turns out that is such a model, since is infinity-connected. the same point of view would apply to other groups. e.g., I should not look at the string group as something different from but rather as a 3-connected model of .

this could have some relevance in thinking about cohomology with coefficients in . namely, the "true object" is independent of the chosen model, but in carrying out explicit computations a highly connected model could be convenient (in nice situations, to compute one chooses a fibrant replacement for ..).
Just a comment, probably no news to anyone, but just for the record:
to realize the equivalence as a homotopy equivalence (morphisms going back and forth, being weak inverses) one needs to find suitable ana-2-functors, inverting the evident morphism from one
2-groupoid to the other, but for just knowing that the two are equivalent it is sufficient to have that one morphism and checking that it is a weak equivalence.
This is true for topological infinity-groupoid realizations of the situation and even for Lie-infinity-groupoid realizations.
For that one may observe that the functor $\mathbf{B} \mathbb{R}//\mathbb{Z} \to \mathbf{B} U(1)$ is k-surjective for all k on all stalks of Top or Diff: these are the germs of n-dimensional disks
(as described by Dugger in the reference cited at topological infinity-groupoid.) So for $D^n_r$ the standard n-disk of radius r, we form the groups of group-valued maps $Map(D^n_r, \mathbb{R})$,
$Map(D^n_r, \mathbb{Z})$ and $Map(D^n_r, U(1))$ and then check that the functor of ordinary 2-groupoids $\mathbf{B}(Map(D^n_r,\mathbb{R})//Map(D^n_r,\mathbb{Z})) \to \mathbf{B} Map(D^n_r,U(1))$ is
k-surjective for all k for some r small enough. And it clearly is: on any disk, any U(1)-valued function (continuous or even smooth) may be lifted to an $\mathbb{R}$-valued function, and the
nonuniqueness of the lifts is precisely given by $\mathbb{Z}$-valued functions on the disk.
As a small contribution, the discussion about U(1) is a special case of the (weak) equivalence of 2-groups \tilde G // pi_1(G) --> G for a topological group G admitting a universal cover \tilde G -
where we forget the topology. If we try to do this in a 2-category of topological 2-groups, we need to use anafunctors for this to be an equivalence.
well, it's better than that. The "groupal groupoid" ($\mathbb{R}//\mathbb{Z}$) is equivalent, as a 2-group, to the 2-group which is just the group $U(1)$.
The best way to see this is to notice that the obvious 2-functor of the corresponding 2-groupoids is an equivalence: it is a k-surjective functor for all k.
Aha, he fell for my cunning plan! :)
But seriously...
I have proofs in chapter 5 of my thesis establishing Pi_2(X) as a topological bigroupoid for a locally well-behaved space X. I don't think it is too hard, given a fin. dim. manifold, to get the
smooth (Frechet) structure on the _spaces_ involved in Pi_2, the trick is to show all the various structure maps (compositions, associator, unitor and unit/counit) are smooth. This is something I'm
serious about doing, but I only have my spare time in which to do it, so it will be a slow process.
@David: you wrote
In this context one needs to be au fait with Frechet manifolds, sadly an area where I am lacking.
I have a little facility with Frechet manifolds. Is there something here that I could help with?
not sure if this is relevant for what you have in mind, but taking path oo-groupoids can be made into a map from a lined oo-topos to itself. For instance taking a topological oo-groupoid $X$ (might
be just a topological space) to the topological oo-groupoid $\Pi(X)$.
Same for smooth oo-groupoids. For the smooth model of String, one can start with the Lie 2-algebra $\mathfrak{string}$, then form the smooth space (sheaf) $S\mathfrak{string}$ which is such that its
plots $U \to S(\mathfrak{string})$ are precisely the flat $\mathfrak{string}$-valued differential forms on $U$. Then one can form the smooth oo-groupoid $\Pi(S(\mathfrak{string}))$ of that space.
Finally, we can take from this simplicial sheaf degreewise the underlying concrete sheaf (=diffeological space) to get the smooth oo-groupoid $conc(\Pi(S(\mathfrak{string})))$. The claim is that
that's a smooth version of $\mathbf{B}String$.
This is discussed in some detail (though with slightly more antiquated tools than I have now) in section 5.2.3 here.
Concerning the categorical degree, I feel like remarking that it's only the homotopy groups (of an infinity-stack) that have intrinsic meaning, not the degrees of a truncated model.
For instance the group $U(1)$ looks like, well, a group, but it is equivalent to the 2-group that comes from the crossed module $\mathbb{R}//\mathbb{Z} = [\mathbb{Z} \hookrightarrow \mathbb{R}]$.
Similarly, the String 2-group is a $\mathbf{B}U(1)$-central extension of an ordinary group $G$, hence a 2-group extension, but this is equivalent to a $\mathbf{B} (\mathbb{R}//\mathbb{Z})$-central
extension, which looks like a 3-group extension, but is really equivalent to the original 2-group extension.
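As an aside for readers following along (this sketch is mine, not part of the thread), the equivalence above comes from the exponential short exact sequence:

```latex
% Editorial sketch, not from the thread: why U(1) and R//Z are equivalent.
% The exponential short exact sequence of abelian groups realises U(1) as
% the quotient of R by Z. The crossed module [Z -> R] (trivial action,
% both groups being abelian) presents the 2-group R//Z, and the induced
% 2-functor B(R//Z) -> B U(1) is k-surjective for all k, hence an
% equivalence of 2-groups:
\[
  0 \longrightarrow \mathbb{Z} \longrightarrow \mathbb{R}
    \xrightarrow{\,x \mapsto e^{2\pi i x}\,} U(1) \longrightarrow 0
  \quad\leadsto\quad
  \mathbb{R}//\mathbb{Z}
  \;=\; [\mathbb{Z} \hookrightarrow \mathbb{R}]
  \;\simeq\; U(1).
\]
```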
Thank you for taking an interest in the corner of the playground I frequent, Domenico. Pi_1(X) can indeed be topologised, but composition is only continuous when X is locally nice - locally connected
and semilocally simply connected. This pattern continues higher up, where increasingly stronger local conditions are needed. Note that to go all the way up to infinity-groupoids, one needs local
contractibility. I will respond, but not right now. I only pause to remark that ultimately models of String should be smooth. One thing which I didn't pursue in my thesis, but was aware of, is that
the 'universal' 2-covering space (not that I have proved it universal yet - only that it is 2-connected) is smooth when the base space is smooth. In this context one needs to be au fait with Frechet
manifolds, sadly an area where I am lacking.
Edit: An easy way to see that the universal cover of a topological group X is topological is that the universal cover = the homotopy fibre of X --> Pi_1(X) at a point x = the source fibre of Pi_1(X)
at x. This is a subgroup of the group of arrows, as Pi_1(X) is a strict 2-group as shown by Brown-Higgins in the 70s.
Edit2: The prototypical weak topological 2-group is the fundamental groupoid of a based loop space, subject to the above caveat. The strict case is pretty much as you say.
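The first Edit above can be compressed into a single formula (my phrasing, an editorial gloss on the thread):

```latex
% Editorial sketch: the universal cover as a homotopy fibre.
% For a pointed topological group (X, e) admitting a universal cover,
% the cover is the source fibre of the fundamental groupoid at e,
% which is the homotopy fibre of the canonical map X -> Pi_1(X):
\[
  \tilde{X}
  \;\simeq\;
  \mathrm{hofib}_{e}\big(X \longrightarrow \Pi_1(X)\big)
  \;=\;
  s^{-1}(e) \subseteq \mathrm{Mor}\,\Pi_1(X),
\]
% and this source fibre is a subgroup of the group of morphisms,
% because Pi_1(X) is a strict 2-group (Brown--Higgins).
```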
I think that similar ideas for the algebraic Galois group are used in Grothendieck's still unfulfilled ideas in his 1980s manuscripts and are in general known in Galois theory (covering spaces and higher
Postnikov fibers have natural common generalizations in this context).
internal infinity-groupoids in Top (nice topological spaces) suggest the notion of n-topological group. if $X$ is a topological space, then $\Pi_1(X)$ can be topologized, becoming a topological 1-groupoid. more interestingly, if $X$ is a topological group, $\Pi_1(X)$ should be a group object in topological 1-groupoids, i.e. a topological 2-group (I'd like to consider this example the prototypical topological 2-group, but I guess this is a matter of taste). by looking at $\Pi_1(X)$ as a topological 2-group one obtains an obvious answer to the question "Why is the universal cover of a (nice) topological group a topological group?": namely, there is a natural topological groupoid map $X \to \Pi_1(X)$ (here both groupoids are topological infinity-groupoids; the fact that the one on the left is a 0-groupoid and the one on the right a 1-groupoid tells us that morphisms are trivial from a certain point on, but we do not look at i-groupoids and j-groupoids as objects in different categories from this point of view). taking the homotopy fiber (in groupoids) of this map over the identity object one obtains a topological groupoid which is a 0-groupoid (since taking the homotopy fiber decreases the "categorical height" by one). so this homotopy fiber is a topological group, and by construction it is the universal cover of $X$.

This extends to higher "covers" of $X$. let us denote by the universal cover. then should be topologized to give a group object in topological 2-groupoids, and so a topological 3-group. taking the homotopy fiber ends up in a natural structure of topological 2-group on the 2-connected cover of . and so on.

From this point of view it is natural to think of String(n) as a topological 3-group. this apparently contrasts both with the topological group point of view on String(n) and with the 2-Lie group point of view. however one is reconciled with the "correct" categorical height of String(n) if one thinks it is built as a homotopy fiber of the fourth delooping of , which is a topological 4-group. lowering the categorical height of String(n) is then based on realizing as a topological 1-group. this way one trades the 2-group for a topological 1-group, and all levels in the construction of Spin(n) are lowered by one. and since the model for is a Lie 1-group, one can wonder (and actually succeed in finding) whether String(n) can be realized as a Lie 2-group. at the topological level one can iterate the "lower the base step" trick, i.e. one can choose a model for which is a topological group, and so end up with String(n) as a topological 1-group.

David Roberts could be interested in expanding this point of view.
I edited the formatting of internal category a bit and added a link to internal infinity-groupoid
it looks like the first query box discussion there has been resolved. Maybe we can remove that box now?
Am hereby moving an old Discussion-section from internal category to here
[begin forwarded discussion]
Previous versions of this entry led to the following discussions
+–{: .query} I think things are multiply inconsistent in this entry. I do not want to change it as I do not know what the intended notation was to start with. If $p_1; s = p_2; t$ that means that
target is read at the left-hand side (composition as o, not as ;), while the diagrams before that suggest left to right composition. Then finally the diagram for groupoids has $s$ both for source and
inverse, and there is only for right inverse, and one should also check convention, once it is decided above. -Zoran
You're right; I think that I caught all of the inconsistencies now. Incidentally, one needs only inverses on one side (as long as all such inverses exist), although it's probably best to put both in
the definition. (For groupoids, one also needs only identities on that same side too! This proof generalises.) —Toby =–
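(An editorial aside, not part of the archived exchange: Toby's claim that one-sided inverses and same-side identities suffice can be checked directly in the one-object case, i.e. for a set with an associative operation, a right identity $e$ (so $ae = a$), and right inverses (for each $a$ some $b$ with $ab = e$):)

```latex
% Let b be a right inverse of a (ab = e) and c a right inverse of b (bc = e).
\begin{aligned}
  ba &= (ba)e = (ba)(bc) = b(ab)c = (be)c = bc = e
      && \text{(so $b$ is also a left inverse of $a$)},\\
  ea &= (ab)a = a(ba) = ae = a
      && \text{(so $e$ is also a left identity)}.
\end{aligned}
```

The internal/diagrammatic version of the argument is what Toby alludes to with "This proof generalises."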
+–{: .query}
Question: I’ve looked at the definition of category in $A$ for a while and still haven’t been able to absorb it. Could we walk through an explicit example, e.g. “This is exactly what $C_0$ is, this
is exactly what $C_1$ is, this is exactly what $s,t,i$ are, and this is how it relates to the more familiar context”? For example, an algebra is a monoid in $Vect$. I’ll try to step through it
myself, but it will probably need some correcting. - Eric
Eric, one example to ponder is: how is an internal category in Grp the “same” as a crossed module? As a partial hint, try to convince yourself that given an internal category, part of whose data is $
(C_1, C_0, s, t)$, the group $C_1$ of arrows can be expressed as a semidirect product with $C_0$ acting on $\ker(s)$. The full details of this exercise may take some doing, but it might also be
enjoyable; if you get stuck, you can look at Forrester-Barker. - Todd
Urs: I don’t know, but maybe Eric should first convince himself of what Todd may find too obvious: how the above definition of an internal category reproduces the ordinary one when one works internal
to Set. Eric, is that clear? If not, let us know where you get stuck!
Tim: I have just had a go at 2-group and looked at the relationship between 2-groups and crossed modules in a little more detail, in the hope it will unbug the definition for those who have not yet
’grokked’ it.
Eric: Oh thanks guys. I will try to understand how a small category is a category internal to Set first and then move on to category in Grp and the stuff Tim wrote. I’m sure this is all obvious, but
don’t underestimate my ability to not understand the obvious :)
Eric: Ok. Duh. It is pretty obvious for Set EXCEPT for pullback. Pullbacks in Set are obvious, but what about other cases? Why is that important, and what is an example where there are no pullbacks?
In other words, is there an example of something that is ALMOST a category in some other category except it doesn’t have pullbacks, so is not?
Tim: If I remember rightly the important case is when trying to work on ’smooth categories’, that is, general internal categories in a category of smooth manifolds. Unless you take care with the
source and target maps, the pullback giving the space of composable pairs of arrows may not be a manifold. (I remember something like this being the case in Pradines’ work in the area.) The point is
then that one works with internal categories with extra conditions on $s$ and $t$ to ensure the pullback is there when you need it.
Toby: Usually in the theory of Lie groupoids, they require $s$ and $t$ to be submersions, which guarantees that the pullback of any map along them exists. =–
[end forwarded discussion]
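For readers skimming the archived exchange, the data under discussion can be summarized as follows (a standard formulation, added editorially; conventions for $s$, $t$ and the order of composition vary, as the thread itself notes). An internal category in a category $A$ with the relevant pullbacks consists of:

```latex
\begin{aligned}
  & C_0,\ C_1 \ \text{objects of } A
      && \text{(object of objects, object of morphisms)},\\
  & s, t \colon C_1 \to C_0
      && \text{(source and target)},\\
  & i \colon C_0 \to C_1, \quad s \circ i = t \circ i = \mathrm{id}_{C_0}
      && \text{(identity-assigning map)},\\
  & c \colon C_1 \times_{C_0} C_1 \to C_1
      && \text{(composition, defined on the pullback of composable pairs)},
\end{aligned}
```

subject to the usual unitality and associativity laws, stated diagrammatically. For $A = Set$ this is exactly an ordinary small category, which is the warm-up exercise Urs suggests above.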
• Renato Betti, Formal theory of internal categories, Le Matematiche Vol. LI (1996) Supplemento 35-52 pdf
at internal category there is a bogus pdf link which redirects back to internal category! Does somebody have a correct pdf link?
Urs (#25), when you delete a discussion and archive it, then please leave a backlink to the archived version from the old place; otherwise it is essentially lost. I have done it this time (into the references).
I have created a new entry locally internal category and listed it under related notions at internal category.
@Zoran I was not sure what you meant by the first sentence of locally internal category. I have amended it to mean what I think you meant but please check.
I added a bit more.
Very nice, Mike !
On the other hand, why say “in the sense of the appendix of (Johnstone)” at stack semantics, rather than more correctly attributing the phrase “in the sense of Penon 1974”? Is there a slight
difference? (I did not look at Penon’s article yet; strangely I can find CR Acad. Paris at the partly free Gallica repository up to 1973 and then again from 1979, but not for 1974-1978.)
I resolved the bogus pdf link which was asked about in #26 and will correct it at internal category in a minute:
• Renato Betti, Formal theory of internal categories, Le Matematiche Vol. LI (1996) Supplemento 35-52, pdf
I didn’t write that phrase at stack semantics. Feel free to correct it if you know the correct reference.
“in the sense of the appendix of (Johnstone)” was written by Ingo Blechschmidt.
zskoda: Sorry, that was indeed my mistake. It is fixed now.
It is not a mistake, just more people together know more about the history :) and it is good when we agree (as often mathematicians do not agree on history).
I have brushed-up the entry internal category a little. Added the remark on cartesian closure discussed in another thread to a Properties-section
I just noticed how hard it is (or was) to find the crucial discussion at 2-topos – In terms of internal categories if all one does is search the nLab for “internal category”.
So I have now added a pointer to that at internal category by way of a brief paragraph at Properties – In a topos.
Should be expanded.
added one more item to the list of examples:
• A groupoid internal to a category of presheaves is a presheaf of groupoids.
diff, v60, current
added pointer to:
• Francis Borceux, Chapter 8 in Vol 1 Basic Category Theory of: Handbook of Categorical Algebra, Cambridge University Press (1994) (doi:10.1017/CBO9780511525858)
diff, v66, current
reordered the list of references:
all surveys first, then all textbooks, then all original articles.
diff, v66, current
added publication data to:
• Jean Pradines, In Ehresmann’s footsteps: from Group Geometries to Groupoid Geometries, Banach Center Publications, vol. 76, Warsawa 2007, 87-157 (arXiv:0711.1608, doi:10.4064/bc76-0-5 )
diff, v66, current
added pointer to:
• Charles Ehresmann, Catégories structurées, Annales scientifiques de l’École Normale Supérieure, Série 3, Tome 80 (1963) no. 4, pp. 349-426 (numdam:ASENS_1963_3_80_4_349_0)
There are more articles by Ehresmann that would deserve to be cited here, but I am out of patience now and will leave it at that.
diff, v67, current
Added the real original reference:
• Charles Ehresmann, Catégories topologiques et categories différentiables Colloque de Géométrie différentielle globale, Bruxelles, C.B.R.M., (1959) pp.137-150
(Jean Pradines said in his talk at the New Spaces conference that the concept of Lie groupoid was already used, but not formally defined, by Ehresmann in earlier 1958 work on finite pseudogroups a la
Cartan, but I haven’t yet found it. And possibly it is in Gattungen von lokalen Strukturen, but I wouldn’t be able to tell)
diff, v68, current
Added link zbMath review of Ehresmann’s paper 1959.
diff, v68, current
I found Ehresmann’s 1958 paper in Compte Rendus on pseudogroups, and he uses the Lie groupoid of germs and certain Lie subgroupoids, but is really still just doing rather formal differential
geometry, rather than thinking of Lie groupoids per se.
re #42 Thanks! That’s the way to go.
I am producing a standalone pdf of that article. In process…
I can have a look at “Gattungen…” but I haven’t gotten hold of that one yet, as it doesn’t seem to be in the oeuvres pdf
So I uploaded a pdf copy of Ehresmann’s Catégories topologiques et categories différentiables.
Looking at it, I find it may be a little anachronistic to attribute the notion of internalization to this article: It defines topological and Lie categories by explicit description of the conditions,
not by appeal to a general concept of internalization. The would-be ambient categories of $TopologicalSpaces$ and of $SmoothManifolds$ are not even being mentioned, are they? (I have only skimmed
over it, I admit. If the actual notion of internalization is in there, let’s extract the page number and point to it.)
diff, v69, current
It seems that the actual notion of internal categories only appears later in
• Charles Ehresmann, Catégories structurées, Annales scientifiques de l’École Normale Supérieure, Série 3, Tome 80 (1963) no. 4, pp. 349-426 (numdam:ASENS_1963_3_80_4_349_0)
where it is probably Definition 3 in part II on p. 36.
(Not sure, though. I find reading this article is dizzying. Not the French, but the mathematical notation and terminology. So I don’t claim to have penetrated what it’s saying. If anyone has, let’s
add specific pointers.)
diff, v69, current
Oh, I don’t claim the 1959 paper has any abstract notion of internalisation, but that’s not what the text was saying when I made the edit. But this paper gave what was the key motivation for the
definitions of internal category and groupoid, rather than just a purely formal idea, and in categories without all pullbacks, to boot (a generality many other authors don’t consider).
Ehresmann is hard to read for many reasons, not least the non-Bourbaki-ized mathematical style (probably more influenced by Cartan) and the isolation from other category theorists leading to
non-standard terminology.
I adjusted the text after I suspected that your “real original reference” (#42) is not really about internal categories. It’s instead probably the original reference on topological categories and
smooth categories , and I did copy it to there.
So possibly the first mentioning of internalization of categories is the reference given in #41. If you have the energy, you might sanity check whether we could quote Def. 3 on p. 63 there as the
original definition (as suggested in #47).
I was looking at it, but I’ll have to come back. It seems that either Ehresmann is defining something only equivalent to what we would call an internal category, like maybe an externalisation, or
else he’s only working in a concrete setting, since the definition of internal groupoid, for instance, just says that some data is a (plain!) groupoid, on top of having a structured category.
Okay, thanks.
It seems then my impression (here) is correct that the credit for understanding and articulating the concept of internalization goes to Eckmann-Hilton 1961.
Oh the mysterious ways of attribution. This could well have been “Eckmann-Hilton theory”, but instead their names are hardly mentioned at all in this context, outside of their one eponymous example
(of many they gave), and all authors use their term “group object” as if it was always called that way, since the beginning of time.
This could well have been “Eckmann-Hilton theory”
to be honest, that’s a terrible way to name something. Giving people credit is not the same as using their names as the name of a thing. Imagine if we had “Eilenberg–Mac Lane theory” instead of
“category theory”…
Maybe it’s a citation culture difference. Bénabou once tried publicly shaming me because my anafunctor paper had so many citations, whereas his bicategory paper had so few. Not to mention the whole
issue of there not being category theory journals for so long that results were transmitted at meetings and conferences, and sometimes published in Springer’s LNM.
Hi David, don’t let yourself be distracted that easily. I was hoping you would come back with a closer reading of Ehresmann, as promised in #50. We are still looking for where in his writings we can
first recognize the notion of internalization. I count on you.
Yeah, sorry :-). I’m going to have to ask on the categories mailing list, I think, or else take some more time to digest.
Okay. If you do ask on the mailing list, please make clear that we are looking for the notion of internalization as such, not just for some construction that we can recognize, after the fact, as
equivalent to an internalized structure.
moved a section “Internal vs. enriched categories” from internalization to here
diff, v70, current
The import from internalization needed some adjustment for its new home.
All right, thanks. And maybe we should have the corresponding comment and cross-link also at enriched category. (Can’t edit myself right now.)
We already have this section at enriched category. There are a lot of ideas there. Maybe we could even have a page on enrichment/internalization comparisons, and then link from each side. But that’s
beyond my skills to synthesize.
Bénabou once tried publicly shaming me because my anafunctor paper had so many citations, whereas his bicategory paper had so few.
This is weird, since
16 citations: Roberts, David Michael Internal categories, anafunctors and localisations. Theory Appl. Categ. 26 (2012), No.29, 788–829. (Reviewer: Enrico Vitale) 18D05 (18F10 22A22)
336 Citations: Bénabou, Jean Introduction to bicategories. 1967 Reports of the Midwest Category Seminar pp. 1–77 Springer, Berlin (Reviewer: J. R. Isbell) 18.10
Clearly he meant items listed in the article’s bibliography, not citations to/of the article.
I guess it refers to the idea that the less you cite the more of a bigshot you must be.
Added correct associativity diagram for the definition given here; to avoid the ’triplet isomorphism’ in the diagram, we would need to alter the definition here to include the objects of composable
pairs and triples as part of the data, and this seemed like more work.
diff, v72, current
Urs has it. Apparently it meant my ideas must have been less original (though that’s a shallow reading of bibliographic entries). But this is, as Urs pointed out, off the thread :-)
It is somewhat on-topic in that it illustrates which social mechanisms are behind the disaster we have been struggling with above, of a whole field citing so unprofessionally as to forget the origin
even of its most basic notions (here: internalization in general, which we discovered is due to Eckmann-Hilton, who are never credited for it, their peers possibly fearing to compromise their own
originality if they did; and internal categories in particular, where tradition decided to attribute it to the article Catégories structurées which, however, on actual inspection, is at best a big
I’m still interested in finding, and recording here, which reference first articulated the notion of internal categories, clearly.
Proper citation and attribution is part of professional academia. Not citing your precursors is not a sign of originality but of fraudulency.
I’ve added in a link to the parallel treatment of the internalization/enrichment comparison at enriched category.
diff, v73, current
At long last, we have found the origin of the definition of internal categories (thanks to Dmitri here!):
• Alexander Grothendieck, p. 106 (9 of 21) of: Techniques de construction et théorèmes d’existence en géométrie algébrique III: préschémas quotients, Séminaire Bourbaki: années 1960/61, exposés
205-222, Séminaire Bourbaki, no. 6 (1961), Exposé no. 212, (numdam:SB_1960-1961__6__99_0, pdf)
I have added that reference now, together with the precursor
• Alexander Grothendieck, p. 340 (3 of 23) in: Technique de descente et théorèmes d’existence en géométrie algébriques. II: Le théorème d’existence en théorie formelle des modules, Séminaire
Bourbaki : années 1958/59 - 1959/60, exposés 169-204, Séminaire Bourbaki, no. 5 (1960), Exposé no. 195 (numdam:SB_1958-1960__5__369_0, pdf)
where the general definition of internalization is given.
So then, to the reference of Ehresmann’s “Catégories structurées” – which most authors cite as the origin of internal categories – I have added the comment that
the definition is not actually contained in there, certainly not in its simple and widely understood form due to Grothendieck61.
diff, v74, current
And, interesting to note, that second, precursor, reference also has the Yoneda embedding, and the fact it preserves finite products!
I have added pointers to Hosgood’s translations and also added pointer to FGA where more information can be found (and can be added, such as pointer to Hosgood’s TeX sources, if that is felt to be
diff, v75, current
Thanks. But checking out the web version on my system, it appears broken: Most of the pages I see there appear empty except for a section headline, and those that are not empty break off in the
middle of a sentence after a few lines. (using Firefox 89.0.1 (64-bit) on Windows 10)
Oh, I see. Great.
added pointer to:
• Peter Johnstone, Chapter 2 of: Topos theory, London Math. Soc. Monographs 10, Acad. Press 1977, xxiii+367 pp. (Available as Dover Reprint, Mineola 2014)
diff, v77, current
added pointer to:
• Enrico Ghiorzi, Complete internal categories (arXiv:2004.08741)
diff, v78, current
I think the s and t in the 2nd-5th pullback diagrams in the Internal categories section are inconsistent with those in the first pullback diagram and the laws specifying the source and target of
composite morphisms; the earlier diagrams have $p_1$ being the first of the morphisms and $p_2$ being the second in the composition (so $s\circ c=s\circ p_1$ for example), whereas the 2nd-5th
pullback diagrams seem to have them the other way round. But I may be wrong, so I am hesitant about making this edit.
On further thought, I’m pretty sure I’m right so I’ll make the changes. Feel free to undo them if I’m wrong!
Make the s-t swaps (hopefully corrections!) as described
Julian Gilbey
diff, v82, current
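For reference, the convention described in the comment above ($p_1$ projecting onto the first morphism of a composable pair) can be written out explicitly. This is one standard choice, added here for orientation; the opposite ordering is equally common, and the nLab page itself may differ:

```latex
% Composable pairs as the pullback of t along s:
%   C_1 \times_{C_0} C_1 = \{ (f, g) \mid t(f) = s(g) \},
% with p_1(f,g) = f, p_2(f,g) = g, and composition c(f,g) = g \circ f, so that
\begin{aligned}
  s \circ c &= s \circ p_1, \\
  t \circ c &= t \circ p_2.
\end{aligned}
```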
Fixed links for English translation of FGA.
diff, v84, current
The definition of internal category is due to Grothendieck. However, what’s the earliest reference for internal functors, natural transformations, and profunctors?
For what it’s worth, Johnstone’s “Topos theory” (1977) considers internal functors in section 2.1 and internal profunctors in section 2.4. That seems to be the earliest mentioning of these concepts
among the references already collected in the entry (here), though I have no idea if there is an earlier one.
Internal profunctors and 2-cells between them are already present in §5.1 of Bénabou’s Les distributeurs (1973), which must be the earliest definition for those. I don’t see a definition of internal
functor or natural transformation there, though I would imagine it to be known earlier.
added pointer to
• Jean Bénabou, §5.1 of: Les distributeurs, Université Catholique de Louvain, Institut de Mathématique Pure et Appliquée, rapport 33 (1973) [pdf]
for the definition of internal profunctors (to readers who already know all about internal categories?).
diff, v87, current
We have a page for internal profunctor, but it would seem reasonable to me to collapse that page into the internal category entry. There doesn’t seem to be an advantage to having two different pages.
Would anyone object if I made this change?
Well, I think there may be many places where someone would want to link directly to internal profunctor. It’s a different concept, so why not have a different page for it? The page internal category
is already quite long.
added pointer to:
• Bart Jacobs, Chapter 7 in: Categorical Logic and Type Theory, Studies in Logic and the Foundations of Mathematics 141, Elsevier (1998) [ISBN:978-0-444-50170-7, pdf]
diff, v90, current
added pointer to:
• Saunders MacLane, §XII.1 of: Categories for the Working Mathematician, Graduate Texts in Mathematics 5 Springer (second ed. 1997) [doi:10.1007/978-1-4757-4721-8]
diff, v92, current
added pointer to:
• Jean Bénabou, §5.4.3 in: Introduction to Bicategories, Lecture Notes in Mathematics 47 Springer (1967) 1-77 [doi:10.1007/BFb0074299]
diff, v93, current
added pointer to:
diff, v94, current
Moved content to enrichment versus internalisation.
diff, v96, current
Added Miranda’s master’s thesis.
diff, v99, current
Corrected the proof of finite completeness of $Cat(E)$.
diff, v101, current
Maths lessons and private maths tutors in Streatham Hill - 205 teachers
Roksha Suresh
Sutton (London), Anerley, Bri...
Maths: GCSE Maths
Maths tutor providing lessons for age up to GCSE
I am a dedicated and passionate Maths tutor with a strong academic foundation. I achieved a Grade 9 in GCSE Maths, reflecting my deep under...
Stratford, Anerley, Brixton H...
Maths: GCSE Maths
Maths tutor and scholarship student providing tuition for GCSE students
My name is Zaid Lockhart and I am a maths tutor and scholarship student. I want to help you to understand not only the complexities of maths...
Hussein Hussen
Streatham Hill
I give private lessons tailored to your level and need.
Teaching is one of my passions, so I like to listen and identify your needs so I can make you a personalised offer. I value the human conne...
Kingston Upon Thames (London)...
Maths: GCSE Maths, A Level Maths
GCSE Maths (higher/foundation) or A Level Maths tutor
I am 18 and I’ve just finished my A levels (Maths, Further Maths, Physics) so the content is fresh in my mind. I am predicted A*A*A (A in p...
Sutton (London)
Maths: GCSE Maths
I offer affordable maths lessons for students at primary/ GCSE level
I gained a 9 in GCSE maths and am currently studying it for A Level. I have previously taught children aged 12-15 through my school’s tutor...
Kingston Upon Thames (London)...
Maths: Basic mathematics, GCSE Maths
I give affordable maths lessons, starting from basic mathematics and GCSE maths
I have been working at KUMON for the past 6 years, where I teach maths and english from ages 3-17. I have also privately tutored many of th...
Mubashir Shafique
City Of Westminster (London)
Maths: Linear algebra
MATHS tutor providing lectures to all ages
I am an expert tutor for Mathematics, providing classes for calculus, arithmetic, Linear Algebra and other fields of Mathematics. Having vast...
City Of Westminster (London),...
Maths tutor from primary school up to year 9
Impeccably tailored for the needs of primary school to year 9 students, Batoul's private maths tutorials promise a nurturing and effective...
Hornsey, Anerley, Brixton Hil...
Maths: GCSE Maths
Exceptional & Experienced Maths Tutor open to teaching all age groups up to GCSE
I have worked and am still working in a tuition centre called Best Tutors for 7 years now. My role is the head of maths, as my years of imp...
nanoscale views
I've had a number of people ask me why I haven't written anything about the recent news and resulting kerfuffle (here, here, and here, for example) in the media regarding possible high temperature superconductivity in Au/Ag nanoparticles. The fact is, I've written before about unidentified superconducting objects (also see here), and so I didn't have much to say. I've exchanged some email with the IISc PI back in late July with some questions, and his responses to my questions are in line with what others have said.
Various discussions I've had about this have, however, spurred me to try writing down my memories and lessons learned from the
Schon scandal
, before the inevitable passage of time wipes more of the details from my brain. I'm a bit conflicted about this - it was 18 years ago, there's not much point in rehashing the past, and
Eugenie Reich's book
covered this very well. At the same time, it's clear that many students today have never even heard of Schon, and I feel like I learned some valuable lessons from the whole situation. It'll take some
time to see if I am happy with how this turns out before I post some or all of it.
Update: I've got a draft done, and it's too long for a blog post - around 9000 words. I'll probably convert it to pdf when I'm happy with it and link to it somehow.
I've written in the past (say here and here) about how we think about the electrons in a conventional metal as forming a Fermi Liquid. (If the electrons didn't interact at all, then colloquially we
call the system a Fermi gas. The word "liquid" is shorthand for saying that the interactions between the particles that make up the liquid are important. You can picture a classical liquid as a bunch
of molecules bopping around, experiencing some kind of short-ranged repulsion so that they can't overlap, but with some attraction that favors the molecules to be bumping up against each other - the
typical interparticle separation is comparable to the particle size in that classical case.) People like Lev Landau and others had the insight that essential features of the Fermi gas (the Pauli
principle being hugely important, for example) tend to remain robust even if one thinks about "dialing up" interactions between the electrons.
A consequence of this is that in a typical metal, while the details may change, the lowest energy excitations of the Fermi liquid (the electronic quasiparticles) should be very much like the
excitations of the Fermi gas - free electrons. Fermi liquid quasiparticles each carry the electronic amount of charge, and they each carry "spin", angular momentum that, together with their charge,
makes them act like tiny little magnets. These quasiparticles move at a typical speed called the Fermi velocity. This all works even though the like-charge electrons repel each other.
For electrons confined strictly in one dimension, though, the situation is different, and the interactions have a big effect on what takes place. Tomonaga (shared the Nobel prize with Feynman and
Schwinger for quantum electrodynamics, the quantum theory of how charges interact with the electromagnetic field) and later Luttinger worked out this case, now called a Tomonaga-Luttinger Liquid
(TLL). In one dimension, the electrons literally cannot get out of each other's way - the only kind of excitation you can have is analogous to a (longitudinal) sound wave, where there are regions of
enhanced or decreased density of the electrons. One surprising result from this is that charge in 1d propagates at one speed, tuned by the electron-electron interactions, while spin propagates at a
different speed (close to the Fermi velocity). This shows how interactions and restricted dimensionality can give collective properties that are surprising, seemingly separating the motion of spin
and charge when the two are tied together for free electrons.
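To put rough numbers on that speed separation, here is a toy sketch (not from any cited work) using the standard bosonization formulas for the charge-sector velocity and Luttinger parameter in terms of forward-scattering couplings \(g_2, g_4\), in units with \(\hbar = 1\):

```python
import math

def luttinger_charge_mode(v_f, g2, g4=0.0):
    """Charge-sector velocity and Luttinger parameter K for a
    Tomonaga-Luttinger liquid with forward-scattering couplings
    g2, g4 (hbar = 1; standard textbook bosonization formulas)."""
    a = v_f + g4 / (2 * math.pi)
    b = g2 / (2 * math.pi)
    v_rho = math.sqrt(a * a - b * b)      # charge-mode velocity
    k_rho = math.sqrt((a - b) / (a + b))  # Luttinger parameter
    return v_rho, k_rho

# Toy numbers: v_F = 1 and moderately repulsive interactions (g2 = g4 = 2)
v_rho, k_rho = luttinger_charge_mode(v_f=1.0, g2=2.0, g4=2.0)
v_sigma = 1.0  # spin mode stays near v_F for spin-rotation-invariant interactions
print(v_rho, k_rho)  # charge mode runs faster than the spin mode, and K < 1
```

With repulsive interactions the charge mode comes out faster than \(v_F\) while the spin mode stays near \(v_F\) — the spin-charge velocity splitting described above. The specific coupling values here are arbitrary illustrations, not fits to any material.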
These unusual TLL properties show up when you have electrons confined to truly one dimension, as in some semiconductor nanowires and in single-walled carbon nanotubes. Directly probing this physics
is actually quite challenging. It's tricky to look at charge and spin responses separately (though some experiments can do that, as here and here) and some signatures of TLL response can be subtle
(e.g., power law responses in tunneling with voltage and temperature where the accessible experimentally reasonable ranges can be limited).
The cold atom community can create cold atomic Fermi gases confined to one-dimensional potential channels. In those systems the density of atoms plays the role of charge, while some internal
(hyperfine) state of the atoms plays the role of spin, and the experimentalists can tune the effective interactions.
the TLL predictions that aren't readily done with electrons.
So why care about TLLs? They are an example of non-Fermi liquids, and there are other important systems in which interactions seem to lead to surprising, important changes in properties. In the
copper oxide high temperature superconductors, for example, the "normal" state out of which superconductivity emerges often seems to be a "strange metal", in which the Fermi Liquid description breaks
down. Studying the TLL case can give insights into these other important, outstanding problems.
There has been quite a bit of media attention given to this paper, which looks at whether sound waves involve the transport of mass (and therefore whether they should interact with gravitational
fields and produce gravitational fields of their own).
The authors conclude that, under certain circumstances, sound wavepackets (phonons, in the limit where we really think about quantized excitations) rise in a downward-directed gravitational field.
Considered as a distinct object, such a wavepacket has some property, the amount of "invariant mass" that it transports as it propagates along, that turns out to be negative.
Now, most people familiar with the physics of conventional sound would say, hang on, how do sound waves in some medium transport any mass at all? That is, we think of ordinary sound in a gas like air
as pressure waves, with compressions and rarefactions, regions of alternating enhanced and decreased density (and pressure). In the limit of small amplitudes (the "linear regime"), we can consider
the density variations in the wave to be mathematically small, meaning that we can use the parameter \(\delta \rho/\rho_{0}\) as a small perturbation, where \(\rho_{0}\) is the average density and \(\delta \rho\) is the change. Linear regime sound usually doesn't transport mass. The same is true for sound in the linear regime in a conventional liquid or a solid.
In the paper, the authors do an analysis where they find that the mass transported by sound is proportional with a negative sign to \(dc_{\mathrm{s}}/dP\), how the speed of sound \(c_{\mathrm{s}}\)
changes with pressure for that medium. (Note that for an ideal gas, \(c_{\mathrm{s}} = \sqrt{\gamma k_{\mathrm{B}}T/m}\), where \(\gamma\) is the ratio of heat capacities at constant pressure and
volume, \(m\) is the mass of a gas molecule, and \(T\) is the temperature. There is no explicit pressure dependence, and sound is "massless" in that case.)
I admit that I don't follow all the details, but it seems to me that the authors have found that for a nonlinear medium such that \(dc_{\mathrm{s}}/dP > 0\), sound wavepackets have a bit less mass
than the average density of the surrounding medium. That means that they experience buoyancy (they "fall up" in a downward-directed gravitational field), and exert an effectively negative
gravitational potential compared to their background medium. It's a neat result, and I can see where there could be circumstances where it might be important (e.g. sound waves in neutron stars, where
the density is very high and you could imagine astrophysical consequences). That being said, perhaps someone in the comments can explain why this is being portrayed as so surprising - I may be
missing something.
A reminder to my condensed matter colleagues who go to the APS March Meeting: We know the quality of the meeting depends strongly on getting good invited talks, the 30+6 minute talks that either come
all in a group (an "invited session" or "invited symposium") or sprinkled down individually in the contributed sessions.
Now is the time to put together nominations for these things. The more high quality nominations, the better the content of the meeting.
The APS Division of Condensed Matter Physics is seeking nominations for invited symposia. See here for the details. The online submission deadline is August 24th!
Similarly, the APS Division of Materials Physics is seeking nominations for invited talks as part of their Focus Topic sessions. The list of Focus Topics is here. The online submission deadline for
these is August 29th.
This post is an indirect follow-on from here, and was spawned by a request that I discuss the "modern theory of polarization". I have to say, this has been very educational for me. Before I try to
give a very simple explanation of the issues, those interested in some more technical meat should look here, or here, or here, or at this nice blog post.
Colloquially, an electric dipole is an overall neutral object with some separation between its positive and negative charge. A great example is a water molecule, which has a little bit of excess
negative charge on the oxygen atom, and a little deficit of electrons on the hydrogen atoms.
Once we pick an origin for our coordinate system, we can define the electric dipole moment of some charge distribution as \(\mathbf{p} \equiv \int \mathbf{r}\rho(\mathbf{r}) d^{3}\mathbf{r}\), where
\(\rho\) is the local charge density. Often we care about the induced dipole, the dipole moment that is produced when some object like a molecule has its charges rearrange due to an applied electric
field. In that case, \(\mathbf{p}_{\mathrm{ind}} = \alpha \cdot \mathbf{E}\), where \(\alpha\) is the polarizability. (In general \(\alpha\) is a tensor, because \(\mathbf{p}\) and \(\mathbf{E}\)
don't have to point in the same direction.)
If we stick a slab of some insulator between metal plates and apply a voltage across the plates to generate an electric field, we learn in first-year undergrad physics that the charges inside the
insulator slightly redistribute themselves - the material polarizes. If we imagine dividing the material into little chunks, we can define the polarization \(\mathbf{P}\) as the electric dipole
moment per unit volume. For a solid, we can pick some volume and define \(\mathbf{P} = \mathbf{p}/V\), where \(V\) is the volume over which the integral is done for calculating \(\mathbf{p}\).
We can go farther than that. If we say that the insulator is built up out of a bunch of little polarizable objects each with polarization \(\alpha\), then we can do a self-consistent calculation,
where we let each polarizable object see both the externally applied electric field and the electric field from its neighboring dipoles. Then we can solve for \(\mathbf{P}\) and therefore the
relative dielectric constant in terms of \(\alpha\). The result is called the Clausius-Mossotti relation.
In crystalline solids, however, it turns out that there is a serious problem! As explained clearly here, because the charge in a crystal is distributed periodically in space, the definition of \(\
mathbf{P}\) given above is ambiguous because there are many ways to define the "unit cell" over which the integral is performed. This is a big deal.
The "modern theory of polarization" resolves this problem, and actually involves the electronic Berry Phase. First, it's important to remember that polarization is really defined experimentally by
how much charge flows when that capacitor described above has the voltage applied across it. So, the problem we're really trying to solve is, find the integrated current that flows when an electric
field is ramped up to some value across a periodic solid. We can find that by adding up all the contributions of the different electronic states that are labeled by wavevectors \(\mathbf{k}\). For
each \(\mathbf{k}\) in a given band, there is a contribution that has to do with how the energy varies with \(\mathbf{k}\) (that's the part that looks roughly like a classical velocity), and there's
a second piece that has to do with how the actual electronic wavefunctions vary with \(\mathbf{k}\), which is proportional to the Berry curvature. If you add up all the \(\mathbf{k}\) contributions
over the filled electronic states in the insulator, the first terms all cancel out, but the second terms don't, and actually give you a well-defined amount of charge.
Bottom line: In an insulating crystal, the actual polarization that shows up in an applied electric field comes from how the electronic states vary with \(\mathbf{k}\) within the filled bands. This
is a really surprising and deep result, and it was only realized in the 1990s. It's pretty neat that even "simple" things like crystalline insulators can still contain surprises (in this case, one
that foreshadowed the whole topological insulator boom).
Ever since I learned about them, I thought that hydraulic jumps were cool. As I wrote here, a hydraulic jump is an analog of a standing shockwave. The key dimensionless parameter in a shockwave in a
gas is the Mach number, the ratio between the fluid speed \(v\) and the local speed of sound, \(c_{\mathrm{s}}\). The gas goes from supersonic (\(\mathrm{Ma} > 1\)) on one side of the shock to subsonic (\(\mathrm{Ma} < 1\)) on the other side.
For a looong time, the standard analysis of hydraulic jumps assumed that the relevant dimensionless number here was the Froude number, the ratio of fluid speed to the speed of (gravitationally
driven) shallow water waves, \(\sqrt{g h}\), where \(g\) is the gravitational acceleration and \(h\) is the thickness of the liquid (say on the thin side of the jump). That's basically correct for
macroscopic jumps that you might see in a canal or in my previous example.
However, a group from Cambridge University has shown that this is not the right way to think about the kind of hydraulic jump you see in your sink when the stream of water from the faucet hits the
basin. (Sorry that I can't find a non-pay link to the paper.) They show this conclusively by the very simple, direct method of producing hydraulic jumps by shooting water streams horizontally onto a
wall, and vertically onto a "ceiling". The fact that hydraulic jumps look the same in all these cases clearly shows that gravity can't be playing the dominant role in this case. Instead, the correct
analysis is to worry about not just gravity but also surface tension. They do a general treatment (which is quite elegant and understandable to fluid mechanics-literate undergrads) and find that the
condition for a hydraulic jump to form is now \(\mathrm{We}^{-1} + \mathrm{Fr}^{-2} = 1\), where \(\mathrm{Fr} \sim v/\sqrt{g h}\) as usual, and the Weber number \(\mathrm{We} \sim \rho v^{2} h/\gamma\), where \(\rho\) is the fluid density and \(\gamma\) is the surface tension. The authors do a convincing analysis of experimental data with this model, and it works well. I think it's very
cool that we can still get new insights into phenomena, and this is an example understandable at the undergrad level where some textbook treatments will literally have to be rewritten.
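For a rough sense of the numbers, here is a small sketch of the jump criterion. The water properties and film parameters below are my own illustrative values, not taken from the paper:

```python
g = 9.81        # m/s^2, gravitational acceleration
rho = 1000.0    # kg/m^3, density of water
gamma = 0.072   # N/m, surface tension of water (room-temperature value)

def jump_parameter(v, h):
    """Evaluate 1/We + 1/Fr^2 for a thin film of speed v (m/s) and thickness h (m).
    Per the analysis above, a hydraulic jump forms where this quantity reaches 1."""
    fr2 = v**2 / (g * h)            # Froude number squared
    we = rho * v**2 * h / gamma     # Weber number
    return 1.0 / we + 1.0 / fr2

# A thin, fast film like the one in a kitchen sink (illustrative numbers):
v, h = 0.5, 5e-4
print(jump_parameter(v, h))
```

For these numbers the surface-tension term \(1/\mathrm{We}\) is roughly thirty times larger than the gravity term \(1/\mathrm{Fr}^{2}\), which is exactly the paper's point: in a sink-scale jump, surface tension, not gravity, does the work.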
Faculty Position in Experimental Atomic/Molecular/Optical Physics at Rice University
The Department of Physics and Astronomy at Rice University in Houston, TX (http://physics.rice.edu/) invites applications for a tenure-track faculty position in experimental atomic, molecular, and
optical physics. The Department expects to make an appointment at the assistant professor level. Applicants should have an outstanding research record and recognizable potential for excellence in
teaching and mentoring at the undergraduate and graduate levels. The successful candidate is expected to establish a distinguished, externally funded research program and support the educational and
service missions of the Department and University.
Applicants must have a PhD in physics or related field, and they should submit the following: (1) cover letter; (2) curriculum vitae; (3) research statement; (4) three publications; (5) teaching
statement; and (6) the names, professional affiliations, and email addresses of three references. For full details and to apply, please visit: http://jobs.rice.edu/postings/16140. The review of
applications will begin November 1, 2018, but all those received by December 1, 2018 will be assured full consideration. The appointment is expected to start in July 2019. Further inquiries should be
directed to the chair of the search committee, Prof. Thomas C. Killian (killian@rice.edu).
Rice University is an Equal Opportunity Employer with commitment to diversity at all levels, and considers for employment qualified applicants without regard to race, color, religion, age, sex,
sexual orientation, gender identity, national or ethnic origin, genetic information, disability or protected veteran status.
In statistics, the median absolute deviation (MAD) is a robust measure of the variability of a univariate sample of quantitative data. It can also refer to the population parameter that is estimated
by the MAD calculated from a sample. For a univariate data set X1, X2, ..., Xn, the MAD is defined as the median of the absolute deviations from the data's median: that is, starting with the
residuals (deviations) from the data's median, the MAD is the median of their absolute values. Consider the data (1, 1, 2, 2, 4, 6, 9). It has a median value of 2. The absolute deviations about 2 are
(1, 1, 0, 0, 2, 4, 7) which in turn have a median value of 1 (because the sorted absolute deviations are (0, 0, 1, 1, 2, 4, 7)). So the median absolute deviation for this data is 1. The median
absolute deviation is a measure of statistical dispersion. Moreover, the MAD is a robust statistic, being more resilient to outliers in a data set than the standard deviation. In the standard
deviation, the distances from the mean are squared, so large deviations are weighted more heavily, and thus outliers can heavily influence it. In the MAD, the deviations of a small number of outliers
are irrelevant. Because the MAD is a more robust estimator of scale than the sample variance or standard deviation, it works better with distributions without a mean or variance, such as the Cauchy
distribution. The MAD may be used similarly to how one would use the standard deviation for the average. In order to use the MAD as a consistent estimator for the estimation of the standard deviation σ, one takes

σ̂ = k · MAD,

where k is a constant scale factor, which depends on the distribution. For normally distributed data, k is taken to be

k = 1/Φ⁻¹(3/4) ≈ 1.4826,

i.e., the reciprocal of the quantile function Φ⁻¹ (also known as the inverse of the cumulative distribution function) for the standard normal distribution. The argument 3/4 is such that ±MAD covers 50% (between 1/4 and 3/4) of the standard normal cumulative distribution function, i.e.

Φ(MAD/σ) − Φ(−MAD/σ) = 1/2.

Therefore, we must have that Φ(MAD/σ) = 3/4. Noticing that Φ⁻¹(3/4) ≈ 0.6745, we have MAD ≈ 0.6745 σ, from which we obtain the scale factor k ≈ 1.4826.
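A quick numerical check of the definition, using the worked example above (a sketch with NumPy; the 1.4826 factor is the normal-consistency constant just derived):

```python
import numpy as np

def mad(x):
    """Median absolute deviation: the median of |x_i - median(x)|."""
    x = np.asarray(x, dtype=float)
    return float(np.median(np.abs(x - np.median(x))))

data = [1, 1, 2, 2, 4, 6, 9]
print(mad(data))                 # 1.0, matching the worked example above

# Consistent estimate of the standard deviation for normally distributed data:
sigma_hat = 1.4826 * mad(data)
```

Replacing the 9 with an extreme outlier (say 900) leaves the MAD unchanged, illustrating the robustness discussed above.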
Solutions to Examination 1
CS 294-5: Meshing and Triangulation (Autumn 1999)
[1] The quad-edge data structure (1 point). Each quad-edge has four Data fields, two of which can be used to indicate the 2-faces adjoining the quad-edge. The two PSLGs in Figure 1 may be
distinguished by checking whether the small square hole shares a face with the triangular region, or with the quadrilateral region.
Figure 1: Two PSLGs which are topologically different if we consider the 2-faces to be part of the topology.
[2] Deloopsy (1 point). Two bad examples are illustrated in Figure 2. The example on the right is more relevant, because the algorithm was published as a method of retriangulating the void left
behind when a vertex is deleted from a Delaunay triangulation.
Figure 2: Two examples in which the algorithm creates a non-Delaunay triangle.
[3] An upper bound on two-dimensional triangulations I (1 point). Rotate the triangulation so no two vertices have the same x-coordinate. Name the three vertices of each triangle the left vertex, the
middle vertex, and the right vertex in order according to their x-coordinates. Each vertex is the middle vertex of at most two triangles: the one immediately above it, and the one immediately below
it. Hence, if there are n vertices, there are no more than 2n triangles.
Suppose that the outer face of the triangulation has three vertices. The left and right vertices of the outer face have no triangle immediately above or below them. The middle vertex of the outer
face can have a triangle above it or a triangle below it, but not both. Thus, for a triangular outer face, there are at most 2n - 5 triangles. If the outer face has more than three vertices, then the
number of triangles can only be smaller.
[4] An upper bound on two-dimensional triangulations II (1 point). A triangulation of n vertices with only three vertices on the boundary can be created for any n ≥ 3. Every vertex not on the boundary has one triangle directly above it and one triangle directly below it, and is the only middle vertex of those two triangles, so there must be at least 2(n - 3) distinct triangles in such a triangulation.
Similarly, the middle vertex of the outer face has one triangle either directly above it or directly below it, and is the unique middle vertex of that triangle, bringing the total to at least 2n - 5.
[5] Gift-wrapping gaffes (1 point). Gift-wrapping may unwittingly reduce the untetrahedralized region between six cospherical vertices to a void shaped like Schönhardt's polyhedron, making it
impossible to complete the Delaunay tetrahedralization.
[6] Fast times with Frankensimplex (1 point). Chazelle's polyhedron (Bern and Eppstein, page 58) cannot be tetrahedralized with fewer than Ω(n²) tetrahedra, even if Steiner points are allowed.
[7] Who needs transformations most? (2 points.) Mesh A makes a sudden transition from small to large elements where the advancing fronts collided. The quality of these elements cannot be
significantly improved without changing the topology of the mesh. (A transition from small to large elements can easily be identified from the connectivity of a mesh alone, disregarding the locations
of the vertices. For example, if you ask how many vertices are within h hops of a vertex in a uniform mesh, the answer grows quadratically with h. If you ask the same question for a vertex on the
large side of a rapid transition from large to small elements, the answer grows exponentially. If the base of the exponent is too large, the transition cannot be accomplished with good-quality elements.)
Mesh B has poorly shaped boundary tetrahedra, but many of these occur because the octree cuts the domain too close to the boundary. Smoothing will fix most of the resulting disparities in edge
length, and many of the relatively flat elements on the boundary. The excellent quality of the interior elements provides some slack, so that nodes near the boundary can be smoothed without
compromising the quality of the interior elements. Some poor boundary tetrahedra may require topological transformations to fix, but their number will be smaller than in Mesh A.
[8] Meshing prespecified boundaries (2 points). Advancing front methods, by nature, work well with pretriangulated boundaries, because they generate elements near the boundary first so that the
quality of the boundary elements is as good as possible.
The quadtree approach is ill-suited to pretriangulated boundaries, because intersections of quadtree edges with the domain boundary must be resolved by warping the quadtree so that these
intersections occur only at the input vertices. It may not be possible to do this without generating severely distorted elements.
The Delaunay approach falls somewhere in between. It can certainly be made to honor the prespecified boundaries, by the use of constrained Delaunay triangulations. However, if poor-quality elements
occur at the boundary, there is no recourse for fixing them.
[9] Minimum spanning trees (2 points). Suppose (v, w) is an edge in the minimum spanning tree T, but vw is not an edge of the Delaunay triangulation. Let D be the smallest disk containing v and w (so
that vw is a diameter of D). Because vw is not strongly Delaunay, there must be some other vertex u in D. The line segments uv and uw are shorter than vw.
If (v, w) is removed from T, then T is split into two trees T', which contains v, and T'', which contains w. Without loss of generality, assume u is in T'. If we replace (v, w) with (u, w), we
produce a spanning tree whose total edge length is shorter than that of T, contradicting the assumption that T is a minimum spanning tree.
Hence, by contradiction, every edge of the minimum spanning tree is Delaunay.
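The claim is easy to check empirically. The sketch below (my own, using SciPy; not part of the original solutions) builds a Delaunay triangulation and a Euclidean minimum spanning tree on random points and verifies the containment:

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

rng = np.random.default_rng(0)
pts = rng.random((40, 2))              # random points, general position with high probability

# Collect the Delaunay edge set.
tri = Delaunay(pts)
dt_edges = set()
for s in tri.simplices:
    for i in range(3):
        dt_edges.add(tuple(sorted((int(s[i]), int(s[(i + 1) % 3])))))

# Euclidean MST of the complete graph on the same points.
mst = minimum_spanning_tree(squareform(pdist(pts))).tocoo()
mst_edges = {tuple(sorted((int(i), int(j)))) for i, j in zip(mst.row, mst.col)}

assert mst_edges <= dt_edges           # every MST edge is a Delaunay edge
```

In practice this containment is exploited to compute Euclidean MSTs in O(n log n) time: build the Delaunay triangulation first, then run an MST algorithm on its O(n) edges instead of the complete graph.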
[10] Constrained mesh smoothing (2 points). If g is the direction of steepest ascent of a smooth function, and g[proj] is the projection of g onto a plane, then g and g[proj] are separated by an
angle strictly less than 90^o (if g[proj] is nonzero). Hence, the smooth function increases along the direction g[proj].
If there is only one angle in the active set, the utility function we are trying to optimize (the worst angle) appears locally smooth. If there are two or more angles in the active set, the vertex
being smoothed lies at a nonsmooth point in the utility function, and the rules change. For instance, if g is a linear combination of the gradients of the active angles f[1] and f[2], then the gradient of f[1] and g[proj] might be separated by an angle greater than 90^o (even though g and g[proj] are still separated by an angle strictly less than 90^o), as Figure 3 illustrates. In this case, f[1] actually decreases along the direction g[proj].
Figure 3: If the vertex moves in the direction g[proj], the skinny angle on the left will get worse.
Since the function we want to optimize is the worst angle, and f[1] is in the active set, if f[1] decreases, our utility function decreases as well.
The fix is to project each vector before computing the search direction g. Then, g is the vector of minimum length whose endpoint falls within the convex hull of the projected vectors.
[11] Off-center subsegment splitting (2 points). Let d be the distance from v to s. If the orthogonal projection of v onto s is at least a distance of d from the nearest endpoint of s, then split s
at the orthogonal projection. Otherwise, choose a splitting point a distance of d from the nearest endpoint.
This method guarantees that the insertion radius of the new vertex is not smaller than lfs[min]. Hence, Theorem 20 still holds.
Figure 4: Projecting an encroaching input vertex onto an encroached segment may unnecessarily reduce the feature size.
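The splitting rule in solution [11] can be sketched in a few lines. This is my own rendering of the rule (the function name and tie-breaking are assumptions, not from the exam):

```python
import numpy as np

def split_point(v, a, b):
    """Pick the point at which to split segment ab, encroached upon by vertex v,
    following the rule above: split at the orthogonal projection of v unless that
    projection is closer than d to an endpoint; then back off a distance d instead."""
    v, a, b = (np.asarray(p, dtype=float) for p in (v, a, b))
    ab = b - a
    proj = a + np.dot(v - a, ab) / np.dot(ab, ab) * ab    # orthogonal projection of v onto s
    d = np.linalg.norm(v - proj)                          # distance from v to s
    near, far = (a, b) if np.linalg.norm(proj - a) <= np.linalg.norm(proj - b) else (b, a)
    if np.linalg.norm(proj - near) >= d:
        return proj                                       # projection is far enough from the ends
    return near + d * (far - near) / np.linalg.norm(far - near)  # back off distance d

print(split_point((0.5, 0.1), (0.0, 0.0), (1.0, 0.0)))   # projects to (0.5, 0)
print(split_point((0.05, 0.2), (0.0, 0.0), (1.0, 0.0)))  # backs off to (0.2, 0)
```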
[12] Herbert's typo (2 points). Figure 5 offers an example where the contraction of ab is a local unfolding. Note that a is an order 2 vertex, and b is an order 1 vertex.
Figure 5: Lk
[13] Delaunay triangulation of an x-monotone chain (4 points). Name the three vertices of each triangle the left vertex, the middle vertex, and the right vertex in order according to their x
-coordinates. Every triangle above the chain is above its middle vertex, and so it is above two of its edges (see Figure 6).
Let ab be an edge (with a to the left of b) that is either an input edge or has been created by the algorithm. If ab is not on the boundary of the convex hull, let abc be the Delaunay triangle
immediately above ab, and let d be any other vertex above the line that contains ab. Because abc is Delaunay and no four vertices are cocircular, c is inside the circumcircle of abd, and abd is not
Delaunay. A circumcenter is equidistant from the vertices of its triangle, so the circumcenters of abc and abd both lie on the bisector of edge ab. Because a and b have different x-coordinates, the
bisector is not horizontal. Because c and d are both above the line containing ab, and c is inside the circumcircle of abd, the circumcenter of abc is below the circumcenter of abd. (Think of Guibas
and Stolfi's rising bubble: starting with the circumcircle of abc, move the circle's center upward along the bisector of ab until the circle touches d.)
It follows that the circumcenter of abc is lower than the circumcenter of any non-Delaunay triangle atop ab that might be placed in the priority queue Q. Hence, if every Delaunay triangle is created
as soon as the sweepline passes over its circumcenter, no non-Delaunay triangle can ever be created.
In the Voronoi dual of a Delaunay triangulation, every Delaunay triangle dualizes to its circumcenter, and every Voronoi edge is orthogonal to its dual Delaunay edge. Hence, every triangle above the
chain has a circumcenter incident on two Voronoi edges going down to the circumcenters of the two triangles immediately beneath it (if two such triangles exist), and one Voronoi edge going up to the
circumcenter of the triangle immediately above it (if such a triangle exists).
Let t be any Delaunay triangle above the chain. Assume for the sake of induction that every triangle whose circumcenter is lower than t's was created when the sweepline passed over its circumcenter.
Then both of t's lower edges are created before the sweepline passes over t's circumcenter, because each of t's lower edges is either an input edge, or an edge of a Delaunay triangle with a lower
circumcenter. When the second of these edges is created, t is entered on Q. Hence, t is created when the sweepline passes over its circumcenter.
Figure 6: The upper Delaunay triangulation of an x-monotone chain.
[14] Nearest neighbors in curve reconstruction (4 points). For the sake of contradiction, suppose some point x and its Euclidean nearest neighbor y are not adjacent along the curve F.
Let D be the smallest disk containing x and y (so that xy is a diameter of D), as Figure 7 illustrates. Because y is the nearest neighbor of x, no other vertex lies in D. Because x and y are not
adjacent along F, the portion of F incident on x must leave D and pass through some other vertex before it can return to y. Hence, F intersects D in at least two connected components. By Lemma 1 of
Amenta, Bern, and Eppstein, D contains a point of the medial axis.
Figure 7: If y is the nearest neighbor of x, but x and y are not adjacent along the curve F, then F has not been 0.3-sampled at p.
Let l be the distance between x and y. Let C be a circle centered at x whose radius is 0.5 l. Let p be a point on the intersection of C and the curve F. The distance between p and the medial axis
point in D is at most 1.5 l. If the point set 0.3-samples F, then some sample point must lie within a distance of 0.45 l of p. Both x and y are a distance of at least 0.5 l from p, so the sample
point must be a distinct vertex z, which lies within a distance of 0.95 l from x. This contradicts the assumption that y is the nearest neighbor of x.
[15] Triangulate a PSLG in (4 points). Assume we have rotated the plane infinitesimally so that no segment is perfectly horizontal.
Anywhere the pseudocode says "if helper(e) is a merge vertex," we must reinterpret "merge vertex" to include any vertex that does not have a segment going down (including zero-degree vertices).
A vertex with degree one is either the lower endpoint of a segment, and can be treated exactly like a merge vertex, or is the upper endpoint of a segment, and can be treated exactly like a split vertex.
Zero-degree vertices may be handled by the following pseudocode.
1 Find the segment e[j] to the left of v[i] (by tree search)
2 create the diagonal v[i]-helper(e[j])
3 helper(e[j]) ← v[i]
The remaining vertices we must account for are the degree-two vertices that have the interior region on both sides of their segments, and the vertices of degree three or more. For these vertices, use
the following procedure. Partition the incident segments into those going up from the vertex, and those going down. Sort each of these two sets from left to right.
Now, iteratively treat v[i] like a start, end, split, merge, or regular vertex as many times as necessary to process each adjacent pair of edges incident on v[i] for which the region between the pair
of edges is in the interior of the region to be triangulated. For each adjacent pair of upward segments, if the region between them is interior, treat v[i] like an end vertex for that pair of
segments. If v[i] has no downward segments, but the region below it is interior, treat v[i] like a merge vertex for the leftmost and rightmost segments. If v[i] has no upward segments, but the region
above it is interior, treat v[i] like a split vertex for the leftmost and rightmost segments. If v[i] has at least one upward and one downward segment, and the region to its left is interior, treat v
[i] like a regular vertex; repeat for the region to its right if that is interior. For each adjacent pair of downward segments, if the region between them is interior, treat v[i] like a start vertex
for that pair of segments.
Analog, RF/Wireless ASIC and module design, development and manufacturing
A FIR compensation design using the OCTAVE tool.
A compensation filter for a CIC filter was required. We chose to use the free software tool OCTAVE to help us do this. Here is the sequence of tasks that were done. (Be aware that a compensation filter design is not trivial unless you have been doing it for a long while.)
1) Design the CIC filter.
2) Measure the droop of the filter.
3) Use the inverse response of the CIC filter to synthesize the compensation filter.
4) Use the frequency response of the inverse filter to generate a frequency-magnitude table (much like a piecewise-linear SPICE signal). This table consists of frequency-magnitude pairs for the compensation filter you want.
5) Use this table as the input to the function fir2(n, f, m) in OCTAVE. This function provides the coefficients of the filter you need. However, the trick is to choose the right "n"; it took us a while to get the value of n right for our purposes. You will have to choose yours however you wish.
6) Run fir2, take the results (say "b[]"), and generate the impulse response of the filter from them using another OCTAVE function called impz(). The input argument is the "b[]" you just got from fir2. Once you have the impulse response, use the freqz function in OCTAVE to simulate the filter you just designed.

Once you have the frequency-magnitude characteristic of the new filter you can do a multiplot using the plot function from OCTAVE. This allows you to compare the two filters, i.e. the filter you wanted and the filter you designed.
You can make adjustments by using multiple runs of the above sequence until you get the best filter you can get, An example of the multiplot is shown below. The blue line is the original filter and
the orange line is the one we got from using the sequence quoted above. Please visit the Signal Processing Group Inc website for more info and contact information.
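As a sketch of steps 4 to 6 in Python (SciPy's firwin2 mirrors OCTAVE's fir2; the frequency-magnitude table below is illustrative, not values measured from a real CIC):

```python
import numpy as np
from scipy.signal import firwin2, freqz

# Frequency-magnitude table for the desired compensation (inverse-CIC) response.
# These pairs are placeholders standing in for the measured droop inverse.
freq = [0.0, 0.25, 0.5, 0.75, 1.0]    # normalized frequency, 1.0 = Nyquist
gain = [1.0, 1.05, 1.20, 1.50, 0.0]   # inverse of the CIC droop, rolled off at Nyquist

numtaps = 31                          # the "n" the post warns about; tune per application
b = firwin2(numtaps, freq, gain)      # FIR coefficients, like OCTAVE's fir2(n, f, m)

# Simulate the designed filter; freqz has the same role as in OCTAVE.
w, h = freqz(b, worN=512)
# Plotting the target table against abs(h) reproduces the "multiplot" comparison.
```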
Caesar: Uppercase not converting to cipher text possibly due to wrong if, else usage
• January 27, 2022 at 8:07 am #170
int main(void)
{
    string name = get_string("Enter: ");
    printf("Entered text by user: %s\n", name);
    int n = strlen(name);
    printf("length of entered text: %i\n", n);
    int key = get_int("enter key: ");
    char newuppercase_array[n];
    for (int i = 0; i < n; i++)
    {
        if (isupper(name[i]))
        {
            newuppercase_array[i] = ((((name[i] - 65) + key) % 26) + 65);
        }
        if (islower(name[i]))
        {
            newuppercase_array[i] = ((((name[i] - 97) + key) % 26) + 97);
        }
        else
        {
            newuppercase_array[i] = name[i];
        }
    }
}
Yes. It is because of how you’ve done your if else statements. In your code, you have two independent if statements. The first if statement checks if the character is an upper case alphabet.
However, there is no else statement for this logic test. What happens then, is that regardless of whether or not the char has passed this first if statement, it will always run your second if
The problem is that your second if statement has an else after it. Thus, when the character fails the second if statement, it will always run the else statement.
If we take the char ‘A’ for example, it is clearly an uppercase letter. Your code will run the first if statement to check if it is upper case, which it will pass therefore it will execute the
rest of the code that is in the if statement. Since the next code block after the first if statement is also an if statement, it will also run this check. ‘A’ will fail the second if statement
because it is obviously not a lower case letter. However, because the second if statement has an else, it will then run the else code (since it did not pass the if statement). Your code is
actually setting the correct value for the capital letters in the string, and then failing the 2nd if statement and then setting it right back to the original.
Side note: you don’t need to create the newuppercase_array because you can access and change the characters in a string individually by accessing their index (name[0] = ‘A’ in your code will turn
the first character in the string called name into an ‘A’.)
a package of soccer accessories costs $25 for cleats, $14 for shin guards, and $12 for a ball. Write two equivalent expressions
a package of soccer accessories costs $25 for cleats, $14 for shin guards, and $12 for a ball. Write two equivalent expressions for the total cost of 9 accessory packages. Then find the cost.
Let c be the number of cleats, s be the number of shin guards, and b be the number of balls. We have the following cost function for 9 accessory packages:
9(25c + 14s + 12b)
But if we multiply through, we get an equivalent expression:
225c + 126s + 108b | {"url":"https://www.mathcelebrity.com/community/threads/a-package-of-soccer-accessories-costs-25-for-cleats-14-for-shin-guards-and-12-for-a-ball-writ.1466/","timestamp":"2024-11-05T19:25:38Z","content_type":"text/html","content_length":"46390","record_id":"<urn:uuid:4ec02b25-091f-4cfb-9642-e21a6a7308e0>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00722.warc.gz"} |
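To actually find the cost, take one of each item per package (so c = s = b = 1, an assumption the answer's variables leave implicit, since the problem lists one price per item); both expressions then give the same total:

```latex
9(25 + 14 + 12) = 9 \cdot 51 = 459
\qquad\text{and}\qquad
225 + 126 + 108 = 459
```

So 9 accessory packages cost $459.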
Kähler differential
Idea and definition
The notion of Kähler differential is a very general way to encode a notion of differential form: something that is dual to a derivation or vector field.
Conceptually, in dual language of algebras, a symmetry of a commutative algebra $A$ is an automorphism $g\colon A\to A$, i.e., $g(a b)=g(a)g(b)$. The ‘infinitesimal’ symmetries are the derivations $X
\colon A\to A$, with $X(a b)=X(a)b+X(b)a$. The module of Kähler differentials $\Omega^1_K(A)$ parametrizes derivations, in the sense that every derivation $X$ corresponds uniquely to a morphism of
$A$-modules $\mu_X: \Omega_K^1 (A)\to A$.
The ordinary definition and its insufficiency
Kähler differentials are traditionally conceived in terms of an algebraic construction of a certain module $\Omega_K^1(Spec R)$ on a given ordinary ring $R$. On spaces modeled (in the sense described
at space) on the site CRing$^{op}$, such as varieties, schemes, algebraic spaces, Deligne-Mumford stacks, this produces the correct notion of differential form in this context. This is the case
discussed in the section
The definition, concrete as it is, applies of course also to function rings on spaces not modeled on $CRing^{op}$, such as rings $C^\infty(X)$ of smooth functions on a smooth manifold. One might
expect that the module of Kähler differentials $\Omega_K(Spec C^\infty(X))$ of $C^\infty(X)$, regarded as an ordinary ring, does reproduce the familiar notion of smooth differential forms on a
manifold. But it does not. This is discussed in the section
This shows that the concrete algebraic construction of Kähler differential forms over plain rings, traditionally thought of as their very definition, does in fact not correctly capture their nature.
There is another definition – obtained from the nPOV – which does capture the situation correctly:
The correct definition of the notion of module …
In fact, already the definition of module has to be freed from it concrete realization in the context of ordinary rings, to exhibit its true nature. What this is has been established long ago in
• Dan Quillen, Homotopical algebra
and in Jon Beck’s thesis, and is discussed in more detail in the entries module and Beck module : Beck and Quillen noticed that the category $R Mod$ of modules over an ordinary commutative ring $R$
is canonically equivalent to the category $Ab(CRing/R)$ of abelian group objects in the overcategory $CRing/R$ of all rings, over the given ring $R$:
$R Mod \simeq Ab(CRing/R) \,.$
Under this equivalence an $R$-module $N$ is sent to the square-0-extension ring $R \oplus N$ that is canonically equipped with a ring homomorphism $R \oplus N \to R$ and with a unital and associative
and commutative product operation
$(R \oplus N) \times_R (R \oplus N) \to (R \oplus N)$
$((r, n_1), (r,n_2)) \mapsto (r, n_1 + n_2)$
that makes it first an object in the overcategory $CRing/R$ and in there an abelian group object, hence an object in $Ab(CRing/R)$.
Conversely, every module arises this way, up to isomorphism. So this gives an equivalent way of defining modules over rings.
And this is the right definition. Notably, this definition does not assume anything about the ring $R$. It does not even assume that $R$ is a ring at all! It could be anything.
Concretely: for $C$ any category of test objects – so that we may think of objects in the opposite category $C^{op}$ as function rings on the test objects $C$ – we may define the category of modules over an object $R \in C^{op}$ by the above equation:
$R Mod := Ab(C^{op}/R) \,.$
Notice that this is now a definition. And that $R$ could be anything, and the definition still makes sense.
The category of all modules over all possible objects $R$ is then nothing but the codomain fibration
$Mod_C := Ab([I,C^{op}]) \stackrel{p_C}{\to} C^{op}$
where $I$ is the interval category and fiberwise (over $C^{op}$) we form abelian group objects.
This turns out to be the correct category theoretic definition of module (as discussed there). In fact, this is the special case of the higher categorical definition that works for $C$ any (∞,1)-category. In that case the construction $Ab(C^{op}/R)$ of abelian group objects in the overcategory is generalized (straightforwardly! and in fact even more elegantly) to the notion of tangent (∞,1)-category.
… and the correct definition of derivations and Kähler modules
With the above correct notion of module in hand, all the other concepts of deformation theory, notably those of derivations and of Kähler differentials follow straightforwardly
1. given an $R$-module $N$ regarded as an object $p_N : R \oplus N \to R$; the derivations on $R$ with coefficients in $N$ are precisely the sections $\delta : R \to R \oplus N$ of $p_N$.
2. The assignment $Spec R \mapsto \Omega_K(Spec R)$ of modules of Kähler differentials is the assignment universal with respect to derivations, which means that
$\Omega_K : C^{op} \to Mod_C$
is the left adjoint to the above projection $p_C : Mod_C \to C^{op}$:
this means that every derivation $\delta : R \to \mathcal{N}$ (being a section in $C$ of the module which is the overcategory element $\mathcal{N} \to R$) is identified conversely with a morphism
$\Omega_K(R) \to \mathcal{N}$ in the category of abelian group objects in the overcategory $C^{op}/R$:
$Hom_{Mod_C}(\Omega_K(R), \mathcal{N}) \simeq Hom_{C^{op}}(R, \mathcal{N}) \,.$
Notice that in all of the above now, $C$ is still a completely arbitrary category.
The fully general definition
By allowing $C$ – the collection of test spaces – to be a general (∞,1)-category, the above story gives the following completely general nPOV on the nature of Kähler differentials:
For $C$ any (∞,1)-category of test spaces, write $p_{C^{op}} : Mod := T_{C^{op}} \to C^{op}$ for the tangent (∞,1)-category of its opposite category.
1. the fiber of $Mod \to C^{op}$ over $R \in C^{op}$ is the (∞,1)-category $R Mod$ of modules over $R$;
2. for $(p_{\mathcal{N}} : \mathcal{N} \to R) \in R Mod$, a derivation on $R$ with coefficients in $\mathcal{N}$ is a section $\delta : R \to \mathcal{N}$ of $p_{\mathcal{N}}$.
3. The assignment of modules of Kähler differentials or cotangent complexes is the left adjoint
$\Omega_K : C^{op} \to Mod$
of the tangent (∞,1)-category projection $p_{C^{op}}$.
Its value $\Omega_K(R)$ on an object $R \in C^{op}$ is the module of Kähler differentials on $Spec R \in C$.
Specific definitions
We spell out very concretely definitions of Kähler differentials for special concrete choices of base category $C$ as special cases of the above general story. We start with the familiar cases and
then work our way up to more general or richer cases.
Over ordinary rings
In terms of the above discussion, we now take $C = CRing^{op}$ to be the opposite category of the category of ordinary (commutative unital) rings. In fact, without changing anything of the discussion, we may assume that the ring $R$ in question is equipped with a ring homomorphism $k \to R$ from a ring or field $k$. This makes $R$ a $k$-algebra, and we shall often speak of algebras in the following, where we could just as well speak of rings.
Suppose $A$ is a commutative algebra over a field $k$. We may define Kähler differentials either by an explicit construction or by a universal property. In fact there are two explicit constructions.
The simplest construction, maybe, is as follows. The module of Kähler differentials $\Omega^1_K(A)$ over $A$ is generated by symbols $d a$ for all $a\in A$, subject to these relations:

$d(a + b) = d a + d b$

$d(a b) = a \, d b + b \, d a$

$d c = 0 \quad \text{for } c \in k$

In particular there are only finite sums in the module of Kähler differentials.
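For example (a standard fact, not spelled out on this page): for the polynomial algebra $A = k[x,y]$ these relations force

```latex
\Omega^1_K(k[x,y]) \;\cong\; A\, d x \oplus A\, d y,
\qquad
d f = \frac{\partial f}{\partial x}\, d x + \frac{\partial f}{\partial y}\, d y ,
```

so for polynomial algebras the Kähler differentials recover the familiar algebraic 1-forms.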
Another more sophisticated construction of $\Omega^1_K(A)$ is given below. But turning to the universal property, note that we can define derivations from $A$ to any $A$-module $M$: they are $k$
-linear maps $X : A \to M$ satisfying the product rule:
$X(a b) = X(a) b + a X(b)$
Then $\Omega^1_K(A)$ may be defined as the universal $A$-module equipped with a derivation. In other words, there is a derivation
$d : A \to \Omega^1_K(A),$
and if $X:A\to M$ is any derivation from $A$ to some $A$-module $M$, then there is a unique $A$-module morphism
$\mu_X:\Omega_K^1(A)\to M$
such that the following diagram commutes:
$\array{ A&\overset{X}{\to} & M\\ & \underset{d}{\searrow}&\uparrow \mu_X\\ & & \Omega_K^1(A) }$
We say that $X$ factors through $d$.
Relative version
We can replace the commutative algebra $A$ more generally by a morphism of commutative unital rings $f:R\to S$. Then the module of Kähler differentials is the $S$-module $\Omega^1_K(S/R)$
corepresenting the functor
$Der_R(S,f_*(-)) : S Mod\to Set : M\mapsto Der_R(S,f_* M)$
that assigns to every $S$-module $M$ the set of derivations on $S$ with values in the (bi)module $f_* M$, where $f_*:S Mod\to R Mod$ is the restriction of scalars.
In other words, $Der_R(S,f_*M)\cong Hom_S(\Omega^1_K(S/R),M)$. In a diagram: for every $R$-derivation $X\colon S \to M$ there is a unique morphism (of $S$-modules) $\mu\colon \Omega^1_K(S/R) \to M$
making the following diagram commute:
$\array{ S&\overset{X}{\to} & M\\ & \underset{d}{\searrow}&\uparrow \mu\\ & & \Omega^1_K(S/R) }$
This framework also gives another construction of the module of Kähler differentials, instead of the generators and relations definition given above.
Let $I$ be the augmentation ideal, i.e. the kernel of the multiplication map
$I := Ker(m:S\otimes_R S\to S)$
Then $\Omega^1_K(S/R) = I/I^2$ and there is a canonical induced map $d : S \to \Omega^1_K(S/R)$ given by $d s = [1\otimes s - s\otimes 1]$.
Higher degree Kähler forms
Furthermore, if $R$ is in characteristic zero, one may introduce Kähler $p$-forms, which are elements of the $p$-th exterior power $\Omega^p_K(S/R) := \Lambda^p_S\, \Omega^1_K(S/R)$. The module of Kähler differentials readily generalizes to a sheaf of Kähler differentials for a separated morphism $f:X\to Y$ of (commutative) schemes, namely the pullback along the embedding of the ideal sheaf of the diagonal subscheme $X\hookrightarrow X\times_Y X$.
Compare the role of universal differential envelope and Amitsur complex for analogous constructions in the noncommutative case. The appropriate extension of the module of relative Kähler
differentials to the derived setting is the cotangent complex of Grothendieck–Illusie.
Relation to Hochschild homology
The module of Kähler differentials on $R$ is isomorphic to the first Hochschild homology of $R$
$\Omega_K^1(R) \simeq HH_1(R,R) \,.$
Under mild conditions the analogous statement is true for higher Kähler differentials and higher Hochschild homology: this is the Hochschild-Kostant-Rosenberg theorem.
Over smooth rings regarded as ordinary rings
We have seen that we define Kähler differentials $\Omega^1_K(A)$ for any commutative algebra $A$.
The following special case deserves special attention:
The algebra $A = C^\infty(X)$ of smooth functions on some smooth space $X$ (a smooth manifold or a generalized smooth space) is in particular a commutative algebra. So one might think that its Kähler
differentials form the ordinary differential forms on $X$ – in analogy to the case when $A$ consists of the algebraic functions on an affine algebraic variety in which case Kähler differentials are
often taken as a definition of 1-forms.
The problem and its solution
However, when $A = C^\infty(X)$ consists of smooth functions on a manifold, the ring theoretic Kähler differentials do not agree with the ordinary smooth 1-forms on this manifold! (Unless $X$ is, for
instance, the point, of course). However, there is a canonical map from the Kähler differentials to the ordinary 1-forms.
But there is a solution to this, and an explanation for why something goes wrong:
Smooth spaces such as manifolds are not modeled on the category $C =$CRing${}^{op}$, as varieties are. Instead, they are modeled on the category $C = \mathbb{L}$ of smooth loci, which is $C^\infty Ring^{op}$, the opposite of the category of C-infinity rings.
In particular, the algebra $A = C^\infty(X)$ of smooth functions on a manifold carries naturally the structure of such a $C^\infty$-ring. This does have “underlying” it the ordinary commutative ring
of functions that forget the $C^\infty$-ring structure, but forgetting this structure is precisely what makes the definition of Kähler differentials fail to reproduce that of ordinary smooth 1-forms.
If we do regard $C^\infty(X)$ as a C-infinity ring, then its Kähler differentials do agree with ordinary 1-forms on $X$.
Detailed comparison
We discuss how Kähler differential forms relate to the ordinary notion of differential forms.
Since there are only finite sums in the module of Kähler differentials, the usual formula $d f = f'\, d t$ works only when $f$ is a polynomial in $t$; in rings such as $A = C^\infty(\mathbb{R})$ (smooth functions) or $\mathbb{R}[\![t]\!]$ (power series), which contain non-polynomial elements, it can fail. For example, let $f = t^n$; then
\begin{aligned} d f &= d(t^n) \\ &= t^{n-1} d t + t\, d(t^{n-1}) \\ &= 2 t^{n-1} d t + t^2 d(t^{n-2}) \\ &\;\;\vdots \\ &= r t^{n-1} d t + t^r d(t^{n-r}) \\ &\;\;\vdots \\ &= n t^{n-1} d t \\ &= f'\, d t. \end{aligned}
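The inductive computation above is mechanical enough to check by machine. Here is a small Python sketch (not part of the original page; the function name is ours) that computes $d(t^n)$ using only the Leibniz rule $d(t^n) = t^{n-1}\, d t + t\, d(t^{n-1})$, representing the $d t$-coefficient as a dict from exponents to coefficients:

```python
def d_tn(n):
    """Coefficient of dt in d(t^n), computed only from the Leibniz rule
    d(t^n) = t^(n-1) dt + t * d(t^(n-1)), with d(t) = dt and d(1) = 0.
    Polynomials are dicts mapping exponent -> coefficient."""
    if n == 0:
        return {}            # d(1) = 0
    if n == 1:
        return {0: 1}        # d(t) = 1 * dt
    rec = d_tn(n - 1)        # dt-coefficient of d(t^(n-1))
    out = {n - 1: 1}         # the t^(n-1) * dt term
    for e, c in rec.items(): # multiply the recursive part by t
        out[e + 1] = out.get(e + 1, 0) + c
    return out

print(d_tn(5))  # {4: 5}, i.e. d(t^5) = 5 t^4 dt
```

This reproduces $d(t^n) = n\, t^{n-1}\, d t$ for any finite $n$, but the recursion has no way to pass through an infinite sum, which is exactly the obstruction discussed next for $e^t$.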
However, we have
$d e^t \ne e^t\, d t$
as Kähler differentials. Intuitively, the reason is that $d$ cannot pass through the infinite sum
$d e^t = d\left(\sum_{n=0}^{\infty} \frac{t^n}{n!}\right) \ne \sum_{n=0}^{\infty} \frac{d(t^n)}{n!} = e^t\, d t.$
However, the only proof we know that $d e^t \ne e^t\, d t$ is quite tricky: in fact it uses the Axiom of Choice!
• David Speyer, Kähler differentials and ordinary differentials. (Math Overflow)
It would be desirable to either find a proof that avoids the Axiom of Choice, or show that axioms beyond ZF are necessary for this result.
To avoid this annoying property of Kähler differentials we can proceed as follows. Given a commutative algebra $A$, let $Der(A)$ be the $A$-bimodule of derivations.
Define $\Omega^1(A)$ to be the dual of $Der(A)$:
$\Omega^1(A) = Der(A)^*$
in other words, the set of $A$-module maps $\omega : Der(A) \to A$, made into an $A$-module in the usual way. There is a derivation
$d : A \to \Omega^1(A)$
given by
$d f (X) = X(f)$
for $X \in Der(A)$.
Now, suppose $A=C^\infty(M)$ where $M$ is any smooth manifold. Then elements of $\Omega^1(A)$ can be identified with ordinary smooth 1-forms on $M$: from the Hadamard lemma it follows that $Der(C^\infty(M)) = \Gamma(T M)$ is precisely the $C^\infty(M)$-module of vector fields on $M$, and 1-forms are the $C^\infty(M)$-linear duals of vector fields, by definition.

And in this case, one can show that any derivation $X: A \to N$ into a free $A$-module $N$ factors through $\Omega^1(A)$, and in particular this holds for $N = A$.
We can expand on this remark as follows. Quite generally, for any commutative algebra $A$ over a field $k$, we have
$Der(A) \cong \Omega^1_K(A)^*$
by the universal property of Kähler differentials, which identifies derivations $A \to A$ with $A$-module morphisms $\Omega^1_K(A) \to A$.
Using the definition
$\Omega^1(A) = Der(A)^*$
(of ordinary smooth 1-forms in the case that $A = C^\infty(M)$) we have that these are the linear bidual of the Kähler differentials:
$\Omega^1(A) \cong \Omega^1_K(A)^{**}$
There is always a homomorphism from a module to its double dual, so we have a morphism
$j: \Omega^1_K(A) \to \Omega^1(A)$
In the case when $A = C^\infty(M)$ this map is onto but typically not one-to-one, as witnessed by the fact that $d e^t = e^t d t$ in the ordinary 1-forms $\Omega^1(A)$ but not in the Kähler differentials $\Omega^1_K(A)$. However, one can show that when $N$ is a free $A$-module, any derivation $X: A \to N$ not only factors through $\Omega^1_K(A)$ (as guaranteed by the universal property of Kähler differentials), but also through $\Omega^1(A)$. More generally this is true for torsionless $N$, i.e. $A$-modules that inject into their double duals, since $\Omega^1(A)$ is torsionless. So $\Omega^1_K(A)$ is the universal differential module for all modules, while $\Omega^1(A)$ is the universal differential module for torsionless $A$-modules.
Martin Gisser: Couldn't resist the above uncredentialed drive-by edit… Is it torsionlessness that ultimately discerns differential geometry from algebraic geometry? (Need to fill in more dots between torsionless modules and "geometric modules" in the sense of Jet Nestruev.)

Eric: Does this universal property mean that there is some diagram in some category for which the Kähler differentials can be thought of as a (co)limit?
John Baez: Yeah, take the category all $A$ modules $M$ equipped with a derivation $X : A \to M$, and take the diagram which consists of every object in this category and every morphism, and take the
colimit of that, and you’ll get $\Omega^1_K(A)$.
But this is just a cutesy way to say that $\Omega^1_K(A)$ is the initial object of this category.
And this, in turn, is just a cutesy way to say that there is a derivation
$d : A \to \Omega^1_K(A),$
such that if $X:A\to M$ is also a derivation, then there exists a unique $A$-module morphism
$\mu_X:\Omega_K^1(A)\to M$
such that the following diagram commutes:
$\array{ A&\overset{X}{\to} & M\\ & \underset{d}{\searrow}&\uparrow \mu_X\\ & & \Omega_K^1(A) }$
All this is general abstract nonsense, nothing special to this example! Any universal property involving maps out of an object says that object is initial in some category — and that, in turn, is
equivalent to saying that object is the colimit of the enormous diagram consisting of all objects of the same kind! There’s a lot less here than meets the eye.
Eric: Thank you! That actually makes a little sense to me. As trivial as it may seem, the fact that I was even able to ask this question represents tremendous progress :)
Herman Stel: Dear Eric and Prof. Baez. There is a mistake in the explanation by John Baez here. The two latter properties (both being that the derivation $d : A \to \Omega^1_K(A)$ is initial) are
correct. The first one is not, though. If $\Omega^1_K(A)$ were the colimit of the huge diagram then for every derivation there would be a morphism from that derivation to the universal derivation,
which is not true. Instead, note that an initial object is the vertex of a colimit of the empty diagram in any category (use that $\forall x\in X P(x)$ is true if $X$ is empty).
Over smooth rings
A $C^\infty$-ring (see generalized smooth algebra) is a ring that remembers that it carries extra smooth structure akin to the smooth structure carried by a ring of smooth functions on a smooth manifold.
If we take a smooth function ring $C^\infty(X)$, regard it as a $C^\infty$-ring and then determine its module of Kähler differentials with respect to the category $C = \mathbb{L} = C^\infty Ring^{op}
$, we do recover the ordinary notion of smooth differential forms.
For the moment, this case is described in detail in the entry on Fermat theory.
Over general monoids
An ordinary (commutative) ring is precisely a commutative monoid in the category $Ab$ of abelian groups. The case of Kähler differentials over ordinary rings discussed above may therefore be thought of as the case where the category of test objects is taken to be
$C = (CMon(Ab))^{op} \,.$
This has an evident generalization: we may replace here $Ab$ with any category $K$ and consider
$C = (CMon(K))^{op} \,.$
In practice $K$ is usually required to be an abelian category, but our definitions so far are general enough not to be concerned about this:
for any such $K$ fixed we follow the general definition, consider the functor
$p : Mod := Ab([I,CMon(K)^{op}]) \to CMon(K)^{op}$
and define the assignment of Kähler differentials
$\Omega_K : CMon(K)^{op} \to Ab([I,CMon(K)^{op}])$
to be the left adjoint of this functor.
For this to make good sense, everything here should be regarded as taking place in (∞,1)-categories, typically modeled by the model structure on simplicial rings.
Then one finds that for $C = sCRing^{op}$ the corresponding notion of module (∞,1)-category reproduces the derived category of ordinary ring modules. This is Example 8.6. in
Over simplicial rings
If in the above setup we choose $K = sAb = [\Delta^{op}, Ab]$, the category of abelian simplicial groups, then $CMon(K)$ is the category of simplicial commutative rings. The category $CMon(K)^{op}$, regarded as a higher category, is the site used in higher geometry in place of $CRing^{op}$.
On smooth rings regarded as ordinary rings
For a proof that every derivation of $A = C^\infty(\mathbb{R})$ comes from a smooth vector field on the real line, and an extensive discussion of Kähler differentials versus ordinary 1-forms, see:
• This Week’s Finds in Mathematical Physics (Week 287)
See also the discussion at the $n$-Café:
On the fully general case
A detailed discussion of Kähler differentials and their generalization from algebra to higher algebra is in
For a categorical approach,
introducing a setting in which Kahler differentials live quite naturally (but not yet in as much generality as possibly one might hope), see
This paper establishes a relation between the recently introduced notion of differential category and the more classic theory of Kähler differentials in commutative algebra. A codifferential category
is an additive symmetric monoidal category with a monad, which is furthermore an algebra modality. An algebra modality for a monad T is a natural assignment of an associative algebra structure to
each object of the form T(M). In a (co)differential category, one should imagine the morphisms in the base category as being linear maps and the morphisms in the (co)Kleisli category as being
infinitely differentiable. Finally, a differential category comes equipped with a differential combinator satisfying typical differentiation axioms, expressed coalgebraically.
The traditional notion of Kähler differentials defines the notion of a module of A-differential forms with respect to A, where A is a commutative k-algebra. This module is equipped with a universal
A-derivation. With this in mind, a Kähler category is an additive monoidal category with an algebra modality and an object of differential forms associated to every object. This object of
differential forms satisfies a universal property with respect to derivations. Surprisingly, we are able to show that, under some natural conditions, codifferential categories are Kähler. | {"url":"https://ncatlab.org/nlab/show/K%C3%A4hler+differential","timestamp":"2024-11-14T15:17:11Z","content_type":"application/xhtml+xml","content_length":"123494","record_id":"<urn:uuid:cae5140c-7602-4f3c-bd70-189cd416b9db>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00769.warc.gz"} |
EViews Help: N-Way Tabulation
N-Way Tabulation
This view classifies the observations in the current sample into cells defined by the series in the group. You can display the cell counts in various forms and examine statistics for independence among the series in the group. Opening this view brings up the tabulation dialog.
Many of the settings will be familiar from our discussion of one-way tabulation in
“One-Way Tabulation”
Group into Bins

If one or more of the series in the group is continuous and takes many distinct values, the number of cells becomes excessively large. This option provides two ways to automatically bin the values of the series into subgroups.
• Number of values option bins the series if the series takes more than the specified number of distinct values.
• Average count option bins the series if the average count for each distinct value of the series is less than the specified number.
• Maximum number of bins specifies the approximate maximum number of subgroups to bin the series. The number of bins may be chosen to be smaller than this number in order to make the bins
approximately the same size.
The default setting is to bin a series into approximately 5 subgroups if the series takes more than 100 distinct values or if the average count is less than 2. If you do not want to bin the series,
unmark both options.
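The default binning decision described above can be paraphrased in a few lines (this is a sketch in Python, not EViews internals; the function name is ours and the thresholds are the documented defaults):

```python
def should_bin(values, max_distinct=100, min_avg_count=2):
    """Mirror of the default rule described above: bin a series if it takes
    more than `max_distinct` distinct values, or if the average number of
    observations per distinct value falls below `min_avg_count`."""
    distinct = len(set(values))
    return distinct > max_distinct or len(values) / distinct < min_avg_count
```

For instance, a series with 200 distinct values is binned, while a series with 3 distinct values each appearing 10 times is not.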
NA Handling
By default, EViews drops observations from the contingency table where any of the series in the group has a missing value. Treat NA as category option includes all observations and counts NAs in the
contingency table as an explicit category.
This option controls the display style of the tabulation. The Table mode displays the categories of the first two series in the rows and columns of each (conditional) table. The List mode displays the table in a more compact, hierarchical form. The Sparse Labels option omits repeated category labels to make the list less cluttered.
To understand the options for output, consider a group with three series. Let (i, j, k) index the bin of the first, second, and third series, respectively. The number of observations in the (i, j, k)-th cell is denoted n_{ijk}, and n denotes the total number of observations.
• Overall% is the percentage of the total number of observations accounted for by the cell count.
• Table% is the percentage of the total number of observations in the conditional table accounted for by the cell count.
• Row% is the percentage of the number of observations in the row accounted for by the cell count.
• Column% is the percentage of the number of observations in the column accounted for by the cell count.
The overall expected count in the (i, j, k)-th cell is the number expected if all series in the group were independent of each other. This expectation is estimated by multiplying the total number of observations by the product of the marginal proportions of each series: expected(i, j, k) = n · (n_{i··}/n) · (n_{·j·}/n) · (n_{··k}/n), where a dot denotes summation over that index.
Chi-square Tests
If you select the Chi-square tests option, EViews reports
• EViews reports the following two test statistics for overall independence among all series in the group:
These test statistics are reported at the top of the contingency table. For example, the top portion of the tabulation output for the group containing LWAGE, UNION, and MARRIED in the workfile
“Cps88.WF1” shows:
The three series LWAGE, UNION, and MARRIED, have
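The overall expected counts and the Pearson statistic described above can be sketched as follows (illustrative Python, not EViews code; the function name is ours):

```python
import itertools
from collections import Counter

def pearson_chi2(rows):
    """rows: one tuple of category labels per observation, e.g. (i, j, k).
    The expected count in a cell is n times the product of the marginal
    proportions of each series (the overall independence hypothesis)."""
    n = len(rows)
    observed = Counter(rows)
    # marginal counts for each series in the group
    marg = [Counter(r[d] for r in rows) for d in range(len(rows[0]))]
    chi2 = 0.0
    for cell in itertools.product(*(m.keys() for m in marg)):
        expected = n
        for d, cat in enumerate(cell):
            expected *= marg[d][cat] / n
        chi2 += (observed[cell] - expected) ** 2 / expected
    return chi2

# Perfectly independent (balanced) data gives a statistic of zero:
balanced = [(a, b) for a in "xy" for b in "01"]
print(pearson_chi2(balanced))  # 0.0
```

Large values of the statistic are evidence against independence among the series.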
• If you display in table mode, EViews presents measures of association for each conditional table. These measures are analogous to the correlation coefficient; the larger the measure, the larger the association between the row series and the column series in the table.

Bear in mind that these measures of association are computed for each two-way table. The conditional tables are presented at the top, and the unconditional tables are reported at the bottom of the output.
Impact factor isn’t great. A bibliometric based on entropy reduction may be promising.
Impact factor
There are a variety of citation-based bibliometrics. The current dominant metric is impact factor. It is highly influential, factoring into decisions on promotion, hiring, tenure, grants and
departmental funding (Editors 2006) (Agrawal 2005) (Moustafa 2014). Editors preferentially publish review articles, and push authors to self-cite in pursuit of increased impact factor (Editors 2006)
(Agrawal 2005) (Wilhite and Fong 2012). It may be responsible for editorial bias against replications (Neuliep and Crandall 1990) (Brembs, Button, and Munafò 2013). Consequently, academics take
impact factor into account throughout the planning, execution and reporting of a study (Editors 2006).
This is Campbell’s law in action. Because average citation count isn’t what we actually value, when it becomes the metric by which decisions are made, it distorts academic research. In the rest of
this post, I propose a bibliometric that measures the entropy reduction of the research graph.
Problem - 1215B - Codeforces
You are given a sequence $$$a_1, a_2, \dots, a_n$$$ consisting of $$$n$$$ non-zero integers (i.e. $$$a_i \ne 0$$$).

You have to calculate two values: the number of pairs of indices $$$(l, r)$$$ $$$(l \le r)$$$ such that the product $$$a_l \cdot a_{l+1} \dots a_{r-1} \cdot a_r$$$ is negative, and the number of pairs of indices $$$(l, r)$$$ $$$(l \le r)$$$ such that this product is positive.
The first line contains one integer $$$n$$$ $$$(1 \le n \le 2 \cdot 10^{5})$$$ — the number of elements in the sequence.
The second line contains $$$n$$$ integers $$$a_1, a_2, \dots, a_n$$$ $$$(-10^{9} \le a_i \le 10^{9}; a_i \neq 0)$$$ — the elements of the sequence.
Print two integers — the number of subsegments with negative product and the number of subsegments with positive product, respectively.
Example input:

10
4 2 -4 3 1 2 -4 3 2 3

Example output:

28 27
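A standard O(n) approach (not an official editorial): a subsegment $$$(l, r)$$$ has negative product exactly when the prefix products before position $$$l$$$ and through position $$$r$$$ have opposite signs, so it suffices to count how many prefixes of each sign have been seen:

```python
def count_subsegments(a):
    """Return (negative, positive): the number of subsegments of `a`
    whose product is negative / positive. A subsegment (l, r) has
    negative product iff the prefix product before l and the prefix
    product through r have opposite signs."""
    pos, neg = 1, 0          # prefix-sign counts so far (empty prefix is +)
    sign = 1                 # sign of the current prefix product
    negative = positive = 0
    for x in a:
        if x < 0:
            sign = -sign
        if sign > 0:
            positive += pos  # pair the current + prefix with earlier + prefixes
            negative += neg
            pos += 1
        else:
            positive += neg  # pair the current - prefix with earlier - prefixes
            negative += pos
            neg += 1
    return negative, positive

print(count_subsegments([4, 2, -4, 3, 1, 2, -4, 3, 2, 3]))  # (28, 27)
```

Each element is processed once, so the whole sequence of up to $$$2 \cdot 10^5$$$ elements is handled comfortably within the limits.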
Metric vs. Imperial: What’s the Difference?
When we encounter measurements in our daily lives, whether it’s cooking, traveling, or buying produce, we often come across two distinct systems of measurement: metric vs. imperial. Understanding
these systems is crucial because they are used for different purposes and in various regions around the world.
The Difference between Metric and Imperial
Metric vs. Imperial: Key Takeaways
Metric System
• Base Units: Meter, liter, gram
• Increments: Multiples of 10
Imperial System
• Base Units: Inch, foot, yard, mile, ounce, pound, pint, gallon
• Increments: Varied, such as 12 inches in a foot, 3 feet in a yard
Metric vs. Imperial – Created by 7ESL
Metric vs. Imperial: the Definition
What Does Metric Mean?
The Metric system is an international decimalized system of measurement, based on the meter, kilogram, and second. It’s used for a variety of measurements such as length, mass, and volume. Examples
of metric units include:
• Length: meter (m), centimeter (cm), kilometer (km)
• Mass: kilogram (kg), gram (g)
• Volume: liter (L), milliliter (mL)
Here’s a simple table to illustrate common metric conversions:
1 meter = 100 centimeters
1 kilogram = 1,000 grams
1 liter = 1,000 milliliters
What Does Imperial Mean?
The Imperial system, or British Imperial, is a system of weights and measures that was officially used in Great Britain from 1824 until the adoption of the metric system. It utilizes units like
inches, pounds, and gallons. Examples of Imperial units include:
• Length: inch (in), foot (ft), mile (mi)
• Weight: pound (lb), stone (st)
• Volume: pint (pt), gallon (gal)
Here’s how some Imperial units translate:
1 inch = 2.54 centimeters
1 pound = 0.4536 kilograms
1 gallon = 4.54609 liters (UK)
While most countries have transitioned to the Metric system, the Imperial system is still used in some countries, including the United States, for various applications.
Tips to Remember the Difference
To keep the two systems straight, consider the metric system’s uniformity — it’s always in tens. Contrast this with the imperial system, where units don’t follow a single conversion base, such as 12
inches to a foot or 16 ounces to a pound.
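The conversion factors from the tables above can be applied directly in code; a minimal sketch (the function names are our own, and the gallon here is the UK imperial gallon):

```python
# Conversion factors taken from the tables above (UK imperial gallon).
CM_PER_INCH = 2.54
KG_PER_POUND = 0.4536
LITERS_PER_UK_GALLON = 4.54609

def inches_to_cm(inches):
    return inches * CM_PER_INCH

def pounds_to_kg(pounds):
    return pounds * KG_PER_POUND

def uk_gallons_to_liters(gallons):
    return gallons * LITERS_PER_UK_GALLON

print(inches_to_cm(12))         # 1 foot is about 30.48 cm
print(pounds_to_kg(140))        # about 63.5 kg
print(uk_gallons_to_liters(1))  # 4.54609 L
```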
Metric vs. Imperial: Examples
Example Sentences Using Metric
• For length: We usually measure our running track in meters, so it’s about 400 meters around for one lap.
• For mass: When we buy flour for baking, we often get a 1-kilogram bag from the store.
• For volume: We filled our water bottle with 500 milliliters of water before our hike.
• For temperature: On a hot day, we might say it’s 35 degrees Celsius outside.
• For speed: We notice that the speed limit is typically around 120 kilometers per hour on many European highways.
Example Sentences Using Imperial
• For length: We measure the height of a person in feet and inches; my cousin is 5 feet 10 inches tall.
• For mass: When we weigh ourselves, we might find that we are 140 pounds.
• For volume: We usually buy milk in gallons, so a common size is one gallon of milk.
• For temperature: We might discuss the weather by saying it’s 75 degrees Fahrenheit today.
• For speed: On the highway, we observe that speed limits are often posted in miles per hour, commonly 65 mph.
Related Confused Words
Metric vs. Standard
Metric refers to the international decimal system of measurement, which is based on multiples of ten. It includes units like meters, liters, and grams. However, in the United States, the term
“Standard” is often used interchangeably with “Imperial”, but it’s a misnomer. Technically, “Standard” should relate to the widely accepted and used system within a given region. So for the U.S.,
“Standard” might be understood as referring to American customary units, which are derived from the British imperial system but have some differences.
Metric Standard
Meter Yard
Liter Gallon
Gram Ounce/Pound
Imperial vs. Customary
The Imperial system is a collection of units first defined in the British Weights and Measures Act of 1824. It includes pounds for weight, gallons for volume, and yards for distance. On the other
hand, Customary units are the system of measurements commonly used in the United States, which, while similar to Imperial, differ in amounts. For instance, a gallon in the U.S. is smaller than the
British Imperial gallon.
Imperial U.S. Customary
Imperial gallon U.S. gallon
Pound Pound (same name, same unit as Imperial)
Yard Yard (same as Imperial)
Through these distinctions, we can see that while these terms are often confused, they refer to different systems or units, and using the correct term can help us avoid misunderstandings.
CSCI 4100/6100 ASSIGNMENT 6 solution
CSCI 4100/6100 RPI Machine Learning From Data
LFD is the class textbook
1. (200) Exercise 3.4 in LFD
2. (200) Problem 3.1 in LFD
3. (200) Problem 3.2 in LFD
4. (200) Problem 3.8 in LFD
5. (200) Problem 3.6 in LFD (6xxx Level Only)
6. (200) Handwritten Digits Data – Obtaining Features
You can download the two data files with handwritten digits data: training data (ZipDigits.train) and test data (ZipDigits.test). Each row is a data example. The first entry is the digit, and the next 256 are grayscale values between -1 and 1; 256 pixels corresponds to a 16 × 16 image. For this problem, we will only use the 1 and 5 digits, so remove the other digits from your training and test data.
(a) Familiarize yourself with the data by giving a plot of two of the digit images.
(b) Develop two features to measure properties of the image that would be useful in distinguishing between 1 and 5. You may use symmetry and average intensity (as discussed in class) or anything else you think will work better. Give the mathematical definition of your two features.
(c) As in the text, give a 2-D scatter plot of your features: for each data example, plot the two features with a red ‘×’ if it is a 5 and a blue ‘◦’ if it is a 1.
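For part (b), here is a plain-Python sketch of the two features suggested above (average intensity and left-right symmetry), assuming each example's pixels come as a length-256 list of grayscale values. The function names are only illustrative, and in practice you would likely vectorize this with numpy:

```python
def average_intensity(pixels):
    """Mean grayscale value over all 256 pixels of a 16x16 digit image."""
    return sum(pixels) / len(pixels)

def horizontal_symmetry(pixels):
    """Negative mean absolute difference between each row and its
    left-right mirror: 0 for a perfectly symmetric image, and more
    negative the less symmetric the image is."""
    rows = [pixels[r * 16:(r + 1) * 16] for r in range(16)]
    diff = 0.0
    for row in rows:
        for a, b in zip(row, list(reversed(row))):
            diff += abs(a - b)
    return -diff / 256
```

Computing these two numbers for every example gives exactly the 2-D scatter data asked for in part (c).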
Sub-Symmetries and Conservation Laws
The concept of sub-symmetry of a differential system was introduced in [40], where it was shown that a sub-symmetry is a considerably more powerful tool than a regular symmetry with regard to
deformation of conservation laws. In this paper, we study the nature of a correspondence between sub-symmetries and conservation laws of a differential system. We show that for a large class of
non-Lagrangian systems, there is a natural association between sub-symmetries and local conservation laws based on the Noether operator identity, and we prove an analogue of the first Noether theorem
for sub-symmetries. We also demonstrate that a similar association can be established for infinite sub-symmetries of the system. We discuss the role of sub-symmetries in generation of infinite series
of conservation laws for the constrained Maxwell system and the incompressible Euler equations of fluid dynamics. Despite the fact that infinite symmetries (with arbitrary functions of dependent
variables) are not known for the Euler equations, we find infinite sub-symmetries for the two- and three-dimensional Euler equations with certain constraints. We show that these sub-symmetries
generate known series of infinite conservation laws, and obtain new classes of infinite conservation laws.
All Science Journal Classification (ASJC) codes
• Statistical and Nonlinear Physics
• Mathematical Physics
• Euler equations
• Maxwell's equations
• infinite conservation laws
• non-Lagrangian systems
• symmetry properties
Solving Linear Inequalities
These examples illustrate the properties that we use for solving inequalities.
Properties of Inequality
Addition Property of Inequality
If the same number is added to both sides of an inequality, then the solution set to the inequality is unchanged.
Multiplication Property of Inequality
If both sides of an inequality are multiplied by the same positive number, then the solution set to the inequality is unchanged.
If both sides of an inequality are multiplied by the same negative number and the inequality symbol is reversed, then the solution set to the inequality is unchanged.
Because subtraction is defined in terms of addition, the addition property of inequality also allows us to subtract the same number from both sides. Because division is defined in terms of
multiplication, the multiplication property of inequality also allows us to divide both sides by the same nonzero number as long as we reverse the inequality symbol when dividing by a negative
Equivalent inequalities are inequalities with the same solution set. We find the solution to a linear inequality by using the properties to convert it into an equivalent inequality with an obvious
solution set, just as we do when solving equations.
Example 1
Solving inequalities
Solve each inequality. State and graph the solution set.
a) 2x - 7 < -1
b) 5 - 3x < 11
a) We proceed exactly as we do when solving equations:
2x - 7 < -1 Original inequality
2x < 6 Add 7 to each side
x < 3 Divide each side by 2.
The solution set is written in set notation as {x | x < 3} and in interval notation as (-∞, 3). The graph is shown below:
b) We divide by a negative number to solve this inequality.
5 - 3x < 11 Original inequality
-3x < 6 Subtract 5 from each side
x > -2 Divide each side by -3 and reverse the inequality symbol
The solution set is written in set notation as {x | x > -2} and in interval notation as (-2, ∞). The graph is shown below:
Example 2
Solving inequalities
(8 + 3x)/(-5) ≥ -4 Original inequality
8 + 3x ≤ -5(-4) Multiply each side by -5 and reverse the inequality symbol.
8 + 3x ≤ 20 Simplify
3x ≤ 12 Subtract 8 from each side.
x ≤ 4 Divide each side by 3.
The solution set is (-∞, 4], and its graph is shown below: | {"url":"https://mathmusic.org/solving-linear-inequalities.html","timestamp":"2024-11-05T00:52:51Z","content_type":"text/html","content_length":"93643","record_id":"<urn:uuid:11d2c32d-5183-4567-be80-3d889aa49152>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00137.warc.gz"} |
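Example 2's original inequality, reconstructed from its solution steps, is (8 + 3x)/(-5) ≥ -4. A quick numeric spot-check of the solution set (-∞, 4], which is not part of the original lesson:

```python
def satisfies(x):
    # Example 2's inequality: (8 + 3x) / (-5) >= -4
    return (8 + 3 * x) / (-5) >= -4

print(satisfies(4))     # True: the boundary x = 4 is included
print(satisfies(4.1))   # False: anything above 4 fails
print(satisfies(-100))  # True: the set extends to negative infinity
```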
How many perfect numbers are there between 1 and 1000?
3 perfect numbers
Equivalently, a perfect number is a number that is half the sum of all of its positive divisors (including itself) i.e. σ1(n) = 2n. There are 3 perfect numbers between 1 and 1000.
What is the most perfect number?
perfect number, a positive integer that is equal to the sum of its proper divisors. The smallest perfect number is 6, which is the sum of 1, 2, and 3. Other perfect numbers are 28, 496, and 8,128.
How do you find perfect numbers?
A number is perfect if the sum of its proper factors is equal to the number. To find the proper factors of a number, write down all numbers that divide the number with the exception of the number
itself. If the sum of the factors is equal to 18, then 18 is a perfect number. (In fact, the proper factors of 18 are 1, 2, 3, 6, and 9, which sum to 21, so 18 is not perfect.)
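This test is easy to automate: add up the proper divisors (the aliquot sum) and compare with the number itself. A sketch with an invented function name:

```python
def is_perfect(n):
    """True if n equals its aliquot sum (the sum of its proper divisors)."""
    if n < 2:
        return False
    aliquot = 1          # 1 divides every n >= 2
    d = 2
    while d * d <= n:    # trial division up to sqrt(n)
        if n % d == 0:
            aliquot += d
            if n // d != d:
                aliquot += n // d  # the paired divisor
        d += 1
    return aliquot == n

print([n for n in range(1, 10000) if is_perfect(n)])  # [6, 28, 496, 8128]
```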
Is 7 a perfect number?
Mathematically, no: the only proper divisor of 7 is 1, so its aliquot sum is 1, not 7. In religious symbolism, however, seven is the number of completeness and perfection (both physical and spiritual). It derives much of its meaning from being tied directly to God’s creation of all things. The word ‘created’ is used 7
times describing God’s creative work (Genesis 1:1, 21, 27 three times; 2:3; 2:4).
How many perfect numbers are there between 1 and 10000?
Around 100 c.e., Nicomachus noted that perfect numbers strike a harmony between the extremes of excess and deficiency (as when the sum of a number’s divisors is too large or small), and fall in the
“suitable” order: 6, 28, 496, and 8128 are the only perfect numbers in the intervals between 1, 10, 100, 1000, 10000, and …
Why is 56 a perfect number?
The number 56 is not a perfect number. The proper divisors of 56 are all of the divisors of 56 that are not equal to 56. These include 1, 2, 4, 7, 8, 14, and 28, which sum to 64 rather than 56.
What is beautiful number?
A number ‘N’ is called ‘Beautiful’ if and only if Count(N) = Sum(N), where Count(N) = the count of distinct prime numbers in the prime factorization of ‘N’.
What is handsome number?
In mathematics, Handsome numbers are those numbers in which the sum of all the left-side digits is equal to the last digit. Handsome number examples: 123, 224, 235, etc. Similarly, 347 is Handsome because 3 + 4 = 7, which is its last digit.
How 28 is a perfect number?
The proper factors of 28 are 1, 2, 4, 7 and 14. The sum of proper factors is 28. According to the definition of perfect numbers, 28 is a perfect number. therefore, 28 is a perfect number.
Why 73 is the best number?
Why? 73 is the 21st prime number. Its mirror (37) is the 12th and its mirror (21) is the product of multiplying 7 and 3. In binary, 73 is a palindrome, 1001001 which backwards is 1001001.”
Is 8126 a perfect number?
No: 8126 is not a perfect number; the fourth perfect number is 8128. The known perfect numbers begin 6, 28, 496, 8128, 33550336, 8589869056, 137438691328, 2305843008139952128, 2658455991569831744654692615953842176, …
What is the number of perfect numbers between 1 to 1000?
Number of perfect numbers between 1 to 1000 is: 3
What is the smallest perfect number in the world?
The smallest perfect number is 6, which is the sum of 1, 2, and 3. How Many Perfect Numbers are there and What are the Perfect Numbers from 1 to 100? There are around 51 known perfect numbers.
What is a perfect number with divisors?
For Example, 6 has divisors 1, 2 and 3 (excluding itself), and 1 + 2 + 3 = 6, so 6 is a perfect number. The sum of divisors of a number, excluding the number itself, is called its aliquot sum, so a
perfect number is one that is equal to its aliquot sum.
How do you find the perfect number?
In number theory, a perfect number is a positive integer that is equal to the sum of its proper positive divisors, that is, the sum of its positive divisors excluding
the number itself (also known as its aliquot sum). | {"url":"https://sage-tips.com/most-popular/how-many-perfect-numbers-are-there-between-1-and-1000/","timestamp":"2024-11-04T23:22:27Z","content_type":"text/html","content_length":"118242","record_id":"<urn:uuid:21173feb-845a-4760-8ac7-ce1dcccd54af>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00300.warc.gz"} |
Topics: Theory of Physical Theories
Theory of Physical Theories
Metatheories in General > s.a. correlations [classification of theories]; physical theories [frameworks, theories of everything].
* Semantic view: According to the semantic view of scientific theories, theories are classes of models.
* Change of level: When going from one physical theory to a deeper one, the singularities at the former level tend to be dissolved (see atomic stability from classical to quantum mechanics).
* Types of theories: Einstein [@ London Times 1919] distinguished between theories of principles, and theories of constructs (principle and constructive theories).
* Notions of equivalence of theories: Examples are categorical equivalence, definitional equivalence and generalized definitional (aka Morita) equivalence.
* Examples: The PPN formalism (> see modified newtonian gravity).
@ General references: Giere PhSc(94)jun [cognitive structure]; Smith BJPS(98) [approximate truth]; Halvorson SHPMP(04)qp/03 [insert, state space and information theory]; D'Agostini phy/04-conf
[probabilistic reasoning]; 't Hooft IJMPA(08)-a0707 [grand view]; Matravers CP(07); Allen a1101 [statistical counting and stochastic fluctuations]; French & Vickers BJPS(11) [ontological status];
Halvorson PhSc(12) [against the semantic view of theories]; Jaroszkiewicz a1501 [generalized propositions and a classification scheme for theories].
@ Equivalence of theories: Weatherall a1411 [and Newtonian gravity vs geometrized Newtonian gravity]; Barrett & Halvorson a1506 [definitional, Morita and categorical equivalence]; Weatherall a1810
[types of equivalence, and dualities]; & de Haro; Weatherall a1812 [categorical]; Weatherall a1906-conf, De Haro a1906 [and electromagnetic duality]; > s.a. Einstein Algebras.
@ Relationships, types: Baumann PhSc(05)jan [re "better theories"]; Page a0712-proc [Bayesian meta-theories, "sensible quantum mechanics", and quantum cosmology]; Van Camp SHPMP(11) [types of
theories, explanation, and quantum mechanics]; Bény & Osborne NJP(15)-a1403 [renormalization and effectively indistinguishable microscopic theories]; Oriti a1705-in [principle of proliferation of
theories, and non-empirical theory assessment].
> Related topics: see category theory and physics; Deformations.
Space of Theories > s.a. Theory.
@ References: Cordova et al a1905, a1905 [4D gauge theories, anomalies in the space of coupling constants]; Barth a1910-MS [comparing classical field theories].
> Related topics: see types of distances; types of quantum theories; symplectic structures in physics [on the space of quantum field theories].
Structure of Theories > s.a. computation; Explanations; history of physics; Interpretation of a Theory; logic; Physical Laws.
* Idea: A physical theory consists of a mathematical formalism and an interpretation (definition of symbols, measurement assignments, concepts and principles, ontology).
* Method: Knowledge in physics comes from interplay of theory and experiment; In the theory, one simplifies systems and considers simple, closed ones, supposes that observers are not important,
identifies the simplest measurements, e.g., m, l, t, and sets up the mathematical description of the models; Later, one tries to get rid of ideal elements by making them dynamical, or giving a
natural choice (Leibniz's principle of sufficient reason).
* Ideal elements: Formal element which are contingent (a different choice is possible) and play a role in the evolution of the physical degrees of freedom but are non-dynamical, absolute; Examples:
Correspondence observables-operators, time, inner product; Number of spacetime dimensions, topology in general relativity; Preferred class of inertial observers in pre-general-relativity physics; >
s.a. general relativity; inertia.
* Structure: A theory has a lattice of propositions (including assumptions), structural equations and equations of other origin, about some structure which constitutes a model (or metaphor) for the
systems under consideration [P Duhem considered the metaphor itself as an educational tool, not a part of science, while J Bernstein and J Ziman view them as an integral part of science, @ pw(00)
nov]; As with any metaphor, a key issue is to establish how far each theory can go; Theories can be regarded as organized into hierarchies in many cases, with higher levels sometimes called
'paradigms' and lower-level models encoding more specific or concrete hypotheses.
* Evolution: In the hierarchical point of view, higher-level theory change may be driven by the impact of evidence on lower levels.
@ Books: Holton & Roller 58; Ripley 64 [simple]; Cooper 68; Tonti 76; Shive & Weber 82 [II]; Sklar 85; Pavšič 01-gq/06 [overview]; Helland 09.
@ General references: Caianello RNC(92); Foy qp/00 [logical basis]; Tarantola & Mosegaard mp/00 [use of inference]; Fellman et al a1001 [importance of discourse]; Henderson et al PhSc(10)apr
[hierarchical Bayesian models]; Székely in(11)-a1101 [why-type questions]; Vignale 11; Fayer hp(12)jan [popular misunderstanding of the meaning of the word]; Weatherall a1206-ch ["puzzleball view",
theories as networks of mutually interdependent principles and assumptions]; Wallace a1306 [inferential vs dynamical conceptions]; Coecke et al a1409 [mathematical theory of resources]; Curiel a1603
[kinematics and dynamics]; in Krizek a1707 [classification scheme].
@ Evolution of theories: Lederer a1510 [conflicting theories and scientific pluralism, example of high-temperature superconductivity].
Construction of Theories > s.a. Axioms for Physical Theories; Models; Operationalism.
* Approaches: The main distinction is between operational and deductive ones; The danger with an operational approach is that one may get stuck with technical difficulties and make little progress;
The danger with a deductive approach is that progress in the right direction is more likely to be impeded by idealizations.
* Remark: It is important to understand which are the right variables; Which questions we can ask and which make no sense (see Newton's laws, Einstein's relativity, Bohr-Heisenberg principle).
@ References: Corry 04 [Hilbert and the axiomatization of physics]; Emch SHPMP(07); Moldoveanu a1001-FQXi [complete axiomatization of physics as an achievable goal]; Nguyen et al BJPS-a1712, Bradley
& Weatherall a1904 [the need for surplus structure].
> Specific theories: see cosmology; particle physics; etc.
Criteria for Physical Theories > s.a. Consistency; Fine Tuning; Naturalness; Occam's Razor; paradigms; Simplicity.
* Traditional: Adequacy, i.e., verifiability / falsifiability, and good agreement with experiment.
* Stability under variations: Often assumed as a dogma and not discussed explicitly.
* Also: Accuracy, elegance and simplicity (XX century aesthetic judgment), scope, symmetries.
* Verifiability / Falsifiability: 2015, Some theorists have called for a relaxation of this requirement for a theory, in particular proponents of string theory and multiverse theory.
* Beauty: The sense of what's natural or elegant is subjective, as can be seen in people's opinions of various proposed explanations of the apparent cosmological acceleration; It can be evoked by a
pattern or symmetry (as in gauge theories or cosmology), or by logical or formal simplicity.
* Examples: The issue of whether Copernicus' theory of the Solar System was more "harmonious" and simpler than Ptolemy's; The interpretation of the predicted positron in Dirac theory, which
according to "truth" of knowledge at the time was the proton (Dirac), and according to "beauty" it was not (Weyl).
* On unobservable quantities: Around 1926, W Heisenberg advocated using only directly observable quantities in the theory; The point of view was picked up by G Chew in his S-matrix approach to
quantum field theory; It appears that the use of some unobservable quantities is unavoidable.
@ General references: Einstein JFI(36); Mermin PT(00)mar [elegance]; Norton SHPMP(00) [Einstein and simplicity]; Falmagne FP(04) [meaningfulness + order-invariance]; Wells a1211 [consistency, and
effective field theories]; Scorzato Syn(13)-a1402 [simplicity]; Hossenfelder 18 [overreliance on beauty].
@ Considerations on different types of theories: Nelson AS(85); Von Weizsäcker 85; Cushing 90; Tavakol BJPS(91) [fragility]; Elby et al FP(93); Barrett PhSc(03)dec [our best physical theories are
false]; Streater 07 ["lost causes"].
@ Verifiability / Falsifiability: Scott et al a1504 [giving up Falsifiability :-)]; Nemenman a1505, PT(15)oct [quantifying]; Hossenfelder blog(14)jul, Woit blog (14)jul, Ellis & Silk Nat(14)dec, blog
sa(15)dec [defense of falsifiability]; Rovelli a1609-conf, Dawid a1702-ch [on non-empirical confirmation]; Curiel a1804 [Newtonian abduction]; Alamino a1907 [weaker condition, ignoring isolated
spacetime regions]; Patton SHPMP(20)-a2002 [confirmation vs testing; parametrized frameworks and gravity]; Weilenmann & Colbeck PRL(20)-a2003 [theory self-test, and quantum theory].
@ Stability: Bouligand ARB(35); Destouches ARB(35); Duhem 54; Thom 67; Vilela Mendes JPA(94).
@ Beauty: McAllister AS(98); Tsallis PhyA(04) [beauty, truth and new ideas]; Martens SHPSA(09) [beauty and simplicity as metaphors, Copernican theory]; Vignale 11; Spratt & Elgammal a1410; Wilczek 15
; Deser a1706 [elegance/simplicity and supergravity].
send feedback and suggestions to bombelli at olemiss.edu – modified 14 sep 2020 | {"url":"https://www.phy.olemiss.edu/~luca/Topics/phys/phys_meta.html","timestamp":"2024-11-11T02:14:37Z","content_type":"text/html","content_length":"20485","record_id":"<urn:uuid:e4620bc0-d3ee-4ef7-b8ae-549cc5cff42c>","cc-path":"CC-MAIN-2024-46/segments/1730477028202.29/warc/CC-MAIN-20241110233206-20241111023206-00590.warc.gz"} |
answer key
This is a one-step conversion, but the concentration must be written as the reciprocal for the units to work out: 4.88 mol CuSO 4 ×(1 L / 2.35 mol) = 2.08 L of solution. Test Yourself. Using
concentration as a conversion factor, how many liters of 0.0444 M CH 2 O are needed to obtain 0.0773 mol of CH 2 O? Answer. 1.74 L
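In code, the reciprocal-concentration conversion above is simply moles divided by molarity; a minimal sketch reproducing both answers (the function name is ours):

```python
def liters_needed(moles, molarity):
    """Volume of solution, in liters, that contains `moles` of solute
    at a concentration of `molarity` (mol/L)."""
    return moles / molarity

print(round(liters_needed(4.88, 2.35), 2))      # 2.08 L of the CuSO4 solution
print(round(liters_needed(0.0773, 0.0444), 2))  # 1.74 L of the CH2O solution
```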
Read Book Chemistry Solution Concentration Practice Problems Answer Key Concentration and Molarity Test Questions Chemistry Solutions Practice Problems 1. Molar solutions. a. Describe how you would
prepare 1 L of a 1 M solution of sodium chloride. The gram formula weight of sodium chloride is 58.44 g/mol. Answer: To make a 1 M solution of
Anions are negatively charged, such as Cl⁻, CO₃²⁻, and PO₄³⁻. The bracket [ ] indicates concentration.
20. Estimate the approximate solubility of KNO3 at 30ºC.
HCl is a strong acid and is 100% ionized in water.
Percent composition practice example: What percent of NaCl is sodium?
Concentration is the amount of solute in a given solution. We can express concentration in different ways, such as by percent or by moles.
Fakultetskurser gdpr
When this into the surroundings as a sheet-lamellae type microstructure. This is where the element symbols refer to concentration in wt %. ( ) energy-value is not applicable in practice, as there is
a limit to how much power could be put.
the concentration present in a single sample. utilize the biomonitoring data to answer questions about chemical exposures and. The yield factor was used to calculate new food weights for the
composite task is to collect samples that give the correct answers.
Brun farge klær
Answer Key: answers to practice problems 1-14.
1. Weight: 150 lbs; lbs to kg: 150 / 2.2 = 68.2 kg
2. Weight: 130 lbs; lbs to kg: 130 / 2.2 = 59.1 kg
3. Weight: 25 kg; kg to lbs: 25 * 2.2 = 55 lbs
4. Weight: 98 kg; kg to lbs: 98 * 2.2 ≈ 216 lbs
5. Weight: 130 lbs; conversion: lbs to mg
ppm = (mass solute / mass solution) × 10^6 ppm
ppb = (mass solute / mass solution) × 10^9 ppb
Both ppm and ppb are convenient units for reporting the concentrations of pollutants and other trace contaminants in water.
1. What mass, in grams, of calcium nitrate is there in 867 mL of a 2.00 M calcium nitrate solution? (answer: 285 g)
2. What is the molarity of a solution made by dissolving 20.0 g silver nitrate in 225 g of water? (answer: 0.523 M)
CSCE 121 HOMEWORK 4 solved
1. (20 points) Chemical formulae are usually written with the proportion of the
different atoms expressed as ratios of small integers, subject to the possible
ionization states. For example, water is H2O, which means that the ratio of the number of hydrogen atoms in a particular volume of water to the number of oxygen atoms is 2 to 1. Different atoms have
different masses, so the ratio of the weights of hydrogen to oxygen in water is (2 * mass of H) to (1 * mass of O), i.e., 2 * 1 to 1 * 16, which is 2 to 16 by weight, so 18 grams of water would
contain 2 grams of H and 16 grams of O.Going the other direction, if we know the ratio of weights, we can divide by the molar masses to get the formula.
Unfortunately, the calculation is often not that simple, sinceatomic masses are measured against carbon-12. That makes sodium 22.989769 instead of 23, the actual number of protons and neutrons per
sodium atom. To further complicate matters, many atoms have several isotopes with different mass numbers. For example, most oxygen is 16, but a tiny fraction is 17 or 18. The result is an average
value slightly over 16. Therefore, when working with real-world measurements,we need to find the closest simple ratio (where “simple” means “having the smallest denominator”).
In the case of ammonia, we may find that Avogadro’s number of ammonia molecules (called “1 mole”) contains 14.01 grams of nitrogen and 3.03 grams of hydrogen. Dividing by the molar masses (14 for
nitrogen and 1 for hydrogen), we get 14.01/14 atoms of N to 3.03/1 atoms of H, i.e., 1.00071 to 3.03000, so the formula of ammonia would be N100071H303000, since the ratio needs to be expressed in
integers. However, it is obviously much easier to say ammonia is NH3, and 1/3 is within 1% of 100071/303000, (and if we used ionization states we could deduce 1/3 is actually correct).
Now that you understand why it is needed, write a program named hw4pr1.cpp which repeatedly reads experimental data from the keyboard and finds the fraction with the smallest denominator which has
less than 1% relative error.
Hint: Use nested for loops to try fractions in order of increasing denominator, e.g., 1/1, 1/2, 1/3, 2/3, …, till you find one close enough. (The sequence will be different if the ratio is greater
than 1.)
In the example of ammonia above, your program should run like this:
What is the symbol for your first atom? N
What is the mass number of N? 14
What is the measured weight of N in grams? 14.01
What is the symbol for your second atom? H
What is the mass number of H? 1
What is the measured weight of H in grams? 3.03
Simplest close ratio of N to H is 1 to 3, so formula is NH3
(By convention, chemists write NH3, not N1H3, so don’t print the 1.)
Note: This process is called stoichiometry, if you want to look up more about it. There is an online reference table including mass numbers at
Here is another test case: For 28.10 grams of silicon and 31.98 grams of oxygen your program should find SiO2.
2. (20 points) There is an old poem which was supposed to help you remember how to spell words containing “i” and “e” together (like “believe” and “receive”):
I before E,
Except after C,
Or sounded as A
In “neighbor” or “weigh.”
However, it was eventually realized that this “rule” is useless, since it is incorrect for so many words (like “ancient,” “weird,” “society,” etc.). Write a program named hw4pr2.cpp to count how many
words in the online dictionary contain “cei” and how many words contain “cie.” The dictionary on build.tamu.edu is at /usr/share/dict/american.
3. (10 points) A version of the calculator program has had three bugs deviously inserted for you to find and correct. Find the errors by running the program and reading the code, fix the errors, and
name your corrected version hw4pr3.cpp. The buggy program is at http://courses.cse.tamu.edu/daugher/misc/PPP/homeworks/calculator_buggy.cpp.
Create Bivariate Visualizations In R Using ggplot2
Creating visualizations in R using ggplot2 can be a powerful way to explore and understand your data. One common type of visualization is the bivariate plot, which allows you to examine the
relationship between two variables.
In this tutorial, you’ll learn how to produce bivariate visualizations in R using ggplot2. This blog will specifically focus on visualizations that would be difficult to perform in Power BI but
easy to do in R.
Three main topics will be discussed in this tutorial. You’ll learn how to visualize the distributions of a variable by group, and how to visualize correlations and pairwise relationships.
A pairwise relationship refers to the relationship between each pair of variables in a given dataset.
For this tutorial, you need to download the ggplot2 package. Once done, open a blank R script and bring in two libraries: tidyverse and GGally.
GGally is an extension to ggplot2. It’s built to reduce the complexity of combining geometric objects with transformed data.
The Different Bivariate Visualizations In R
A bivariate visualization shows the relationship between two variables.
As an example, let’s create a visualization that shows the relationship between the city and the highway. You need to use the ggplot() function and then assign the appropriate data.
The geom_point() function is then used to generate the scatter plot.
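A minimal sketch of that call, assuming ggplot2's built-in mpg data set stands in for the article's data (cty = city mileage, hwy = highway mileage; the article's own column names are not shown):

```r
library(tidyverse)

# Scatter plot of city versus highway mileage
ggplot(mpg, aes(x = cty, y = hwy)) +
  geom_point()
```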
Visualizations In R Showing Correlation
The ggcorr() function is used to visualize the correlation between variables. This will generate a heat map with the lowest to highest correlation values displayed. You can further improve the
visualization by adding an argument that will show the labels.
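A sketch of the call, again assuming the mpg data set; label = TRUE is the argument that prints the correlation values on the heat map:

```r
library(GGally)

# Correlation heat map; ggcorr() keeps only the numeric columns on its own
ggcorr(mpg, label = TRUE)
```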
Visualizations In R Showing A Pairwise Relationship
For the pairwise plot, you need to use the ggpairs() function.
Since the data frame in this example contains a large dataset, it first needs to be filtered to only show numeric values; otherwise, the results will show an error.
To filter data, use the pipe operator and the select_if() function.
In the Plots tab, you can see the pairwise visualization generated by the code. You can also see the graph and correlation value between each variable.
Another thing you can do with pairwise plots is to add extra elements to augment the visualization. You can add another variable and change the color of the data.
In this case, the drive column is added to the code, and the aesthetic mapping function is used to change its color.
When you run the code, you’ll see that the plot shows scatter plots and the correlation values by drive. The diagonal also shows the distribution according to each drive.
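Putting the pieces together, still assuming the mpg data set, where the drive column is named drv:

```r
library(tidyverse)
library(GGally)

# Pairwise plot of the numeric columns only
mpg %>%
  select_if(is.numeric) %>%
  ggpairs()

# Adding the drive column and mapping it to color
mpg %>%
  select(displ, cty, hwy, drv) %>%
  ggpairs(aes(color = drv))
```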
***** Related Links *****
R Scripting For Power BI Using RStudio
Seaborn Function In Python To Visualize A Variable’s Distribution
ggplot2 Plots In Excel For Advanced Data Visualizations
If you want to create robust and statistically backed visualizations such as histograms, scatter plots, and box plots, it’s recommended to use ggplot2 with GGally.
The R programming language together with various visualization packages like ggplot2 allows users to build visualizations that show the relationship and correlation between variables.
GGally extends ggplot2 by augmenting several functions that reduce complexity. If you try to create bivariate and multivariate visualizations in Power BI, they’ll prove to be a challenge. However,
within the R programming language, you only need to write a single line of code to arrive at the statistical plot you need.
All the best,
George Mount
Stability for a System of N Fermions Plus a Different Particle with Zero-Range Interactions
Title: Stability for a System of N Fermions Plus a Different Particle with Zero-Range Interactions
Publication Type: Journal Article
Year of Publication: 2012
Authors: Correggi, M; Dell'Antonio, G; Finco, D; Michelangeli, A; Teta, A
Journal: Rev. Math. Phys. 24 (2012), 1250017
Abstract: We study the stability problem for a non-relativistic quantum system in dimension three composed of $N \geq 2$ identical fermions, with unit mass, interacting with a different particle, with mass $m$, via a zero-range interaction of strength $\alpha \in \mathbb{R}$. We construct the corresponding renormalised quadratic (or energy) form $\mathcal{F}_{\alpha}$ and the so-called Skornyakov–Ter-Martirosyan symmetric extension $H_{\alpha}$, which is the natural candidate as Hamiltonian of the system. We find a value of the mass $m^*(N)$ such that for $m > m^*(N)$ the form $\mathcal{F}_{\alpha}$ is closed and bounded from below. As a consequence, $\mathcal{F}_{\alpha}$ defines a unique self-adjoint and bounded-from-below extension of $H_{\alpha}$, and therefore the system is stable. On the other hand, we also show that the form $\mathcal{F}_{\alpha}$ is unbounded from below for $m < m^*(2)$. In analogy with the well-known bosonic case, this suggests that the system is unstable for $m < m^*(2)$ and the so-called Thomas effect occurs.
URL: http://hdl.handle.net/1963/6069
DOI: 10.1142/S0129055X12500171
Comparing splits using information
Quantifying information
How can we formalize the intuition that some splits contain more information than others? More generally, how can we quantify an amount of information?
Information is usually measured in bits. One bit is the amount of information generated by tossing a fair coin: to record the outcome of a coin toss, I must record either a H or a T, and with each of
the two symbols equally likely, there is no way to compress the results of multiple tosses.
The Shannon (1948) information content of an outcome \(x\) is defined to be \(h(x) = -\log_2{P(x)}\), which simplifies to \(\log_2{n}\) when all \(n\) outcomes are equally likely. Thus, the outcome
of a fair coin toss delivers \(\log_2{2} = 1\textrm{ bit}\) of information; the outcome of rolling a fair six-sided die contains \(\log_2{6} \approx 2.58\textrm{ bits}\) of information; and the
outcome of selecting at random one of the 105 unrooted binary six-leaf trees is \(\log_2{105} \approx 6.71\textrm{ bits}\).
Unlikely outcomes are more surprising, and thus contain more information than likely outcomes. The information content of rolling a twelve on two fair six-sided dice is \(-\log_2{\frac{1}{36}} \
approx 5.16\textrm{ bits}\), whereas a seven, which could be produced by six of the 36 possible rolls (1 & 6, 2 & 5, …), is less surprising, and thus contains less information: \(-\log_2{\frac{6}
{36}} \approx 2.58\textrm{ bits}\). An additional 2.58 bits of information would be required to establish which of the six possible rolls produced the seven.
Application to splits
The split \(S_1 =\) AB|CDEF is found in 15 of the 105 six-leaf trees; as such, the probability that a randomly drawn tree contains \(S_1\) is \(P(S_1) = \frac{15}{105}\), and the information content
\(h(S_1) = -\log_2{\frac{15}{105}} \approx 2.81\textrm{ bits}\). Steel & Penny (2006) dub this quantity the phylogenetic information content.
Likewise, the split \(S_2 =\) ABC|DEF occurs in nine of the 105 six-leaf trees, so \(h(S_2) = -\log_2{\frac{9}{105}} \approx 3.54\textrm{ bits}\). Three six-leaf trees contain both splits, so in
combination the splits deliver \(h(S_1,S_2) = -\log_2{\frac{3}{105}} \approx 5.13\textrm{ bits}\) of information.
Because \(h(S_1,S_2) < h(S_1) + h(S_2)\), some of the information in \(S_1\) is also present in \(S_2\). The information in common between \(S_1\) and \(S_2\) is \(h_{shared}(S_1, S_2) = h(S_1) + h
(S_2) - h(S_1,S_2) \approx 1.22\textrm{ bits}\). The information unique to \(S_1\) and \(S_2\) is \(h_{different}(S_1,S_2) = 2h(S_1,S_2) - h(S_1) - h(S_2) \approx 3.91\textrm{ bits}\).
These quantities can be calculated using functions in the ‘TreeTools’ package.
library("TreeTools", quietly = TRUE)
treesMatchingSplit <- c(
AB.CDEF = TreesMatchingSplit(2, 4),
ABC.DEF = TreesMatchingSplit(3, 3)
## AB.CDEF ABC.DEF
## 15 9
splitProbability <- treesMatchingSplit / NUnrooted(6)
splitProbability
## AB.CDEF ABC.DEF
## 0.14285714 0.08571429
splitInformation <- -log2(splitProbability)
splitInformation
## AB.CDEF ABC.DEF
## 2.807355 3.544321
treesMatchingBoth <- TreesConsistentWithTwoSplits(6, 2, 3)
combinedInformation <- -log2(treesMatchingBoth / NUnrooted(6))
sharedInformation <- sum(splitInformation) - combinedInformation
## [1] 1.222392
## [1] 1.222392
Entropy is the average information content of each outcome, weighted by its probability: \(\sum{-p \log_2(p)}\). Where all \(n\) outcomes are equiprobable, this simplifies to \(\log_2{n}\).
Consider a case in which Jane rolls a dice, and makes two true statements about the outcome \(x\):
\(S_1\): “Is the roll even?”.
• Two equally-possible outcomes: yes or no
• Entropy: \(H(S_1) = \log_2{2} = 1\textrm{ bit}\).
\(S_2\): “Is the roll greater than 3?”
• Two equally-possible outcomes: yes or no
• Entropy: \(H(S_2) = \log_2{2} = 1\textrm{ bit}\).
The joint entropy of \(S_1\) and \(S_2\) is the entropy of the association matrix that considers each possible outcome:
|  | \(S_1\): \(x\) odd | \(S_1\): \(x\) even |
|---|---|---|
| \(S_2: x \le 3\) | \(x \in \{1, 3\}; p = \frac{2}{6}\) | \(x = 2; p = \frac{1}{6}\) |
| \(S_2: x > 3\) | \(x = 5; p = \frac{1}{6}\) | \(x \in \{4, 6\}; p = \frac{2}{6}\) |
\[\begin{aligned} H(S_1, S_2) = -\tfrac{2}{6}\log_2{\tfrac{2}{6}} - \tfrac{1}{6}\log_2{\tfrac{1}{6}} - \tfrac{1}{6}\log_2{\tfrac{1}{6}} - \tfrac{2}{6}\log_2{\tfrac{2}{6}} \approx 1.92 \textrm{ bits} \end{aligned}\]
Note that this is less than the \(\log_2{6} \approx 2.58\textrm{ bits}\) we require to determine the exact value of the roll: knowledge of \(S_1\) and \(S_2\) is not guaranteed to be sufficient to
unambiguously identify \(x\).
The mutual information between \(S_1\) and \(S_2\) describes how much knowledge of \(S_1\) reduces our uncertainty in \(S_2\) (or vice versa). So if we learn that \(S_1\) is ‘even’, we become a
little more confident that \(S_2\) is ‘greater than three’.
The mutual information \(I(S_1;S_2)\), denoted in blue below, corresponds to the sum of the individual entropies, minus the joint entropy:
\[\begin{aligned} I(S_1;S_2) = H(S_1) + H(S_2) - H(S_1, S_2) \end{aligned}\]
If two statements have high mutual information, then once you have heard one statement, you already have a good idea what the outcome of the other statement will be, and thus learn little new on
hearing it.
The entropy distance, also termed the variation of information (Meila, 2007), corresponds to the information that \(S_1\) and \(S_2\) do not have in common (denoted below in yellow):
\[\begin{aligned} H_D(S_1, S_2) = H(S_1, S_2) - I(S_1;S_2) = 2H(S_1, S_2) - H(S_1) - H(S_2) \end{aligned}\]
The higher the entropy distance, the harder it is to predict the outcome of one statement from the other; the maximum entropy distance occurs when the two statements are entirely independent.
Application to splits
A split divides leaves into two partitions. If we arbitrarily label these partitions ‘A’ and ‘B’, and select a leaf at random, we can view the partition label associated with the leaf. If 60/100
leaves belong to partition ‘A’, and 40/100 to ‘B’, then a leaf drawn at random has a 60% chance of bearing the label ‘A’; the split has an entropy of \(-\frac{60}{100}\log_2{\frac{60}{100}}-\frac
{40}{100}\log_2{\frac{40}{100}} \approx 0.97\textrm{ bits}\).
Now consider a different split, perhaps in a different tree, that assigns 50 leaves from ‘A’ to a partition ‘C’, leaving the remaining 10 leaves from ‘A’, along with the 40 from ‘B’, in partition
‘D’. This split has \(-\frac{50}{100}\log_2{\frac{50}{100}}-\frac{50}{100}\log_2{\frac{50}{100}} = 1\textrm{ bit}\) of entropy.
Put these together, and a randomly selected leaf may now bear one of three possible labellings:
• ‘A’ and ‘C’: 50 leaves
• ‘A’ and ‘D’: 10 leaves
• ‘B’ and ‘D’: 40 leaves.
The two splits thus have a joint entropy of \(-\frac{50}{100}\log_2{\frac{50}{100}} -\frac{10}{100}\log_2{\frac{10}{100}} -\frac{40}{100}\log_2{\frac{40}{100}} \approx 1.36\textrm{ bits} < 0.97 + 1\)
The joint entropy is less than the sum of the individual entropies because the two splits contain some mutual information: for instance, if a leaf bears the label ‘B’, we can be certain that it will
also bear the label ‘D’. The more similar the splits are, and the more they agree in their division of leaves, the more mutual information they will exhibit. I term this the clustering information,
in contradistinction to the concept of phylogenetic information discussed above.
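The worked example above can be checked with a few lines of plain R (no TreeDist functions; the probabilities are the leaf counts quoted in the text):

```r
H <- function(p) -sum(p * log2(p))  # entropy in bits

h1 <- H(c(60, 40) / 100)            # first split: ~0.97 bits
h2 <- H(c(50, 50) / 100)            # second split: 1 bit
hJoint <- H(c(50, 10, 40) / 100)    # labellings AC, AD, BD: ~1.36 bits

mutualInformation <- h1 + h2 - hJoint
entropyDistance <- 2 * hJoint - h1 - h2
```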
More formally, let split \(S\) divide \(n\) leaves into two partitions \(A\) and \(B\). The probability that a randomly chosen leaf \(x\) is in partition \(k\) is \(P(x \in k) = \frac{|k|}{n}\). \(S
\) thus corresponds to a random variable with entropy \(H(S) = -\frac{|A|}{n} \log_2{\frac{|A|}{n}} - \frac{|B|}{n}\log_2{\frac{|B|}{n}}\) (Meila, 2007). The joint entropy of two splits, \(S_1\) and
\(S_2\), corresponds to the entropy of the association matrix of probabilities that a randomly selected leaf belongs to each pair of partitions:
|  | \(S_1: x \in A_1\) | \(S_1: x \in B_1\) |
|---|---|---|
| \(S_2: x \in A_2\) | \(P(A_1,A_2) = \frac{\lvert A_1 \cap A_2 \rvert}{n}\) | \(P(B_1,A_2) = \frac{\lvert B_1 \cap A_2 \rvert}{n}\) |
| \(S_2: x \in B_2\) | \(P(A_1,B_2) = \frac{\lvert A_1 \cap B_2 \rvert}{n}\) | \(P(B_1,B_2) = \frac{\lvert B_1 \cap B_2 \rvert}{n}\) |
\(H(S_1, S_2) = -P(A_1,A_2) \log_2 {P(A_1,A_2)} - P(B_1,A_2) \log_2 {P(B_1,A_2)}\)
\(- P(A_1,B_2)\log_2{P(A_1,B_2)} - P(B_1,B_2)\log_2{P(B_1,B_2)}\)
These values can then be substituted into the definitions of mutual information and entropy distance given above.
As \(S_1\) and \(S_2\) become more different, the disposition of \(S_1\) gives less information about the configuration of \(S_2\), and the mutual information decreases accordingly.
The basics of encryption
This guide gives an overview of encryption and describes the differences between two types: symmetric and asymmetric.
The key to encryption
Just like with doors, keys are used to lock (encrypt) and unlock (decrypt) information to keep it safe. But instead of a physical key, an encryption key is a long string of random characters.
As an example, a message with sensitive information needs to be sent securely from one person to another. The sender will encrypt the message with one key, and the receiver will decrypt it with
Symmetric encryption
A symmetric key uses the same string for both encryption and decryption which means that both the sender and receiver need the same key. Symmetric encryption is secure when using a strong cipher in
combination with a strong key. A challenge with symmetric encryption is securely storing and exchanging a symmetric key (If exchanged: PSK, short for Pre-Shared Key) between trusted parties, the risk
being that the key falls into the wrong hands.
Symmetric encryption can be used for encrypting files and data, both stored, like on a hard drive, and in transit, like over a computer network.
Learn how to encrypt a file using symmetric encryption
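As a toy illustration of the symmetric idea (the same key both locks and unlocks the data), here is a short XOR sketch in Python. Real tools use vetted ciphers such as AES, so never use this sketch for actual secrets:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each message byte with the matching key byte
    return bytes(b ^ k for b, k in zip(data, key))

message = b"meet me at noon"
shared_key = secrets.token_bytes(len(message))  # the pre-shared key (PSK)

ciphertext = xor_bytes(message, shared_key)     # sender encrypts
recovered = xor_bytes(ciphertext, shared_key)   # receiver decrypts, SAME key
assert recovered == message
```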
Asymmetric encryption
With asymmetric encryption, a mathematically linked key pair is used for encryption and decryption, one key being "public" (shared) and one being "private" (secret). If a message (like an email) is
encrypted with a public key, it can only be decrypted with the corresponding private key.
If you want to use asymmetric encryption for message exchange, you first create a key pair (a public and a corresponding private key). You then share the public key with whoever you want to be able
to communicate securely with, and you store the private (secret) key somewhere secure and private. Whoever has access to your public key can then encrypt a message with the public key that you have
shared, which ensures only you, who holds the corresponding private key, can decrypt the message.
For example, if Sarah wants to send a message to John, she would use John's public key to first encrypt the message before sending it to him. John would then use his private key to decrypt the
message. In this manner, Sarah can be sure that only John can read the message. Likewise, John knows that the message was intended for him.
Asymmetric encryption can be used for encrypting files and data, both stored and in transit, but is more commonly used for exchanging symmetric keys and digital signatures. A digital signature is
meant to ensure the claimed identity of a person or computer. When digitally signing something, like an email, your private key is used to encrypt (sign), and the public key is used to
decrypt (verify the signature). Since it is assumed that you and only you hold your private key, it can be assumed that the signature (encryption) originates from you as well, thereby proving your
identity to the recipient of your signature.
Sarah can do this by signing the message with her private key. John can then use Sarah's public key to verify that the message was sent by her, as only the combination of Sarah's true public and
private keys would give a valid result. They both now know that Sarah is Sarah and that John is John. Mission accomplished!
Learn how to send messages using asymmetric encryption
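The mathematics behind such a key pair can be illustrated with textbook RSA and deliberately tiny numbers (an intuition-only sketch; real keys are thousands of bits long and use padding schemes):

```python
p, q = 61, 53
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)
e = 17                     # public exponent: (e, n) is the public key
d = pow(e, -1, phi)        # private exponent: (d, n) is the private key

m = 42                     # a "message" encoded as a number below n
c = pow(m, e, n)           # encrypt with the recipient's PUBLIC key
assert pow(c, d, n) == m   # only the matching PRIVATE key recovers it

s = pow(m, d, n)           # sign with the sender's PRIVATE key
assert pow(s, e, n) == m   # anyone verifies with the PUBLIC key
```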
The importance of key management
Challenges when using encryption and cryptography, whether symmetric or asymmetric, are (1) securely storing and exchanging secret keys and (2) choosing a strong cryptographic algorithm and key length
(number of bits). You must store your secret keys in a secure location and ensure strong algorithms are used.
Integrate encryption into your email program
Now that you've learned the basics of encrypting and decrypting files, learn how to automate the encryption and decryption of emails with our guide on integrating encryption into your email program.
Rotational angle measurement apparatus - Patent 2333492
1. Field of the Invention
The present invention relates to a rotational angle measurement apparatus including a magneto-resistance element (MR element) having a pinned magnetic layer. The invention particularly relates to a
rotational angle measurement apparatus capable of correcting a pin-angle error.
2. Description of the Related Art
A rotational angle measurement apparatus using such an MR element is known, for example, from Japanese Patent No.
, etc.
Examples of known magneto-resistance elements (MR elements) include the giant magneto-resistance element (GMR element) and the tunneling magneto-resistance element (TMR element). The outline of the MR element
is described below by way of a magnetic field measurement apparatus using a GMR element as an example.
The GMR element has a first magnetic layer 13 (pinned magnetic layer) and a second magnetic layer 11 (free magnetic layer) in which a non-magnetic layer 12 (spacer layer) is sandwiched between both
of the magnetic layers. When an external magnetic field is applied to the GMR element, while the magnetization direction of the pinned magnetic layer does not change and remains fixed as it is, the
magnetization direction 20 of the free magnetic layer changes in accordance with the direction of the external magnetic field.
The angle of magnetization direction in the pinned magnetic layer is referred to as a pin angle and represented by θp.
When a voltage is applied across the end of the GMR element, a current flows in accordance with the resistance of the element, and the magnitude of the resistance of the element changes depending on
the difference: Δθ = θf - θp between the magnetization direction (pin angle) θp of the pinned magnetic layer and the magnetization direction θf of the free magnetic layer. Accordingly, when the
magnetization direction θp of the pinned magnetic layer is known, the magnetization direction θf of the free magnetic layer, that is, the direction of the external magnetic field can be detected by
measuring the resistance value of the GMR element with the use of the property described above.
The mechanism in which the resistance value of the GMR element changes according to Δθ = θf - θp is as described below.
The magnetization direction in the thin-film magnetic film is concerned with the direction of electrons' spin in a magnetic material. Accordingly, in the case where Δθ = 0, for the electrons in the
free magnetic layer and the electrons in the pinned magnetic layer, the ratio of electrons with the directions of spins being identical is high. By contrast, in the case where Δθ = 180°, the ratio of
electrons with the directions of the spins being opposite to each other is high for the electrons in both of the magnetic layers.
Fig. 2 schematically shows a cross section of the free magnetic layer 11, the spacer layer 12, and the pinned magnetic layer 13. Arrows shown in the free magnetic layer 11 and the pinned magnetic
layer 13 schematically show the direction of the spin for majority electrons.
Fig. 2A shows a case where Δθ= 0 in which the directions of spins are aligned in the free magnetic layer 11 and the pinned magnetic layer 13. Fig. 2B shows a case where Δθ = 180° in which the
directions of spins are opposite to each other in the free magnetic layer 11 and the pinned magnetic layer 13.
In the case of Δθ = 0 in Fig. 2A, since electrons of an identical spin direction are predominant in the free magnetic layer 11, the right spin electrons emitting from the pinned magnetic layer 13 are
less scattered in the free magnetic layer 11 and pass along the trajectory as an electron trajectory 810.
On the other hand, in the case of Δθ = 180° in Fig. 2B, electrons of right spin emitting from the pinned magnetic layer 13 are scattered more frequently and pass along the trajectory as an electron
trajectory 810 when entering the free magnetic layer 11, since there are many electrons of opposite spin. As described above, in the case where Δθ = 180°, since electrons are scattered more
frequently, electric resistance is increased.
In an intermediate case where Δθ is in the range between 0 and 180°, it is in an intermediate state between Fig. 2A and Fig. 2B. The resistance value R of the GMR element is represented as:
R = R0 − (G/2)·cos(Δθ) (1)
in which G/R is referred to as a GMR coefficient, which is from several % to several tens %.
Since way of current flow (that is, electric resistance) can be controlled depending on the direction of the electrons' spin, the GMR element is also referred to as a spin-valve device.
Further, in a magnetic film of thin film thickness (thin-film magnetic film), since the demagnetizing factor in the direction normal to the surface is extremely large, the magnetization vector cannot
rise vertically in the normal direction (direction of film thickness) and lies in the plane. Since both of the free magnetic layer 11 and the pinned magnetic layer 13 constituting the GMR element are
sufficiently thin, respective magnetization vectors lie in the in-plane direction.
Fig. 3A shows a case where a Wheatstone bridge 60A is formed by using four GMR elements R1 (51-1) to R4 (51-4). The bridge 60A is used as a magnetic sensor.
In this case, the magnetization direction in the pinned magnetic layer of the GMR element R1 (51-1) and R3 (51-3) is set as θp = 0, and the magnetization direction in the pinned magnetic layer of the
GMR element R2 (51-2) and R4 (51-4) is set as θp = 180°. Since the magnetization direction θf in the free magnetic layer is determined by the external magnetic field, the magnetization direction
θf is identical for all four GMR elements. Therefore, the relation Δθ2 = θf − θp2 = θf − θp1 − π = Δθ1 − π is established. Since Δθ1 is based on θp = 0, it is substituted as Δθ1 = θ. Accordingly, as can be
seen from the equation (1), the GMR elements R1, R3 are each represented by:
Rn = Rn0 − (G/2)·cosθ (2)
in which (n = 1, 3), and the GMR elements R2, R4 are each represented by:
Rn = Rn0 + (G/2)·cosθ (3)
in which (n = 2, 4).
When an excitation voltage e0 is applied to the bridge 60A, the differential voltage Δv = V2 − V1 between the terminals V1 and V2 is represented by the following equation (4):
Δv = e0·(R2/(R1 + R2) − R3/(R3 + R4)) (4)
When substituting the equation (2) and the equation (3) into the equation (4), assuming Rn0 as equal for n = 1 to 4, and setting R0 = Rn0, it is represented as:
Δv = (G/(2R0))·e0·cosθ (5)
As described above, since the signal voltage Δv is in proportion to cosθ, the direction 8 of the magnetic field can be detected. Further, since the bridge outputs a signal in proportion to cosθ, it
is referred to as a COS bridge.
Further, Fig. 3B shows a bridge 60B in which the direction in the pinned magnetic layer is changed by 90° from that of the COS bridge in Fig. 3A. That is, the bridge is constructed with GMR elements
at θp = 90° and 270°. By calculating in the same manner as described above, we obtain the signal voltage as follows:
Δv = (G/(2R0))·e0·sinθ (6)
Since the signal voltage is in proportion to sinθ, the bridge 60B is referred to as a SIN bridge.
By calculating the arctangent of the ratio of the two output signals of the SIN bridge and the COS bridge, the direction θm of the magnetic field vector (angle of magnetic field) is determined as:
θm = arctan(Vy/Vx) (7)
in which Vx and Vy denote the output signals of the COS bridge and the SIN bridge, respectively.
As described above, the magneto-resistance element has a feature capable of directly detecting the direction of the magnetic field.
The magnetic field dependent term for the resistance of the magneto-resistance element is determined by the difference Δθ = θm − θp between the magnetization direction (pin angle) θp of the pinned
magnetic layer and the angle of the external magnetic field θm as shown in the equation (1). In other words, the pin angle θp is a reference angle. Accordingly, when the setting for the pin angle
includes an error, the equation (5) and the equation (6) are not valid and the angle determined according to the equation (7) no more shows an exact angle of magnetic field θm.
As an example, it is assumed that the pin angle of the GMR elements R2, R4 of the COS bridge shown in Fig. 3A deviates by 0.5° from the respective correct angle, and the pin angle of the GMR
elements R2, R4 of the SIN bridge shown in Fig. 3B deviates by -1°.
Fig. 4 shows a difference (i.e. measurement error) between the angle θ1 determined from signals Vx and Vy from each of the bridges in accordance with the equation (7) and a real angle of the magnetic
field θm in the case described above. The measurement error changes depending on the real angle of magnetic field θm and has amplitude of about 1°. As described above, the pin-angle error of 1°
corresponds to an angle measurement error of about 1°. Accordingly, in order to obtain measurement accuracy, for example, of ±0.2°, it is necessary to set all pin angles at an accuracy of about 0.2°.
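A quick numerical check of this claim (my own sketch, not from the patent): simulate ideal bridge outputs whose pin angles deviate by +0.5° (COS bridge) and −1° (SIN bridge), recover the angle with the arctangent, and look at the worst-case error:

```python
import math

def angle_error_deg(theta_deg, cos_pin_err=0.5, sin_pin_err=-1.0):
    t = math.radians(theta_deg)
    vx = math.cos(t - math.radians(cos_pin_err))  # COS-bridge signal
    vy = math.sin(t - math.radians(sin_pin_err))  # SIN-bridge signal
    measured = math.atan2(vy, vx)                 # arctangent recovery
    err = (measured - t + math.pi) % (2 * math.pi) - math.pi  # wrap to +/-pi
    return math.degrees(err)

worst = max(abs(angle_error_deg(d)) for d in range(360))
print(round(worst, 2))  # amplitude on the order of 1 degree
```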
A magnetic sensor having a plurality of pin angles can be manufactured, for example, by arranging the magneto-resistance elements accordingly (corresponding to each Ri (i = 1 to 4) in Fig. 3) or by changing the direction of the external magnetic field applied while depositing the pinned magnetic layer. With either method, however, it is extremely difficult to set all pin angles to a high accuracy of about 0.2°.
Regarding this problem, a method of correcting the angle measurement error caused by the pin-angle error has been known (for example, in the prior-art literature). In this method, a rotational angle θ and the corresponding angle θ(meas) measured by the magnetic sensor are recorded, and the error ΔΦ(θ) between the two is determined as a function of the rotational angle θ. That is, the error is represented as:
Then, since the error ΔΦ(θ) is in the form of a 180° cycle as shown in Fig. 4, correction function S1(θ, α) is defined as shown by the equation (9) as:
Then, a parameter α is determined such that a function E1 (α) defined by the following equation (10) is minimum:
where integration is a 1 cycle integration for θ = 0 to 360°.
After the error of a second harmonic component is removed as described above, a fourth harmonic component is left. Then, the correction function S2(θ, β) for fourth harmonic is defined as shown in
the following equation (11);
Then, a parameter β is determined such that a function E2 (β) defined by the following equation (12) is minimum:
During operation of the magnetic sensor, the error is corrected by using the correction function determined as described above according to the following equation
As described above, the magnetic field measurement apparatus using the magneto-resistance element having the pinned magnetic layer involves a problem that error occurs in the measured angle when
there is a setting error for the magnetization direction of the pinned magnetic layer (pin angle).
Regarding this problem, the correction method described above involves three problems.
1. (1) First, the amount of calculation required to obtain the correction parameters α and β is enormous, since the integrals E1 and E2 are minimized by evaluating them repeatedly while changing α and β.
2. (2) Secondly, since the correction functions S1 and S2 are functions of 2θ and 4θ, an absolute value for the angle of magnetic field is necessary for the correction, and it requires a control
device with a known angle, such as an encoder.
3. (3) Thirdly, since the correction equation (13) used during sensor operation includes a plurality of trigonometric functions, each requiring much computation, the amount of calculation is large and necessitates a high-speed microcontroller or the like, because correction during sensor operation must run in real time.
That is, the existing method of correcting the measurement error caused by the pin-angle error requires an enormous amount of calculation.
The present invention intends to provide a rotational angle measurement apparatus capable of correcting an error caused by a pin-angle error with a small amount of calculation operation.
In the present specification, function SQRT(y) represents "Square root of y".
1. (1) To attain the purpose, the present invention provides a rotational angle measurement apparatus having a magnetic sensor and a signal processing unit, the magnetic sensor including two bridges
that comprises magneto-resistance elements each having a pinned magnetic layer, and the signal processing unit receiving an output signal Vx from a first bridge as an input signal Vx and an
output signal Vy from a second bridge as an input signal Vy and outputting an angle of magnetic field θ in which the difference between a ratio Vy/Vx and a tanθ is a constant non-zero value when
the absolute value |Vx| of the output signal Vx is larger than or equal to the absolute value |Vy| of the output signal Vy in the signal processing unit.
With the constitution as described above, an error caused by a pin-angle error can be corrected with a small amount of operation.
(2) Assuming the constant value as x in (1) described above, the constant value x preferably satisfies (1/SQRT(1-x^2)) × (Vy/Vx) - tanθ = x, and the constant value x does not depend on θ.
(3) The signal processing unit in (1) described above preferably includes a ratio-calculation unit that calculates the ratio Vy/Vx of the output signals Vx, Vy, a parameter correction unit that
subtracts a predetermined correction parameter β from the ratio Vy/Vx calculated by the ratio-calculation unit, and an atan-processing unit that conducts an arctangent processing on the value
calculated by the parameter correction unit and calculates the angle of magnetic field θ.
(4) The parameter correction unit in (3) described above preferably divides the calculated value by Bx = SQRT(1 -β^2).
(5) The apparatus in (3) described above preferably includes an offset-subtraction unit that subtracts predetermined offsets bx and by from the output signal Vx of the first bridge and the output
signal Vy of the second bridge respectively, in which the output signal from the offset-subtraction unit is inputted to the ratio-calculation unit of the signal processing unit.
(6) In (3) described above, the signal processing unit preferably includes an averaging unit that calculates the correction parameter β from an average value for the duration in which the direction
of the magnetic field turns for one rotation relative to the ratio Vy/Vx calculated by the ratio-calculation unit.
(7) The apparatus in (6) described above preferably includes a window function processing unit that multiplies the ratio Vy/Vx calculated by the ratio-calculation unit by a window function W(r)
having the ratio Vy/Vx as an argument, in which the averaging unit calculates the average value relative to the output from the window function processing unit for the duration in which the direction
of the magnetic field turns for one rotation.
(8) In (7) described above, the window function W(r) is an even function.
(9) In (7) described above, the parameter correction unit preferably divides the calculated value by Bx = SQRT(1 -β^2) .
(10) In (1) described above, the magneto-resistance element is preferably a giant magneto-resistance element.
(11) The present invention provides, for attaining the aforementioned purpose, a rotational angle measurement apparatus comprising a magnetic sensor and a signal processing unit, the magnetic sensor
including two bridges comprising magneto-resistance elements each having a pinned magnetic layer, the signal processing unit receiving an output signal Vx from a first bridge as an input signal Vx
and an output signal Vy from a second bridge as an input signal Vy, and outputting an angle of magnetic field θ, in which the signal processing unit includes an averaging unit that calculates the
correction parameter β from an average value for the duration in which the direction of the magnetic field turns for one rotation relative to the ratio Vy/Vx of the output signal.
With the constitution described above, an error caused by a pin-angle error can be corrected with a small amount of operation.
(12) In (11) described above, the apparatus preferably includes a window function processing unit that multiplies the ratio Vy/Vx calculated by the ratio-calculation unit by a window function W(r) having the ratio r (= Vy/Vx) as an argument, in which the averaging unit calculates an average value relative to the output from the window function processing unit for the duration in which the direction of the magnetic field turns for one rotation.
(13) The apparatus in (11) described above, preferably includes an offset-subtraction unit that subtracts predetermined offsets bx and by from the output signal Vx of the first bridge and the output
signal Vy of the second bridge respectively, in which the output signal from the offset-subtraction unit is inputted to the ratio-calculation unit of the signal processing unit.
(14) In (13) described above, with the magnetic field rotated twice at a constant angular velocity, the averaging unit preferably determines the offset voltages bx, by during the first rotation, the offset-subtraction unit preferably calculates values Vx' = Vx - bx and Vy' = Vy - by obtained by subtracting the offset voltages bx, by from the signals Vx, Vy respectively, and the averaging unit preferably determines the amount of pin-angle error β from the values Vx', Vy' during the second rotation of the magnetic field.
(15) In (11) described above, the magneto-resistance element is preferably a giant magneto-resistance element.
According to the invention, the error caused by the pin-angle error can be corrected with a small amount of calculation operation.
Fig. 1 is a schematic view showing the constitution of a giant magneto-resistance element;
Figs. 2A and 2B are schematic views showing the behavior of electrons in the giant magneto-resistance element;
Figs. 3A and 3B are schematic views showing a sensor bridge in a magnetic sensor used in a rotational angle measurement apparatus using the giant magneto-resistance elements;
Fig. 4 is a view showing an error contained in a measurement angle in the case where the pin angle includes an error;
Fig. 5 is a block diagram showing a first constitution of a rotational angle measurement apparatus for examining a pin-angle error α according to a first embodiment of the invention;
Figs. 6A and 6B are constitutional views of a magnetic sensor used in the rotational angle measurement apparatus according to the first embodiment of the invention;
Figs. 7A and 7B are schematic views showing the phase difference of each bridge in the magnetic sensor used in the rotational angle measurement apparatus according to the first embodiment of the invention;
Fig. 8 is a block diagram showing a first constitution of a rotational angle measurement apparatus for correcting a pin-angle error α according to a second embodiment of the invention;
Fig. 9 is an explanatory view for an estimation accuracy of the amount of pin-angle error α in the rotational angle measurement apparatus according to the second embodiment of the invention;
Fig. 10 is an explanatory view for an estimation accuracy of the amount of pin-angle error α in the rotational angle measurement apparatus according to the second embodiment of the invention;
Fig. 11 is a block diagram showing a first constitution of a rotational angle measurement apparatus for examining the pin-angle error α and correcting the pin-angle error α according to a third
embodiment of the invention;
Fig. 12 is an explanatory view for the waveform of a ratio r = Vy/Vx of a signal in the rotational angle measurement apparatus according to the first embodiment of the invention;
Fig. 13 is a block diagram showing a second constitution of the rotational angle measurement apparatus for examining the pin-angle error α according to the third embodiment of the invention;
Fig. 14 is an explanatory view for a window function used in a window function processing unit of the rotational angle measurement apparatus according to the third embodiment of the invention;
Fig. 15 is an explanatory view for the window function used in the window function processing unit of the rotational angle measurement apparatus according to the third embodiment of the invention;
Fig. 16 is an explanatory view for the estimation accuracy of the amount of pin-angle error α in the rotational angle measurement apparatus according to the third embodiment of the invention;
Fig. 17 is an explanatory view for the estimation accuracy of the amount of pin-angle error α in the rotational angle measurement apparatus according to the third embodiment of the invention;
Fig. 18 is an explanatory view for the estimation accuracy of the amount of pin-angle error α in the rotational angle measurement apparatus according to the third embodiment of the invention;
Fig. 19 is a block diagram showing a second constitution of a rotational angle measurement apparatus for examining a pin-angle error α and correcting the pin-angle error α according to a fourth
embodiment of the invention;
Fig. 20 is a block diagram showing a second constitution of a rotational angle measurement apparatus for correcting a pin-angle error α according to a fifth embodiment of the invention;
Figs. 21A and 21B are explanatory views for the estimation accuracy of the amount of pin-angle error α in the rotational angle measurement apparatus according to the fifth embodiment of the invention;
Fig. 22 is an explanatory view for the estimation accuracy of the amount of pin-angle error α in the rotational angle measurement apparatus according to the fifth embodiment of the invention;
Fig. 23 is a block diagram showing a third constitution of a rotational angle measurement apparatus for examining a pin-angle error α and correcting the pin-angle error α according to a sixth
embodiment of the invention;
Fig. 24 is a block diagram showing the constitution of a rotational angle measurement apparatus according to a seventh embodiment of the invention;
Fig. 25 is a block diagram showing a third constitution of a rotational angle measurement apparatus for correcting a pin-angle error α according to an eighth embodiment of the invention;
Fig. 26 is a constitutional view of a motor system using the rotational angle measurement apparatus according to each of the embodiments of the invention;
Fig. 27 is a constitutional view of a motor system using the rotational angle measurement apparatus according to each of the embodiments of the invention;
Fig. 28 is a constitutional view of an electric power steering system using a rotational angle measurement apparatus according to each of the embodiments of the invention; and
Fig. 29 is an explanatory view of an inspection system upon manufacturing a magnetic sensor using the rotational angle measurement apparatus according to each of the embodiments of the invention.
The constitution and the operation of a rotational angle measurement apparatus according to a first embodiment of the invention are to be described with reference to Figs. 5 to 7.
First of all, a first constitution of the rotational angle measurement apparatus for examining a pin-angle error α according to this embodiment is to be described with reference to Fig. 5.
The following abbreviations are used in Figs. 5, 8, 11, 13, 19, 20, 23, 24, and 25:
"ROT. AGL. MEA. APPR" stands for "rotational angle measurement apparatus"; "MAG.SENS" stands for "magnetic sensor"; "DETC.CKT" stands for "detection circuit unit";
"SIG.PROC" stands for "signal processing unit"; "AVR." stands for "averaging unit"; "DUR.DETM" stands for "duration-determination unit"; "MEM" stands for "parameter-storing unit".
Fig. 5 is a block diagram showing the first constitution of the rotational angle measurement apparatus for examining the pin-angle error α according to the first embodiment of the invention.
A rotational angle measurement apparatus 201D of this embodiment has a magnetic sensor 301 and a detection circuit unit 302D. The detection circuit unit 302D has a signal processing unit 303D. The
magnetic sensor 301 has two bridges (COS bridge 60A and SIN bridge 60B) comprising GMR elements. A differential amplifier 351A detects a difference voltage between terminals V1 and V2 of the COS
bridge to output a difference signal Vx, in which it is set as Vx = -ΔVc = -(V2-V1). A differential amplifier 351B detects a difference voltage between terminals V1 and V2 of the SIN bridge to output
a difference signal Vy, in which it is set as Vy = ΔVs.
In the present specification, the difference signals Vx and Vy are referred to as output signals of the respective bridges.
The constitution of the magnetic sensor used in the rotational angle measurement apparatus according to this embodiment is to be described with reference to Figs. 6A and 6B.
Figs. 6A and 6B are constitutional views of the magnetic sensor used in the rotational angle measurement apparatus according to the first embodiment of the invention.
The magnetic sensor used in this embodiment comprises a COS bridge 60A shown in Fig. 6A and a SIN bridge 60B shown in Fig. 6B.
The pin angle of magneto-resistance elements R1 (51-1) and R3 (51-3) constituting the COS bridge 60A is set to θp = 0, and the pin angle of the magneto-resistance elements R2 (51-2) and R4 (51-4) is
set as: θp = 180°.
The pin angle of magneto-resistance elements R1 (52-1) and R3 (52-3) constituting the SIN bridge 60B is set to θp = 90°, and the pin angle of the magneto-resistance elements R2 (52-2) and R4 (52-4)
is set as: θp = 270°.
As described above, the actual magnetic sensor contains an error in the setting of the pin angle. The pin-angle error (error) of each of the magneto-resistance elements is assumed as αi (i = 1 to 4).
That is, as shown in Figs. 6A and 6B, the respective pin angles of the COS bridge are assumed as θp = 0° - α1 and 180° - α2, and the respective pin angles of the SIN bridge are assumed as θp = 90° - α3 and 270° - α4.
The pin angle is set, for example, by setting the magnetization direction θp through application of an external magnetic field upon depositing the pinned magnetic layer. Accordingly, the pin-angle error αi is identical for the magneto-resistance elements of an identical pin angle in each of the bridges. Therefore, the model having four types of pin-angle setting error αi as shown in Figs. 6A and 6B is valid without loss of generality.
In this embodiment, the error due to the pin-angle error is detected in the rotational angle measurement apparatus using a magnetic sensor having pin-angle errors as shown in Figs. 6A and 6B. Further, in the other rotational angle measurement apparatuses to be described later, the output rotational angle is corrected for the error due to the detected pin-angle error.
First, it is shown that the problem of four types of pin-angle error αi (i = 1 to 4) reduces to the problem of a single pin-angle error α. The effect of the two pin-angle errors in the COS bridge shown in Fig. 6A is described first.
When a portion depending on the direction of the magnetic field is expanded and arranged, the following equation (17) can be obtained:
Assuming A = cosα1 + cosα2, B = sinα1 + sinα2, and r = SQRT(A^2 + B^2), the equation (17) is represented as:
Then, the amplitude r of the equation (18) is estimated. In a case where α1 = α2 (that is, no pin-angle error within the bridge), r = 2. In the case where the pin-angle error in the bridge is 4°, for example, α1 = +2° and α2 = -2°, r = 2 × 0.9994, an amplitude difference of only 0.06%. This level cannot be detected experimentally, so for a pin-angle error of 4° there is no substantial amplitude variation. Also in the case where α1 = +5° and α2 = -5° (a pin-angle error of 10°), r = 2 × 0.996, again with no substantial amplitude variation. Accordingly, when the pin-angle error in the bridge is 10° or less, there is no substantial amplitude variation, and only the phase variation needs to be taken into consideration.
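These amplitude figures are easy to verify numerically. The following sketch (the function name is illustrative) evaluates r = SQRT(A^2 + B^2) with A = cosα1 + cosα2 and B = sinα1 + sinα2 for the cases quoted above:

```python
import math

def bridge_amplitude(a1_deg, a2_deg):
    """Amplitude r = SQRT(A^2 + B^2) of the bridge output for pin-angle
    errors a1, a2, where A = cos(a1) + cos(a2) and B = sin(a1) + sin(a2)."""
    a1, a2 = math.radians(a1_deg), math.radians(a2_deg)
    return math.hypot(math.cos(a1) + math.cos(a2),
                      math.sin(a1) + math.sin(a2))

for a1, a2 in [(0.0, 0.0), (2.0, -2.0), (5.0, -5.0)]:
    r = bridge_amplitude(a1, a2)
    print(f"a1={a1:+}, a2={a2:+}: r = {r:.4f}, deviation {100 * (1 - r / 2):.2f}%")
```

For ±2° the deviation from r = 2 is about 0.06%, and for ±5° about 0.38%, consistent with the levels described above.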
That is, it can be seen that the output signal of the COS bridge where a pin-angle error is present in the bridge may be considered on the coordinate system with the average value for two pin-angle
errors as the angle origin. This is also applicable to the SIN bridge output.
In the following description, the angle origin of the coordinate system is referred to as "the reference angle of pinned magnetic layer of a bridge".
As can be seen from the result described above, the angle origin of the COS bridge 60A is αc represented by the equation (20), and the angle origin of the SIN bridge 60B moves to αs = (α3 + α4)/2.
Fig. 7 schematically shows this situation. In Fig. 7, the effective coordinate axes are denoted by dotted lines. The X axis 70 of the effective coordinate system serves as the reference angle of the pinned magnetic layer of a bridge.
Referring to Fig. 7, the ratio between the signal Vx of the COS bridge and the signal Vy of the SIN bridge is represented by the following equation.
in which α = αs - αc.
As described above, also when 4 types of pin-angle error αi (i = 1 to 4) are included, correction can be made by the pin angle α represented by the equations (20) and (21).
In this relation, θ = θ' + αc, where αc is unknown. However, αc can be determined easily by correlating the origin of the rotational angle measurement apparatus with the system origin of the equipment to which the rotational sensor is applied.
From the result described above, signals from the COS bridge having the pin-angle errors α1, α2, and signals from the SIN bridge having the pin-angle errors α3, α4 can be defined by the following
equations (22) and (23).
in which C is a proportional constant, α = αs - αc, αc = (α1 + α2)/2, and αs = (α3 + α4)/2.
Assuming Vx = -ΔVc and Vy = ΔVs, and determining the ratio Ryx of Vy to Vx, the ratio is defined as:
When a sin function for the numerator is expanded, the following equation (25) can be obtained.
Since tanθ is an odd function, the first term is reduced to zero by averaging the equation (25) over the range θ = 0 to 360°, and therefore, sinα is determined. This is represented by the equation:
Here, average( ) represents averaging of the first argument over the interval given by the second argument. The averaging interval [0, 2π) means "from 0 up to, but not including, 2π"; 2π is excluded to avoid counting the point θ = 0 twice.
According to the equation (26), a pin-angle error α to be determined is obtained. It is set in this embodiment as β = sinα. As will be described later, when correction is conducted based on the
pin-angle error α during the operation of the GMR rotational sensor, β = sinα is used. Accordingly, in the actual correction, it may suffice to determine β and arcsine calculation is not necessary.
As can be seen from the equation (25), the equation (26) determines a barycenter of the ratio Ryx. Accordingly, the average in the equation (26) may be computed by sampling at equal intervals with respect to θ. For example, Ryx may be sampled at a constant time interval while rotating a magnetic field generator at a constant angular velocity.
In the actual calculation of the correction coefficient, since Ryx diverges in the vicinity of Vx = 0 in the equations (25) and (26), a conditional operation based on the absolute values of Vx and Vy is introduced. That is, the condition is:
In the equation (27), since one-half of the sampling points is taken in the interval [0, 2π), the value of the equation (27) is equal to β = sinα in view of the odd-function nature of tanθ in the equation (23).
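The averaging idea can be sketched as follows. This is an illustrative reconstruction, not the exact form of the equation (27): samples are taken at a constant rate over one full rotation, offset by half a sampling step so that no sample falls exactly on Vx = 0, and the odd tanθ term of the equation (25) cancels pairwise in the mean:

```python
import math

def estimate_beta(samples):
    """Estimate beta = sin(alpha) from (Vx, Vy) pairs taken at a constant
    rate over exactly one rotation of the field. Since Vy/Vx =
    tan(theta)*cos(alpha) + sin(alpha) (equation (25)) and tan is odd,
    the tan contributions of symmetric samples cancel in the mean."""
    return sum(vy / vx for vx, vy in samples) / len(samples)

# Synthetic bridge signals for an assumed pin-angle error alpha = 2 deg;
# the half-step offset keeps every sample away from Vx = 0 exactly.
alpha = math.radians(2.0)
n = 360
thetas = [(k + 0.5) * 2.0 * math.pi / n for k in range(n)]
samples = [(math.cos(t), math.sin(t + alpha)) for t in thetas]
print(round(estimate_beta(samples), 6))  # ≈ sin(2°) ≈ 0.034899
```

In practice β can be determined once in this way and stored in the parameter-storing unit, so no arcsine evaluation is needed during operation.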
Then, the constitution and the operation of the signal processing unit 303D are to be described again with reference to Fig. 5.
The output signal Vx of the COS bridge, that is, the output signal Vx of the differential amplifier 351A is defined as the input signal Vx to the signal processing unit 303D, and the output signal Vy
of the SIN bridge, that is, the output signal Vy of the differential amplifier 351B is defined as the input signal Vy to the signal processing unit 303D.
The signal processing unit 303D has a ratio-calculation unit 381, an averaging unit 386, a duration-determination unit 387, and a parameter-storing unit 390.
The ratio-calculation unit 381 receives the input signals Vx, Vy inputted to the signal processing unit 303D and calculates the ratio Vy/Vx. Specifically, the signals Vx, Vy are inputted to an A/D
converter of a microcontroller and the ratio-calculation unit 381 may be disposed in the microcontroller. Upon calculation of the ratio Vy/Vx, the calculation error can be reduced by the conditional
branching based on comparison of the absolute values as shown in the equation (27).
Then, the averaging unit 386 receives the ratio r = Vy/Vx and averages it. Averaging is conducted over the duration in which the direction of the magnetic field turns for one rotation. This rotational duration is determined by the duration-determination unit 387. Specifically, one duration is taken to end when the voltage of the signal Vx has passed the starting voltage twice; since the signal Vx is proportional to cosθ, passing the same value twice corresponds to one cycle. As shown in the equation (27), the average value is equal to the sine of the pin-angle error α (β = sinα).
The duration for the averaging processing may also be one in which the direction of the magnetic field rotates a plurality of times. When the averaging duration is an integer multiple of 360°, that is, [0, 2Nrπ), the obtained average value is equal to the sine of the pin-angle error α (β = sinα), since the first term in the equation (25) is reduced to zero. Here Nr is an integer of 1 or greater, the number of cycles of rotation of the magnetic field direction. Further, rotating the magnetic field a plurality of times increases the number of sampled data points to be averaged, which improves the calculation accuracy of the β value.
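The duration detection can be sketched as follows. This is a minimal, noise-free illustration (a practical version would add hysteresis against noise, and a start point at an extremum of Vx must be avoided, since there the signal touches the starting value without crossing it):

```python
import math

def one_rotation_span(vx_samples):
    """Return the index of the first sample past one full rotation: the
    sample at which Vx has crossed its starting value twice. Since Vx is
    proportional to cos(theta), each voltage value occurs twice per cycle."""
    v0 = vx_samples[0]
    crossings = 0
    prev = vx_samples[1] - v0
    for i in range(2, len(vx_samples)):
        cur = vx_samples[i] - v0
        if prev * cur < 0.0:  # sign change: Vx passed the starting value
            crossings += 1
            if crossings == 2:
                return i
        prev = cur
    return None  # less than one rotation in the data

# Vx ∝ cos(theta), sampled every 3.7 degrees starting from theta = 50 deg
vx_samples = [math.cos(math.radians(50.0 + 3.7 * k)) for k in range(120)]
print(one_rotation_span(vx_samples))  # 98, just past 360/3.7 ≈ 97.3 samples
```

The non-integer step of 3.7° per sample is chosen here so that no sample lands exactly on the starting value, which keeps the sign-change test unambiguous.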
The thus obtained β value (sine value for the pin-angle error α) is stored in the parameter-storing unit 390.
The step of obtaining the parameter by determining the pin-angle error α as described above in this embodiment has the following features.
1. (a) In the step of determining the amount of pin-angle error α (Fig. 5), the value of the angle origin is not required. This is because the output signal of the COS bridge having pin-angle errors
in the bridge can be processed on the coordinate system whose angle origin is the average value for the two pin-angle errors. Accordingly, no encoder is required, and it may suffice to conduct
sampling at a constant time interval by rotating a magnet at a constant velocity. Therefore, on-site correction in a state assembled in an application system is also possible.
2. (b) Since the calculation for trigonometric function is not necessary, the amount of calculation operation is small.
3. (c) Since parameter fitting is not conducted, the α value is determined uniquely.
As described above, according to this embodiment, the error generated due to the pin-angle error of the rotational angle measurement apparatus can be corrected without using an encoder for calibration, and with a small amount of calculation.
The constitution and the operation of a rotational angle measurement apparatus according to a second embodiment of the invention are to be described with reference to Figs. 8 to 10.
First, a first constitution of the rotational angle measurement apparatus for correcting the pin-angle error α according to this embodiment is to be described with reference to Fig. 8.
Fig. 8 is a block diagram showing the first constitution of the rotational angle measurement apparatus for correcting the pin-angle error α according to the second embodiment of the invention.
Fig. 8 shows a circuit constitution for executing correction processing during operation as a rotational angle sensor in which a rotational angle measurement value is corrected by using the sine β (=
sinα) of the error α determined by the constitution of Fig. 5.
A rotational angle measurement apparatus 201M of this embodiment includes a magnetic sensor 301 and a detection circuit unit 302M. The detection circuit unit 302M has a signal processing unit 303M.
The magnetic sensor 301 has two bridges (COS bridge and SIN bridge) each comprising GMR elements. A differential amplifier 351A detects a difference voltage between terminals V1, V2 of the COS bridge
and outputs a difference signal Vx. In the same manner, a differential amplifier 351B detects a difference voltage between terminals V1 and V2 of the SIN bridge and outputs a difference signal Vy. In
the present specification, the difference signals Vx and Vy are referred to as output signals of the respective bridges. The bridge output signals Vx and Vy are input signals Vx and Vy inputted to
the signal processing unit 303M.
A ratio-calculation unit 381 receives input signals Vx and Vy inputted to the signal processing unit 303M and determines a ratio Vy/Vx. Specifically, the signals Vx and Vy are inputted to an A/D
converter of a microcontroller and a ratio-calculation unit 381 may be disposed in the microcontroller. Then, a parameter correction unit 382 reads out a correction parameter β stored in a
parameter-storing unit 390 and conducts the correction processing. Specifically, the parameter β is subtracted from the ratio Vy/Vx. Then, an atan-processing unit 383 conducts arctangent processing
to calculate an angle of magnetic field θ.
The atan-processing unit 383 calculates an angular value θ corrected for the pin-angle error by the calculation as follows:
In this specification, the processing of the equation (28) is deemed to be a processing of appropriately outputting a value in the 4-quadrant over θ = 0 to 360° as shown in the following equation.
That is, θ is equivalent to the following equation (29).
θ = atan2 (y, x) is a function of appropriately outputting the value: θ = 0 to 360° (or -180 to 180°) depending on whether the arguments x, y are positive or negative. For example, when both of x and
y are positive, atan2 (y, x) = ArcTan (y/x), whereas when both of x and y are negative, atan2 (y, x) = ArcTan (y/x) + 180°.
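As an illustrative sketch (with assumed signal values, not the patented implementation itself), the subtraction of β followed by the atan2 processing of the equations (28) and (29) can be combined by rewriting Vy/Vx - β as (Vy - β·Vx)/Vx, which keeps the computation well-behaved near Vx = 0:

```python
import math

def corrected_angle(vx, vy, beta):
    """Pin-angle-corrected field angle in degrees (0-360). Implements
    tan(theta) = Vy/Vx - beta as theta = atan2(Vy - beta*Vx, Vx), so the
    four quadrants are resolved and Vx = 0 causes no division."""
    return math.degrees(math.atan2(vy - beta * vx, vx)) % 360.0

# Signals with an assumed pin-angle error alpha = 2 deg at a true 300 deg
alpha = math.radians(2.0)
theta = math.radians(300.0)
vx, vy = math.cos(theta), math.sin(theta + alpha)
beta = math.sin(alpha)
print(round(corrected_angle(vx, vy, beta), 3))  # close to 300.0
```

The small residual deviation comes from the cosα ≈ 1 approximation of the equation (28), which the text states is effective for |α| ≤ 4°.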
The equation (28) is equivalent to approximation of cosα = 1 in the equation (25). According to the inventor's study, this approximation is effective in the case of |α| ≤ 4°. This is to be described
later with reference to data.
That is, the correction method by the circuit in Fig. 8 is particularly preferred since a sufficient accuracy can be obtained when the method is applied to a case where the difference α of the
reference angle of the pinned magnetic layer of each bridge in the magnetic sensor is 4° or less.
As described above, the correction processing during a sensing operation in the correction method of the embodiment has the following features.
1. (a) The operation added to the correction processing is only the subtraction of the β value, and therefore, the burden on the correction operation process during sensing operation that requires
real-time response is extremely small.
2. (b) Since the correction value β does not depend on the angle of magnetic field θ, the angle origin is not required in the correction processing. Accordingly, even when the angle origin has an
error, the output angle value is correct as a relative value.
As apparent from Fig. 8 and the equation (28), the feature of this embodiment is to obtain a more correct angular value θ, corrected for the error due to the pin-angle error, by subtracting a constant value (β) from the ratio of the output signals Vx and Vy from the COS bridge and the SIN bridge respectively. In a case where the β value is negative, a constant value is added.
In the foregoing and subsequent descriptions, the output signal Vx from the bridge means the difference signal Vx = V1 - V2 between the terminals V1 and V2 of the bridge, or a signal obtained by
multiplying an appropriate amplification factor to the difference signal. In Fig. 8, this signal corresponds to the output signal of the differential amplifier 351A. The output signal Vy of the SIN
bridge is a difference signal Vy = V2 - V1 or a signal obtained by multiplying an appropriate amplification factor to the difference signal.
Assuming the angle outputted from the rotational angle measurement apparatus in this embodiment is θ, tanθ equals (Vy/Vx - β) as shown in the equation (28). Accordingly, the difference between the ratio Vy/Vx of the output signals from the COS bridge and the SIN bridge and tanθ for the output value θ of the rotational angle measurement apparatus is a constant non-zero value β that does not depend on the rotational angle. That is, focusing on the relation between the input and the output of the signal processing unit 303M shown in Fig. 8, the input signals are Vx and Vy, the output is θ, and the difference between the ratio Vy/Vx of the input signals and tanθ of the output is β. As can be seen from the equation (28), β is a constant non-zero value not depending on the rotational angle. Therefore, the correction method shown in Fig. 8 and represented by the equation (28) is equivalent to requiring that the difference between the ratio Vy/Vx of the input signals to the signal processing unit 303M and tanθ for the output value θ of the rotational angle measurement apparatus be the constant non-zero value β, independent of the rotational angle.
Since β = 0 corresponds to a case in which the correction processing is not conducted, when the process of this embodiment is conducted, the β value is a constant non-zero value.
While the relation between the equation (28) and the equation (29) is correct, the ratio Vy/Vx diverges as Vx approaches zero. Accordingly, the calculation error increases when the calculation is conducted with a finite number of digits. Further, when the circuit operation is tested, the effect of the measurement error is expanded. Then, in the case of |Vx| < |Vy|, the equation (28) is transformed as in the following equation (30) by using the ratio r2 = Vx/Vy.
That is, for testing the operation of the circuit in Fig. 8, the equation (28) may be used in the case of |Vx| ≥ |Vy| and the equation (30) may be used in the case of |Vx| < |Vy|. Then, the operation
can be tested with a minimum effect of the calculation error or the measurement error. Since the relation using the atan2 function of the equation (29) contains conditional branching process
depending on the magnitude relation of |Vx|, |Vy| in the internal algorithm of the atan2 function, equation (29) is valid in any of the cases.
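As a concrete illustration of the processing of the equation (28), the correction can be sketched as follows. This is a minimal Python sketch, not the patent's actual implementation; the signal model Vx = cosθ, Vy = sin(θ + α) and the function name `corrected_angle_simple` are assumptions introduced here:

```python
import math

def corrected_angle_simple(vx, vy, beta):
    """Sketch of the eq. (28) correction: tan(theta) = Vy/Vx - beta.

    Rewriting (Vy/Vx - beta) as (Vy - beta*Vx)/Vx and using atan2
    avoids the divergence of the ratio as Vx approaches zero and
    gives the four-quadrant angle directly (cf. the eq. (29) form).
    """
    return math.atan2(vy - beta * vx, vx) % (2.0 * math.pi)
```

For a pin-angle error of α = 4° (β = sinα), the residual error of this β-subtraction correction stays below about 0.1°, consistent with the accuracy discussed for the correction circuit of Fig. 8.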
While a constitution in which the differential amplifiers 351A and 351B are included in the detection circuit unit 302M is shown in Fig. 8, it may be constituted such that the differential amplifiers 351A and 351B are included in the magnetic sensor 301 and the output signals Vx and Vy are transmitted by way of wirings and inputted to the detection circuit unit 302M. The constitution described above is less affected by external noise because the output impedance of the differential amplifiers is lowered.
Then, description is to be made to an estimation accuracy of the pin-angle error α in the rotational angle measurement apparatus according to this embodiment with reference to Figs. 9 and 10.
Figs. 9 and 10 are explanatory views for the estimation accuracy of the pin-angle error α in the rotational angle measurement apparatus according to the second embodiment of the invention.
In this simulation, Vx and Vy signals including the pin-angle error α are generated and the signals are processed as shown in Fig. 8 to determine an estimated value αe of the pin-angle error. The
estimation error (αe - α) is determined as described above.
Fig. 9 shows the result of the simulation. Fig. 9 is a graph formed by changing the pin-angle error α in the range from 0 to 2° and then plotting the estimation errors (αe - α). The amount of
estimation error is determined by using the number of sampling signals (number of sampling points) N during one rotation of the direction of the magnetic field as a parameter. When the number of
sampling points is N = 50, the α value is estimated correctly when α < 2°. However, when the number of sampling points N is increased to 100 points, an estimation error of about 1° is generated for α
≥ 1°.
Then, Fig. 10 shows the result of examining the estimation error when changing the starting angle θstart. The starting angle θstart means that the sampling range is set to [θstart, 2π + θstart). The number of sampling points is set to N = 100. As a result, as shown in Fig. 10, when the starting angle θstart is 5°, the estimation error increases to 0.5° or more, and the estimation error increases even when the amount of error is α < 1°. In the actual correction coefficient calculation, since the origin for the direction of the magnetic field is unknown, it is necessary that the α value can be estimated accurately for any θstart value. When the starting angle θstart is 4° or less, the estimation error is small and within a range of practical use. This is to be described later with reference to Fig. 22.
As described above, according to this embodiment, an accurate rotational angle can be measured even by using a magnetic sensor including an error in the pin angle setting.
Further, since tolerance for setting the pin angle increases upon manufacturing the magnetic sensor, this facilitates manufacture.
Further, the error due to the pin-angle error can be corrected with a small amount of calculation operation.
Further, correction for the error generated by the pin-angle error of the rotational angle measurement apparatus can be attained without using an encoder for calibration.
Then, description is to be made to a first constitution of a rotational angle measurement apparatus for examining a pin-angle error α and correcting the pin-angle error α according to a third
embodiment of the invention with reference to Fig. 11.
Fig. 11 is a block diagram showing the first constitution of the rotational angle measurement apparatus for examining the pin-angle error α and correcting the pin-angle error α according to the third
embodiment of the invention. In Fig. 11, identical reference numerals to those of Figs. 5 and 8 denote identical portions.
A rotational angle measurement apparatus 201DM of this embodiment includes a magnetic sensor 301 and a detection circuit unit 302DM. The detection circuit unit 302DM has a signal processing unit
303DM. The magnetic sensor 301 has two bridges (COS bridge and SIN bridge) each comprising GMR elements. A differential amplifier 351A detects a difference voltage between terminals V1 and V2 of the
COS bridge and outputs a difference signal Vx. In the same manner, a differential amplifier 351B detects a difference voltage between terminals V1 and V2 of the SIN bridge and outputs a difference
signal Vy.
The signal processing unit 303DM has a signal processing unit 303D for detecting a pin-angle error α and a signal processing unit 303M for correcting the detected pin-angle error α. The signal
processing unit 303D has a constitution described with reference to Fig. 5, and the signal processing unit 303M has a constitution described with reference to Fig. 8. That is, the signal processing
unit 303D has a ratio-calculation unit 381, an averaging unit 386, a duration-determination unit 387, and a parameter-storing unit 390. The operation of the signal processing unit 303D is as
described in Fig. 5. The signal processing unit 303M has the ratio-calculation unit 381, a parameter correction unit 382, an atan-processing unit 383, and a parameter storing unit 390. The operation
of the signal processing unit 303M is similar to what has been described in Fig. 8.
As described above, according to this embodiment, an accurate rotational angle can be measured even by using a magnetic sensor including an error in the pin-angle setting.
Further, since the tolerance for setting the pin angle increases upon manufacturing the magnetic sensor, this facilitates manufacture.
Further, the error due to the pin-angle error can be corrected with a small amount of calculation operation.
Further, correction for the error generated by the pin-angle error of the rotational angle measurement apparatus can be attained without using an encoder for calibration.
Then, the constitution and the operation of the rotational angle measurement apparatus according to the third embodiment of the invention are to be described with reference to Figs. 12 to 18.
As described in Figs. 9 and 10, in the method of the first embodiment (Fig. 5), the range in which the pin-angle error can be estimated with sufficient accuracy is restricted to some extent.
Then, the present inventors have made an earnest study of the cause of the degraded estimation accuracy and have found the following points.
Then, description is to be made to the waveform of the signal ratio r = Vy/Vx in the rotational angle measurement apparatus according to the first embodiment with reference to Fig. 12.
Fig. 12 is an explanatory view for the waveform of the signal ratio r = Vy/Vx in the rotational angle measurement apparatus according to the first embodiment of the invention.
In Fig. 12, there are segments in which the signal ratio r = Vy/Vx is not calculated, owing to the conditional branching in the equation (27) with respect to the absolute values |Vx| and |Vy|. The equation (27) calculates the average of the ratio r = r(θ) in the form shown in Fig. 12. As can be seen from Fig. 12, since the ratio r(θ) has good symmetry, intermediate values cancel out between positive and negative values in the averaging process. Accordingly, the average value is substantially dominated by the data at the several points for the maximum and minimum values of r(θ) indicated by the symbol "o" in Fig. 12. Since the maximum and minimum values of r(θ) change greatly with a slight change of θ, they are significantly affected by processing conditions such as the number of sampling points for the signals Vx and Vy. As a result, the β value calculated by the equation (26) is affected, resulting in an error in the estimated pin-angle error αe.
Then, description is to be made to a second constitution of the rotational angle measurement apparatus for examining a pin-angle error α according to this embodiment with reference to Fig. 13.
Fig. 13 is a block diagram showing the second constitution of the rotational angle measurement apparatus for examining the pin-angle error α according to the third embodiment of the invention.
A rotational angle measurement apparatus 201DA includes a magnetic sensor 301 and a detection circuit unit 302DA. The detection circuit unit 302DA has a signal processing unit 303DA. The magnetic
sensor 301 has two bridges (COS bridge and SIN bridge) each comprising GMR elements. A differential amplifier 351A detects a difference voltage between terminals V1 and V2 of the COS bridge and
outputs a difference signal Vx. In the same manner, a differential amplifier 351B detects a difference voltage between the terminals V1 and V2 of the SIN bridge and outputs a difference signal Vy. In
the present specification, the difference signals Vx and Vy are referred to as output signals of the respective bridges. The output signals Vx and Vy of the bridges are input signals Vx and Vy
inputted to the signal processing unit.
A ratio-calculation unit 381 receives the input signals Vx, Vy inputted to the signal processing unit and determines the ratio Vy/Vx. Specifically, the signals Vx, Vy are inputted to an AD converter
of a microcontroller, and a ratio-calculation unit 381 may be disposed in the microcontroller. Upon calculation of the ratio Vy/Vx, the calculation error can be decreased by conditional branching
process based on magnitude comparison between absolute values |Vx| and |Vy|.
Then, a window function processing unit 385 receives the ratio r = Vy/Vx and applies an appropriate window function, to be described later with respect to Fig. 14. An averaging unit 386 receives the signals subjected to the window function processing and conducts averaging processing. The averaging processing is conducted for the duration in which the direction of the magnetic field turns for one rotation. The duration is determined by using a duration-determination unit 387 for detecting the duration of one rotation. Specifically, the duration-determination unit 387 determines the duration until the Vx signal voltage has twice passed a value equal to the starting voltage. Since the Vx signal is in proportion to cosθ, passing twice through the identical value corresponds to one cycle. As shown in the equation (27), the average value is equal to the sine of the pin-angle error α (β = sinα).
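The duration determination described above, which detects one rotation from the Vx voltage passing its starting value twice, can be sketched as follows. This is an illustrative Python sketch; the function name `one_rotation_end` and the sampling conditions are assumptions introduced here, not the patent's implementation:

```python
import numpy as np

def one_rotation_end(vx):
    """Return the sample index at which the Vx voltage has passed its
    starting value twice; since Vx is proportional to cos(theta),
    this index marks the end of one full rotation of the field."""
    d = vx - vx[0]                        # deviation from the starting voltage
    # Sign changes of d between consecutive samples mark the passages.
    crossings = np.flatnonzero(d[1:-1] * d[2:] < 0.0) + 1
    return int(crossings[1]) + 1          # sample just after the second passage
```

For example, with samples of cosθ taken every 0.7° starting at θ = 30°, the returned index is close to 360°/0.7° ≈ 514 samples, i.e., one full cycle.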
The duration for the averaging process may also be the duration in which the direction of the magnetic field rotates several times. Since the first term in the equation (25) is reduced to zero when the averaging duration is an integer multiple of 360°, that is, [0, 2Nrπ), the obtained average value is equal to the sine of the pin-angle error α (β = sinα). Here, Nr is an integer of 1 or greater, which is the number of cycles of the rotation of the direction of the magnetic field. Further, since the number of sampling points of the data to be averaged increases by rotating a plurality of times, this also has the effect of improving the calculation accuracy for the β value.
Next, a window function W(r) used in the window function processing unit 385 in the rotational angle measurement apparatus according to this embodiment is described with reference to Figs. 14 and 15.
Figs. 14 and 15 are explanatory views for the window function used in the window function processing unit of the rotational angle measurement apparatus according to the third embodiment of the invention.
As a specific example of the window function W(r) used in the window function processing unit 385, the following equation (31) is used.
Fig. 14 shows a function form of the window function W(r) represented by the equation (31). The requirements for the window function applied to the window function processing unit 385 of the
processing circuit 303DA in Fig. 13 are the following two conditions:
(a) It is an even function symmetrical with respect to r = 0.
(b) It has a function form in which the value becomes smaller toward both ends of the input range.
As shown in Fig. 14, the window function W(r) of the equation (31) satisfies the conditions (a) and (b).
Fig. 15 is a graph formed by plotting "r × W(r)", prepared by multiplying the ratio r by the window function W(r) of the equation (31), relative to the angle of magnetic field θ. It can be seen that the discontinuous points present in the ratio r are eliminated by multiplying by the window function, forming a smooth waveform with respect to θ. Accordingly, even when conditions such as the number of sampling points or the sampling start angle are changed, the average value of r × W(r) scarcely changes. That is, a stable and robust estimation method is obtained by applying the window function, which is more preferable.
The process of the signal processing circuit in Fig. 13, which is made robust by applying the window function, can be described by the following equation (32).
The coefficient A is a conversion coefficient introduced together with the window function; A = 5.5 when the window function of the equation (31) is used. When the form of the window function is changed, the coefficient A also changes.
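The equation (32) processing can be sketched as follows. This is an illustrative Python sketch only: the window function of the equation (31) is not reproduced here, so a hypothetical Gaussian-shaped window satisfying conditions (a) and (b) is assumed instead, and the conversion coefficient A is calibrated numerically for that window rather than set to 5.5:

```python
import numpy as np

def W(r):
    """Hypothetical window: even in r and decaying toward both ends,
    as required by conditions (a) and (b); not the eq. (31) form."""
    return np.exp(-0.5 * r * r)

def conversion_coefficient(n=4000):
    """Calibrate A for the chosen window: with r = tan(theta) + beta,
    A is the reciprocal slope of mean(r * W(r)) with respect to beta."""
    t = np.tan((np.arange(n) + 0.5) * 2.0 * np.pi / n)
    db = 1e-4
    f = lambda b: np.mean((t + b) * W(t + b))
    return 2.0 * db / (f(db) - f(-db))

def estimate_beta(vx, vy, A):
    """Eq. (32) form: beta = A * average of (Vy/Vx) * W(Vy/Vx)
    over one full rotation of the field direction."""
    r = vy / vx
    return A * np.mean(r * W(r))
```

For signals Vx = cosθ and Vy = sin(θ + α) with a small pin-angle error α, this estimator recovers β = sinα with small error, since the window suppresses the diverging samples of r near Vx = 0.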
Then, description is to be made to the estimation accuracy for the amount of the pin-angle error α in the rotational angle measurement apparatus according to this embodiment with reference to Figs. 16 to 18.
Figs. 16 to 18 are explanatory views for the estimation accuracy of the amount of pin-angle error α in the rotational angle measurement apparatus according to the third embodiment of the invention.
The estimation error when the pin-angle error α is estimated by the constitution shown in Fig. 13, or by the equation (32), is to be described. The method of determining the estimation error (αe - α) is as described above.
Fig. 16 shows a result of examining the estimation error (αe - α) of the pin-angle error α while changing the number of sampling points N. When the number of sampling points is N = 50, the estimation error increases as the pin-angle error α increases. On the other hand, when N = 100, the estimation error is within ±0.1°, and sufficient accuracy is obtained. The accuracy is further enhanced at N = 200, where the error is reduced to 0.03° or less. Further, it can be seen that an estimation accuracy of ±0.2' can be obtained by setting the number of sampling points as N ≥
Fig. 17 shows a result of examining the dependence on the starting angle θstart. The pin-angle error is estimated while changing the sampling range to [θstart, 360° + θstart). The estimation error falls within a range of ±0.1° even when the starting angle is changed over θstart = 0° to 22°, and it can be seen that the error can be estimated stably owing to the introduction of the window function W(r).
Fig. 18 shows the result of examining the effect of noise. The effect of superimposing noise on the signals Vx and Vy is examined. Noise components at an amplitude ratio b (%) with respect to the cos or sin component are superimposed on the signal voltages Vx and Vy, and the estimated value αe of the pin-angle error α is determined based on the signals (Vx, Vy) including the noise. Fig. 18 shows the estimation error. The estimation error is ±0.1° or less at a noise amplitude ratio of b = 0.5%, and ±0.25° or less at b = 1%. As the noise amplitude increases to b = 2%, the estimation error increases to ±1°. It can be seen from Fig. 18 that the pin-angle error can be estimated with sufficient accuracy when the noise is 0.5% or less.
According to the constitution of Fig. 13, that is, a constitution that determines the sine of the pin-angle error α (β = sinα) by averaging the value obtained by multiplying the ratio variable r = Vy/Vx by a window function W(r), an accurate estimated value αe can be given stably and robustly even when various signal-acquisition conditions are changed.
In the parameter estimation processing method shown by the equation (32), the angle of magnetic field θ may cover one rotation, but it may also cover a plurality of rotations, that is, Nr rotations (Nr > 1). With Nr rotations, the number of sampling points substantially increases and the accuracy of the parameter estimation improves, which is further preferable.
As described above, according to this embodiment, correction for the error generated due to the pin-angle error in the rotational angle measurement apparatus can be attained without using an encoder for calibration.
Further, the error due to the pin-angle error can be corrected with a small amount of calculation operation.
Then, description is to be made to the second constitution of a rotational angle measurement apparatus for examining a pin-angle error α and correcting the pin-angle error α according to a fourth
embodiment of the invention with reference to Fig. 19.
Fig. 19 is a block diagram showing a second constitution of the rotational angle measurement apparatus for examining a pin-angle error α and correcting the pin-angle error α according to the fourth
embodiment of the invention. In Fig. 19, identical reference numerals to those in Figs. 8 and 13 denote identical portions.
A rotational angle measurement apparatus 201DMA of this embodiment includes a magnetic sensor 301 and a detection circuit unit 302DMA. The detection circuit unit 302DMA has a signal processing unit
303DMA. The magnetic sensor 301 has two bridges (COS bridge and SIN bridge) each comprising GMR elements. A differential amplifier 351A detects a difference voltage between terminals V1 and V2 of the
COS bridge and outputs a difference signal Vx. In the same manner, a differential amplifier 351B detects a difference voltage between the terminals V1 and V2 of the SIN bridge and outputs a
difference signal Vy.
The signal processing unit 303DMA includes a signal processing unit 303D for detecting a pin-angle error α and a signal processing unit 303M for correcting the detected pin-angle error α. The signal
processing unit 303D has a constitution explained with reference to Fig. 13, and the signal processing unit 303M has a constitution explained with reference to Fig. 8. That is, the signal processing
unit 303D includes a ratio-calculation unit 381, a window function processing unit 385, an averaging unit 386, a duration-determination unit 387, and a parameter-storing unit 390. The operation of
the signal processing unit 303D is as described with reference to Fig. 13. The signal processing unit 303M includes the ratio-calculation unit 381, a parameter correction unit 382, an atan-processing
unit 383, and the parameter-storing unit 390. The operation of the signal processing unit 303M is as described with reference to Fig. 8.
As described above, according to this embodiment, an accurate rotational angle can be measured even by using a magnetic sensor including an error in the pin-angle setting.
Further, since the tolerance in the pin angle setting is increased upon manufacturing the magnetic sensor, this facilitates manufacture.
Further, error due to the pin-angle error can be corrected with a small amount of calculation operation.
Further, correction for the error generated due to the pin-angle error of the rotational angle measurement apparatus can be attained without using an encoder for calibration.
Then, description is to be made to a constitution and an operation of a rotational angle measurement apparatus according to a fifth embodiment of the invention with reference to Figs. 20 to 22.
First, description is to be made to a second constitution of the rotational angle measurement apparatus for correcting a pin-angle error α according to this embodiment with reference to Fig. 20.
Fig. 20 is a block diagram showing the second constitution of a rotational angle measurement apparatus for correcting the pin-angle error α according to the fifth embodiment of the invention.
Fig. 20 shows a circuit constitution for executing the correction processing during operation as a rotational angle sensor, which corrects the rotational angle measurement value by using the sine of the error α (β = sinα) determined by the constitution shown in Fig. 13. This constitution can correct the pin-angle error with good accuracy even when the error is large.
Correction according to the equation (28) is effective in the case where the pin-angle error |α| ≤ 4°. This embodiment can conduct correction effectively even when the pin-angle error |α| > 4°.
A rotational angle measurement apparatus 201MA includes a magnetic sensor 301 and a detection circuit unit 302MA. The detection circuit unit 302MA has a signal processing unit 303MA. The magnetic
sensor 301 has two bridges (COS bridge and SIN bridge) each comprising GMR elements. A differential amplifier 351A detects a difference voltage between terminals V1 and V2 of the COS bridge and
outputs a difference signal Vx. In the same manner, a differential amplifier 351B detects a difference voltage between terminals V1 and V2 of the SIN bridge and outputs a difference signal Vy. In the
present specification, the difference signals Vx and Vy are referred to as output signals of the respective bridges. The output signals Vx and Vy of the bridges are input signals Vx and Vy inputted
to the signal processing unit.
A ratio-calculation unit 381 receives the input signals Vx and Vy inputted to the signal processing unit and determines the ratio Vy/Vx. Specifically, the signals Vx and Vy are inputted to an AD
converter of a microcontroller and the ratio-calculation unit 381 may be disposed in the microcontroller. Then, a parameter correction unit 382 subtracts β from the ratio r and then divides the
difference by a coefficient Bx. The parameters β and Bx are read out from a parameter-storing unit 390.
Then, an atan-processing unit 383 conducts arctangent processing to calculate an angle of magnetic field θ.
Description is to be made specifically. From the equation (25), the following equation (33) is obtained.
in which Bx = SQRT(1 - β²).
Then, according to the equation (33), a value over the four quadrants from 0 to 360° is outputted appropriately in consideration of the signs of Vx and Vy. That is, θ can be expressed by the following equation (34).
The parameter correction unit 382 calculates the content of the brackets in the equation (33). The atan-processing unit 383 conducts processing for outputting the value over the four quadrants from 0 to 360° as represented by the equation (34).
As apparent from Fig. 20, assuming the angle outputted from the rotational angle measurement apparatus 201M according to this embodiment as θ, the following relation is established between tanθ and
the output signals Vx and Vy of the magnetic sensor 301.
in which x = β is a constant non-zero value not depending on the rotational angle θ.
Since β = 0 corresponds to a case in which the correction processing is not conducted, when the processing of this embodiment is conducted, the β value is a constant non-zero value.
While the relation described in the equation (33) and the equation (34) is correct, the ratio Vy/Vx diverges as Vx approaches zero. Accordingly, the calculation error increases when the calculation is conducted with a finite number of digits. Further, when the circuit operation is tested, the effect of the measurement error is expanded. Then, in the case of |Vx| < |Vy|, the equation (33) is transformed by using the ratio r2 = Vx/Vy as described below,
in which r2 = (Vx/Vy).
That is, to test the operation of the circuit in Fig. 20, the equation (33) may be used in the case of |Vx| ≥ |Vy| and the equation (36) may be used in the case of |Vx| < |Vy|. Therefore, the
operation can be tested with a minimum effect of the calculation error or measurement error. Since the relation using the atan2 function of the equation (34) contains conditional branch processing
depending on the magnitude relation between |Vx| and |Vy| in the internal algorithm of the atan2 function, it is valid in any of the cases.
Then, description is to be made to the estimation accuracy for the amount of the pin-angle error α in the rotational angle measurement apparatus according to this embodiment with reference to Figs. 21 and 22.
Figs. 21 and 22 are explanatory views for the estimation accuracy for the amount of the pin-angle error α in the rotational angle measurement apparatus according to the fifth embodiment of the invention.
Fig. 21A is a graph formed by plotting errors of the rotational angle θ after correction for the case of correction by the correction circuit in Fig. 8 (A in the drawing) and a case of correction by
the correction circuit in Fig. 20 (B in the drawing) at the pin-angle error α of 4°.
Fig. 21B is a graph formed by plotting errors of the rotational angle θ after correction for the case of correction by the correction circuit in Fig. 8 (A in the drawing) and a case of correction by
the correction circuit in Fig. 20 (B in the drawing) at the pin-angle error α of 20°.
Fig. 21A shows a case in which the pin-angle error is α = 4°; the error is a maximum of 0.07° when the correction circuit of Fig. 8 is used, that is, with the correction for the β value alone, and sufficient accuracy can be obtained. On the other hand, Fig. 21B shows a case at α = 20°, in which the error reaches a maximum of 1.7° with the correction circuit in Fig. 8. However, when the correction circuit in Fig. 20 is used, the error is zero as shown by the curve B in Fig. 21B and sufficient accuracy can be obtained.
Fig. 22 shows the relation between various pin-angle errors and the maximum error of the output angle θ for each of the correction methods. In the drawing, the curve A shows the result using the correction circuit in Fig. 8, and the curve B shows the case of using the correction circuit in Fig. 20. As can be seen from the drawing, the error falls within 0.1° or less at α ≤ 4° and sufficient accuracy can be obtained by the correction circuit in Fig. 8. On the other hand, in the case of α > 4°, it can be seen that sufficient accuracy can be ensured by using the correction method of Fig. 20 (curve B).
As described above, according to this embodiment, an accurate rotational angle can be measured, by decreasing the estimation error for the pin angle, even when a magnetic sensor including an error in the pin-angle setting is used.
Further, since the tolerance for setting the pin angle increases upon manufacturing the magnetic sensor, this facilitates manufacture.
Further, the error due to the pin-angle error can be corrected with a small amount of calculation operation.
Further, correction for the error generated due to the pin-angle error of the rotational angle measurement apparatus can be attained without using an encoder for calibration.
Then, description is to be made to a third constitution of a rotational angle measurement apparatus for examining a pin-angle error α and correcting the pin-angle error α according to a sixth
embodiment of the invention with reference to Fig. 23.
Fig. 23 is a block diagram showing the third constitution of the rotational angle measurement apparatus for examining the pin-angle error α and correcting the pin-angle error α according to the sixth
embodiment of the invention. In Fig. 23, identical reference numerals to those of Figs. 13 and 20 denote identical portions.
A rotational angle measurement apparatus 201DMB of this embodiment includes a magnetic sensor 301 and a detection circuit unit 302DMB. The detection circuit unit 302DMB has a signal processing unit
303DMB. The magnetic sensor 301 has two bridges (COS bridge and SIN bridge) each comprising GMR elements. A differential amplifier 351A detects a difference voltage between terminals V1 and V2 of the
COS bridge and outputs a difference signal Vx. In the same manner, a differential amplifier 351B detects a difference voltage between terminals V1 and V2 of the SIN bridge and outputs a difference
signal Vy. In the present specification, the difference signals Vx and Vy are referred to as output signals of respective bridges. The output signals Vx and Vy of the bridges are input signals Vx and
Vy inputted to the signal processing unit.
The signal processing unit 303DMB has a signal processing unit 303D for detecting a pin-angle error α and a signal processing unit 303M for correcting the detected pin-angle error α. The signal
processing unit 303D has a constitution described with reference to Fig. 13, and the signal processing unit 303M has a constitution described with reference to Fig. 20. That is, the signal processing
unit 303D has a ratio-calculation unit 381, a window function processing unit 385, an averaging unit 386, a duration-determination unit 387, and a parameter-storing unit 390. The operation of the
signal processing unit 303D is as described in Fig. 13. The signal processing unit 303M has the ratio-calculation unit 381, a parameter correction unit 382, an atan-processing unit 383, and the
parameter storing unit 390. The operation of the signal processing unit 303M is as described in Fig. 20.
As described above, according to this embodiment, an accurate rotational angle can be measured even by using a magnetic sensor including an error in the pin-angle setting.
Further, since the tolerance for setting the pin angle increases upon manufacturing the magnetic sensor, this facilitates manufacture.
Further, the error due to the pin-angle error can be corrected with a small amount of calculation operation.
Further, correction for the error generated due to the pin-angle error of the rotational angle measurement apparatus can be attained without using an encoder for calibration.
Then, the constitution of a rotational angle measurement apparatus according to a seventh embodiment of the invention is to be described with reference to Fig. 24.
Fig. 24 is a block diagram showing the constitution of the rotational angle measurement apparatus according to the seventh embodiment of the invention.
The error in the measurement accuracy of a rotational angle measurement apparatus using the GMR sensor is attributed to the pin-angle error; in some cases, it is also attributed to a signal offset. This embodiment enables measurement at high accuracy by also removing such a cause of error.
The signal offset is generated due to variations in the angle-independent term Rn0 of the GMR elements. The signal offset that may be included in the output signals Vx and Vy of the GMR sensor is to be described below.
When the resistance of a GMR element is separated into a magnetic field-independent term Rn0 and a magnetic field dependent term ΔR and represented as:
The output signal ΔV of the GMR bridge is represented by the following equation (38):
in which C is:
In the equation (38), when the magnetic field-independent resistance components are equal to each other, no offset is generated in signal ΔV since R10 x R30 = R20 x R40 is established. On the other
hand, when R10 x R30 ≠ R20 x R40 due to the variations in resistance values, an offset component, which is independent of the direction of the magnetic field, is generated.
Since the equations (22) and (23) are not valid when the offset is present, the correction algorithm of the equation (27) or the equation (32) is not valid. Accordingly, prior to the application of
the correction algorithm, it is necessary to remove the signal offset.
As can be seen from the equation (37), in the case where the offset is present, the equations (22) and (23) are represented by the following equations (40) and (41).
Since the positive and negative parts of the cos function and the sin function cancel each other when integrated over one cycle, the offset voltages VCofs and VSofs are determined by rotating the direction of the magnetic field from 0 to 360° and averaging the signals. That is, the offset voltages VCofs and VSofs can be calculated by the following equations:
Accordingly, both of the offset voltage attributable to the scattering of the resistance and the pin-angle error can be corrected by the following correction procedures.
(a) The magnetic field is turned for two rotations at a constant angular velocity.
(b) The respective offset voltages bx and by of Vx and Vy are determined during the first rotation according to the equations (42) and (43).
(c) During the second rotation, the values Vx' = Vx - bx and Vy' = Vy - by, obtained by subtracting bx and by from Vx and Vy respectively, are calculated to determine the amount of pin-angle error β relative to Vx' and Vy' according to the algorithm of the equation (27) or the equation (32).
(d) Bx is calculated from the β value according to Bx = SQRT(1 - β^2).
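The two-rotation procedure of steps (a)–(d) can be sketched in Python on simulated signals. The exact forms of the equations (27) and (32) are not reproduced in this excerpt, so the β-estimation step below uses a median of the ratio Vy'/Vx' as a stand-in for the patent's window-function processing and averaging: with Vx' = cos θ and Vy' = sin(θ + α), the ratio equals Bx·tan θ + β, and the tan term is odd-symmetric over a uniform rotation.

```python
import math
from statistics import median

def estimate_offsets(vx_samples, vy_samples):
    """Steps (a)-(b): over one full uniform rotation the cos/sin parts of
    the bridge signals average to zero, so the sample means are the
    offsets bx and by (equations (42) and (43))."""
    n = len(vx_samples)
    return sum(vx_samples) / n, sum(vy_samples) / n

def estimate_beta(vx_samples, vy_samples):
    """Step (c), sketched: with Vx' = cos(theta), Vy' = sin(theta + alpha),
    the ratio r = Vy'/Vx' equals Bx*tan(theta) + beta. Over a uniform
    rotation tan(theta) is odd-symmetric, so a robust average of r
    recovers beta; the median stands in for the patent's window-function
    processing (equations (27)/(32) are not reproduced here)."""
    return median(vy / vx for vy, vx in zip(vy_samples, vx_samples))

# Hypothetical calibration run: the true pin-angle error and offsets
# below are of course unknown to the estimator.
alpha, bx, by = 0.10, 0.05, -0.03           # pin-angle error [rad], offsets
N = 1000
thetas = [2 * math.pi * (k + 0.5) / N for k in range(N)]  # avoids cos = 0

# First rotation: raw bridge signals -> offset voltages (steps a, b)
vx1 = [math.cos(t) + bx for t in thetas]
vy1 = [math.sin(t + alpha) + by for t in thetas]
bx_est, by_est = estimate_offsets(vx1, vy1)

# Second rotation: offset-corrected signals -> beta, then step (d)
vx2 = [v - bx_est for v in vx1]
vy2 = [v - by_est for v in vy1]
beta_est = estimate_beta(vx2, vy2)
Bx_est = math.sqrt(1 - beta_est ** 2)       # (d): Bx = SQRT(1 - beta^2)
```

With the simulated values above, the estimates converge on bx, by, β = sin α, and Bx = cos α.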
Description has been made to an example of turning the direction of the magnetic field for one rotation in the detection step for the offset voltages bx and by, and for one rotation in the detection step for the amount of pin-angle error (correction parameter) β. Alternatively, it is also possible to turn the direction of the magnetic field for (n+m) or more rotations: n rotations (n > 1) in the detection step for the offset voltages bx and by, and then m rotations (m > 1) in the detection step for the amount of pin-angle error (correction parameter) β.
Since the direction of the magnetic field may continue rotating between the offset-voltage detection step and the detection step for the correction parameter β, the direction of the magnetic field is turned (n+m) times in total. When the direction of the magnetic field is turned by plural rotations in each of the detection step for the offset voltages and the detection step for the correction parameter β, this provides the advantage of enhancing the accuracy of each obtained parameter, since the number of sampling points is increased.
The circuit shown in Fig. 24 shows a circuit constitution which is used for examining the pin-angle error α in the case where the magnetic sensor includes a signal offset and a pin-angle error.
A rotational angle measurement apparatus 201DB of this embodiment includes a magnetic sensor 301 and a detection circuit unit 302DB. The detection circuit unit 302DB has a signal processing unit
303DB. As described in Fig. 6A and 6B, the magnetic sensor 301 has two bridges (COS bridge and SIN bridge) each comprising GMR elements. A differential amplifier 351A detects a difference voltage
between terminals V1 and V2 of the COS bridge and outputs a difference signal Vx. In this embodiment, it is set as Vx = -ΔVc = -(V2 - V1). Further, a differential amplifier 351B detects a difference
voltage between terminals V1 and V2 of the SIN bridge and outputs a difference signal Vy. In this case, Vy = ΔVs. In the present specification, the difference signals Vx and Vy are referred to as
output signals of the respective bridges. The output signals Vx and Vy of the bridges are input signals Vx and Vy inputted to the signal processing unit.
A ratio-calculation unit 381 receives the input signals Vx and Vy inputted to the signal processing unit and determines the ratio Vy/Vx. Specifically, the signals Vx and Vy are inputted to an AD converter of a microcontroller, and the ratio-calculation unit 381 may be disposed in the microcontroller. Upon calculation of the ratio Vy/Vx, the calculation error can be decreased by branching on a comparison between the absolute values of the two signals, as shown in the equation (27).
Then, a window function processing unit 385 receives the ratio r = Vy/Vx and applies an appropriate window function described in Fig. 14.
An averaging unit 386 receives the signal subjected to the window function processing and conducts averaging processing. The averaging unit 386 averages the output signals Vx and Vy during the first rotation of the magnetic field to determine the respective offsets bx and by in accordance with the equation (42) and the equation (43), and stores them in a parameter-storing unit 390. During the second rotation of the magnetic field, the offset voltages bx and by stored in the storing unit 390 are subtracted from the output signals Vx and Vy in the offset-subtraction units 353A and 353B respectively.
Signals Vx' = Vx - bx and Vy' = Vy - by corrected for the offset are the input signals inputted to the signal processing unit 303DB. The input signals Vx' and Vy' inputted to the signal processing unit 303DB are processed as described above by the ratio-calculation unit 381, the window function processing unit 385, and the averaging unit 386 so that the sine of the pin-angle error α (β = sin α) is obtained.
Description has been made to an example of turning the direction of the magnetic field for one rotation in the detection step for the offset voltages bx and by and turning the direction of the
magnetic field for one rotation in the detection step for the amount of pin-angle error (correction parameter) β. Alternatively, it is also possible to turn the direction of the magnetic field for
(n+m) rotations or more, turn the direction of the magnetic field for n rotations (n > 1) in the detection step for the offset voltages bx, by and, thereafter, turn the direction of the magnetic
field for m rotations (m > 1) in the detection step for the amount of pin-angle error (correction parameter) β. Since the direction of the magnetic field may continue rotating between the detection step for the offset voltages and the detection step for the correction parameter β, the direction of the magnetic field is turned (n+m) times in total. Since the number of sampling points increases when the direction of the magnetic field is turned a plurality of times in each of the detection step for the offset voltages and the detection step for the correction parameter, this provides the advantage of improving the accuracy of each obtained parameter.
As described above, in this embodiment, the error attributable to the variations of the elements of the GMR sensor can be corrected only by subtraction of the three parameters β, bx, and by, and multiplication by the coefficient 1/Bx. Since this processing imposes only a small calculation load, it can be executed easily by an inexpensive general-purpose microcontroller.
As described above, according to this embodiment, error generated due to the pin-angle error in the rotational angle measurement apparatus can be corrected without using an encoder for calibration.
Further, the error due to the pin-angle error can be corrected with a small amount of calculation operation.
Then, the constitution and the operation of a rotational angle measurement apparatus according to an eighth embodiment of the invention are to be described with reference to Fig. 25.
Fig. 25 is a block diagram showing the third constitution of the rotational angle measurement apparatus for correcting the pin-angle error α according to the eighth embodiment of the invention.
Fig. 25 shows a circuit constitution for executing correction processing during operation as the rotational angle sensor, which corrects a measured rotational angle value by using the offset voltages bx and by and the sine of the error α (β = sin α) determined by the constitution of Fig. 24.
A rotational angle measurement apparatus 201MB of this embodiment includes a magnetic sensor 301 and a detection circuit unit 302MB. The detection circuit unit 302MB has offset-subtraction units 353A
and 353B, and a signal processing unit 303M. The magnetic sensor 301 has two bridges (COS bridge and SIN bridge) each comprising GMR elements. A differential amplifier 351A detects a difference
voltage between terminals V1 and V2 of the COS bridge and outputs a difference signal Vx. In the same manner, a difference amplifier 351B detects a difference voltage between terminals V1 and V2 of
the SIN bridge and outputs a difference signal Vy.
The offset-subtraction units 353A and 353B subtract offset voltages bx and by stored in a storing unit 390 from the output signals Vx and Vy respectively. Vx' = Vx - bx and Vy' = Vy - by corrected
for the offset are input signals inputted to the signal processing unit 303M.
The input signals Vx' and Vy' are inputted to the signal processing unit 303M. A ratio-calculation unit 381 included in the signal processing unit 303M receives the input signals Vx' and Vy' to
determine a ratio Vy'/Vx'. Specifically, the signals Vx' and Vy' may be inputted to an A/D converter of a microcontroller and the ratio-calculation unit 381 may be disposed in the microcontroller.
Then, a parameter correction unit 382 reads out a correction parameter β stored in the parameter storing unit 390 and conducts correction processing. Specifically, the parameter β is subtracted from
the ratio Vy'/Vx'. Then, an atan-processing unit 383 conducts arctangent processing to calculate an angle of magnetic field θ. The atan-processing unit 383 conducts processing of the equation (29).
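A minimal sketch of this correction path follows. Equation (29) itself is not reproduced in this excerpt; the form below is an assumption derived from the signal model Vx' = A·cos θ, Vy' = A·sin(θ + α) together with β = sin α and Bx = √(1 − β²):

```python
import math

def corrected_angle(vx, vy, beta):
    """Parameter-correction + atan step (a sketch, not necessarily the
    patent's exact equation (29)). With Vx' = A*cos(theta),
    Vy' = A*sin(theta + alpha) and beta = sin(alpha):
    Vy' - beta*Vx' = A*cos(alpha)*sin(theta), so atan2 of the rescaled
    pair recovers theta with the correct quadrant."""
    bx_coef = math.sqrt(1.0 - beta * beta)      # Bx = cos(alpha)
    return math.atan2(vy - beta * vx, bx_coef * vx)
```

For example, with α = 0.1 rad the uncorrected atan2(Vy', Vx') is biased by up to roughly α, while the corrected value matches the true angle of magnetic field θ.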
As described above, according to this embodiment, an accurate rotational angle can be measured by correcting the offset error and decreasing the estimation error for the pin angle even when a
magnetic sensor including an error in the pin-angle setting is used.
Further, since the tolerance for setting the pin angle increases upon manufacturing the magnetic sensor, this facilitates manufacture.
Further, the error due to the pin-angle error can be corrected with a small amount of calculation operation.
Further, correction for the error generated due to the pin-angle error of the rotational angle measurement apparatus can be attained without using an encoder for calibration.
In each of the embodiments described above, while a method of signal processing based on the ratio Vy/Vx has been explained, the signal processing may also be conducted based on Vx/Vy. The equation (28) and the equation (33) are processed in the actual processing by the equation (29) and the equation (34) respectively. In the processing for atan2(y, x) in the equation (29) and the equation (34), the angle can be calculated as ArcTan(y/x), and an angle can also be obtained by processing it as ArcCot(x/y). In the case of |x| > |y|, the calculation accuracy is higher for ArcTan(y/x), and in the case of |x| < |y|, the calculation accuracy is higher for ArcCot(x/y).
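The branching rule can be sketched as follows; standard-library atan2 implementations already perform equivalent range reduction internally, so this is purely illustrative:

```python
import math

def robust_atan2(y, x):
    """Illustrative branching between ArcTan(y/x) and ArcCot(x/y): always
    divide the smaller magnitude by the larger, so the argument of atan
    stays within [-1, 1], where the evaluation is best conditioned."""
    if x == 0.0 and y == 0.0:
        raise ValueError("angle undefined at the origin")
    if abs(x) >= abs(y):
        a = math.atan(y / x)                  # ArcTan(y/x), |ratio| <= 1
        if x < 0.0:                           # restore the quadrant
            a += math.pi if y >= 0.0 else -math.pi
        return a
    a = math.pi / 2 - math.atan(x / y)        # ArcCot(x/y) = pi/2 - ArcTan(x/y)
    if y < 0.0:
        a -= math.pi
    return a
```

The result agrees with the usual atan2 convention, returning angles in (-π, π].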
Then, the constitution of a motor system using the rotational angle measurement apparatus according to each of the embodiments described above is to be explained with reference to Figs. 26 and 27.
Figs. 26 and 27 show constitutional views of the motor system using the rotational angle measurement apparatus according to each of the embodiments of the invention.
The motor system in this embodiment includes a motor unit 100 and a rotational angle-measurement unit 200.
The motor unit 100 generates a rotational torque by rotation of a plurality of rotatable magnetic poles under the magnetic interaction between a plurality of fixed magnetic poles and a plurality of
rotatable magnetic poles. The motor unit 100 includes a stator 110 providing a plurality of fixed magnetic poles and a rotor 120 providing a plurality of rotatable magnetic poles. The stator 110
includes a stator core 111 and a stator coil 112 attached to the stator core 111. The rotor 120 is disposed opposite to the inner circumferential side of the stator 110 by way of a gap and supported
rotatably. In this embodiment, a three-phase AC surface permanent magnet synchronous motor is used as the motor 100.
A case includes a cylindrical frame 101, and a first bracket 102 and a second bracket 103 disposed on both axial ends of the frame 101. A bearing 106 is disposed in a hollow portion of the first
bracket 102 and a bearing 107 is disposed in a hollow portion of the second bracket 103 respectively. The bearings 106 and 107 rotatably support a rotation shaft 121.
A sealant (not illustrated) is disposed between the frame 101 and the first bracket 102. The sealant is an O-ring disposed in a ring-like form and sandwiched and compressed in the axial direction and
the radial direction by the frame 101 and the first bracket 102. A portion between the frame 101 and the first bracket 102 can be sealed to provide water proof on the front side. Further, also a
portion between the frame 101 and the second bracket 103 is made water proof by a sealant (not illustrated).
The stator 110 includes the stator core 111 and the stator coil 112 attached to the stator core 111 and disposed to the inner circumferential surface of the frame 101. The stator core 111 is a
magnetic material formed by stacking a plurality of silicon steel sheets in the axial direction (magnetic path formation body). The stator core 111 includes an annular-back core and a plurality of
teeth arranged at regular intervals in the circumferential direction while protruding inside the radial direction from the inner circumference of the back-core.
Winding conductors constituting the stator coil 112 are wound concentrically around each of the plurality of teeth. The plurality of winding conductors are electrically connected on every phase by
connection members arranged in parallel on one axial end on one coil end of the stator coil 112 (on the side of the second bracket 103) and further connected electrically as three phase windings. The
connection system for three phase windings includes a Δ (delta) connection system and a Y(star) connection system. This embodiment adopts the Δ (delta) connection system.
The rotor 120 includes a rotor core fixed on the outer circumferential surface of the rotation shaft 121, a plurality of magnets fixed on the outer circumferential surface of the rotor core, and
magnet covers 122a, 122b disposed on the outer circumferential side of the magnets. The magnet cover 122 is used for preventing the magnets from scattering from the rotor core, and this has a
cylindrical structure or a tube-like structure formed of a non-magnetic material such as stainless steel (generally referred to as SUS).
The rotational angle-measurement unit 200 includes a rotational angle measurement apparatus 201DM (hereinafter referred to as "magnetic sensor module 201DM") and a sensor magnet 202. The rotational
angle-measurement unit 200 is disposed in a space surrounded by a housing 203 and the second bracket 103. The sensor magnet 202 is disposed to a shaft that rotates interlocking with the rotation
shaft 121. As the rotation shaft 121 changes the rotational position, the direction of the magnetic field generated in accordance with the change is changed. Therefore, the rotational angle
(rotational position) of the rotation shaft 121 can be measured by detecting the direction of the magnetic field by the magnetic sensor module 201DM.
The magnetic sensor module 201DM is preferably disposed on the center line of rotation 226 of the rotation shaft 121 since the error in the spatial distribution of the magnetic field generated from
the sensor magnet 202 is decreased.
The sensor magnet 202 is a 2-pole magnet magnetized in 2-pole form, or a multi-pole magnet magnetized in multiple pole form.
The magnetic sensor module 201DM includes, as shown in Fig. 8, a magnetic sensor 301 and a detection circuit unit 302M. The detection circuit unit 302M has a signal processing unit 303M.
The magnetic sensor 301 changes its output signal in accordance with the direction of the magnetic field and comprises GMR elements.
The magnetic sensor module 201DM detects the direction of the magnetic field θm at a place where the magnetic sensor is disposed with reference to a reference angle θm0 of the magnetic sensor. That
is, the magnetic sensor module 201DM outputs a signal corresponding to θ = θm - θm0. The magnetic sensor 301 used in this embodiment includes two bridges comprising GMR elements; and the two bridges
output signals in proportion to cos (θm - θm0) and sin (θm - θm0 + α) respectively. Here, α represents a pin-angle error.
The magnetic sensor module 201DM is disposed in the housing 203. The housing 203 is preferably formed of a material having a relative permeability of 1.1 or less such as aluminum or resin so as not
to give an effect on the direction of magnetic flux. In this embodiment, the housing is formed of aluminum.
It may suffice that the magnetic sensor module 201DM is fixed to the motor unit, and it may be fixed to a constituent element other than the housing 203. So long as the sensor module is fixed to the
motor unit, the rotational angle of the rotation shaft 121 can be detected by detecting the change of the direction of the magnetic field by the magnetic sensor 301 in the case where the rotational
angle of the rotation shaft 121 is changed and the direction of the sensor magnet 202 is changed.
A sensor wiring 208 is connected to the magnetic sensor module 201DM. The sensor wiring 208 transmits the output signal from the magnetic sensor 301 to the outside.
The magnetic sensor module 201DM includes, as shown in Fig. 11, a magnetic sensor 301 and a detection circuit unit 302DM. The magnetic sensor 301 includes a plurality of GMR elements arranged in a
bridge structure. The magnetic sensor 301 has the structure shown in Fig. 6A and 6B. The detection circuit unit 302DM includes a driving circuit unit for supplying a voltage applied to the GMR
elements, a differential amplifier 351 for detecting and amplifying signals from the GMR elements and a signal processing unit 303DM for processing the signals outputted from the differential
amplifier 351. The signal processing unit 303DM has a constitution shown in Fig. 11.
Then, the constitution of the motor system when the correction parameter is obtained is to be described with reference to Fig. 27. Signals from the magnetic sensor module 201DM are inputted to an electronic
control unit 411 (simply referred to as ECU). The ECU 411 sends a control command to a driving unit 412. The driving unit 412 controls the angular velocity and the position of the rotation shaft and
the like of the rotor 120 by outputting an appropriate voltage waveform to the stator 110 of the motor unit 100.
When the correction parameter is obtained, the rotor 120 is rotated at a constant velocity by sending a command for rotating the rotor 120 at a constant angular velocity from the ECU 411 to the
driving unit 412. In this process, the signal processing unit 303DM of the magnetic sensor module 201DM obtains the correction parameter and stores the same in a parameter-storing unit 390 by the
constitution shown in Fig. 13.
Alternatively, the magnetic sensor module 201DM may be composed only of the magnetic sensor 301 and the detection circuit unit 302DM may be formed inside the ECU 411.
In this embodiment, the correction parameter can be updated at regular time intervals. With this constitution, even when the correction parameter changes with age after the rotational angle measurement apparatus has been used for a long time, an accurate measurement result can be maintained by using the updated correction parameter.
The magnetic sensor module 201DM may have a constitution of the rotational angle measurement apparatus 201DMA shown in Fig. 19 or the rotational angle measurement apparatus 201DMB shown in Fig. 23.
Further, when the correction parameter is previously obtained by using an apparatus to be described later with reference to Fig. 29, the magnetic sensor module 201DM may have a constitution of the
rotational angle measurement apparatus 201M shown in Fig. 8, the rotational angle measurement apparatus 201MA shown in Fig. 20, and the rotational angle measurement apparatus 201MB shown in Fig. 25.
In this case, a previously obtained correction parameter is stored in the storing unit 390.
Then, description is to be made to the constitution of an electrically power-assisted steering system using the rotational angle measurement apparatus according to each of the embodiments described
above with reference to Fig. 28.
Fig. 28 is a constitutional view of the electrically power-assisted steering system using the rotational angle measurement apparatus according to each of the embodiments of the invention.
In the electrically power-assisted steering system shown in Fig. 28, a steering shaft 503 coupled mechanically to a steering wheel 501 moves interlocking with the rotational shaft 121 by way of a
joint unit 504 including gears, etc. The rotation shaft 121 is a rotation shaft of the motor 100 in which a sensor magnet 202 is disposed to one end of the rotation shaft 121. A rotational angle
measurement apparatus 201DM (hereinafter referred to as "magnetic sensor module 201DM") is disposed in the vicinity of the sensor magnet 202 and measures the rotational angle of the rotation shaft
121 and transmits the same to the ECU 411. The ECU 411 calculates an appropriate amount of motor driving based on the signal from the torque sensor (not illustrated) disposed in a steering column 502
and the rotational angle signal from the magnetic sensor module 201DM; then, the ECU 411 transmits the signal obtained by the calculation to the motor drive unit 412. The motor 100 assists the
movement of the steering shaft 503 by way of the rotation shaft 121.
For calibration of the system, the system is set to the system origin, i.e., the origin of the angle of the electrically power-assisted steering apparatus as a system; and the rotational angle θr0 of the rotation shaft 121 is read out in this state. Specifically, when the steering wheel 501 is set to an appropriate position, a signal from the magnetic sensor module 201DM is measured to determine the angle of magnetic field θm in this state, and the angle of magnetic field θm0 corresponding to the system origin is stored and held in the controlling apparatus (electronic control unit, ECU) 411 of the electrically power-assisted steering apparatus.
Even when a mounting-position error is present upon installing the rotational angle measurement apparatus to the system, the error can be corrected so long as the angle of magnetic field θm0
corresponding to the system origin is known.
Information necessary in the system such as the electrically power-assisted steering apparatus is an angle θsys as the system, that is, a rotational angle of the steering wheel. According to this
embodiment, the angle θsys as the system can be obtained accurately from the angle of magnetic field θm obtained from the output signal of the magnetic sensor module 201DM.
The magnetic sensor module 201DM may have the constitution of the rotational angle measurement apparatus 201DMA shown in Fig. 19 or the rotational angle measurement apparatus 201DMB shown in Fig. 23.
Further, when the correction parameter is previously obtained by using the apparatus to be described later with reference to Fig. 29, the magnetic sensor module 201DM may have the constitution of the
rotational angle measurement apparatus 201M shown in Fig. 8, the rotational angle measurement apparatus 201MA shown in Fig. 20, or the rotational angle measurement apparatus 201MB shown in Fig. 25.
In this case, a previously obtained correction parameter is stored in the storing unit 390.
Then, description is to be made to an inspection system upon manufacturing the magnetic sensor 301 by using the rotational angle measurement apparatus according to each of the embodiments described
above with reference to Fig. 29.
Fig. 29 is an explanatory view of the inspection system upon manufacturing the magnetic sensor by using the rotational angle measurement apparatus according to each of the embodiments of the invention.
In this embodiment, the correction parameter is obtained in the inspection step upon manufacturing the magnetic sensor 301. As shown in Fig. 29, the magnetic sensor 301 including GMR elements is
disposed on a stage and, while rotating a magnetic field generator 202 that generates a uniform magnetic field, (Vx, Vy) signals of each of the magnetic sensors are measured. In this process, the
correction parameter is obtained for every sensor according to the methods of the equations (32), (42), and (43) by using the rotational angle measurement apparatus 201D shown in Fig. 5, the rotational angle measurement apparatus 201DA shown in Fig. 13, or the rotational angle measurement apparatus 201DB shown in Fig. 24. Thus, the pin-angle error value α (or β = sin α), the signal offset voltages bx and by, and the Bx value defined by the equations (42) and (43) can be determined for every respective magnetic sensor 301.
As described above, the magnetic sensor 301 for which the correction parameters have been obtained is incorporated into the rotational angle measurement apparatus 201MA. The signal processing unit 303MA of the rotational angle measurement apparatus 201MA has the constitution shown in Fig. 20 and records the correction parameters β and Bx in the parameter storing unit 390. In this way, since the rotational angle measurement apparatus 201MA can decrease the effect of the pin-angle setting error, measurement at high accuracy is possible.
In the foregoing description, while GMR elements are used as the magnetic sensor, this invention is effective also to the rotational angle measurement apparatus using TMR elements (Tunneling
Magneto-Resistance elements) as the magnetic sensor. The TMR element uses an insulator layer as the spacer 12 in Fig. 2 in which the resistance value changes in accordance with the angle formed
between the magnetization direction of the pinned magnetic layer (pin angle) θp and the magnetization direction θf of the free magnetic layer (the magnetization direction of the free magnetic layer
is aligned with the direction of the external magnetic field). Accordingly, the same effect can be obtained by applying the invention. | {"url":"https://data.epo.org/publication-server/rest/v1.2/publication-dates/20110615/patents/EP2333492NWA1/document.html","timestamp":"2024-11-03T03:29:11Z","content_type":"text/html","content_length":"189131","record_id":"<urn:uuid:d800e735-886a-4b96-99d8-24a5b2ffa520>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00552.warc.gz"} |
The Stacks project
Lemma 40.6.4. Let $S$ be a scheme. Let $(U, R, s, t, c)$ be a groupoid over $S$. Let $\tau \in \{ Zariski, fppf, {\acute{e}tale}, smooth, syntomic \}$. Let $\mathcal{P}$ be a property of morphisms of schemes which is $\tau$-local on the target (Descent, Definition 35.22.1). Assume $\{ s : R \to U \}$ and $\{ t : R \to U \}$ are coverings for the $\tau$-topology. Let $W \subset U$ be the maximal open subscheme such that $s|_{s^{-1}(W)} : s^{-1}(W) \to W$ has property $\mathcal{P}$. Then $W$ is $R$-invariant, see Groupoids, Definition 39.19.1.
A Method for the Determination of the Number of Stars for Different Population Types
A method for the determination of the number of stars within given absolute magnitudes with an apparent magnitude interval is presented. The relative solar normalizations (Table 1) for Population I,
Intermediate Population II, and Population II transform Gliese's [5] total solar densities to the solar densities for these individual populations for a given (M_{i}(G), M_{i+1} (G)) absolute
magnitude interval. The combination of these solar densities with the corresponding model curve gives the density of the pyramid whose height and centroid distances are r and \vec{r} respectively,
where $r$ corresponds to the faintest magnitude G_{k+1} of the interval (G_{k}, G_{k+1}). The number of stars, N_{k+1}, with given absolute magnitudes and not fainter than G_{k+1} is the density of the
pyramid times its volume. Finally, if N_{k} corresponds to the apparent magnitude G_{k}, then N=N_{k+1}-N_{k} gives the number of stars in the interval (G_{k}, G_{k+1}) with given absolute
magnitudes. The application of the method to stars not fainter than G=16 magn. in the absolute magnitude intervals 4
Recommended Citation
KARAALİ, S. (1997) "A Method for the Determination of the Number of Stars for Different Population Types," Turkish Journal of Physics: Vol. 21: No. 9, Article 2. Available at: https:// | {"url":"https://journals.tubitak.gov.tr/physics/vol21/iss9/2/","timestamp":"2024-11-04T10:53:55Z","content_type":"text/html","content_length":"56928","record_id":"<urn:uuid:28f9a87c-9920-4b8b-8c3f-7582214c7a59>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00068.warc.gz"} |
Power Triangle: Understanding the Relationship Between Voltage, Current, and Power - Easy Electronics
If you’re studying electrical engineering or working in the power industry, you’ve likely heard of the power triangle. This simple geometric representation of the relationship between voltage,
current, and power is an essential concept for understanding how electrical power systems work.
In this article, we’ll take a closer look at the power triangle and explore its various components.
What is the Power Triangle?
The power triangle is a geometric representation of the relationship between voltage, current, and power in an electrical system. The triangle is formed by three sides, each representing one of the three elements:
• Voltage (V) is the electrical potential difference between two points in a circuit, measured in volts (V).
• Current (I) is the flow of electrical charge through a circuit, measured in amperes (A).
• Power (P) is the rate at which energy is transferred or converted, measured in watts (W).
• These three elements are related to each other by the following formula:
\mathbf{P = V \times I \times \cos(\theta)}
where cos(θ) represents the power factor of the circuit.
The Components of the Power Triangle
• The power triangle has three components, each representing a different aspect of the relationship between voltage, current, and power.
1. Apparent Power
• Apparent power (S) is the total power in a circuit, measured in volt-amperes (VA). It is calculated by multiplying the voltage and current in the circuit:
\mathbf{S = V \times I}
• Apparent power represents the total amount of power that is flowing through the circuit, but it doesn’t account for the fact that some of the power is lost due to the resistance in the circuit.
2. Real Power
Real power (P) is the actual power that is being used by the load, measured in watts (W). It is calculated by multiplying the apparent power by the power factor:
\mathbf{P = S \times \cos (\theta)}
Real power represents the useful power that is being delivered to the load and can be used to do work.
3. Reactive Power
Reactive power (Q) is the power that is being stored and released by the reactive components in the circuit, measured in volt-amperes reactive (VAR). It is calculated by subtracting the real power
from the apparent power:
\mathbf{Q = \sqrt{(S^2 - P^2)}}
Reactive power represents the power that is being used to maintain the electric and magnetic fields in the circuit.
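The three formulas above fit together as the sides of the triangle; the following Python sketch computes all three quantities (the 230 V, 10 A, and 30° values are arbitrary example inputs, not from the article):

```python
import math

def power_triangle(v_rms, i_rms, phase_angle_deg):
    """Return (apparent VA, real W, reactive VAR) for a single-phase AC load."""
    theta = math.radians(phase_angle_deg)
    s = v_rms * i_rms              # apparent power: S = V * I
    p = s * math.cos(theta)        # real power:     P = S * cos(theta)
    q = math.sqrt(s**2 - p**2)     # reactive power: Q = sqrt(S^2 - P^2)
    return s, p, q

s, p, q = power_triangle(230, 10, 30)   # 2300 VA, ~1992 W, 1150 VAR
```

Note that Q also equals S·sin(θ), which is simply the vertical side of the triangle.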
Importance of the Power Triangle in Electrical Engineering
• The power triangle is a critical concept in electrical engineering, and it’s used in various applications. Here are some of the key uses of the power triangle:
1. Power Factor Correction
• The power factor angle is an important factor in power factor correction. Power factor correction is the process of improving the efficiency of a circuit by reducing the reactive power. By
reducing the reactive power, more real power can be used by the resistive elements, making the circuit more efficient.
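As a sketch of the sizing calculation behind power factor correction (the 10 kW load and the 0.70 → 0.95 target are hypothetical figures), the reactive power a correction capacitor must supply is P·(tan θ_old − tan θ_new):

```python
import math

def correction_var(p_watts, pf_old, pf_new):
    """Reactive power (VAR) a capacitor bank must supply to raise a load's power factor."""
    theta_old = math.acos(pf_old)
    theta_new = math.acos(pf_new)
    return p_watts * (math.tan(theta_old) - math.tan(theta_new))

qc = correction_var(10_000, 0.70, 0.95)   # roughly 6.9 kVAR for a 10 kW load
```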
2. Circuit Design
• The power triangle is used in circuit design to ensure that the circuit is designed to handle the expected power demands. Understanding the power triangle helps in selecting the appropriate
components for a circuit and ensuring that the circuit is designed for optimal efficiency.
3. Energy Management
• The power triangle is used in energy management to help manage power consumption and reduce energy costs. By understanding the power triangle, it’s possible to identify areas where energy is
being wasted and implement measures to improve efficiency.
4. Troubleshooting
• The power triangle is also useful in troubleshooting electrical circuits. By analyzing the power triangle, it’s possible to identify the cause of power quality issues.
Factors Affecting the Power Triangle
Several factors affect the power triangle, including the type of load, the phase angle between voltage and current, and the power factor. Understanding these factors is essential in optimizing the
power triangle for efficient circuit performance.
1. Type of Load
The type of load connected to a circuit affects the power triangle. Resistive loads, such as heaters and lights, have a power factor angle of 0°, while inductive loads, such as motors and
transformers, have a power factor angle of more than 0°. Capacitive loads, such as capacitors, have a power factor angle of less than 0°.
2. Phase Angle Between Voltage and Current
The phase angle between voltage and current affects the power triangle. When voltage and current are in phase, the power factor angle is 0°, and the circuit is efficient. When voltage and current are
out of phase, the power factor angle is larger, and the circuit is less efficient.
3. Power Factor
The power factor is a measure of the efficiency of a circuit. It’s calculated by dividing the real power by the apparent power. A high power factor means that the circuit is efficient, while a low
power factor means that the circuit is less efficient.
• In conclusion, the power triangle is an essential concept for anyone working in the power industry or studying electrical engineering. By understanding the relationship between voltage, current, and power, engineers can design more efficient power systems and troubleshoot problems when they occur. The power triangle also helps reduce energy waste and improve the reliability of power systems.
FAQs on power Triangle
1. What is the importance of the power triangle in power engineering?
The power triangle is essential in power engineering as it helps engineers understand the relationship between voltage, current, and power. This knowledge is crucial in designing efficient power
systems and troubleshooting problems when they occur.
2. What is the significance of power factor in the power triangle?
Power factor is a measure of how efficiently a circuit is using the power being delivered to it. It is essential to understand the power factor as it indicates how much of the power being
delivered to the circuit is being used to do work.
3. How does the power triangle help reduce energy waste?
By understanding the power triangle, engineers can design power systems that are more efficient and waste less energy. For example, they can reduce the reactive power in a circuit by installing
power factor correction devices.
4. What is the difference between real power and reactive power?
Real power is the actual power being used by the load, while reactive power is the power being stored and released by the reactive components in the circuit. While real power is used to do work,
reactive power is used to maintain the electric and magnetic fields in the circuit.
Multiplication As A Comparison Worksheet
Math, specifically multiplication, creates the foundation of countless scholastic disciplines and real-world applications. Yet, for several learners, mastering multiplication can present a challenge.
To resolve this obstacle, instructors and parents have accepted a powerful device: Multiplication As A Comparison Worksheet.
Intro to Multiplication As A Comparison Worksheet
Multiplication As A Comparison Worksheet
Multiplication As A Comparison Worksheet -
Course 4th grade Unit 3 Lesson 1 Comparing with multiplication Multiply by 1 digit numbers FAQ Comparing with multiplication Comparing with multiplication and addition giraffe Comparing with
multiplication and addition money Comparing with multiplication magic Compare with multiplication Compare with multiplication word problems
These lessons with videos examples and solutions help Grade 4 students learn to interpret a multiplication equation as a comparison e g interpret 35 5 7 as a statement that 35 is 5 times as many as 7
and 7 times as many as 5 Represent verbal statements of multiplicative comparisons as multiplication equations
Importance of Multiplication Method Understanding multiplication is crucial, laying a strong foundation for innovative mathematical concepts. Multiplication As A Comparison Worksheet use structured
and targeted technique, fostering a deeper comprehension of this basic arithmetic operation.
Evolution of Multiplication As A Comparison Worksheet
Multiplication And Division Partner Word Problems Worksheets 99Worksheets
Multiplication And Division Partner Word Problems Worksheets 99Worksheets
Set B2 Multiplication Comparisons Equations Blackline Use anytime after Unit 2 Session 13 Set B2 H Independent Worksheet 2 INDEPENDENT WORKSHEET Multiplication Comparisons with Coins 1 Write a
multiplication equation for each problem Then write a multiplicative comparison to show how much each group of coins is worth
This Multiplicative Comparisons Worksheet packet includes 8 sheets created for 4th grade students focusing on comparisons using multiplication and division Students have to write multiplicative
comparisons as well as solve them Differentiation is included with two worksheets focusing on diagrams to help students visualize the work problems
From traditional pen-and-paper workouts to digitized interactive layouts, Multiplication As A Comparison Worksheet have actually developed, catering to varied learning designs and choices.
Types of Multiplication As A Comparison Worksheet
Standard Multiplication Sheets Straightforward exercises focusing on multiplication tables, helping students develop a solid arithmetic base.
Word Trouble Worksheets
Real-life situations integrated right into problems, enhancing vital reasoning and application skills.
Timed Multiplication Drills Tests made to improve rate and accuracy, helping in rapid psychological math.
Benefits of Using Multiplication As A Comparison Worksheet
Worksheet On Multiplication Table Of 1 Word Problems On 1 Times Table
Worksheet On Multiplication Table Of 1 Word Problems On 1 Times Table
Multiplicative Comparisons and Equations Worksheets This is a fantastic bundle which includes everything you need to know about Multiplicative Comparisons and Equations across 15 in depth pages These
are ready to use Common core aligned Grade 4 Math worksheets Each ready to use worksheet collection includes 10 activities and an answer guide
Liveworksheets transforms your traditional printable worksheets into self correcting interactive exercises that the students can do online and send to the teacher 1061955 Main content Multiplication
2013181 Multiplicative comparison practice Share Print Worksheet Google Classroom Microsoft Teams Facebook
Improved Mathematical Skills
Consistent method develops multiplication proficiency, improving general mathematics capacities.
Enhanced Problem-Solving Talents
Word troubles in worksheets develop logical thinking and strategy application.
Self-Paced Learning Advantages
Worksheets fit specific learning rates, promoting a comfortable and adaptable learning environment.
Exactly How to Develop Engaging Multiplication As A Comparison Worksheet
Incorporating Visuals and Shades Vivid visuals and colors catch focus, making worksheets aesthetically appealing and involving.
Consisting Of Real-Life Scenarios
Connecting multiplication to everyday circumstances includes relevance and functionality to exercises.
Customizing Worksheets to Different Skill Levels

Customizing worksheets based on varying proficiency levels ensures inclusive learning.

Interactive and Online Multiplication Resources

Digital Multiplication Tools and Games: Technology-based resources offer interactive learning experiences, making multiplication engaging and enjoyable.

Interactive Sites and Applications: Online platforms provide varied and accessible multiplication practice, supplementing conventional worksheets.

Personalizing Worksheets for Different Learning Styles

Visual Learners: Visual aids and diagrams aid comprehension for students inclined toward visual learning.

Auditory Learners: Verbal multiplication problems or mnemonics accommodate students who grasp concepts through auditory methods.

Kinesthetic Learners: Hands-on activities and manipulatives support kinesthetic learners in understanding multiplication.

Tips for Effective Implementation

Consistency in Practice: Regular practice reinforces multiplication skills, promoting retention and fluency.

Balancing Repetition and Variety: A mix of repetitive exercises and varied problem formats maintains interest and understanding.

Giving Useful Feedback: Feedback helps identify areas for improvement, encouraging continued progress.

Challenges in Multiplication Practice and Solutions

Motivation and Engagement Difficulties: Boring drills can lead to disinterest; innovative approaches can reignite motivation.

Overcoming Fear of Mathematics: Negative perceptions around math can impede progress; creating a positive learning environment is vital.

Impact of Multiplication As A Comparison Worksheet on Academic Performance

Studies and Research Findings: Research shows a positive connection between consistent worksheet usage and enhanced mathematics performance.
Final thought
Multiplication As A Comparison Worksheet emerge as functional devices, cultivating mathematical proficiency in learners while fitting diverse discovering designs. From fundamental drills to
interactive on-line resources, these worksheets not just enhance multiplication abilities however also promote essential reasoning and analytic capabilities.
Multiplicative Comparison Grade 4 Online Math Help And Learning
These lessons with videos examples and solutions help Grade 4 students learn to interpret a multiplication equation as a comparison e g interpret 35 5 7 as a statement that 35 is 5 times as many as 7
and 7 times as many as 5 Represent verbal statements of multiplicative comparisons as multiplication equations
A rabbit can go 2 feet in one jump A kangaroo can go five times as far as a rabbit Write a multiplication equation to represent finding how far a kangaroo goes in one jump 5 k 2 k 2 5 2 5 k Practice
Set Solve multiplicative comparison word problems Question 1 The shortest living man on Earth is 21 inches tall
FAQs (Frequently Asked Questions).
Are Multiplication As A Comparison Worksheet ideal for every age groups?
Yes, worksheets can be tailored to different age and ability levels, making them adaptable for numerous learners.
Just how frequently should students exercise using Multiplication As A Comparison Worksheet?
Regular method is essential. Regular sessions, ideally a few times a week, can yield substantial renovation.
Can worksheets alone improve mathematics abilities?
Worksheets are a valuable tool yet should be supplemented with diverse discovering techniques for thorough skill development.
Are there on-line platforms supplying free Multiplication As A Comparison Worksheet?
Yes, many educational websites provide free access to a large range of Multiplication As A Comparison Worksheet.
Exactly how can moms and dads sustain their youngsters's multiplication technique in the house?
Motivating consistent method, supplying support, and producing a favorable discovering setting are beneficial steps. | {"url":"https://crown-darts.com/en/multiplication-as-a-comparison-worksheet.html","timestamp":"2024-11-06T11:22:57Z","content_type":"text/html","content_length":"28648","record_id":"<urn:uuid:2875b227-1f45-4cd1-87ef-edfa2e2d979b>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00455.warc.gz"} |
What Is Nominal Data? | Examples & Definition
Nominal data is a type of qualitative data that is characterized by its categorical nature. It is often used to describe characteristics or attributes of individuals, objects, or events, and it is
typically represented as a label or category.
Nominal labels or categories don’t have an inherent rank or numerical value, which means you can’t logically order them. Researchers often use this type of data in conjunction with other types of
quantitative data to provide a more complete understanding of a research question or problem.
Nominal data examples
• Religion (e.g., Christian, Muslim, Hindu, Buddhist, Jewish)
• Gender (e.g., male, female, nonbinary)
• Country of origin (e.g., Netherlands, China, Russia, Peru)
• Colors (e.g., red, green, blue, purple, yellow)
• Vehicle types (e.g., bus, truck, car, motorcycle)
The data for each of these variables can be categorized with labels, but there’s no inherent order to them. For instance, the labels for gender could be ranked in any random order.
What is nominal data?
In statistics, there are four levels of measurement: nominal, ordinal, interval, and ratio. Nominal is the first level of measurement. Like ordinal variables, nominal variables are categorical
(instead of quantitative) in nature.
Nominal data is similar to ordinal data because they can both be categorized with labels, but the nominal labels can’t be ranked in a logical order.
Nominal data is often expressed in words, but sometimes numerical labels are used. However, these numerical labels can still not be ranked in a logical or meaningful way. You also can’t perform
arithmetic operations with the data.
Nominal variables are often used in social science research that studies the effects of gender, religious background, or ethnicity. Each data point (e.g., response, observation) fits into exactly one category.
Nominal data examples
Nominal variable Nominal levels
• Female
Gender • Male
• Nonbinary
• Sunny
• Rainy
Weather conditions • Cloudy
• Windy
• Stormy
• Hockey
• Soccer
Sports • Tennis
• Swimming
• Running
• Italian
• Japanese
Cuisines • Thai
• Mexican
• Ghanaian
• TikTok
• Snapchat
Social media platforms • Instagram
• Facebook
• X
Nominal vs ordinal data
Nominal data and ordinal data are similar because they’re both types of categorical data. However, ordinal data has an inherent order or ranking, whereas nominal data doesn’t. This means that you
can’t rank nominal labels in a meaningful way.
Nominal data is typically used when you want to categorize data, but you don’t need to make comparisons or rank the data. This type of data is often used to provide descriptive statistics about the
demographic characteristics of a population or sample (e.g., calculating the frequency of each gender category in a dataset).
Example: Nominal vs ordinal data
Nominal data
An example of nominal data is marital status. You could use the following labels for this variable:
• Married
• Divorced
• Single
• Widowed
There’s no logical order to the ranking, which makes this a nominal variable. You could rank the labels in any order.
Ordinal data
An example of ordinal data is level of education. You could use the following categories for this variable:
• High school degree
• Associate degree
• Bachelor’s degree
• Master’s degree
• Doctoral degree
There’s a logical order to the ranking, which makes this an ordinal variable. You could start with the lowest or highest degree, but it would be odd to start with one of the middle categories.
How to collect nominal data
Nominal data is typically collected with open-ended or closed-ended survey questions.
• You use closed-ended questions if the variable of interest has only a few possible categories to cover all the data.
Nominal data example: Closed-ended questions
Question                           Answer options
Where do you prefer to work out?   Public gym, Private gym, Home
Do you have a sports membership?   No, Yes
What is your favorite sport?       Tennis, Running, Swimming, Soccer, American football, Softball, Gymnastics, Lacrosse, Boxing
• You use open-ended questions if your variable has many possible categories or if you’re unable to create an exhaustive list of labels.
Nominal data example: Open-ended questions
1. What is your place of residence?
2. What is your employee ID?
3. What is your zip code?
How to analyze nominal data
You can use your nominal data to create tables or charts. This will help you collect descriptive statistics about the dataset, which can tell you something about the central tendency and variability
of your data.
Since nominal is the lowest level of measurement with the lowest precision, you’re not able to calculate most measures of central tendency or variability for it.
Analyzing nominal data: Dataset example
You conduct a survey with a closed-ended question about people’s gender. Your dataset consists of a list of values:
Male Female Female
Male Female Male
Female Male Male
Nonbinary Female Female
Male Male Nonbinary
Female Female Female
Male Nonbinary Nonbinary
Female Female Male
Male Male Female
You can organize the dataset by creating a frequency distribution table that shows you the number of responses for each gender label.
Analyzing nominal data: Frequency distribution example
You can create a simple frequency distribution table with all possible labels in the left column and the number of responses for each label in the right column.
Gender Frequency
Female 12
Male 11
Nonbinary 4
It’s also an option to convert the frequencies to percentages. You divide each frequency by the total number of values in the dataset (27) and multiply that by 100.
Gender Percentage
Female 44.4%
Male 40.7%
Nonbinary 14.8%
The simple frequency distribution table can be converted into a bar graph, where the categories are plotted on the horizontal axis and the frequencies on the vertical axis. The order of the
categories doesn’t matter since there’s no inherent order to them.
The percentage frequency distribution can be converted into a pie chart, where each slice of pie corresponds to the percentage of a particular category in the dataset.
Central tendency
The central tendency of your dataset is the point where most of your values lie. Three common measures of central tendency are the mode, mean, and median. The mode (aka the most frequently recurring
value) is the only applicable measure for nominal data due to the categorical nature and low precision of this type of data.
You’d have to use arithmetic operations (e.g., addition, division) to calculate the mean. This is not possible for this qualitative type of data. To find the median, your values need to be ordered
from low to high, which is not possible for nominal labels.
Analyzing nominal data example: Finding the mode
You can find the mode of your nominal dataset by looking for the most frequently recurring value in your simple frequency distribution table.
The highest number of people in your research identify as female, so that’s the mode.
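The frequency distribution, percentages, and mode above can all be computed directly; here is a sketch using only Python's standard library, with the 27 gender values from the example dataset standing in for the survey responses:

```python
from collections import Counter

# 27 responses, matching the frequency table: 12 female, 11 male, 4 nonbinary
responses = ["Female"] * 12 + ["Male"] * 11 + ["Nonbinary"] * 4

freq = Counter(responses)                  # simple frequency distribution
total = sum(freq.values())
percent = {g: round(100 * n / total, 1) for g, n in freq.items()}
mode = freq.most_common(1)[0][0]           # most frequently recurring value
```

Here `freq` reproduces the frequency table, `percent` the percentage table (44.4 / 40.7 / 14.8), and `mode` is "Female".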
Statistical tests for nominal data
You can test hypotheses about your data with the help of inferential statistics.
Parametric tests can’t be used with nominal data because this type of data violates some of the assumptions (e.g., a normal distribution). This means you’ll always have to use a nonparametric
statistical test to analyze nominal data.
Nominal data is typically analyzed using a chi-square test. There are two types that apply to nominal data:
Chi-square goodness of fit
You use the chi-square goodness of fit test when your dataset has only one variable and you’ve collected data from just one population with a probability sampling method, such as simple random
sampling, stratified sampling, or cluster sampling.
The test shows you whether the frequency distribution of your random sample corresponds with your expectations of the population as a whole. It helps you determine how representative your sample is
of the population.
Chi-square goodness of fit test example
You expected 40% of your sample to identify as female, 50% as male, and 10% as nonbinary, but your sample shows that 45% identify as female, 41% as male, and 15% as nonbinary.
The chi-square goodness of fit test statistic provides information on how different your observation is from the expectation based on chance. A test statistic of zero shows that there’s no difference
between your observation and your expectation.
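Using the percentages from this example (and assuming a sample of 100 respondents, a hypothetical figure so that percentages equal counts), the goodness-of-fit statistic can be computed in plain Python; 5.991 is the well-known χ² critical value for 2 degrees of freedom at α = 0.05:

```python
observed = [45, 41, 15]            # observed: 45% female, 41% male, 15% nonbinary (n = 100)
expected = [40, 50, 10]            # expected: 40%, 50%, 10%

# Chi-square statistic: sum of (observed - expected)^2 / expected over all categories
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
significant = chi2 > 5.991         # critical value for df = 2, alpha = 0.05
```

Here χ² ≈ 4.75, below the cutoff, so this sample's deviation from the expected distribution would not be statistically significant at the 5% level.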
Chi-square test of independence
The chi-square test of independence allows you to test if a relationship between two categorical variables is statistically significant.
Chi-square test of independence example
You’ve collected data on the participants’ gender and favorite sport. This allows you to test your hypothesis on whether these two variables correlate.
The test helps you determine whether two nominal variables from the same sample are independent of each other.
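As a sketch of the independence test statistic for a 2×2 contingency table in plain Python (the counts below are hypothetical gender-by-sport-type data, and real analyses usually rely on a statistics library, which also supplies the p-value):

```python
table = [[20, 10],     # female: team sport / individual sport (hypothetical counts)
         [12, 18]]     # male:   team sport / individual sport

row_totals = [sum(row) for row in table]
col_totals = [sum(col) for col in zip(*table)]
grand = sum(row_totals)

# Expected count for each cell is row_total * col_total / grand total
chi2 = sum(
    (table[i][j] - row_totals[i] * col_totals[j] / grand) ** 2
    / (row_totals[i] * col_totals[j] / grand)
    for i in range(2) for j in range(2)
)
independent_at_5pct = chi2 <= 3.841   # chi-square critical value for df = 1
```

For these counts χ² ≈ 4.29, above the df = 1 cutoff, so the two variables would not be treated as independent at the 5% level.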
Frequently asked questions about nominal data
You can’t use an ANOVA test if the nominal data is your dependent variable. The dependent variable needs to be continuous (interval or ratio data).
The independent variable for an ANOVA should be categorical (either nominal or ordinal data).
Data at the nominal level of measurement is qualitative.
Nominal data is used to identify or classify individuals, objects, or phenomena into distinct categories or groups, but it does not have any inherent numerical value or order.
You can use numerical labels to replace textual labels (e.g., 1 = male, 2 = female, 3 = nonbinary), but these numerical labels are random and are not meaningful. You could rank the labels in any
order (e.g., 1 = female, 2 = nonbinary, 3 = male). This means you can’t use these numerical labels for calculations.
No, nominal data can only be assigned to categories that have no inherent order to them.
Categorical data with categories that can be ordered in a meaningful way is called ordinal data.
Data at the nominal level of measurement typically describes categorical or qualitative descriptive information, such as gender, religion, or ethnicity.
Contrary to ordinal data, nominal data doesn’t have an inherent order to it, so you can’t rank the categories in a meaningful order.
Nominal data and ordinal data are similar because they can both be grouped into categories. However, ordinal data can be ranked in a logical order (e.g., low, medium high), whereas nominal data
can’t (e.g., male, female, nonbinary).
Details: The precise control over parameters such as the trapping frequencies and atomic interactions renders Bose-Einstein condensates (BECs) one of the widely used nonlinear systems to study the
turbulent dynamics in quantum fluids, where the turbulence is referred to as quantum turbulence. In two-dimensional quantum fluids, a topological excitation is a vortex with a quantized circulation
around the vortex core with a finite size. The multicomponent BEC setting, either of the same atomic species or of different atomic species, significantly enriches the phenomenology of vortices due
to the presence of two competing energy scales of intra- and inter-component interactions. Depending on the strength of the intra- and inter-component interactions, the system resides either in a
miscible regime or in an immiscible one. We present turbulent dynamics in two- component BECs modelled by the Gross-Pitaevskii equation. The turbulent dynamics is induced via a stirring scheme that
is commonly used in experiments. We considered both the symmetric and asymmetric setup of the system parameters where the asymmetry is introduced through the difference of the trap frequencies or
that of the intra-component interaction strength. Since it is known that the trap geometry plays a significant role in the vortex cluster formation, we implement the dynamics in a harmonic trap and
also in a steep-wall trap. We find that the initial turbulence generated via a stirring potential decays to the interlaced vortex-antidark structures which, in turn, bear a large size of the vortex
core. The corresponding incompressible spectrum develops a k^{-3} power law for the wave numbers determined by the inverse of the spin healing length, ξ_s, and a flat region for the range of the wave numbers determined by the density healing length, ξ, due to the bottleneck effect. This feature is enhanced for larger inter-component coupling strength. In the case of the steep-wall trap, where
formation of the Onsager cluster characterised by the large dipole moment of the vortex charges is expected in a single-component BEC, the presence of the inter-component coupling also causes the
decay of vortices, preventing the persistence of the cluster configuration [1]. | {"url":"http://calendar.iiserkol.ac.in/view_event/1208298/","timestamp":"2024-11-11T08:23:48Z","content_type":"application/xhtml+xml","content_length":"4719","record_id":"<urn:uuid:7f2610c3-09de-421b-8db1-5f805ec97648>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00009.warc.gz"} |
Day Count Bases and Dates
Many financial functions require date arguments, and depend on differences between two dates, either as a number of days or as a fraction of a year. This chapter discusses the date format expected by
AIMMS’s financial functions and the different methods to compute date differences used from which you can choose in many functions.
Format of Date Arguments
All date arguments in AIMMS’s financial functions should be provided in the fixed string date format "ccyy-mm-dd". So, 15 August, 2000 should be passed to a financial function as the string
"2000-08-15". If you want to pass an element from a daily calendar as a date argument, you should convert it to the fixed string date format using the function TimeSlotToString
Day Count Bases
The result of many financial functions depends on the way with which differences between two dates are dealt with. Such functions have a day count basis argument, which determines how the difference
between two dates is calculated, either in days or as a fraction of a year. AIMMS supports 5 different day count basis methods, each of which is commonly used in the financial markets. Each of these
methods is specified by a way to count days and a way to determine how many days are in a year.
Method 1 - NASD Method / 360 Days
Calculating with day count basis method 1 means that a year is assumed to consist of 12 periods of 30 days. A year consists of 360 days. The difference between this method and method 5 is the way
the last day of a month is handled.
Method 2 - Actual / Actual
Calculating with day count basis method 2 means that both the number of days between two dates and the number of dates in a year are actual.
Method 3 - Actual / 360 Days
Calculating with day count basis method 3 means that the number of days between two dates is actual and that the number of days in a year is 360. When using this method, you should note that the
year fraction of two dates that are one year apart is larger than 1 (365/360) and that this may lead to unwanted results.
Method 4 - Actual / 365 Days
Calculating with day count basis method 4 means that the number of days between two dates is actual and that the number of days in a year is 365.
Method 5 - European Method / 360 Days
Calculating with day count basis method 5 means that a year is assumed to consist of 12 periods of 30 days. A year consists of 360 days. The difference between this method and method 1 is the way
the last day of a month is handled.
When the day count basis argument is optional, AIMMS assumes the NASD method 1 by default.
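As a concrete illustration of the conventions above, here is a minimal Python sketch of the year-fraction calculation for three of the bases (1 = NASD 30/360, 3 = actual/360, 4 = actual/365). The end-of-month handling shown is one common 30/360 variant and may differ in detail from AIMMS's exact rules; bases 2 and 5 are omitted.

```python
from datetime import date

def days_30_360_us(start: date, end: date) -> int:
    """Day count under the US (NASD) 30/360 convention: every month is
    treated as 30 days. A simplified end-of-month rule is used here;
    AIMMS's exact handling of month-end dates may differ."""
    d1, d2 = start.day, end.day
    if d1 == 31:
        d1 = 30
    if d2 == 31 and d1 == 30:
        d2 = 30
    return (360 * (end.year - start.year)
            + 30 * (end.month - start.month)
            + (d2 - d1))

def year_fraction(start: date, end: date, basis: int) -> float:
    """Year fraction between two dates for bases 1 (NASD/360),
    3 (actual/360) and 4 (actual/365)."""
    if basis == 1:
        return days_30_360_us(start, end) / 360.0
    actual_days = (end - start).days
    if basis == 3:
        return actual_days / 360.0
    if basis == 4:
        return actual_days / 365.0
    raise ValueError("basis not implemented in this sketch")

# As noted above, under actual/360 two dates one (non-leap) year apart
# give a fraction larger than 1, namely 365/360.
print(year_fraction(date(2001, 1, 1), date(2002, 1, 1), 3))
```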
Date Differences
AIMMS supports the following functions for computing differences between two dates:
ICON Documentation
Cryptographic primitives are the basic building blocks of cryptography. They are the foundation upon which more complex cryptographic algorithms and protocols are built. These primitives include
functions such as encryption, decryption, digital signature, and key exchange. They are designed to provide specific security properties, such as confidentiality, integrity, and authenticity.
One of the main distinguishing factors of a blockchain compared to other data stores is in the way it uses cryptography. Because all transactions are cryptographically secured, there is no need for
an intermediary to handle identity and access management. Cryptographic primitives allow for secure and decentralized management of user identities and access to the blockchain network.
Hashing Functions
A hashing function is a mathematical function that takes an input and returns a fixed-size string of characters, or hash. Hashing functions are commonly used in cryptography to create digital
signatures and to verify the integrity of data.
SHA3-256 is a commonly used hash function.
SHA3-256("Hello world") = "369183d3786773cef4e56c7b849e7ef5f742867510b676d6b38f8e38a222d8a2"
SHA3-256("hello world") = "644bcc7e564373040999aac89e7622f3ca71fba1d972fd94a31c3bfbf24e3938"
Modifying one letter completely changes the result and neither are interpretable. The only way to reverse-engineer the input string "Hello world" from
"369183d3786773cef4e56c7b849e7ef5f742867510b676d6b38f8e38a222d8a2" is through brute-force, which is computationally infeasible because of the number of permutations.
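The two hashes quoted above can be reproduced with any standard SHA3-256 implementation; for example, with Python's standard library:

```python
import hashlib

# Reproducing the two hashes quoted above.
h1 = hashlib.sha3_256(b"Hello world").hexdigest()
h2 = hashlib.sha3_256(b"hello world").hexdigest()
print(h1)
print(h2)
```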
Security Properties of Hashing Functions
Hashing functions have several important security properties that make them useful for cryptographic applications. These properties include:
• Determinism: A given input will always produce the same output.
• Pre-image resistance: Given a hash, it is computationally infeasible to find an input that would produce that hash.
• Collision resistance: It is computationally infeasible to find two different inputs that produce the same hash.
• Fixed-length output: Regardless of the size of the input string, the hash function will always generate an output string of the same length.
How Blockchains Use Hashing Functions
Blockchains use hashing functions to secure the blockchain and efficiently validate transactions.
Asymmetric Key Cryptography
Asymmetric key cryptography, or public-key cryptography, is a method of encrypting and signing data using two different keys - a public key and a private key. The public key can be freely shared,
while the private key must be kept secret. The private key is randomly generated. The public key is derived from the private key using a public key generation algorithm, which holds additional
security constraints. Data that is encrypted using the public key can only be decrypted using the corresponding private key, and vice versa. This allows for secure communication even if the public
key is known to an attacker.
Digital Signatures
A digital signature algorithm is used to ensure the authenticity of a digital message. It works by generating a unique hash of the message or document and encrypting it using the sender’s private key.
Elliptic Curve Digital Signature Algorithm (ECDSA)
An elliptic curve is a curve defined by the equation y^2 = x^3 + ax + b. It is particularly useful because it uses relatively short key lengths compared to other digital signature algorithms.
Key Generation
The key generation process of ECDSA involves choosing a specific elliptic curve and a point on that curve, known as the generator point. These two elements are publicly known and are used to generate
a public/private key pair. The generator point is a selected point on the elliptic curve that can generate every point on the curve through scalar multiplication. Scalar multiplication is a process
of adding a point on the curve to itself a certain number of times.
For example, the generator point (Gx, Gy) on secp256k1, the elliptic curve used by Bitcoin, ICON, and many other cryptocurrencies, is:
Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8
The private key is a randomly generated number, and the public key is a point on the curve that is obtained by performing scalar multiplication on the generator point and the private key.
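As a quick sanity check (a sketch, not a production implementation), one can verify in Python that the generator point quoted above lies on the secp256k1 curve y^2 = x^3 + 7 over its prime field p = 2^256 - 2^32 - 977, and that one step of "adding the point to itself" (doubling) stays on the curve:

```python
# Sanity check: the generator point quoted above must satisfy
# y^2 = x^3 + 7 modulo the secp256k1 field prime, and affine point
# doubling must stay on the curve.
p = 2**256 - 2**32 - 977  # the secp256k1 prime field modulus

Gx = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
Gy = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def on_curve(point):
    x, y = point
    return (y * y - (x**3 + 7)) % p == 0

def double(point):
    """One step of 'adding a point to itself' (scalar multiplication
    by 2), using the standard affine doubling formulas for a = 0."""
    x, y = point
    m = 3 * x * x * pow(2 * y, -1, p) % p   # tangent slope (mod p)
    x3 = (m * m - 2 * x) % p
    return (x3, (m * (x - x3) - y) % p)

G = (Gx, Gy)
print(on_curve(G), on_curve(double(G)))  # True True
```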
Signature Generation
In ECDSA, the private key is used to create a signature by generating a random number (a nonce) and performing a series of mathematical operations, including modular arithmetic and elliptic curve point multiplication, on the nonce and the hash of the message.
Public Key Recovery
Public key recovery is a technique that allows a recipient to obtain the public key of a sender from a digital signature. This is commonly useful in blockchains, where you want to verify that a
signature came from a certain public key.
How Blockchains Use Asymmetric Key Cryptography
Blockchains use asymmetric key cryptography for wallet addresses and digital signatures. Each user on the blockchain network has a unique public and private key pair, and transactions on the
blockchain are signed using the private key. The public key can be used to verify the authenticity of the signature and confirm that the transaction was indeed initiated by the owner of the
corresponding private key. The user's public key corresponds to a wallet address, or username, and the private key corresponds to the password. Both the username and password are automatically generated.
Merkle Trees
A Merkle tree is a data structure that efficiently verifies the integrity of large sets of data. It is a binary tree where each leaf node is a hash of some data, and each non-leaf node is the hash of
its child nodes.
Blockchains use Merkle trees to represent a set of transactions in a block and to efficiently verify that a specific transaction is included in the block. Each transaction in a block is hashed. The
leaf nodes of the Merkle tree are the hashes of the transactions, and each non-leaf node is the hash of its child nodes. The root of the Merkle tree represents the entire set of transactions in the
block. The Merkle root is included in the block header, along with other information such as the previous block hash and the current block height.
The block, along with its header and the Merkle root, is broadcasted to the network for verification. When a node receives a new block, it can verify the integrity of the transactions in the block by
reconstructing the Merkle tree and checking that the root hash matches the one included in the block header.
Traversing a Merkle tree involves starting from the desired transaction and working up the tree to the root hash. The path from the transaction to the Merkle tree root is called a Merkle proof.
Inserting into a Merkle tree typically involves creating a new leaf node for the data being inserted, and then repeatedly combining pairs of sibling nodes and hashing their concatenated values to
create new parent nodes until a single root node is reached. This may require reconstructing log(n) parts of the tree.
Merkle Proofs
A Merkle proof is a way to prove that a specific piece of data is included in a Merkle tree without revealing the entire tree. It is a small set of hashes that starts from the desired data and works
its way up the tree to the root hash. By providing the Merkle proof and the root hash, it can be verified that the desired data is included in the tree without revealing any other information.
How Blockchains Use Merkle Proofs
Let’s say a user wants to prove that a specific transaction is included in a particular block on the blockchain. This is called a proof of inclusion. The naive approach is for the prover to generate
a merkle proof and submit that to the verifier. Here’s how it works:
1. The prover starts by locating the leaf node in the Merkle tree that corresponds to the desired transaction.
2. The prover then follows the path from the leaf node to the root of the tree, collecting the hashes of the sibling nodes along the way.
3. The prover now has a set of hashes that make up the Merkle proof for the transaction.
4. The prover then sends the Merkle proof, along with the root hash of the tree and the block header, to the verifier for verification. The block header contains the aggregate signature of
validators that signed the block.
5. The verifier receives the Merkle proof, root hash, and block header and performs the following steps:
□ The verifier recovers the public key(s) of the aggregate signature of validators using public key recovery.
□ Using the block header, the verifier calculates the root hash of the Merkle tree for that block.
□ The verifier reconstructs the path from the leaf node to the root, using the hashes provided in the Merkle proof.
□ The verifier compares the root hash from the Merkle proof to the one calculated from the block header.
6. If the root hash from the Merkle proof matches the one calculated by the verifier, the transaction is included in the block.
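The root, proof, and verification steps above can be sketched in a few lines of Python. This uses one common convention in which an odd node is paired with itself; real chains, including ICON, differ in details such as the hash function, leaf encoding, and tree shape.

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()

def merkle_root(leaves):
    """Root of a binary Merkle tree. An odd node is paired with itself
    (one common convention; real chains differ in such details)."""
    level = [H(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes on the path from leaf `index` up to the root."""
    level = [H(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index ^ 1
        proof.append((level[sibling], sibling < index))  # (hash, is-left?)
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf, proof, root):
    h = H(leaf)
    for sibling, sibling_is_left in proof:
        h = H(sibling + h) if sibling_is_left else H(h + sibling)
    return h == root

txs = [b"tx-a", b"tx-b", b"tx-c", b"tx-d"]
root = merkle_root(txs)
proof = merkle_proof(txs, 2)           # prove inclusion of b"tx-c"
print(verify(b"tx-c", proof, root))    # True
```

Note that the proof contains only log2(n) sibling hashes, which is what makes proofs of inclusion compact.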
A more complex approach would be to optimize the size of the proof and verification process, which is what zero-knowledge proofs are for.
Computational Thermodynamics
Computational Thermodynamics is a pivotal subfield within thermodynamics that leverages computational methods to solve complex thermodynamic problems. This discipline is essential in engineering,
where it aids in the design, analysis, and optimization of systems across various industries, including aerospace, automotive, chemical, and energy sectors. By utilizing advanced algorithms and
software, engineers can predict the behavior of materials and processes under different conditions, thereby enhancing efficiency, safety, and innovation.
Basic Principles and Concepts
At its core, Computational Thermodynamics involves the application of numerical methods to solve thermodynamic equations. The fundamental principles include:
• First Law of Thermodynamics: This law, also known as the law of energy conservation, states that energy cannot be created or destroyed, only transformed from one form to another.
• Second Law of Thermodynamics: This law introduces the concept of entropy, stating that the total entropy of an isolated system can never decrease over time.
• Gibbs Free Energy: A thermodynamic potential that measures the maximum reversible work that may be performed by a thermodynamic system at constant temperature and pressure.
• Phase Equilibria: The study of the equilibrium between different phases (solid, liquid, gas) in a chemical system.
Key Terms
• Enthalpy (H): A measure of the total energy of a thermodynamic system, including internal energy and the energy required to displace its environment.
• Entropy (S): A measure of the disorder or randomness in a system.
• Heat Capacity (C): The amount of heat required to change the temperature of a system by one degree.
• Thermodynamic Equilibrium: A state where macroscopic properties such as pressure, temperature, and chemical composition remain constant over time.
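The Gibbs relation ΔG = ΔH − TΔS implied by the key terms above can be evaluated directly. The numbers below are illustrative only (roughly the scale of ammonia synthesis per mole of N2, not measured data):

```python
def gibbs_free_energy(delta_h, temperature, delta_s):
    """ΔG = ΔH - TΔS, with ΔH in J/mol, T in K, ΔS in J/(mol·K)."""
    return delta_h - temperature * delta_s

# Illustrative numbers: the reaction is spontaneous (ΔG < 0) at 298 K
# but not at 600 K, showing why temperature matters in process design.
print(gibbs_free_energy(-92_000, 298, -199))  # -32698 J/mol
print(gibbs_free_energy(-92_000, 600, -199))  # 27400 J/mol
```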
Historical Development
The development of Computational Thermodynamics can be traced back to the early 20th century with the advent of computers. Key milestones include:
• 1940s: The development of the first digital computers, which allowed for the numerical solution of complex equations.
• 1960s: The introduction of the CALPHAD (Calculation of Phase Diagrams) method by Larry Kaufman, which revolutionized the way phase diagrams are calculated.
• 1980s: The development of commercial software packages such as Thermo-Calc, which made computational thermodynamics accessible to a broader audience.
• 2000s: Advances in computational power and algorithms, enabling more accurate and faster simulations.
Notable figures in the field include Josiah Willard Gibbs, who laid the groundwork for modern thermodynamics, and Larry Kaufman, who pioneered the CALPHAD method.
Computational Thermodynamics has a wide range of applications across various industries:
Aerospace Industry
In the aerospace industry, computational thermodynamics is used to design materials that can withstand extreme temperatures and pressures. For example, the development of heat-resistant alloys for jet engines relies heavily on phase diagram calculations and thermodynamic modeling.
Automotive Industry
The automotive industry uses computational thermodynamics to optimize fuel efficiency and reduce emissions. By modeling combustion processes and material behavior, engineers can design more efficient engines and exhaust systems.
Chemical Industry
In the chemical industry, computational thermodynamics is used to design and optimize chemical processes. For instance, the production of ammonia through the Haber process involves complex
thermodynamic calculations to maximize yield and minimize energy consumption.
Energy Sector
The energy sector benefits from computational thermodynamics in the design of more efficient power plants and renewable energy systems. For example, the optimization of thermal cycles in nuclear
reactors and the development of advanced battery materials are heavily reliant on thermodynamic modeling.
Advanced Topics
Recent Research and Innovations
Recent advancements in computational thermodynamics include the integration of machine learning algorithms to predict thermodynamic properties more accurately. Researchers are also exploring the use
of quantum computing to solve complex thermodynamic problems that are currently intractable with classical computers.
Future Trends
The future of computational thermodynamics lies in the development of more sophisticated models and algorithms that can handle multi-scale and multi-physics problems. This includes the integration of
thermodynamics with other fields such as fluid dynamics and materials science to create comprehensive simulation tools.
Challenges and Considerations
Despite its many advantages, computational thermodynamics faces several challenges:
• Computational Complexity: Solving thermodynamic equations can be computationally intensive, requiring significant processing power and time.
• Data Accuracy: The accuracy of computational models depends on the quality of input data, which can sometimes be limited or uncertain.
• Model Limitations: Current models may not fully capture the complexities of real-world systems, leading to approximations and potential errors.
Potential solutions include the development of more efficient algorithms, the use of high-performance computing, and the integration of experimental data to validate and refine models.
In summary, Computational Thermodynamics is a vital field in engineering that enables the design and optimization of complex systems across various industries. By leveraging advanced computational
methods, engineers can predict the behavior of materials and processes under different conditions, leading to more efficient, safe, and innovative solutions. Despite its challenges, ongoing research
and technological advancements promise to further enhance the capabilities and applications of computational thermodynamics, solidifying its importance in the field of thermodynamics in engineering.
VBMFX: Vanguard Total Bond Market Index Fund | Logical Invest
What do these metrics mean?
'Total return, when measuring performance, is the actual rate of return of an investment or a pool of investments over a given evaluation period. Total return includes interest, capital gains,
dividends and distributions realized over a given period of time. Total return accounts for two categories of return: income including interest paid by fixed-income investments, distributions or
dividends and capital appreciation, representing the change in the market price of an asset.'
Which means for our asset as example:
• The total return, or increase in value over 5 years of Vanguard Total Bond Market Index Fund is -2.2%, which is smaller, thus worse compared to the benchmark SPY (101.5%) in the same period.
• Looking at the total return, or performance, of -7.4% over the last 3 years, we see it is relatively smaller, thus worse in comparison to SPY (29.7%).
'The compound annual growth rate (CAGR) is a useful measure of growth over multiple time periods. It can be thought of as the growth rate that gets you from the initial investment value to the ending
investment value if you assume that the investment has been compounding over the time period.'
Using this definition on our asset we see for example:
• Looking at the annual return (CAGR) of -0.4% in the last 5 years of Vanguard Total Bond Market Index Fund, we see it is relatively smaller, thus worse in comparison to the benchmark SPY (15.1%)
• Looking at the compounded annual growth rate (CAGR) of -2.5% over the last 3 years, we see it is relatively smaller, thus worse in comparison to SPY (9.1%).
'Volatility is a rate at which the price of a security increases or decreases for a given set of returns. Volatility is measured by calculating the standard deviation of the annualized returns over a
given period of time. It shows the range to which the price of a security may increase or decrease. Volatility measures the risk of a security. It is used in option pricing formula to gauge the
fluctuations in the returns of the underlying assets. Volatility indicates the pricing behavior of the security and helps estimate the fluctuations that may happen in a short period of time.'
Using this definition on our asset we see for example:
• Looking at the historical 30 days volatility of 6% in the last 5 years of Vanguard Total Bond Market Index Fund, we see it is relatively lower, thus better in comparison to the benchmark SPY
• During the last 3 years, the 30 days standard deviation is 6.9%, which is lower, thus better than the value of 17.6% from the benchmark.
'Risk measures typically quantify the downside risk, whereas the standard deviation (an example of a deviation risk measure) measures both the upside and downside risk. Specifically, downside risk in
our definition is the semi-deviation, that is the standard deviation of all negative returns.'
Applying this definition to our asset in some examples:
• Looking at the downside volatility of 4.3% in the last 5 years of Vanguard Total Bond Market Index Fund, we see it is relatively smaller, thus better in comparison to the benchmark SPY (14.9%)
• Looking at the downside deviation of 4.9% over the last 3 years, we see it is relatively lower, thus better in comparison to SPY (12.3%).
'The Sharpe ratio was developed by Nobel laureate William F. Sharpe, and is used to help investors understand the return of an investment compared to its risk. The ratio is the average return earned
in excess of the risk-free rate per unit of volatility or total risk. Subtracting the risk-free rate from the mean return allows an investor to better isolate the profits associated with risk-taking
activities. One intuition of this calculation is that a portfolio engaging in 'zero risk' investments, such as the purchase of U.S. Treasury bills (for which the expected return is the risk-free
rate), has a Sharpe ratio of exactly zero. Generally, the greater the value of the Sharpe ratio, the more attractive the risk-adjusted return.'
Using this definition on our asset we see for example:
• Looking at the ratio of return and volatility (Sharpe) of -0.49 in the last 5 years of Vanguard Total Bond Market Index Fund, we see it is relatively lower, thus worse in comparison to the
benchmark SPY (0.6)
• During the last 3 years, the risk / return profile (Sharpe) is -0.73, which is smaller, thus worse than the value of 0.37 from the benchmark.
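As a sketch of how such a figure is computed (one common convention: annualize the mean daily excess return and its sample standard deviation by √252; the return series below is made up for illustration and is not VBMFX data):

```python
import math

def sharpe_ratio(daily_returns, risk_free_daily=0.0, periods_per_year=252):
    """Annualized Sharpe ratio from daily returns: mean daily excess
    return over its sample standard deviation, scaled by sqrt(252).
    Data providers differ in the exact convention used."""
    excess = [r - risk_free_daily for r in daily_returns]
    mean = sum(excess) / len(excess)
    var = sum((r - mean) ** 2 for r in excess) / (len(excess) - 1)
    return mean / math.sqrt(var) * math.sqrt(periods_per_year)

# A made-up return series, purely for illustration.
returns = [0.001, -0.002, 0.0015, 0.0005, -0.001, 0.002]
print(round(sharpe_ratio(returns), 2))
```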
'The Sortino ratio measures the risk-adjusted return of an investment asset, portfolio, or strategy. It is a modification of the Sharpe ratio but penalizes only those returns falling below a
user-specified target or required rate of return, while the Sharpe ratio penalizes both upside and downside volatility equally. Though both ratios measure an investment's risk-adjusted return, they
do so in significantly different ways that will frequently lead to differing conclusions as to the true nature of the investment's return-generating efficiency. The Sortino ratio is used as a way to
compare the risk-adjusted performance of programs with differing risk and return profiles. In general, risk-adjusted returns seek to normalize the risk across programs and then see which has the
higher return unit per risk.'
Using this definition on our asset we see for example:
• The ratio of annual return and downside deviation over 5 years of Vanguard Total Bond Market Index Fund is -0.69, which is lower, thus worse compared to the benchmark SPY (0.84) in the same
• During the last 3 years, the ratio of annual return and downside deviation is -1.02, which is lower, thus worse than the value of 0.53 from the benchmark.
'Ulcer Index is a method for measuring investment risk that addresses the real concerns of investors, unlike the widely used standard deviation of return. UI is a measure of the depth and duration of
drawdowns in prices from earlier highs. Using Ulcer Index instead of standard deviation can lead to very different conclusions about investment risk and risk-adjusted return, especially when
evaluating strategies that seek to avoid major declines in portfolio value (market timing, dynamic asset allocation, hedge funds, etc.). The Ulcer Index was originally developed in 1987. Since then,
it has been widely recognized and adopted by the investment community. According to Nelson Freeburg, editor of Formula Research, Ulcer Index is “perhaps the most fully realized statistical portrait
of risk there is.'
Applying this definition to our asset in some examples:
• Compared with the benchmark SPY (9.32) over the last 5 years, the Ulcer Ratio of 9.43 of Vanguard Total Bond Market Index Fund is higher, thus worse.
• Looking at the Ulcer Index of 11 over the last 3 years, we see it is relatively greater, thus worse in comparison to SPY (10).
'Maximum drawdown is defined as the peak-to-trough decline of an investment during a specific period. It is usually quoted as a percentage of the peak value. The maximum drawdown can be calculated
based on absolute returns, in order to identify strategies that suffer less during market downturns, such as low-volatility strategies. However, the maximum drawdown can also be calculated based on
returns relative to a benchmark index, for identifying strategies that show steady outperformance over time.'
Applying this definition to our asset in some examples:
• Looking at the maximum drop from peak to valley of -18.9% in the last 5 years of Vanguard Total Bond Market Index Fund, we see it is relatively greater, thus better in comparison to the
benchmark SPY (-33.7%)
• During the last 3 years, the maximum reduction from previous high is -17.7%, which is higher, thus better than the value of -24.5% from the benchmark.
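A minimal sketch of the peak-to-trough calculation described above, on a made-up price series:

```python
def max_drawdown(prices):
    """Maximum peak-to-trough decline, as a (negative) fraction of the
    running peak."""
    peak = prices[0]
    worst = 0.0
    for price in prices:
        peak = max(peak, price)
        worst = min(worst, (price - peak) / peak)
    return worst

# A made-up price series: the worst decline is from the 120 peak to 80.
prices = [100, 110, 105, 90, 95, 120, 80, 100]
print(max_drawdown(prices))  # about -0.333
```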
'The Maximum Drawdown Duration is an extension of the Maximum Drawdown. However, this metric does not explain the drawdown in dollars or percentages, rather in days, weeks, or months. It is the
length of time the account was in the Max Drawdown. A Max Drawdown measures a retrenchment from when an equity curve reaches a new high. It’s the maximum an account lost during that retrenchment.
This method is applied because a valley can’t be measured until a new high occurs. Once the new high is reached, the percentage change from the old high to the bottom of the largest trough is calculated.'
Using this definition on our asset we see for example:
• Compared with the benchmark SPY (488 days) in the period of the last 5 years, the maximum days under water of 1067 days of Vanguard Total Bond Market Index Fund is greater, thus worse.
• Looking at the maximum time in days below previous high water mark of 749 days over the last 3 years, we see it is relatively greater, thus worse in comparison to SPY (488 days).
'The Drawdown Duration is the length of any peak to peak period, or the time between new equity highs. The Avg Drawdown Duration is the average amount of time an investment has seen between peaks
(equity highs), or in other terms the average of time under water of all drawdowns. So in contrast to the Maximum duration it does not measure only one drawdown event but calculates the average of all drawdowns.'
Using this definition on our asset we see for example:
• The average time in days below previous high water mark over 5 years of Vanguard Total Bond Market Index Fund is 471 days, which is higher, thus worse compared to the benchmark SPY (123 days) in
the same period.
• Looking at the average days below previous high of 374 days over the last 3 years, we see it is relatively greater, thus worse in comparison to SPY (177 days).
Brittany Palandra
« on: January 19, 2019, 07:24:22 PM »
We can make the substitution $y = 3x - C$ because we are restricting $u(x, y)$ to the characteristic curves, so I believe we can treat $y$ as equal to $3x - C$ when finding the general solution. We
do this because we need the $xydx$ totally in terms of $x$ or we will not be able to integrate both sides. After integrating, we have to get rid of $C$ by replacing it with $3x-y$ again because we
want our final solution $u$ to be a function of $x$ and $y$, not of $C$. $C$ is just a constant but it is still in terms of $x, y$ by the characteristic curves.
$C$ is a constant only along integral curves. V.I.
Three Glass Puzzle, Graph Theoretical Approach
Three Glass Puzzle
Graph Theoretical Approach
Oystein Ore gave a worldly twist to the Three Glass puzzle and solved it in the framework of the Graph Theory.
There are three jugs A, B, C, with capacities 8,5,3 quarts, respectively. The jug A is filled with wine, and we wish to divide the wine into two equal parts by pouring it from one container to
another - that is, without using any measuring devices other than these jugs.
Every distribution of wine in the three jugs A, B, and C, can be described by the quantities b and c of wine in the jugs B and C, respectively. Thus every possible distribution of wine is described
by a pair (b, c). Initially b=c=0 so that one starts with the distribution (0, 0). The target distribution is obviously (4, 0).
puz(WaterPuzzle) in this case consists of all integer pairs (b,c) connected by edges wherever it's possible to move from one node to another by pouring wine between the jugs. Thus, from (0, 0) we can
move to (5, 0) and (0, 3). From (5, 0) it's possible to move back to (0, 0) but also to (2, 3) by pouring from B to C and to (5, 3) by pouring from A to C. Continuing in this manner we'll discover
that the nodes of the graph corresponding to the feasible configurations of wine are located on the perimeter of the 6x4 rectangle in the first quadrant. (Why?)
On the diagram I only showed some of the edges. In particular, there is a walk from the starting node (0, 0) to the target node (4, 0), viz.
(0,0) (A->B)
(5,0) (B->C)
(2,3) (C->A)
(2,0) (B->C)
(0,2) (A->B)
(5,2) (B->C)
(4,3) (C->A)
(4,0)
Complete puz(WaterPuzzle), as it follows from the proof of one of the statements derived previously, consists of all possible horizontal, vertical, and diagonal (upper-left-to-bottom-right) edges
that connect perimeter points.
It's perhaps relevant to note that puz(WaterPuzzle) with nodes on the perimeter of the 6×4 rectangle resembles the diagram obtained in describing a slanted cut of the torus with a rational slope. In
this case the cut here proceeds in the mirror direction of that on the torus. As I just noted all the diagonal lines connecting points on the perimeter are included. This shows that the serpentine
band is indeed of constant width.
1. O. Ore (R. J. Wilson), Graphs And Their Uses, MAA, New Math Library, 1990.
|Contact| |Front page| |Contents| |3 Jugs Puzzle| |Algebra|
Copyright © 1996-2018 Alexander Bogomolny
Jesse Goodman
I am a Postdoctoral Fellow at UT Austin, hosted by David Zuckerman. I recently received my PhD in computer science from Cornell University, where I was advised by Eshan Chattopadhyay. Before that, I
was an undergraduate student at Princeton University, and received a BSE in computer science.
My primary interests lie in combinatorics, complexity theory, and pseudorandomness.
• Improved Condensers for Chor-Goldreich Sources
Jesse Goodman, Xin Li, David Zuckerman
FOCS 2024
• Extractors for polynomial sources over F[2]
Eshan Chattopadhyay, Jesse Goodman, Mohit Gurumukhani
ITCS 2024
• The space complexity of sampling
Eshan Chattopadhyay, Jesse Goodman, David Zuckerman
ITCS 2022 [video]
• Improved extractors for small-space sources
Eshan Chattopadhyay, Jesse Goodman
FOCS 2021 [video]
• Extractors and secret sharing against bounded collusion protocols
Eshan Chattopadhyay, Jesse Goodman, Vipul Goyal, Ashutosh Kumar, Xin Li, Raghu Meka, David Zuckerman (merge of [CGGL] and [KMZ])
FOCS 2020 [video]
• Extractors for adversarial sources via extremal hypergraphs
Eshan Chattopadhyay, Jesse Goodman, Vipul Goyal, Xin Li
STOC 2020 [video]
• On the approximability of Time Disjoint Walks
Alexandre Bayen, Jesse Goodman, Eugene Vinitsky
Journal of Combinatorial Optimization 2020
Perform the following conversion and report the answer to the correct number of significant figures.
c) The estimated water content in moon rock is 0.100 % by mass. Determine the mass of moon rock needed to extract 1.00 gallon of water.
This is another density problem, because we are given a volume of water and told to determine a mass. This problem is a little more complicated. We know how many grams of water are in 100. grams
of moon rock. If the moon rock is 0.100% water by mass, then if we have 100 grams we know there is 0.1 grams of water. So how many grams of water are in a gallon of water is the next question. To
determine that we'll begin with a gallon of water and convert that to grams of water in the following way. Oh, remember the density of water is 1.0 g mL^-1.
So now we know how many mLs there are in a gallon. We can now use density to find the grams,
Using the relationship between grams of water and grams of moon rock,
0.1 g water = 100 g moon rock
we determine the mass of the moon rock which contains 3784 g of water.
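The full conversion chain can be checked with a short script (a sketch only; the mL-per-gallon constant is the standard US figure, and the 3784 g quoted above reflects rounding at an intermediate step):

```python
# Moon rock problem: 1.00 gal of water -> grams of water -> grams of moon rock.
ML_PER_GALLON = 3785.41   # standard US gallon in mL
DENSITY_WATER = 1.0       # g per mL

grams_water = 1.00 * ML_PER_GALLON * DENSITY_WATER   # ~3785 g of water
# 0.100% by mass means 0.100 g of water per 100 g of moon rock:
grams_rock = grams_water * (100 / 0.100)

print(f"{grams_rock:.3g} g of moon rock")   # reported to 3 significant figures
```

To three significant figures the answer comes out to about 3.79 x 10^6 g of moon rock.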
Brane Space
We now examine the solutions for the problems in Part 22, on series and parallel circuits.
1)Two resistances of 20 ohms and 5 ohms are connected in parallel by a student. He then connects this combination in series with a 3 ohm resistance and a battery of 1 ohm resistance.
a) Draw a conventional diagram of the circuit (i.e. not in Wheatstone Bridge format).
b) Find the resistance of the resistors in parallel.
c) Find the total resistance of the circuit.
The circuit diagram is shown above, labelled for 'Problem 1'.
b) The resistance of the 2 resistors, 20 ohms and 5 ohms, in parallel is easily found from the total for the special case of two resistors in parallel:
R = R1 R2 / (R1 + R2) = (20 ohms)(5 ohms) / 25 ohms = 100 ohms^2/25 ohms
R = 4 ohms
c) The total resistance for all contributors in series is just the sum:
4 ohms + 3 ohms + 1 ohm = 8 ohms
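The two steps can be verified numerically (a quick sketch, not part of the original solution):

```python
# Problem 1: 20-ohm and 5-ohm resistors in parallel, then in series
# with a 3-ohm resistor and a battery of 1-ohm internal resistance.
def parallel(*resistors):
    """Equivalent resistance of any number of resistors in parallel."""
    return 1 / sum(1 / r for r in resistors)

r_parallel = parallel(20, 5)     # (20*5)/(20+5) = 4 ohms
r_total = r_parallel + 3 + 1     # series contributions simply add

print(r_parallel, r_total)       # 4.0 8.0
```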
2) The photo shows a sketch of a Wheatstone Bridge Circuit at Harrison College, to be used to find the resistances of two resistors (R1 and R2) connected as shown, with two differing positions of the
galvanometer. The total resistance R is also to be taken.
A student using the diagram finds his voltmeter reads 1.0 V and the total resistance is 4 ohms when the slide wire is at the end position noted.
If the balance for obtaining G = 0 (e.g. galvanometer reading zero) is the intermediate position, and the slide wire is then at 45 cm, find the values of R1 and R2.
The total resistance from the end position (100 cm) = 4 ohms. We know from the photo that the two resistors R1 and R2 are connected in series,
so that:
R1 + R2 = 4 ohms
We are given the first position of the slide wire as L1 = 45 cm, then:
R1/ 4 ohms = 45 cm/ 100 cm
and: R1 = 0.45 (4 ohms) = 1.8 ohms
R2 = 4 ohms - 1.8 ohms = 2.2 ohms
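The slide-wire proportion generalizes: the balance point splits the measured total in the ratio of the wire lengths. A sketch:

```python
# Problem 2: slide-wire Wheatstone bridge. The balance position divides
# the total resistance in proportion to the wire lengths on either side.
R_TOTAL = 4.0             # ohms, measured at the 100 cm end position
L1, L_WIRE = 45.0, 100.0  # cm: balance point and full wire length

R1 = R_TOTAL * L1 / L_WIRE   # 1.8 ohms
R2 = R_TOTAL - R1            # 2.2 ohms

print(round(R1, 2), round(R2, 2))
```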
3) A resistance R2 is connected in parallel with a resistance R1. What resistance R3 must be connected in series with the combination of R1 and R2 so that the equivalent resistance is equal to the
resistance R1? Draw a circuit diagram of the arrangement.
We let R1 = r and let R2 = 2r.
These are in parallel so the total resistance for them is:
R(T) = R1 R2/ (R1 + R2) = r(2r)/ (r + 2r) = 2r^2/ 3r = 2r/3
We require that the total resistance in series be such that the equivalent resistance is equal to the resistance R1, or r.
Thus, we require:
R3 + 2r/3 = r
and, solving by algebra:
R3 = r - 2r/3 = r/3
The circuit diagram is shown, labelled as 'Prob. 3'.
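Note that the solution above assumes R2 = 2R1 (i.e. R2 = 2r). In general, the requirement R3 + R1 R2/(R1 + R2) = R1 gives R3 = R1^2/(R1 + R2); a quick check of the special case:

```python
# Problem 3: solve R3 + R1*R2/(R1 + R2) = R1 for R3.
def r3_needed(r1, r2):
    return r1 - (r1 * r2) / (r1 + r2)   # algebraically r1**2 / (r1 + r2)

r = 3.0                        # any illustrative value of R1
print(r3_needed(r, 2 * r))     # 1.0, i.e. r/3 when R2 = 2*R1
```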
4) Three equal resistors are connected in series. When a certain potential difference is applied across the combination, the total power dissipated is 10 watts. (Note: Power = V x I, voltage x current.)
What power would be dissipated if the three resistors were connected in parallel across the same potential difference?
Bear in mind that in the parallel case each resistor has the full potential difference across it, not one third of it as in the series case. In series the total resistance is 3r (call each resistor r ohms), so the total power is V^2/3r = 10 watts. In parallel the total resistance is r/3, so the total power is 3V^2/r, which is nine times the series power, or 90 watts.
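A quick numerical check of the series-versus-parallel power comparison (illustrative values only; any V and r giving V^2/(3r) = 10 W will do):

```python
# Three equal resistors, same potential difference V across the combination.
V, r = 30.0, 30.0                # chosen so the series power comes out to 10 W

p_series = V**2 / (3 * r)        # series: R_total = 3r, each resistor sees V/3
p_parallel = V**2 / (r / 3)      # parallel: R_total = r/3, each sees the full V

print(p_series, p_parallel, p_parallel / p_series)   # 10.0 90.0 9.0
```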
Let me say off the bat, I can't stand Mark Halperin, the overpaid hack asshole who writes a weekly column for TIME and an MSNBC "senior political analyst". I think the guy is a smug, hyper-entitled,
narcissistic little piece of sewage and a know-nothing twerp from the corporate echo chamber who has no business analyzing anything....except maybe dog poop specimens.
Be that as it may, my estimation of this putz reached even new lows this morning, while watching MSNBC's Morning Joe when the little dick said about President Obama (see also my previous blog
applauding Obama's fiery stance yesterday):
"I thought he was a dick yesterday,"
So evidently this hack-dick didn't like what Obama said, in terms of his (for once) fiery demeanor to try to get the intransigent Repukes to cooperate in a debt ceiling solution. This, as opposed to
letting the fuckers roll him, as they did with last year's tax cut extension deal.
But hey, what's new? Most of the corporate media axis including many in the alleged "liberal domains" of the WaPo and NY Times, like their Dem presidents mellow...usually too mellow...and never, ever
acting or talking as fighters. For these collective dicks, populism of any kind is terrifying and brings images to mind of people with torches and pitchforks fighting the elites for a small piece of
pie....like we now see in Greece and in England.
In this regard, the pusillanimous Halperin somewhat resembles one late alleged "dean of the Washington Press corps" by the name of David Broder who used to write for the Washington Post. He too
brayed loudly whenever a Dem showed a minuscule bit of spunk and fight. But say one thing, say the next, at least Broder possessed an IQ above room temperature, unlike the smirky Halperin! At least
Broder wrote with some flair and a modicum of gravitas when he issued his injunctions to moderation.
Meanwhile, we hear Halperin has apologized for his Obama slur, and has been "suspended indefinitely." Scarborough blamed a producer for not hitting the delay button, instead of ... blaming the person
who actually said it. But then Scarborough is another overpaid dick and the only person worth watching on his show (or hearing) is his co-host Mika Brzezinski.
What Halperin ought to have done, if he was going to use any slur starting with d at all (as in dickheads) was to aim it at the disgusting Republicans who adamantly refuse to broker a deficit
reduction deal that includes taxes. Thus, they are acting the part of sociopaths.
Hopefully, Halperin's juvenile outburst will impel Mr. Obama to even greater forcefulness in the coming weeks in his battles with Republicans.
I was finally elated to see some fire in the belly of Mr. Obama at his press conference. His eyes flared, and he aimed emotional daggers at the obstructing repukes - who remain foursquare against
raising any revenues as part of deficit reduction. But as I've repeatedly shown, any sane person or group would know we can't get to $2.4 trillion in deficit reduction by cutting spending alone! As
one British economist noted, quoted in The Economist: "Saying you can improve the economy by just cutting taxes is like saying you can run faster by cutting off a foot!"
So true! But why don't the Republicans get it, and cooperate for a true, beneficial deal for this country? Well, because they've all signed "pledges" (compliments of Grover Norquist) not to. If any
ONE of them breaks from the pack, there'll be hell to pay and not only will Norquist go ballistic, but all the other Rs will lose it as well - probably even more than my bro over me denying him his
"loan" some days ago. (At least in his mind, I actually didn't!)
This obstinacy and perverse obstructionism, refusal to genuinely cooperate, puts the Dems and Mr. Obama in a hellacious bind. As calamitous as a default would likely be, giving in to crazed, spending
cut frenzied repukes would be much worse. Bank on it! If the Ds cave on this extortion, and give in to allow ALL spending cuts with few or no tax hikes, then as I showed a few blogs ago it will mean
the implosion of the economy (due to 40-50% reduction in aggregate demand) and plausibly 18% unemployment by next summer. This will be an unmitigated catastrophe for the Democrats and Mr. Obama -
especially with the general election only months away - but the Republicans will be in 7th heaven. They will have effectively gotten the Dems and the President to cooperate in the destruction of the
economy to advance the Repukes' own partisan political agenda. It means that what transpired in November last year, that election debacle, will be like a 'tea party' compared to what will occur next
In effect, what this means is that Mr. Obama (like Mr. Clinton also facing the repukes over a gov't shutdown 16 years ago) must choose the lesser of two evils. As incredible (or appalling) as it may
sound, this means he must not merely use rhetoric - even charged rhetoric - but be fully prepared to take the Repukes to the wire, and - if it means allowing a default- then he must do it! He cannot
allow himself to submit - or the nation to submit- to this vicious blackmail by the rethugs, to this fiscal "gun" to the nation's head. (To use blogger Andrew Sullivan's turn of phrase).
The polling figures are on Mr. Obama's side, as only 8% believe he's responsible for the debt impasse and problems. What people are thus looking for, is bold and unintimidated leadership such as Bill
Clinton displayed when he stood up to similar Repug extortion threats (from Gingrich & Co.) on his watch. Then the government actually did shut down, at least temporarily. Clinton never blinked,
Gingrich and the repukes did. The same must play out this time, no exceptions. Under NO circumstances must Obama blink, or he will look like a rook, a newbe, a callow former state Senator ...and an
easy mark for all further dealings (including the extension of the horrific Bush tax cuts in 2012, an election year!)
The bottom line: Obama cannot and must not go along with the Republicans in their insane, economy-destroying spending cut orgy! Any give-in here can only be a Pyrrhic victory, at unacceptable cost and
far worse than any bond pirates might threaten! If it means default, so be it. We must also stop hearing ANYTHING, any mild words from the Ds circulating in the media, to the effect that even smacks
of Repuglican-memes and narratives. Thus, Max Baucus must be made to shut up about "Medicare cuts" (which plays into the Repuke dynamic) just as Joe Lieberman in his courting of Tom Coburn to try and
elicit $600b in such cuts. Obama needs to repeatedly remind these cheese eaters that his own Affordable Health Plan incorporates $500b in Medicare cuts and so this is already the solution.
Meanwhile, Mr. Obama himself must cease his own tax cut pandering, especially after last December's extension of the Bush tax cuts. That means ceasing forthwith any more talk of another year of
"payroll tax cuts". No, no, no and NO! As an article in MONEY magazine noted, the benefits from the existing payroll tax freeze have only been marginal and consumers continue to be defensive in their
spending. One year of such a freeze was bad enough as it denied badly needed money to support Social Security and Medicare. TWO years would be a freaking catastrophe and play right into Repuke memes
that both are approaching insolvency and lacking money! By adding an additional year of payroll tax freezing, Mr. Obama would abet this, so even if he wants it congress must turn it down. (At least
the still D-held Senate).
We will wait to see what occurs, but I want to see not just strong rhetoric but action driven by spine...plain old-fashioned intestinal fortitude as opposed to another capitulation. That means taking
the Republicans to the cliff and beyond, if need be, in order not to destroy this nation's economy! As I said...the lesser of two evils.
One of the claims often made by what I call the flat Earth or climate change denier brigade, is that volcanic emissions dwarf human fossil fuel or other related activity (e.g felling rain forests) in
terms of producing CO2. This canard has now been fairly well shattered with the recent publication of a paper, Volcanic Versus Anthropogenic Carbon Dioxide in the journal Eos: Transactions of the
American Geophysical Union( Vol. 92, No. 24, June 14, 2011, p. 201).
The essential data of the paper are shown in the accompanying graph, in which the dots show a time series of the anthropogenic carbon dioxide (CO2) multiplier (ACM) calculated from time series data on anthropogenic CO2 emission rates and Marty & Tolstikhin's (1998) study of preferred volcanic emission rates. In their paper appearing in Chemical Geology (Vol. 145, p. 233) the latter authors gave a
preferred estimate of 0.26 gigaton per year for present day global volcanic emission rate and injection. Their study encompassed CO2 emissions from divergent tectonic plates, intraplates (plumes) and
convergent plates - e.g. displaying volcanism.
Moreover as the current Eos paper observes, their computations "assessed the highest preferred minimum and maximum global estimates, making them appropriate high end volcanic limits for the
comparisons with anthropogenic CO2 emissions covered in this article".
To that extent, the Eos author (Terry Gerlach of the U.S. Geological Survey) showed from his time series that the projected anthropogenic CO2 emission rate of 35 gigatons per year is 135 times greater
than the 0.26 gigatons per year emission rate for volcanoes, plumes etc. This ratio of 135:1 (anthropogenic to volcanic CO2) is what defines the anthropogenic multiplier, an index of anthropogenic
CO2's dominance over volcanic inputs.
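The multiplier itself is just the ratio of the two quoted rates (a one-line check using the figures above):

```python
# Anthropogenic CO2 multiplier (ACM) from the rates quoted in the Eos paper.
anthropogenic = 35.0   # gigatons CO2 per year (projected)
volcanic = 0.26        # gigatons CO2 per year (Marty & Tolstikhin estimate)

print(round(anthropogenic / volcanic))   # 135
```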
Supporting Gerlach's 2010 projection and ACM data is the just-announced word from Thomas Karl of the National Climate Data Center (see The Denver Post, June 29, p. 4) that in their 'Annual State of the Climate Report for 2010', that year was "tied with 2005 as the warmest on record". According to Director Karl: "The indicators show, unequivocally, that the world continues to warm."
Meanwhile, the Eos paper puts the final nails in the coffin of the "volcanoes did it" excuse! It is also worth mentioning how the ACM data show an astounding rise in the CO2 multiplier from about 18
in 1900, to roughly 38 in 1950, which parallels the vastly enhanced use of automobiles as a primary mode of personal transport - with the planet now saddled with nearly 600 million vehicles! Every
manjack in a third world nation even seeks to own one!
Interestingly the only volcanic event which even came close to human emissions was the eruption of Mt. Pinatubo in the Philippines in 1992. It generated CO2 emission rates roughly between 0.001 and
0.006 gigaton per hour, closely approximating the anthropogenic rate of 0.004 gigaton per hour (e.g. based on 35 gigatons per year). Thus, as the Eos article observes:
For a few hours individual volcanoes may emit as much or more CO2 than human activities. But volcanic emissions are ephemeral while anthropogenic CO2 is emitted relentlessly from ubiquitous sources.
Which means human activity is a vastly more significant source of CO2 and the major reason we are approaching a CO2 concentration (taken to be from 550- 600 ppm)that marks the threshold to the
runaway greenhouse effect.
Let us hope humans are smart enough to get the message and act on it before it's too late!
We now look at series and parallel circuits using the Wheatstone Bridge method. The same set up is used as shown in the photo for Part 21 and the circuit diagram (for the slide wire set up) is in
Fig. 1. The separate circuit diagrams are shown in the accompanying diagrams (Fig. 2a and 2b) for the series and parallel cases.
Apparatus used:
i) Wheatstone Bridge
ii) d.c. voltmeter (0- 3 V)
iii) ammeter (0- 1A, 0- 10A)
iv) sensitive galvanometer
v) low voltage d.c. source (2V or under)
vi) wires for connections
Main Background points:
1. Series Circuits.
The general diagram is such as shown in Fig. 2(a). Here, separate components are connected in such a way that the source of emf (battery) forces electric current through all components (e.g. resistors) in
sequence. Once steady conditions are established the same current flows through all resistances. The total resistance is the sum of the individuals:
R = R1 + R2 + R3 + ... + R_n
where R1, R2 are the resistances of the separate resistors.
The voltage in a series circuit is such that each component causes a drop in potential and the sum of all potential drops is equal to the emf applied to the total circuit:
E = V1 + V2 + V3 + ... + V_n
Current in a series circuit is constant so the total is the same flowing through any given component:
I = I1 = I2 = I3 etc.
2. Parallel circuits
This set-up is such as shown in Fig. 2(b). In parallel circuits the current paths branch, hence are not sequential. Since several paths are available for current flow instead of just one, then the
total resistance is less than the resistance of any ONE of the alternative paths, or resistors. The total is:
1/R = 1/R1 + 1/R2 + 1/R3 + ... + 1/R_n
Thus, the reciprocal of the total resistance of a parallel circuit is just the sum of the reciprocals of the separate resistances.
Special case:
This is for two in parallel, which makes easy computation for the total:
1/R = 1/R1 + 1/R2 = (R1 + R2) / (R1 R2)
Then: R = (R1 R2)/ (R1 + R2)
The voltage in a parallel circuit is the same across any component of the circuit, as it is across the circuit as a whole, so:
E = V1 = V2 = V3 etc.
The current in a parallel circuit is the sum of all the currents flowing in the separate components, i.e.
I = I1 + I2 + I3..+
Procedures for experiment
A) Series circuit.
Use the Wheatstone Bridge slide wire set up (Fig. 1) to determine the resistance of each resistor and use an emf no larger than 1.0 V. Connect all resistors in series and first measure the total resistance with the Wheatstone Bridge (R_T). Connect the ammeter and voltmeter across each as indicated (Fig. 2(a)) to obtain resistance based on Ohm's law, R = V/I.
B) Parallel circuit.
Use the same Wheatstone Bridge, but now connecting all the resistors in parallel using the resistance board (see, e.g. the left side of the photograph in Part 21 of the set up). Use the voltmeter and
ammeter to take all necessary readings as per Part (A) but now with Rs in parallel. Take care in selection of meter readings so accurate readings will be obtained (preferably to three significant figures).
Practical Problem:
A student at Harrison College uses the Wheatstone Bridge method to connect up resistors in parallel analogous to Fig. 2(b) He uses his ammeter and voltmeter (0- 1V) to obtain the following readings.
For R1: I1 = 0.50 A, V1 = 0.75 V
For R2: I2 = 0.25 A, V2 = 0.75 V
For R3: I3 = 0.75 A, V3 = 0.75 V
a) Find the individual resistances based on his measurements.
b) Find the total resistance.
a) Using Ohm's law, one obtains:
R1 = V1/ I1 = 0.75V/ 0.50A = 1.50 ohms
R2 = V2/ I2 = 0.75V/0.25A = 3.0 ohms
R3 = V3/I3 = 0.75V/ 0.75A = 1.0 ohms
b) The total for resistors in parallel:
1/R = 1/ R1 + 1/R2 + 1/ R3
1/R = 1/ 1.5 + 1/3 + 1/1
1/R = 1/(3/2) + 1/3 + 1 = 2/3 + 1/3 + 1 = 2
So: R = 1/2 = 0.5 ohms
Check: Is this smaller than any of the individual values? Yes!
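The whole practical problem can be verified in a few lines (a sketch: Ohm's law per branch, then the reciprocal sum):

```python
# Practical problem: (I, V) meter readings for each parallel branch.
readings = [(0.50, 0.75), (0.25, 0.75), (0.75, 0.75)]

resistances = [v / i for i, v in readings]       # [1.5, 3.0, 1.0] ohms
r_total = 1 / sum(1 / r for r in resistances)    # 0.5 ohms

print(resistances, round(r_total, 3))
assert r_total < min(resistances)   # sanity check: smaller than any branch
```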
Other Problems:
1)Two resistances of 20 ohms and 5 ohms are connected in parallel by a student. He then connects this combination in series with a 3 ohm resistance and a battery of 1 ohm resistance.
a) Draw a conventional diagram of the circuit (i.e. not in Wheatstone Bridge format).
b) Find the resistance of the resistors in parallel.
c) Find the total resistance of the circuit.
2) The photo shows a sketch of a Wheatstone Bridge Circuit at Harrison College, to be used to find the resistances of two resistors (R1 and R2) connected as shown, with two differing positions of the
galvanometer. The total resistance R is also to be taken.
A student using the diagram finds his voltmeter reads 1.0 V and the total resistance is 4 ohms when the slide wire is at the end position noted.
If the balance for obtaining G = 0 (e.g. galvanometer reading zero) is the intermediate position, and the slide wire is then at 45 cm, find the values of R1 and R2.
3) A resistance R2 is connected in parallel with a resistance R1. What resistance R3 must be connected in series with the combination of R1 and R2 so that the equivalent resistance is equal to the
resistance R1? Draw a circuit diagram of the arrangement.
4) Three equal resistors are connected in series. When a certain potential difference is applied across the combination, the total power dissipated is 10 watts. (Note: Power = V x I, voltage x current.)
What power would be dissipated if the three resistors were connected in parallel across the same potential difference?
As President Obama begins more negotiations with the GOP's stalwarts on the handling of the debt ceiling increase, he needs to bear in mind (and take to heart) Sen. Bernie Sanders recent exhortations
to "stand tall" and refuse to make any quick and easy deals to avoid confrontation. Sanders then referred to the tax reform dealings back last fall, and how Obama got gamed by the repukes. He can't
let that happen this time around, as his re-election may well hinge on him showing the determined leadership a President is supposed to have. Obama's no longer a lowly state senator, and he needs to
act the part. Also, he can't be afraid to piss people off, even the Repuglicans!
The Rethuglicans' bad faith is the problem at heart. If they have all signed "pledges" not to raise taxes, ever, then Obama's hands are putatively bound in the making of ANY deficit reduction deal!
He simply cannot be seen to give in to one-sided demands, nor to lopsided demands! The burdens must be shared between higher revenue (taxes) and spending cuts, not all on the latter, or even
four-fifths. (Some ideas circulating have increased taxes at $400b, but spending cuts more than $1.6 b. This is nonsense and insane! It must be at least HALF and HALF! Also why I refuse to give any
more $$ to the DSCC or any other Dem organizations until I see some evidence of spine. So far I don't!)
Anyway, an essay to read carefully is the recent one entitled Our Greek Tragedy, appearing in TIME (July 4, p. 26) by Rana Foroohar. She writes in one segment that I advise all Repug followers to
"There's still a belief that the government can cut spending wholesale and expect consumers to pick up the slack. This is magical thinking!"
Indeed, it is! The reason is that the Republicans in their grossly stubborn behavior, don't appreciate or understand the economic concept of aggregate demand. It is the index of aggregate demand that
ultimately determines investment potential, and also unemployment and whether an economy is "paralyzed" - as ours seems to be (as disclosed by the constant references of pundits).
Aggregate demand is composed of two parts: 1) demand generated by consumers for goods and services, and 2) the demand for investment goods. When the level of aggregate demand is high, both these
components are generally equally high, and the levels of production and employment are high. On the other hand, when aggregate demand is low - or even one of the components (e.g. (1)) is VERY low,
then levels of production and employment plummet. Right now we are seeing a tabulated rate of 9% unemployment and more like 16% real unemployment which is signaling that the aggregate demand is low.
In addition, we see almost no movement of corporate dollars (now nearing two trillion) to invest in labor or labor infrastructure - to enable more workers to be hired. Thus, also low demand for labor
investment goods.
The two are clearly feeding on each other.
Now, the next thing, why is consumer demand still low, falling almost quarterly? It is because the wages that support it are low! The average workers' wages have remained static or gone down since
the recession theoretically ended. Thus, consumers are buying less, and resorting to unusual savings devices (like extreme coupon clipping) when they do buy! Also, when people eat out now they're
more likely to get meal coupons off Groupon.com then go to the cheapest place that takes them, and after eating ding the business by leaving zero tip.
In other words, the demand side of the landscape looks so poorly for consumers they are hunkering down...likely expecting another shoe to drop. That "shoe" - when they pay attention to the news,
sounds like a severe cut in possible social services, thanks to the Republicans' rhetoric. If they foresee such cuts, say even for Medicaid for sick granny or Uncle Tim, then they will pull back on
spending in case they will need the money to help care for them! Ditto with seniors on Medicare, if they suspect cuts are in the works (as Sen. Max Baucus has intimated). Thus, they also pull back on
their spending, or even going out to eat, say at Bob Evans.
The total effect is to put more jobs (at those places, services) under pressure, and possibly engender more job loss. Thus, the mere mention of the GOP's mammoth (~$2 trillion) spending cuts puts
consumers under such potential threat that they already psychologically act as if it's happening. Meanwhile, nothing is being said forcefully (even by the nominal opposition) to defend higher taxes
and revenues that would save programs - Pell Grants, Medicare, Medicaid, environmental laws...or whatever.
The word is even leaking out that the Repukes want spending on water regulation cut, meaning an epidemic of cryptosporidium - such as struck Milwaukee in 1994- could easily occur again!
Meanwhile, investor demand for investment goods is largely hinging on their optimism or not. If investors are pessimistic (as many are now...because of the volatility of stocks) then they will
withhold their investments.
Here's a numerical example of how this all works. Assume we have "full employment" (e.g. 4%) and it generates a total of $1,000 worth goods and services in a day. This is also the sum total of the
profits and incomes the employees and employers share. Let households comprised of workers and employers use a large fraction (e.g. 90% or $900) of their income to purchase goods and services for
consumption. The remaining 10% or $100 is saved but eventually purchased by investors as investment goods.
Now, say an agent or effect appears (e.g. states cutting pensions and benefits) which causes consumers to pull back on their spending such that the $1,000 becomes $900. Then, with an income of only
$900, the consumption is also reduced, say to only $800. $100 in "savings" accrues but investors are so pessimistic and traumatized by the consumption decline that they only purchase $50 of the $100. In this case, the aggregate demand has shrunk 15% from $1000 ($900 consumption + $100 investment) to $850 ($800 consumption + $50 investment).
In this way, the stage is setting up for a major financial disaster, and a new recession or even depression.
Let's say at this point (as shown above) spending cuts -mammoth ones - are now imposed by a derelict government which thinks it can get a handle on deficits almost solely via cutting. Then, consumers
will pull back even further and incomes will drop to the 50% level or $500. They will still spend $450 on necessities, and investors will not budge from their $50 investments. The total of aggregate
demand is now $500 ($450 + $50) or 50% of the original. But because of the contraction there is no accumulation of inventory (unsold goods) even as services such as bars & restaurants go out of
business for lack of clientele (after all, only 50% of the original income is now available!) We do see a kind of equilibrium restored, but because production is now at only 50% of the full
employment level, unemployment is now 50%.
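The three stages of the illustration can be tabulated in a few lines (a sketch of the numbers used above, not an economic model):

```python
# Aggregate demand = consumer demand + investment demand, per the example.
stages = [
    ("full employment", 900, 100),    # $900 consumption + $100 investment
    ("consumer pullback", 800, 50),   # incomes fall; investors buy only $50
    ("after mammoth cuts", 450, 50),  # incomes halved; spending settles at $450
]

base = stages[0][1] + stages[0][2]    # $1000 at full employment
for name, consumption, investment in stages:
    demand = consumption + investment
    print(f"{name}: ${demand} ({demand / base:.0%} of full employment)")
```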
As a comparison, the maximum unemployment in the Great Depression hit roughly 25%. The above scenario, played out on the national stage next year, shows a projected net loss of aggregate demand in the neighborhood of 40-50% if the Repukes get their way before Aug. 2nd and $2 trillion in nothing but spending cuts is used to "solve" the deficit and enable the debt ceiling to be raised. This
translates into an unemployment level of more than 18% this time next year (30% real unemployment) and a new recession bordering on depression. Think the repukes will be happy? Hell, they'll be
having wet dreams at the prospect of such an abomination (which they deliberately created) within months of the general election.
THIS is what we face if the Republicans get their way, and no taxes are a significant part of a deficit reduction package. This is why Obama has to finally take off the kid gloves and put on the
brass knuckles! He's fighting as much for his own 2nd term as he is for the nation! That means not taking any repuke shit, but rather dishing it out to them! Acting "Grownup"? Hell, grown-ups don't
allow themselves to be pushed around by 2-bit punks with an attitude.
Some ten months ago I blogged about my second brother “Donnie” (pseudonym) and his gambling problem – actually an addiction- and the serious problem of giving money away to him, when you know damned
well it’s enabling his habits. I called this a moral hazard. See, e.g.
To recap briefly,I told him to find something constructive to do, or anything - that would keep his mind occupied and away from slots. I also told him that given his already known heart problems, he
needed to knock off ALL smoking. I informed him that I certainly couldn’t or wouldn’t be subsidizing his ill health problems brought on by his bad habits, nor should he expect me to. He was now a
“big boy” (62 years old)and needed to take charge of his life, as opposed to endlessly copping for handouts to support his smoking, gambling, or pure laziness. As I pointed out to him, he’s getting
$2200 a month for doing absolutely nothing, and others are busting their humps across the nation, often working TWO scut jobs and not earning near as much as he receives gratis from the gov't. So,
there was NO excuse to graft for money, especially from siblings who have their own financial issues and budgets to deal with.
At that time, after begging for $40 over the phone for something, and my refusal, he let loose a stream of expletives and asserted he “disowned” me. However, a couple of months later the relationship
was tentatively repaired (at least nominally) and I sent him a Xmas gift of $20 (which in hindsight I perhaps ought not have done, but I’d hoped he’d see the difference between receiving a gift and a
“loan”). I told him I no longer give loans because I never receive them back! A point reinforced in a recent issue of MONEY magazine, wherein their finance advisor said under no circumstances give
out family loans unless specific conditions are met, including: setting a repayment deadline date, interest -if any- to be paid if deadline is missed. Since I knew none of this would wash with
Donnie, I declined all loans to him period, but left the room open for gifts. Obviously, the latter would only occur at most twice a year: for his birthday and Xmas!
Sure enough, in late May he returned asking for a “loan” of $25. Again, I made it clear to him I no longer give loans, period. However, I would give him a $25 gift for his 63rd birthday (on June
9th). I figured this would be the end of it until Xmas but I was fooling myself. Anyway, soon thereafter he suffered an apparent heart attack and was told by the doctors to shape up (again stop
smoking) and the best thing would be to have open heart surgery right there and then – as they suspected major blockage. Donnie declined, fearing such surgery and opting instead for an expensive
Plavix regimen which also required catching taxicabs to purchase it at distant private pharmacies, at high cost (according to his version). I told him a more practical solution to his problems was
simply to get the operation.
Now, rejecting the latter was a choice he made. A deliberate choice. Knowing he has limited finances, or so he claims (though he does have VA benefits!), and also trots out other reasons why he "must
eat out and not cook", must take cabs everywhere - not buses, can't save money etc., WHY choose the most expensive heart health path? Just where the hell do you think you are going to get the money?
DO you think you can endlessly tap family, brothers, forever?
So two weeks ago he wrote and asked to borrow $50 for “cab fare” to go and have his blockage scan appointment. This is now $75 asked for in the span of about a month! For five days I deliberated over
this, swinging back and forth between whether this was another load of codswallop (designed to provoke sympathy and get money to buy smokes or gamble) or whether it was for real. After two more days
of thrashing it out, I opted to write him in a short letter: “You can have this $50, not as a loan, but outright – provided this is the last time you ask for any money”. I also asked him to put it in
writing, then I'd send the money.
Two days after that, I received a letter from him, but not in reply to the one I sent (because that would have been too rapid). This letter automatically assumed there’d be no money coming and let
loose a barrage of hate and vitriol that caught me by surprise, and had my wife shaking her head as she read it. My first reaction was, Who the bloody fuck does this little shit think he is? Does he
believe he's entitled to my money? How dare the little fuck launch into a venomous tirade including 'burning forever in hell' merely because I declined to gratify his wishes (at that time, though as
noted I did send a letter offering the $ but with strings). As for my wife, her immediate response was:
“THIS is your brother?” she asked. “All you did is turn him down for a loan!”
I replied that I didn’t even do that, since I offered him the money but with one major string attached, never to ask again! However, he decided to attack me and spit in my face, before even getting
it! I posted two segments to show some of the hostility in the letter, which I will use to make a point (Note: most of his letter was omitted because I wouldn’t put the content on a public blog. I
put these sections on because they're relevant to my arguments. Also, if anyone at anytime sends me anything, I consider it fair game for my blogging – especially if what they send has drastic
negative connotations, displays an openly hateful or vitriolic attitude or whatever. Be sure you know what the hell you’re doing before sending me anything, especially material that can be scanned
and published! I regard everything sent to me in my personal domain, for personal use - howsoever I see fit!)
As to his letter(any other names redacted to protect those he involves), note his first words are: ‘Hey X-Brother!’ (He doesn't even know the appropriate form is 'Ex-Brother')
But in truth and fact Donnie has never been a brother. Oh, he is by blood, but he lacks the emotional wiring to relate as anything other than a perpetual snake oil salesman, con artist, beggar and
grafter. Looking back all through the years at about a dozen contacts he initiated (as opposed to me) at least ten of them were purely to graft for money, or beg. He begins letters cordially enough,
but then doesn’t even waste one more paragraph before the money begging commences – and it’s always for a “loan” despite the fact he’s never paid them back. (I take that back. He did “pay one $150
loan back” from ten years or so ago, by giving me a set of laminated 1980s baseball and football cards). So obviously, loans have no meaning for him.
Note also from the letter the sense of absolute entitlement. He feels – by his words and attitude- he’s fucking entitled to MY money! Note the violent reaction is almost as if I was the one who had
ROBBED HIM of $50, when all I did (in his mind) was decline to give it to him. (But again, I had written him to make arrangements to give it provided he agreed to no more asking at all in the future.)
Thus, his whole reaction – including the ‘burn in Hell” bullshit at the end, is totally out of proportion to the stimulus of a perceived rejection!
Then, the accusations of being “cheap” are priceless, as is his woefully wrong perception that $50 is nothing much. In fact, some people almost kill themselves in tough manual jobs to earn that in
one day! Think of Florida’s sugar cane cutters. So, what he is really assuming is that parting with $50 is nothing, no big deal for me.
“Cheap”? Only a purblind, half-monkey idiot would believe that judiciously managing one’s expenses and cash outflows is “cheap”. The sad thing is he isn’t capable of doing it himself. Money is “cheap”
to him because he disrespects it (e.g. "you can't even give me a lousy fifty bucks" on p. 2), and uses it as if it has no value. He pisses it away senselessly on smokes, slots or whatever…then
expects others to bail his sorry ass out when he exhausts the greenbacks. Then, when others who manage their money deny him the use of theirs to piss away, he loses his composure and unleashes a
barrage of hate. Showing, of course, he was never a true brother to begin with, but a pathological hollow man or cipher: a pretender, user, manipulator and exploiter merely sharing the same surname.
His other bellyaching about his funeral and “advanced directives” is also choice. The sad fact is that Donnie’s choices are totally setting him up to die alone, by himself, and in a pauper’s grave.
And with NO one there to see him off.
But in the end, that will have been his choice, as it was his uncle David’s – another guy (my dad’s younger brother) who could never handle money, never use it properly and was hostage to alcohol and
gambling addictions. I can still recall when he came around our home in Milwaukee in 1954 and stayed a few days, before he started begging for money "to find a job downtown", and came in drunk late
one night. Dad asked him to leave the next morning, since there was no place for a drunkard with five young kids in the house. He never returned and died some years later, of alcohol poisoning.
"Donnie" in a similar way has already left whatever family he once had, and he did it by his own choice, as David actually did. As for those who say I ought not write about “family”, sorry! When they
ask for it, and act in certain ways, there are no holds barred. If they don’t wish to be blogged about, then they shouldn’t send vitriolic letters through the mail. Especially when they ensure they
can never see my replies (e.g. by sending my letter back without reading). "Family" is too often a cover to do whatever, use emotional extortion and get away with every and any thing (or invoke as a
pretext to let others, fellow siblings etc. get away with anything!). To me, family means a constellation of people who can relate to you, who often have your back (and don't talk behind your back!)
and may or may not be blood relations. I have a rather large extended family (over 255 people), thanks to my wife's huge clan, as well as a long time friend of 40 years. I know in a pinch I can count
on any of them to have my back - and not stab me in the back! In the end, that's what real family is about! Not the extent to which selfish demands can be appeased at the drop of a hat - at the beck
and call of an unreconstructed, overaged sibling brat!
We now undertake the solutions for the problems in Part 21, Introducing Basic Physics: Simple Electric Circuits. Each problem will be given, then its solution.
1) A 12 V battery has an internal resistance of 2 ohms. If it is connected in series with a voltmeter and another resistance R = 4 ohms, what would the voltmeter read? What would an ammeter read
placed in the same circuit?
The circuit set-up is shown for Problem (1). We first need to find the ammeter reading:
I = E/ (R + r) = 12V /(4 ohms + 2 ohms)= 12V/ 6 ohms = 2 A
Then the voltmeter (V) reading may be obtained from Ohm's law for the circuit:
V = I(R) = 2A (4 ohms) = 8V (at the position indicated)
This may also be validated by use of:
V = E - Ir = 12V - (2A)(2 ohms) = 12V - 4V = 8V
(e.g. 4V of total emf is lost through the source)
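As a cross-check, the two meter readings can be computed directly from the relations used above (a minimal sketch, not part of the original solution):

```python
# Problem 1: 12 V battery with internal resistance r = 2 ohms,
# in series with an external resistance R = 4 ohms.
E = 12.0   # emf, volts
r = 2.0    # internal resistance, ohms
R = 4.0    # external resistance, ohms

I = E / (R + r)       # ammeter reading: I = E/(R + r) = 2 A
V = I * R             # voltmeter reading across R: 8 V
V_check = E - I * r   # same value via the terminal-voltage relation

print(I, V, V_check)
```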
2) A Wheatstone Bridge circuit is connected as shown in Fig. 1(b). The galvanometer is found to read zero when point C is located exactly midway along a wire 1m in length (e.g. connecting A and B in
the diagram). A known resistance coil R is used which is made of copper (rho = 1.72 x 10^-8 ohm-m) and is 50 m long, wound tightly in a coil.
a) If the cross sectional area A = πr^2 and r = 0.001m, find the value of R(x).
The experimental circuit is shown for Problem 2, with L1 and L2 denoting the respective lengths.
We first need to obtain the known resistance, but this must be done using the resistivity of the wire that's given (rho = 1.72 x 10^-8 ohm-m) in conjunction with the resistance as a function of
resistivity eqn.
R = rho(L)/A = rho(L)/ πr^2
R = (1.72 x 10^-8 ohm-m)(50 m)/(π x (0.001 m)^2) = 0.27 ohms
And, from the Wheatstone Bridge set up:
R(x)/ R = L1/L2
But since: L1 = L2 = 50 cm, then:
R(x) = R (L1/L2) = R (1) = 0.27 ohms
b) If a new resistor R made up of 100m length of the same copper wire is then used, then how must the lengths L1 and L2 change to achieve a galvanometer reading of zero?
We assume the only change made is to R, and R(x) is still 0.27 ohms. Then only the lengths L1, L2 will vary.
The new known resistance, call it R' = rho(2L)/A = 2R = 0.54 ohms
(since 100m = 2(50m))
R(x)/ R' = L1/ L2 = 0.27 ohms/ 0.54 ohms = 0.5
So: L1 = 0.5 (L2) or L2 = 2L1
The total length is 1m, so:
L1 + L2 = 100 cm
substituting for L2 (2L1):
L1 + 2L1 = 3L1 = 100 cm and L1 = 100cm/ 3 = 33.3 cm
So: L2 = 100 cm - L1 = 100cm - 33.3 cm = 66.7 cm
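Both parts of Problem 2 can be verified numerically from the same relations, R = rho L / (pi r^2) and the balance condition R(x)/R = L1/L2:

```python
import math

rho = 1.72e-8    # resistivity of copper, ohm-m
L_wire = 50.0    # length of the known coil, m
radius = 0.001   # wire radius, m

A = math.pi * radius**2
R_known = rho * L_wire / A        # ~0.27 ohms

# (a) bridge balanced at the midpoint, so L1 = L2 and R(x) = R
L1 = L2 = 50.0                    # cm
Rx = R_known * (L1 / L2)

# (b) doubling the wire length doubles R; rebalance the 100 cm slide wire
R_new = 2 * R_known
ratio = Rx / R_new                     # = L1/L2 = 0.5
L1_new = 100.0 * ratio / (1 + ratio)   # 33.3 cm
L2_new = 100.0 - L1_new                # 66.7 cm
```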
William Bonney, aka "Billy the Kid", was a loathsome scumball, one of the worst vermin to inhabit the West. Before he was gunned down (by Pat Garrett) he had one of several last photos taken about
130 years ago, in a format called a "tintype" today. The photos, slightly larger than modern baseball cards, should all have found their way into sewers or dumpsters by now, but one actually made its
way to a Denver Auction.
There it was purchased....get this... for TWO MILLION DOLLARS! This sent waves of absolute revulsion through me, when I considered what that sum of money could otherwise have accomplished...how much
good it could have done... as opposed to pissing it away for the 130-year old image of a rat. For example:
- Paid twenty secondary school teachers' salaries for two years
- Paid forty school librarians' salaries for 4 years (many school librarians are now being laid off across the nation as states come to grips with budget woes)
- Paid for health care, and needed (overdue) medical treatments for one thousand homeless adults.
- Purchased six months worth of nutritious meals for 100 homeless kids.
- Enabled construction of a 40-room shelter for homeless families.
But what was the money used for? To purchase a little tintype photo of one of the worst villains of the Old West!
Later word has it that the purchaser was one of the billionaire Koch brothers. The same Koch brothers who use their other extra, excess monies to fund the Tea Party's exploits. (Though most Tea
Baggers don't have clue one who's behind their agenda!)
It is now time, given that they have so much EXTRA, unneeded money to piss away, to tax the bastards to the hilt! That's why I now propose increasing the marginal tax rate at the top to what it was
during the Eisenhower years: 91%. Also, bring back the estate taxes and put all kinds of provisos on them (i.e. no gifting beforehand to family members, trusts etc.) to ensure all gets paid to Uncle Sam.
Anytime a rich guy can just piss so much away it discloses he has excess money, in fact more than he reasonably knows what to do with....or NEEDS. I make the same claim for the Koch bro that purchased
the Billy the Kid photo, as I do for the other rich dude that paid $6 million (at an auction sponsored by Debbie Reynolds) for the dress Marilyn Monroe wore (when the subway draft blew it up around
her waist) in 'Seven Year Itch' (1955) and the other character that tossed out $1.8 million for Michael Jackson's favorite jacket.
All this money wastage shows the rich have way too much...as well as time on their hands.
Of course, with the repukes in power, and now yammering incessantly to give the richest even BIGGER tax cuts (while they take Medicare from poor seniors and replace it with useless vouchers), we will
have to expect even more obscene purchases that will only make normal humans wince at the chutzpah, arrogance and depraved flaunting. This is the Republicans' plan: to give each rich millionaire
another Lexus each year in tax cut equivalents, and each billionaire enough to purchase a new Lear jet.....or maybe....the last photo ever taken of Adolf Hitler before heading into his Berlin bunker
as the Russians approached.
I am sure the next billionaire who buys that will have much to share in common with the subject!
In this instalment we begin the examination and investigation of simple electric circuits, starting with some basic rules and principles and very simple experiments to determine the resistance, R, in
a circuit. There are two methods that can be used: one I call "simple", otherwise known as the ammeter-voltmeter method, and the other which is more complex (shown in Fig. 1(b)). The latter's actual
practical set-up is also shown in Fig. 2.
Generally, at Harrison College, we employed the simple A-V method for the introductory physics students in the 3rd form (equivalent to the U.S. 9th grade) and used the Wheatstone Bridge method for
the Upper fifth form (equivalent to the U.S. 12th grader, or HS senior). Thus, by the time the Barbadian physics student arrives at his senior year he's already been well exposed to simple electric
circuits, and knows how to set up both series and parallel circuits (which we will see in the next instalment).
Some Basics:
1. Ohm's Law:
In effect, this is what the student is really seeking to show in the experiment:
I = V/R
where I is the current (in amperes) and V the voltage in volts, while R is the resistance in ohms. In terms of the units then:
Amperes (or 'amps') = Volts/ Ohms
2. Internal Resistance
Technically speaking, every battery, or source of emf (electro-motive force), also has an internal resistance, r. Thus, the theoretical emf (E-t) will always be larger than the actual, measured emf, E.
And hence also, any terminal voltage in the circuit (call it V(ab)) will be less than E. Thus:
V(ab)= E - Ir
or, if an additional resistance R is connected:
E - Ir = IR, or
I = E / (R + r)
The fine points of (2) are usually not introduced until the upper fifth form.
3. Resistivity
This features prominently when lengths of resistance wire come into play, since the resistance will change depending on the length. If rho denotes the resistivity of a material, e.g. a metal wire, and
the wire has length L, then the resistance R (in ohms) is:
R = rho (L)/ A
where A is the cross-sectional area of the wire.
Again, (3) is a consideration usually left until the more advanced levels.
I. Ohm's Law by the Simple Method
The student is issued an ammeter (with letter I adjacent, for current) to measure the amperes, and a voltmeter (with E adjacent) to denote volts or emf at that point, and a set of 5 unknown or test
resistors, as well as a battery or other emf source.
For each test resistor inserted, the student records the voltage and current in volts and amps, then obtains the resistance using:
R = E/ I = V/I
At the end of the experiment the student is given the actual resistor values which he must compare with the empirical or test values he found, and then estimate the percentage error for each.
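A short sketch of the data reduction the student performs, using invented (V, I) readings and nominal resistor values purely for illustration:

```python
# Each tuple: (measured volts, measured amps, nominal ohms).
# All numbers here are made up for the sake of the example.
measurements = [
    (6.0, 0.61, 10.0),
    (6.0, 0.31, 20.0),
    (6.0, 0.13, 47.0),
    (6.0, 0.059, 100.0),
    (6.0, 0.027, 220.0),
]

for V, I, nominal in measurements:
    R_emp = V / I                                   # empirical R = V/I
    pct_err = abs(R_emp - nominal) / nominal * 100  # percentage error
    print(f"R = {R_emp:.1f} ohms, error = {pct_err:.1f}%")
```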
II. Wheatstone Bridge Method:
This circuit is connected as shown in diagram Fig. 1(b). In this case, the unknown resistance R(x) is wired in as shown. The instrument denoted by (G) is a galvanometer. The known resistance is
denoted R, and the student adjusts or moves a clip along a length of wire from A to B yielding different lengths L1 and L2. At specific points where 'C' is located on AB, the galvanometer will read 0
(N.B. the value R should be chosen beforehand so that point C falls on the middle third of AB when G reads zero).
Practical example:
In performing the Ohm's law experiment (to find an unknown resistance R(x)) a student at Harrison College makes the following measurements:
Length AC = 35 cm
Length CB = 65 cm
R = 5 ohms
G = 0
Using this data, find the value of R(x):
R(x) / R = (AC)/ (BC)
R(x)/ 5 ohms = 35/65 = 7/13
Therefore: R(x) = 5 ohms (7/13) = 2.7 ohms
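The balance-condition arithmetic is one line of code (numbers taken from the example above):

```python
R = 5.0     # known resistance, ohms
AC = 35.0   # cm
CB = 65.0   # cm

Rx = R * (AC / CB)   # R(x)/R = AC/CB at balance -> ~2.7 ohms
```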
1) A 12 V battery has an internal resistance of 2 ohms. If it is connected in series with a voltmeter and another resistance R = 4 ohms, what would the voltmeter read? What would an ammeter read
placed in the same circuit?
2) A Wheatstone Bridge circuit is connected as shown in Fig. 1(b). The galvanometer is found to read zero when point C is located exactly midway along a wire 1m in length (e.g. connecting A and B in
the diagram). A known resistance coil R is used which is made of copper (rho = 1.72 x 10^-8 ohm-m) and is 50 m long, wound tightly in a coil.
a) If the cross sectional area A = πr^2 and r = 0.001m, find the value of R(x).
b) If a new resistor R made up of 100m length of the same copper wire is then used, then how must the lengths L1 and L2 change to achieve a galvanometer reading of zero?
Jason Lisle, in his Ph.D. dissertation ('Probing the Dynamics of Solar Supergranulation and its Interaction with Magnetism'), makes the claim that by using Michelson Doppler Imager (MDI) velocity
data (from the SOHO spacecraft) in concert with local correlation tracking techniques, he was able to develop adequate "refinements" in the latter and thereby excavate enough signal to ascertain a
"persistent N-S alignment" which is taken to be a polarity preference associated with the supergranules. As I intend to show, this conclusion is suspect and doesn't hold up when the foundational
assumptions (and error techniques) are examined more closely.
One thing any solar researcher or worker ought to be able to relate to, and to concede, is the ever present trap of selection effects. These can often creep into an investigation even conducted with
the best intents. As an example, in my (1980-86) investigations of the origin of SID flares associated with certain active regions, an early conclusion was that the most geo-effective (i.e. able to
disrupt terrestrial communications, or cause the most intense sudden ionospheric disturbances) were associated with the most rapidly growing and magnetically complex sunspots. (Fig. 1). These were
typically the largest delta -class spots with numerous magnetic polarity intrusions (e.g. one magnetic polarity intruding deeply into another causing large magnetic field gradients).
However, on performing a deeper analysis which included cross-referencing all SIDs to all optical flares (of all classes) appearing on all available H-alpha films (e.g. over all Carrington rotations
in 1980), and validated via the x-ray signatures from the SMS-GOES satellite, it was found this was a premature conclusion. Indeed, contrary to the overall rubric that only large area, delta spot-
populated ARs spawned powerful SIDs (and hence associated SID flares) I found that nearly 35% of all major SID flares (which generated the largest SID effects) were associated with optical subflares.
(Solar Phys., Vol. 92, p. 259). In other words, the counter-intuitive finding was that just over one-third of the largest SID flares arose from the smallest energy optical flares (typically 10^21 J
). A test of the sampling errors using the coefficient of determination confirmed this.
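For readers unfamiliar with it, the coefficient of determination r^2 can be computed from first principles; the data below are invented for illustration and are not the original SID-flare measurements:

```python
def r_squared(xs, ys):
    """Coefficient of determination for a simple linear relationship:
    r^2 = Sxy^2 / (Sxx * Syy), built from deviations about the means."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

# made-up, nearly linear data: r^2 should come out close to 1
r2 = r_squared([1, 2, 3, 4, 5], [2.1, 3.9, 6.2, 7.8, 10.1])
```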
I point this out because very early in his dissertation (p. 17), Lisle concedes an inability to properly resolve granules via his MDI data, though he does make an appeal to local correlation tracking
(LCT) as a kind of savior since it allows motions to be deduced from the images even when individual moving elements aren't well resolved. (I.e. the granules are the individual elements or units of
supergranules). The question that arises, of course, is whether the product of such LCT manipulation is real, or to put it another way, an objective physical feature that is not associated with
instrumental effects, distortions or errors.
Exacerbating this suspicion is Lisle's own admission (ibid.) that LCT "suffers from a number of artifacts". One thing any worker in the field ought to know is that any time an artifact appears,
however it does so, it's time to put on the warning radar. (Lisle evidently does this but as I will show, this isn't enough). In Lisle's case the warning is sounded by the emergence of a "large-scale,
disk centered convergence anomaly". He notes this is due to a selection effect on granules and the anomaly can be mitigated by further treatment.
This is normally attainable, but again, removal of scale selection effects depends on the scale of the anomaly- especially in relation to the sought after signal. In my SID flare studies at a
particular scale, it appeared that solar spicules in the vicinity of large, delta-spot penumbra were scenes of emergence of optical subflares that spawned major SIDs. But when all the incidents of
these correlated spicules were assembled, then subjected to a Fourier analysis (and compared with velocity field and vector magnetogram data) the signal vanished. Evidently, aberrantly bright
spicules (in the centerline) were erroneously recorded as optical subflares though they were nearly 10-15 x less in energy. (Correction could be made when the spectrum was observed in the wings,
which disclosed darkness). In this sense, the anomaly manifested itself as a much diminished signature which appeared more energetic than the actually sought signal, while in Lisle's case, the
anomaly was already much larger than the sought signal scale (by about ten times). In their Astrophysical Journal paper (Vol. 608, p. 1170), 'Persistent North-South Alignment of the Supergranulation'
, Lisle, Rast and Toomre assert that:
supergranular locations have a tendency to align along columns parallel to the solar rotation axis.... this alignment is more apparent in some data cubes than others
They also add: The alignment is not clearly seen in any single divergence image from the time series of this region (Figs. 1a–1b) but can be detected when all 192 images comprising that series are averaged.
This also ought to have raised some suspicions as to whether it is really a spurious signal emerging in the series averages. For example, an averaging of SID-flare intensity data would show that the
original (spurious) conclusion I gave above was the appropriate one, i.e. "only large area, delta spot-populated ARs spawned powerful SIDs (and hence associated SID flares)".
Going back to the same Ap.J. paper, the authors write:
The anisotropy observed in the time-averaged image of Figure 1c is not a property of the individual supergranules but is instead due to a weak positional alignment, producing vertical striping with
nearly the same horizontal length scale as the supergranulation itself.
The north-south alignment of the solar supergranulation is observed only after long temporal averages of the flow. After 8 days of averaging, the striping is quite strong, while shorter averaging
times show the effect less well.
In Figure 3a, s[x], s[y] , and their ratio s[x]/s[y] are plotted as a function of averaging time. The plots show the average value obtained from two independent well-striped 15 deg x 15 deg
equatorial subregions of the data set
In the above, s[x] and s[y] refer to the rms or root mean square errors.
Now, the key link to the persistence of the "polarity" alignment as the authors put it:
" s[x]/s[y] shows a smooth transition from values near one at low averaging times to values exceeding 2.5 for the full time series. This increase reflects the slow decrease in x compared to y due to
the underlying longitudinal alignment of the evolving flow. The random contributions of individual supergranules to s[x] or s[y] scale as 1/(Nl)^½, where Nl is the number of supergranular lifetimes
spanned over the averaging period. Vertical striping of the average image emerges visually when the averaging time exceeds the supergranular lifetime by an amount sufficient to ensure that the
contribution of the individual supergranules falls below that of the long-lived organized pattern. The slowly increasing value of s[x]/s[y] at very long averaging times suggests that the underlying
organization is persistent in time, with a lifetime > or = 8 days.
The underscored segment above is critical, since Lisle in his dissertation refers to a "false flow" whenever his larger tile size includes more granules. In effect, an additional flow anomaly has
been engendered because of the effects of including larger boxes (usually 5 pixels by 5 pixels) which have more granules. This is in addition to the large central convergence artifact. Which brings
us to the use of the asymmetrical outcomes for the rms error ratio: s[x]/s[y] upon which the conclusion of persistence appears to rest, i.e. from the authors' foregoing description, which is also
based on Lisle's dissertation - but which one is prevented from extensively quoting from (owing to copyright restrictions!) Hmmm...wonder why?
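Incidentally, the 1/(Nl)^½ scaling quoted from the Ap.J. paper is just the standard rms-of-averages law, which a toy Monte Carlo (generic statistics, not the MDI data) readily confirms:

```python
import random
import statistics

random.seed(1)  # fixed seed so the toy run is repeatable

def rms_of_average(n, trials=2000):
    """RMS (population std dev) of the mean of n unit-variance contributions."""
    means = [sum(random.gauss(0.0, 1.0) for _ in range(n)) / n
             for _ in range(trials)]
    return statistics.pstdev(means)

r1 = rms_of_average(1)   # ~1
r4 = rms_of_average(4)   # ~1/2, i.e. smaller by sqrt(4) = 2
```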
Anyway, the use of the error ratio s[x]/s[y] as a "rosetta stone" to uncover this "persistence" of pattern can't really be justified, and it's a pity that the referees of the Ap. J. paper didn't see
this - but then my own original Solar Physics paper had to be referred to more capable statistics-trained referees because the first one couldn't handle it!
Without wishing to make this overly long, I refer readers to the excellent paper by James G. Booth and James P. Hobert ('Standard Errors of Prediction in Generalized Linear Mixed Models', appearing
in The Journal of the American Statistical Association (Vol. 93, No. 441, March 1998, p. 262) in which it is noted that standard errors of prediction including use of rms errors, and ratios thereof
are "clearly inappropriate in parametric models for non-normal responses". This certainly appears to apply to Lisle et al's polarity "model" given the presence of significant "false flows" and "large
convergence artifacts" (the latter with 10 times the scale size of the sought signal). The authors meanwhile recommend instead a conditional mean-squared error of prediction which they then describe
at great length. They assert that their method allows for a "positive correction that accounts for the sampling variability of parameter estimates".
In Lisle et al's case, the parameters would include the propagation speed of supergranule alignments, after their s[x]/s[y] is replaced by the rubric offered by the J. Am. Stat. Soc. authors,
including the computation of statistical moments (p. 266) to account for all anisotropies and other anomalies which appear.
Until this is done, it cannot be said that there exists any "persistent alignment" in the solar supergranulation! Sorry, boys...and Ap.J. refs, and editors!
Problems from Pt. 20:
(1) A telephoto lens consists of a converging lens of focal length 6 cm placed 4 cm in front of a diverging lens of focal length (-2.5 cm).
a) Do a graphical construction of the system showing where the image would be.
b) Compare the size of the image formed by this combination with the size of the image that would be formed by the positive lens alone.
a) The graphical construction is shown in the accompanying diagram, showing the image is 10 cm from the optical axis of the diverging lens. If the negative (diverging) lens had not been used then the
image AB would have been formed at the principal focal plane of the +6 cm (converging) lens, 6 cm from it. However, the diverging lens decreases the convergence of the rays (left side) refracted by
the converging lens and causes them to focus at A'B', 14 cm from the converging lens - and 10cm from the diverging lens, as shown.
b) The image AB that would have been formed by the converging lens alone is (6 cm - 4 cm) = 2 cm beyond the f = (-2.5 cm) lens and is taken as the virtual object for that lens. Then: s1 = -2 cm, and:
1/s1' = 1/f - 1/s1 = 1/ (-2.5) - 1/(-2) = -1/2.5 + 1/2 = 1/10
Then: s1' = 10 cm
Thus, the final image A'B' is real and 10 cm beyond the diverging lens - as the graphical construction shows.
The linear magnification: M1 = (-s1'/s1) = -(10 cm)/(-2 cm) = 5
and since, h'/h = 5, then h' = 5h so the image formed by the combination is 5x larger than that formed by the (+) lens alone.
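The thin-lens arithmetic in part (b) can be checked numerically, using the same sign convention as above (1/s' = 1/f - 1/s, virtual object distance negative):

```python
f2 = -2.5   # focal length of the diverging lens, cm
s1 = -2.0   # virtual object: image of the converging lens lies 2 cm beyond, cm

# thin-lens equation in the convention used above: 1/s' = 1/f - 1/s
s1_prime = 1.0 / (1.0 / f2 - 1.0 / s1)   # image distance from the diverging lens
M1 = -s1_prime / s1                      # linear magnification
```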
2) The objective lens of an astronomical telescope has a focal length of 6 ft. The eyepiece has a focal length of 2 inches.
a) Find the angular magnification that the telescope will produce when used for distant objects.
b) A rule for observing extended astronomical objects, such as planets or nebulae, is that the telescope magnification should not exceed 60x per inch of objective aperture.
(a) M = F/ f(e) where F = 6' = 72" and f(e) = 2"
Then: M = 72"/ 2" = 36 x
(b) It is not possible to strictly assert the condition is met (since no aperture is provided) but given the long focal length (6') it is more likely the aperture is at least 6" so the condition is
easily met. (60x per in. would be 360x. Even a 1" aperture would easily meet the condition, however.)
If the astronomical telescope of this problem is used to observe the planet Jupiter,is the condition met or not? If not, what focal length eyepiece is needed to get the maximum angular magnification?
Assuming a 6" aperture to get 360x for Jupiter then we'd need: M= 360 and
f(e) = F/M = 72"/360 = 1/5"
3) The objective of a telescope has a focal length F = 30 in. When it is used for an object at a great distance, then the distance between the objective and eyepiece is 32 in. What is the angular magnification?
In this case, the focal length of the eyepiece f(e) = 32 in. - 30 in. = 2 in.
We have: M = F/ f(e) = 30 in./ 2 in. = 15 x
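All three telescope results above follow from M = F/f(e); a quick numerical check (the 6-inch aperture is an assumption stated in the solution, not given in the problem):

```python
# Problem 2(a): F = 6 ft = 72 in objective, 2 in eyepiece
F = 72.0
f_e = 2.0
M = F / f_e                # angular magnification, 36x

# Jupiter question: 60x per inch, for an assumed 6 in aperture -> 360x max
M_max = 60 * 6
f_e_needed = F / M_max     # eyepiece needed: 1/5 in

# Problem 3: objective-eyepiece separation 32 in with F = 30 in
F3 = 30.0
f_e3 = 32.0 - F3           # eyepiece focal length: 2 in
M3 = F3 / f_e3             # 15x
```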
I was enjoying my breakfast of freshly brewed coffee along with a fancy bear claw this morning, when all of a sudden I turned the page of our local rag to find another incompetent load of economic
drivel from "Uncle Tom" Sowell staring at me. I nearly dropped my coffee all over the paper (which might have been better) on reading some of his moronic codswallop which shows he hasn't the faintest
clue about the Pareto Distribution, or Pareto Efficiency - which I covered in two previous blogs.
Since I've dealt with the noisome and dishonest Uncle Tom before, let me merely stick to his more outrageous claims in this particular column. He writes, for example:
"They (American seniors) want their Social Security and their Medicare to stay the way they are- and their anger is directed against those who want to change the financial arrangements that pay for
these benefits"
Now that's a really neat euphemism, "change the financial arrangements"! But as I already showed, the "change in financial arrangements" these miscreants want for Medicare is NO Medicare! Paul Ryan's
pseudo -Medicare plan essentially converts the whole system into a voucher system. The senior will be handed a $12,000 voucher (if that) then be sent on his or her merry way to try and purchase a
policy on his or her own. Good luck on that! I tried it, in perfect health, merely 3 years ago, and the BEST policy I was offered was one with an $8,500 deductible (which didn't cover all medical
issues) and for $450 a month. A senior in poor health would be lucky to get anything! The reason is the medical loss ratio for the insurance company would be too excessive, no profits! The senior
could as well kill herself, or...do what one enterprising senior recently did in NC, rob a bank (sticking up a teller for $1) in order to be jailed and receive health care there. (No one could make
this shit up, believe me!)
THIS is what the great Ryan "financial arrangement" will mean for most poor or sickly elderly. The CBO itself estimates that out of pocket costs will rise to 67% of totals, compared to about 25% now
for standard Medicare.
As for Social Security, their idea of "financial arrangements" is to put the money into the stock market or what they call "privatizing it". Just what do these genii think would have happened had
Bushie jr. gotten his way in 2005, and Social Security had been privatized? Well, seeing now in hindsight the stock market crash in the fall of '08, most seniors would be eating cat food out on the
streets- assuming they could find spare cans in enough dumpsters!
Sowell obviously doesn't know dick or diddly, or he is simply too dishonest to come out with the real facts.
He then bloviates:
"Their anger should be directed at those politicians who were irresponsible enough to set up those programs without putting aside enough money to pay for the promises that were made- promises that
cannot now be kept"
More nitwit bollocks! In fact, when FDR set up Social Security (read the history of this in the excellent book Social Security and Its Enemies by Max J. Skidmore) he knew the ONLY practical way to
make social insurance feasible was to implement it as a payment system via current workers to current retirees. NO other way would work. This was known by ALL from the outset, and also that it would
be paid for by payroll taxes. Thus, Sowell is disingenuous in asserting they didn't put aside money to pay for it. In fact, the payroll taxes accumulated as the implementers knew they would, and
built up huge cash reserves! (Even more was infused in 1983, after Alan Greenspan proposed a higher payroll tax, to the current 6.2%, to take into account the coming baby boomer onslaught).
The problem? Despicable politicos and pork mongers have raided it to disguise the size of deficits, starting with Reagan. Thus, the money was there, but stolen! Hence, it is ignorant and wrong to say
"promises were made that couldn't be kept". Indeed, as recently as 2004 more than $3.3 trillion remained in S.S. Trust funds (which DO exist and are kept in special bonds) but that has been drawn
down by the protracted military adventures, occupations (see my previous blogs)
As for Medicare, that was also designed to be paid for by payroll taxes and it did have the money to sustain it. But idiot Sowell doesn't mention (or mayhap he forgets) the changes that caused its
monies to bring it to near insolvency:
i) Not allowing Medicare from the outset to bargain for lowest prescription drug prices like the VA does.
ii) Not keeping a tighter rein on Medicare fraud.
iii) the 2003 Bush Medicare Act which created "Medicare Advantage" plans that consume $12 billion more per year than standard Medicare
All those in concert have placed Medicare near insolvency, but that problem can be reversed - not by killing Medicare like Ryan wishes - but reversing all the above policies: e.g. telling the Big
PhrmA to go get fucked and allowing bargaining like the VA, eliminating all Medicare fraud, and eliminating all Medicare Advantage plans.
In addition, raising payroll taxes another 1 % wouldn't hurt, and increasing the payroll cap to at least $1 million, would also help sustain it.
Thus, Sowell's rejoinder to the effect "Don't you understand the money is not there any more?" is pure B.S. It was there, but was raided by filthy political thieves and collaborators! (E.g. lobbyists)
The last bit of bullshit is the worst:
"..The way Social Security was set up was so financially shaky that anyone who set up a similar retirement scheme in the private sector would be sent to prison for fraud"
Again, this shows what a disreputable fraud he is! In fact, Social Security was never set up as a "retirement scheme", it was set up as a social insurance program. In the way it pays for current
retirees, it is exactly like social insurance programs in other countries. For example, the National Insurance program of Barbados uses the exact same approach, and it started before FDR's. A certain
% is taken out of the worker's pay each period and this goes to pay current retirees.
As for American Social Security, social insurance, as Max Skidmore notes (op. cit.) it was always made clear to retirees that Social Security was to be but ONE prop of their retirement income, not the
whole enchilada! They were expected to supplement it by pensions from private sources, or other means (e.g. annuities). Funny that a nabob like Sowell can't even communicate that simple truth about
the program - but then, knowing his dishonest stance, maybe he prefers not to! It's easier to call it a "retirement scheme" analogous to a privately run operation.
As for tossing old ladies off of cliffs, no matter what changes are proposed for future retirees (i.e. the current Gen X and Y'ers) if they sow a lack of confidence in the system, such that these
current workers lose faith in the programs, then that will impact all current beneficiaries negatively - if only by brazen political acts to reduce their S.S. COLAS while increasing the cost of
Medicare premiums.
Why wouldn't Uncle Tom know that? Who knows? Maybe those who take his words as "gospel" ought to inquire!
"Let me be clear! Tax hikes are OFF the table!"
So said House Speaker John Boehner on Thursday, confirming again the GOP's bad-faith bargaining position on raising the nation's debt ceiling, since they are prepared to reject ab
initio the ONE solution that would most easily and rapidly solve our problems! This shows the GOP, aka repukes, are not serious about deficit control, but only wish to exploit it as a means of
extortion on the Democrats to cut long-term social programs. (Ironically, at the same time, the GOP refuses to entertain any significant military spending cuts!)
This brash posturing comes on the heels of a non-partisan Congressional Budget Office (CBO) report on Wednesday, outlining a projected "explosion" in government spending. The CBO names Social
Security and Medicare, but let's be frank about what's really going on here. We need the perceptions of deep politics for this, not superficial or pundit-cheap politics such as on the tube.
The population has more than tripled in the U.S. since FDR initiated the Social Security program, and more than doubled since LBJ initiated Medicare (in 1966) and so basic arithmetic stands at the
foundation of most of the increase. In addition, it was a Republican president, Richard M. Nixon, who approved the go-ahead for cost of living (COLA) increases in Social Security, realizing if this
wasn't done, then rising medical expenses and fees from Medicare would soon swallow ALL of a retiree's money.
The real basis then for the "explosion" of government costs is NOT Social Security and Medicare per se, but the unwillingness to fund them! (While simultaneously raiding Social Security monies to pay
for military adventures and pork to hide the size of deficits!)
The Republicans have known all along that if enough tax cuts and expensive military interventions, occupations could be mounted, the government would eventually spend down its assets and reach the
point where the social programs would be in jeopardy- unless drastically pared back. The GOP and anti-tax terrorist Grover Norquist (who designed an insane "pledge" against raising taxes that all new
Goop-ers must sign) have succeeded in achieving this condition by:
- ten years of military occupations at a total cost now of nearly $3.4 trillion
- ten years of Bush tax cuts at a cost of over $3.1 trillion (including the last extension in December), and
- increasing the defense budget to 3.9% of GDP in 2004 (which former defense analyst Chuck Spinney called "the start of a war on Social Security and Medicare")
Thus, the "ballooning costs" of Social Security and Medicare as reported by the CBO have only been because the Repukes have refused to allow the tax revenue which is needed to pay for them -
including for their own Republican constituents known as "values voters" (especially in the Deep South). They have even disallowed raising the payroll cap to $1m, requiring more rich folks to pay in,
which would immediately make Social Security viable for another 75 years. But they know if they did that, the current Generation X and Y'ers would also be assured of receiving their benefits so might
well become long time program defenders - by becoming Dem voters!
Meanwhile, in a separate paragraph, the CBO validated this by noting that these current and future expenditures could be kept pace with, provided the Bush tax cuts are allowed to expire next year and
allow the AMT (alternative minimum tax) to hit higher income families. If not, according to the CBO, "under current tax policies, revenue will barely cover the cost of the health and retirement
programs alone by 2035."
But again, this is what the GOP wants! They want to put the Democratic Party under severe political pressure, in any way they can. Just as they're now doing it behind the scenes in many Repug-held
states (by governorship) thus requiring strict photo IDs for any future voters (i.e. next year), which they know many of the elderly, African-American and young will never be able to meet.
Here's the dirty filthy truth these anti-tax miscreants don't want people to know: current tax rates as a percentage of GDP are the LOWEST they've been in nearly 40 years! The other dirty secret is
that even without the military adventures and "wars" of choice, the Bush tax cuts themselves would've bled us into massive debt. The extended occupations, being unpaid for by higher taxes, just
exacerbated the horrific debt ratio much more! The CBO in its report states if current tax rejection policies remain unchanged (and the national debt continues to grow as a result) then U.S. economic
output could be as much as 6% smaller than current projections by 2018 and 18% by 2035.
This again, isn't startling or amazing! As early as 1995, economists James Medoff and Andrew Harless, in The Indebted Society (p. 84, 'Let Them Eat Cake'), found that "high tax rates are
associated with higher productivity growth". There is a consistent and strong relationship. By contrast, for the years when supply side dogma held (during the Reagan and Bush Sr. years), productivity
retreated by more than 30% and debt exploded- exactly the opposite of what we've been sold. As they wrote:
"For the health of the economy, Reagan's policies turned out to be just about the worst thing that could have happened: investment did not increase, growth continued to stagnate, and the federal
deficit ballooned to new dimensions."
This was validated (for the Bush tax cuts, or as we call them, "Reagan Supply side II") by a Financial Times detailed analysis of the Bush Tax cuts in its Sept. 15, 2010 issue (page 24), wherein it
was observed:
"The 2000s - that is, the period immediately following the Bush tax cuts - were the weakest decade in U.S. postwar history for real, non-residential capital investment. Not only were the 2000s by far
the weakest period but the tax cuts did not even curtail the secular slowdown in the growth of business structures. Rather the slowdown accelerated to a full decline."
Contrast this with the hike in taxes (to only 39.5%) immediately after Bill Clinton took office, leading to the accumulation of more than $600 billion in surpluses by the time he left in 2000, and
the creation of 20 million jobs.
Meanwhile, the FT analysis observes that “during each decade from the 1950s to the 1990s, growth in real gross non-residential investment averaged between 3.5 percent and 7.4 percent a decade. During
the 2000s it averaged a mere 1%”
It is evident to anyone but a certified idiot, that higher taxes are the path out of our financial morass. On the other hand, the continued stubborn refusal to raise taxes is the path to national
fiscal suicide and the U.S. rapidly becoming a very large third world country - with a few elites at the top, but the mass of people groveling.
THIS is why the GOP anti-tax position is not only dishonest and despicable, but fiscally traitorous as well. The Dems, for their part, must not yield to this extortion - no matter what threats the
Repukes make! Or how many times Eric Kantor pitches a tantrum or Boner weeps his eyes out!
Barack Obama did a magnificent job in his speech two nights ago, in performing a flexible bit of political tap dancing that would make King Solomon proud in terms of splitting differences. In
addition, he managed to upset the Whacko Right (which will always want tax cuts and indefinite military spending to weaken domestic safety nets) as well as the Left. How to score him? I gave him a 10
out of 10 for lucidity, but a 5 out of 10 for policy effect, including worsening the debt position-deficit over the next 3 1/2 years.
The sad and inescapable fact is that his decision to have only 10,000 leave the Afghan theater by the end of the year, and only 23,000 by the end of next year, merely brings us back to "square one".
In other words, what we had before Mr. Obama's surge in 2009. This means at least a minimum of $100 billion pissed down that rathole every year...until 2014, given now as the putative date for final
clear out. That means by the end of it all, at least $800 BILLION will have been squandered on a nation which is really in a Tribal Civil war while immense domestic needs (like infrastructure repair)
go unattended. This is in addition to nearly $3 trillion similarly squandered in Iraq. (Not including $280 billion to pack & transport the expended -leftover materiel out of Iraq to Kuwait)
Between this fiasco and the extension of the Bush tax cuts in December, nearly $1.7 trillion will have been added to the deficit - which we are to understand is now reaching a critical mass (though
from the behavior of most politicos you'd never know it).
The Repukes are the worst hypocrites, in that on every TV appearance they're whining about the "deficit" this and that, yet refuse to do the ONE thing that will most effectively cut it: RAISE REVENUE
via taxes! They believe, in their insipid little dwarf brains, they can actually cut the deficit by 40% merely via spending cuts. Are they insane? One believes so, but more on this in the next blog.
Given Obama's feeble draw down it's therefore passing crazy for the Reeps to be barking like rabid dogs about "betraying the generals" and our "fighting men". How about betraying all the citizens of
this country, many of whom don't know where the next meal is coming from...or mortgage payment? How about allowing insane and wasteful spending to continue without even having the guts to at least
pay for what you endorse, Reepos? How about the insanity of continuing a defense policy which in the end gets us nothing, because - make no mistake- Afghanistan will be what it is 3 years from now,
and 33 years from now! Several empires and wannabes (the last the former Soviet Union) have already learned that to their eternal pain and sorrow!
And then we hear "the generals don't like it" or "hate it" or whatever! But who the fuck are the generals? The generals are NOBODY! It is the President who is Commander in Chief so it is HIS
decisions that are to be followed! The "generals" - mainly the Joint Chiefs - will always want their little military escapades and interventions....hell, it keeps them in business and the $$$ flowing
to defense contractors! If the "generals" had their way we'd never have a moment's peace because their precious military-industrial complex would face de-funding. That's why departing Defense
Secretary Robert Gates' words (concerning too large defense cuts making the U.S. military less capable) need to be taken with a grain of salt, as well as those of Gen David Petraeus.
John Fitzgerald Kennedy heard the same crap from his JCS when he held the highest office. Not only did those dicks want him to actually invade Cuba and bomb it during the Missile crisis in October,
1962,(which would have let all Hell loose) but Gen. Curtis LeMay compared him to Neville Chamberlain as an even "worse appeaser". Can you believe that? But JFK had the stones to stand his ground.
Moreso in the case of Vietnam, when after assessing all the evidence in September, 1963 he then signed National Security Action Memorandum 263 to have all the troops out by calendar year 1965. The
ARVN (South Vietnamese) were to take over ALL military functions by then, come hell or high water.
Of course the "generals" didn't like it. Not one bit! Their favorite bit of codswallop was the now discredited "domino theory". In fact, this was perhaps the most infamous slippery slope "argument"
(actually a logical fallacy) of all, invoked to delay or prevent the U.S. departure from a losing effort in Viet Nam. LBJ bought into it, big time, which was why once he became President, his first
act was to fire off NSAM-273 to repeal JFK's NSAM 263 draw down mandate. After that, he merely needed to engineer the right ruse to ramp up U.S. involvement and manpower, which he did via the phoney
Turner Joy and Maddox humbug incident in August, 1964.
But even after more than 10 years of bloody jungle war, the Reeps and their warmonger sidekicks (and Generals) kept yapping:
"Oh, we can’t leave, not now! Not at this time! If we just pull out Viet Nam will be the first of many dominoes to fall across Southeast Asia! Next will be Cambodia, then Thailand (Myanmar), then
Laos, then Malaysia and who knows where it will end? The Philippines too?”
As it transpired, the depletion of treasure ($269 billion) forced the U.S. to finally bail out in 1975, with over 58,000 American dead by that time. Whereas, the numbers would have been much less had
fewer powerful minds not succumbed to the slippery slope “domino theory” nonsense.
Similar nonsense arguments have been enlisted to try to prevent the U.S. from leaving another military quagmire in Iraq. (Though to be sure, the cost in casualties is more by way of slow attrition
and medical costs than from outright large battles like in ‘Nam). This time, the slippery slope is that if the U.S. leaves, the entire Middle East will fall to "terrorists". And after that, they will
follow us home and attack us in our Malls. Now, since we can’t have that, it follows that we will have to stay in Iraq for generations! Problem is, where’s the money going to come from to pay for it?
It's all very well to proclaim we'll be in those places indefinitely, but how many loans do you think the Chinese are going to make - especially with Bernanke's near zero interest rates?
Nobody seems to be able to address the money question, despite the fact in all previous serious interventions (such as WWII, Korea and Vietnam) taxes were raised to pay for their increasing costs -
and/or the debts engendered in the aftermath. Now, all the reep-tards and their tea party idiots (who ought to be siding with Ron Paul to demand pullout) seem to believe two full bore occupations can
be managed with TAX CUTS!
In the end, the real head-on combat is coming, in about 40 days, when the debt ceiling will need to be raised. The sad truth is, without all the military interventions of the past ten years - plus
the insane and idiotic tax cuts - we wouldn't be anywhere near this predicament, nor would blow-dried pundits be bloviating about cutting "entitlements". The true fact is, as former defense analyst
Chuck Spinney put it back in 2004, the escalating military-defense budget has effectively caused a fiscal war to be waged on Medicare and Social Security.
One hopes Mr. Obama grasps this more securely the next time he gives a speech on Afghanistan!
Math 185 LEC3: Introduction to Complex Analysis
Date Reading Content notes video and passcode
Aug 27 Thu [S] 1.1.1 [A] 1.2 Overview of the course. Complex Numbers. note
Sep 1 Tue [S] 1.1.3, 1.2.2 Review of topology and Holomorphic Functions. note video Y^?bY700
Sep 3 Thu [S] 1.2.3 Power Series note video ##cDRb5e
Sep 8 Tue [S] 1.3 Integration Along Curve note video vT+=b2Xi
Sep 10 Thu [S] 1.3, 2.1 Finish Ch 1. Begin Goursat's Thm note video ^=AhAr58
Sep 15 Tue [S] 2.1, 2.2 Goursat, Cauchy theorem on disk note video $Cd@kAe0
Sep 17, Thu [S] 2.4(a), 2.3 Cauchy Integral Formula, and Sample Calculations note video eA2!V7oR
Sep 22 Tue [S]2.3, 2.4 More on contour integral examples. Cauchy estimate note video +6%m*Hsp
Sep 24 Thu [S] 2.4, 2.5.1 Corollary to Cauchy integral Formula note video ZQF.q$0&
Sep 29 Tue [S] 2.5 Schwarz Reflection Principle, note video h3=KBA21
Oct 1 Thu Runge Approximation Theorem note video FWA46%k5
Oct 6 Tue Midterm 1 (review notes) sol'n stat
Oct 13 Tue [S] 3.1 zero, poles and residues note video B?*MH1bG
Oct 15 Thu [S] 3.2 [A] 4.2 residues theorem, winding number note video @k!6@pNt
Oct 20 Tue [S] 3.3 classification of singularities note video f+2&L#Po
Oct 22 Thu [S] 3.3, 3.4 global meromorphic functions are rational, argument principle note video ih0XF3X#
Oct 27 Tue [S] 3.4 Rouche theorem, open mapping theorem note video 4Ox&345s
Oct 29 Thu [S] 3.5 Homotopy invariance of Contour integral note video ^v.S7P?Z
Nov 3 Tue [S] 3.6 Multivalued Function and Log note video Tt0T=D#8
Nov 5 Thu [S] 3.7, [A] 4.6 Harmonic Functions and Summary note video PY+0MQ*c
Nov 10 Tue Midterm 2 stat
Nov 12 Thu Review Midterm 2 note video 8#W#6Z0O
Nov 17 Tue [A] Ch5 section 1 and 2, partial fraction, Mittag-Leffler problem note video 0WxX%$K7
Nov 19 Thu [A] Ch5 section 2.1, 2.2 Infinite Product note video LV&5rj$6
Nov 24 Tue [A] Ch 5 section 5, Normal Family note video A9Ce%=yR
Dec 1 Tue [A] Ch 5 section 5, Normal Family, Arzela-Ascoli Thm note video aSk5?Sb2
Dec 3 Thu [A] Ch 6.1 [S] Ch 8 Riemann Mapping Theorem note video ^?a71a4M
Final Exam review Dec 15 (Tue) 12:00 noon - Dec 17 (Thu) 12:00 noon solution
Poisson kernel
In potential theory, the Poisson kernel is an integral kernel, used for solving the two-dimensional Laplace equation, given Dirichlet boundary conditions on the unit disc. The kernel can be
understood as the derivative of the Green's function for the Laplace equation. It is named for Siméon Poisson.
Poisson kernels commonly find applications in control theory and two-dimensional problems in electrostatics. In practice, the definition of Poisson kernels are often extended to n-dimensional
Two-dimensional Poisson kernels
On the unit disc
In the complex plane, the Poisson kernel for the unit disc is given by

P[r](θ) = Σ (n = −∞ to ∞) r^|n| e^(inθ) = (1 − r²) / (1 − 2r cos θ + r²), for 0 ≤ r < 1.

This can be thought of in two ways: either as a function of r and θ, or as a family of functions of θ indexed by r.
If D is the open unit disc in C, T is the boundary of the disc, and f a function on T that lies in L^1(T), then the function u given by

u(re^(iθ)) = (1/2π) ∫ (−π to π) P[r](θ − t) f(e^(it)) dt

is harmonic in D and has a radial limit that agrees with f almost everywhere on the boundary T of the disc.
That the boundary value of u is f can be argued using the fact that as r → 1, the functions P[r](θ) form an approximate unit in the convolution algebra L^p(T). As linear operators, they tend to the
Dirac delta function pointwise on L^p(T). By the maximum principle, u is the only such harmonic function on D.
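As a quick numerical sanity check, the closed form P[r](θ) = (1 − r²)/(1 − 2r cos θ + r²) can be compared against a truncated version of the defining series Σ r^|n| e^(inθ). The following is an illustrative sketch (the function names are mine); taking real parts, the n and −n terms pair up into 2 r^n cos(nθ):

```haskell
-- Poisson kernel for the unit disc, closed form, valid for 0 <= r < 1.
poissonKernel :: Double -> Double -> Double
poissonKernel r theta = (1 - r * r) / (1 - 2 * r * cos theta + r * r)

-- The same kernel as a truncated Fourier series:
-- 1 + 2 * sum over n >= 1 of r^n cos(n*theta).
poissonSeries :: Int -> Double -> Double -> Double
poissonSeries nTerms r theta =
  1 + 2 * sum [ r ^ n * cos (fromIntegral n * theta) | n <- [1 .. nTerms] ]
```

For r well inside the disc (say r = 0.5) a hundred terms already agree with the closed form to machine precision.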
Convolution with this approximate unit gives an example of a summability kernel for the Fourier series of a function in L^1(T) (Katznelson 1976). Let f ∈ L^1(T) have Fourier series {f[k]}. After the Fourier transform, convolution with P[r](θ) becomes multiplication by the sequence {r^|k|} ∈ l^1(Z). Taking the inverse Fourier transform of the resulting product {r^|k| f[k]} gives the Abel means A[r]f of f:

A[r]f(θ) = Σ (k = −∞ to ∞) f[k] r^|k| e^(ikθ).
Rearranging this absolutely convergent series shows that f is the boundary value of g + h, where g (resp. h) is a holomorphic (resp. antiholomorphic) function on D.
When one also asks for the harmonic extension to be holomorphic, then the solutions are elements of a Hardy space. This is true when the negative Fourier coefficients of f all vanish. In particular,
the Poisson kernel is commonly used to demonstrate the equivalence of the Hardy spaces on the unit disk, and the unit circle.
The space of functions that are the limits on T of functions in H^p(z) may be called H^p(T). It is a closed subspace of L^p(T) (at least for p≥1). Since L^p(T) is a Banach space (for 1 ≤ p ≤ ∞), so
is H^p(T).
On the upper half-plane
The unit disk may be conformally mapped to the upper half-plane by means of certain Möbius transformations. Since the conformal map of a harmonic function is also harmonic, the Poisson kernel carries
over to the upper half-plane. In this case, the Poisson integral equation takes the form

u(x, y) = (1/π) ∫ (−∞ to ∞) [y / ((x − t)² + y²)] f(t) dt

for y > 0. The kernel itself is given by

P[y](x) = y / (π(x² + y²)).

Given a function f ∈ L^p(R), the L^p space of integrable functions on the real line, u can be understood as a harmonic extension of f into the upper half-plane. In analogy to the situation for the disk, when u is holomorphic in the upper half-plane, then u is an element of the Hardy space H^p, and, in particular,

||u||(H^p) = ||f||(L^p(R)).

Thus, again, the Hardy space H^p on the upper half-plane is a Banach space, and, in particular, its restriction to the real axis is a closed subspace of L^p(R). The situation is only analogous to the case for the unit disk; the Lebesgue measure for the unit circle is finite, whereas that for the real line is not.
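One way to see why the half-plane kernel P[y](x) = y/(π(x² + y²)) acts like an approximate identity is that it integrates to 1 over the real line for every y > 0. Below is a rough numerical check of that mass, illustrative only; the truncation interval and step count are arbitrary choices of mine:

```haskell
-- Poisson kernel for the upper half-plane: P_y(x) = y / (pi (x^2 + y^2)).
halfPlaneKernel :: Double -> Double -> Double
halfPlaneKernel y x = y / (pi * (x * x + y * y))

-- Trapezoid-rule estimate of the kernel's total mass over [-l, l].
-- The exact value over the whole line is 1 for every y > 0.
kernelMass :: Double -> Double -> Int -> Double
kernelMass y l n =
  let h  = 2 * l / fromIntegral n
      xs = [ negate l + h * fromIntegral k | k <- [0 .. n] ]
      w k = if k == 0 || k == n then 0.5 else 1.0
   in h * sum [ w k * halfPlaneKernel y x | (k, x) <- zip [0 .. n] xs ]
```

With y = 1, a window of [-500, 500] already captures all but about 0.1% of the mass, the tail lost to truncation.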
On the ball
For the ball of radius r, B[r], in R^n, the Poisson kernel takes the form

P(x, ζ) = (r² − |x|²) / (r ω[n−1] |x − ζ|^n)

where x ∈ B[r], ζ ∈ S (the surface of B[r]), and ω[n−1] is the surface area of the unit (n−1)-sphere.

Then, if u(x) is a continuous function defined on S, the corresponding Poisson integral is the function P[u](x) defined by

P[u](x) = ∫ (over S) u(ζ) P(x, ζ) dσ(ζ).
It can be shown that P[u](x) is harmonic on the ball and that P[u](x) extends to a continuous function on the closed ball of radius r, and the boundary function coincides with the original function u
On the upper half-space
An expression for the Poisson kernel of an upper half-space can also be obtained. Denote the standard Cartesian coordinates of R^(n+1) by

(t, x) = (t, x[1], ..., x[n]).

The upper half-space is the set defined by

H^(n+1) = { (t; x) ∈ R^(n+1) : t > 0 }.

The Poisson kernel for H^(n+1) is given by

P(t, x) = c[n] t / (t² + |x|²)^((n+1)/2), where c[n] = Γ((n+1)/2) / π^((n+1)/2).

The Poisson kernel for the upper half-space appears naturally as the Fourier transform of the Abel kernel

K(t, ξ) = e^(−2πt|ξ|),

in which t assumes the role of an auxiliary parameter. To wit,

P(t, x) = F(K(t, ·))(x) = ∫ (over R^n) e^(−2πt|ξ|) e^(−2πi ξ·x) dξ.

In particular, it is clear from the properties of the Fourier transform that, at least formally, the convolution

P[u](t, x) = (P(t, ·) ∗ u)(x)

is a solution of Laplace's equation in the upper half-space. One can also show easily that as t → 0, P[u](t,x) → u(x) in a weak sense.
This article is issued from Wikipedia - version of 5/25/2016. The text is available under the Creative Commons Attribution/Share Alike license, but additional terms may apply for the media files.
Questions for KW MACHMETER
Answer the following questions
1. What is the LSS at msl ISA?
2. When descending at constant CAS, the TAT and mach meter indications should?
3. If temperature decreases when flying at constant CAS at FL 200, the mach meter indication will _________ and the true mach number will _________?
4. A mach meter compares?
5. Mach meter indications?
6. A mach meter is made up of?
7. The speed of sound at 25000 ft ISA is?
8. A mach meter comprises of?
9. What is the LSS at 50000 ft in the ISA?
10. If temperature increases by 5 degrees C during a constant mach number descent, what will happen to CAS?
11. A mach meter indicates mach number based on the ratio of?
12. How will mach meter indication vary in a constant CAS climb?
13. What does mach number represent?
14. Mach meter indications are derived from?
15. How will mach meter indication respond if an aircraft is flying at constant CAS at FL 270 when it experiences a reduction in OAT?
16. Which of the following best defines Mach number?
17. If the static source becomes blocked the mach meter will ________ as an aircraft climbs?
18. If an aircraft climbs at constant TAS from FL 200 to FL 400 the mach meter indication will?
19. How will the mach meter respond in a constant CAS climb if the static source becomes blocked?
20. What is the LSS at 30000 ft if ambient temperature is -40°C?
21. The speed of sound at ISA msl is?
22. How will the mach meter respond in a constant mach number climb if the static source becomes blocked?
23. Mach number is the ratio of?
24. How will CAS respond if temperature increases by 5 degrees C when flying at a constant indicated mach number at FL290?
25. What happens to mach meter indication in a constant RAS climb?
26. The indications on a mach meter are independent of?
27. What is the local speed of sound at sea level if the ambient temperature is 20°C?
28. What happens to TAT when an aircraft descends at constant indicated mach number?
29. VMO is calculated based on?
30. How will mach meter indication respond if an aircraft passes through a cold front when flying at constant CAS and altitude?
31. What would happen if the static pipe became detached from the back of a mach meter in a pressurised aircraft at high altitude?
32. When climbing at constant mach number below the tropopause in the ISA, the CAS will?
33. If ambient temperature is -10°C, what is the mach number when TAS is 594 Kts?
34. What is true mach number at 25000 ft ISA the TAS is 500 Kts?
35. If ambient temperature increases by 10 degrees, for an aircraft flying at constant TAS, the indicated mach number will ______ and the true mach number will _______?
36. Mach meter Indications?
37. What should the mach meter indicate when flying at 500 kts TAS at FL 250, if the ambient temperature is -30° C?
38. What is actually measured by a mach meter?
39. When descending at constant CAS, if temperature remains constant the indicated mach number will?
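Many of these questions reduce to two formulas: the local speed of sound a = sqrt(γRT), with γ ≈ 1.4 and R ≈ 287 J/(kg·K) for air, and the Mach number M = TAS/LSS. A small illustrative sketch (the function names and rounded constants are my own, not from the quiz source):

```haskell
-- Local speed of sound in m/s from absolute temperature in kelvin:
-- a = sqrt(gamma * R * T), gamma = 1.4, R = 287.05 J/(kg K) for air.
speedOfSoundMs :: Double -> Double
speedOfSoundMs tKelvin = sqrt (1.4 * 287.05 * tKelvin)

-- The same, converted to knots (1 m/s is roughly 1.9438 kt).
lssKnots :: Double -> Double
lssKnots tKelvin = speedOfSoundMs tKelvin * 1.9438

-- Mach number is the ratio TAS / LSS (both in the same units).
machNumber :: Double -> Double -> Double
machNumber tasKt lss = tasKt / lss
```

For example, lssKnots 288.15 gives roughly 661 kt (the ISA sea-level value of Q1/Q21), and machNumber 594 (lssKnots 263.15) gives roughly 0.94 for Q33's -10°C case.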
How to Declare A Function In Haskell?
In Haskell, there is no special keyword for declaring a function. A function is declared by writing its name, followed by its parameters separated by spaces (no parentheses or commas around them), then an equals sign and a single expression that forms the function body. A type signature, written with ::, is usually given on the line above the definition. Haskell uses pattern matching to define multiple cases (equations) for the same function, tried from top to bottom. The result of the function is the value of the body expression. Haskell functions are pure and do not have side effects, meaning they only depend on their input parameters and do not modify any external state.
How to pass multiple arguments to a function in Haskell?
In Haskell, functions take multiple arguments through currying. Currying is the technique of transforming a function that takes multiple arguments into a series of functions that each take only one argument; in Haskell, every multi-argument function is curried by default.
For example, if you have a function that takes two arguments like this:
add :: Int -> Int -> Int
add x y = x + y
You can supply both arguments at once, e.g., add 3 4 evaluates to 7. Equivalently, (add 3) 4 first applies add to 3, which returns a new function that takes the second argument.
Alternatively, you can partially apply the function by providing only the first argument and storing the resulting function in a variable for later use; for example, addThree = add 3 defines a function such that addThree 4 evaluates to 7.
This way, you can pass multiple arguments to a function by applying the arguments one at a time using currying.
How do you define a function in Haskell?
In Haskell, a function is defined using the following syntax:
functionName :: Type1 -> Type2 -> ... -> TypeN -> ReturnType
functionName arg1 arg2 ... argN = expression
• functionName is the name of the function
• :: denotes the type signature of the function
• Type1, Type2, ..., TypeN are the types of the function arguments
• ReturnType is the type of the return value
• expression is the body of the function, which uses the arguments to compute the return value
For example, a simple function add that adds two numbers could be defined as:
add :: Int -> Int -> Int
add x y = x + y
What is lazy evaluation in Haskell functions?
Lazy evaluation is the evaluation strategy used in Haskell where expressions are only evaluated when their results are actually needed. This means intermediate results are not computed until they are required by the program, which allows for more efficient execution since only necessary computations are performed. Lazy evaluation also enables infinite data structures: for example, take 5 [1 ..] returns [1,2,3,4,5] even though [1 ..] is an infinite list, because only the first five elements are ever computed, so the program does not run out of memory.
How to handle errors in Haskell functions?
In Haskell, errors are typically handled using the Either type or by throwing exceptions. Here are some common ways to handle errors in Haskell functions:
1. Using the Either type: Instead of having a function that can fail return a value directly, you can make it return an Either type that represents either the successful result or an error message.
For example:
divide :: Int -> Int -> Either String Int
divide _ 0 = Left "Cannot divide by zero"
divide x y = Right (x `div` y)
You can then pattern match on the result of calling this function to handle the error case:
case divide 6 0 of
  Left err -> putStrLn err
  Right result -> print result
2. Using the Maybe type: Similar to Either, you can also use the Maybe type to represent the possibility of a failure. This type is simply Just a for a successful result or Nothing for an error; for example, a safe division could return Nothing when the divisor is 0 and Just (x `div` y) otherwise.
3. Using exceptions: In Haskell, you can also use the throw and catch functions from the Control.Exception module to handle exceptions. For example, you can throw an exception using throw:
import Control.Exception

divide :: Int -> Int -> Int
divide _ 0 = throw (ErrorCall "Cannot divide by zero")
divide x y = x `div` y

main :: IO ()
main = do
  result <- catch (evaluate (divide 6 0))
                  (\e -> do putStrLn ("Caught: " ++ show (e :: ErrorCall)); return 0)
  print result
These are just a few ways to handle errors in Haskell functions. Different approaches may be more appropriate depending on the specific context and requirements of your program.
Frontiers | Interpretability With Accurate Small Models
• ^1Department of Computer Science and Engineering, IIT Madras, Chennai, India
• ^2Department of Computer Science and Engineering, Robert Bosch Centre for Data Science and AI, IIT Madras, Chennai, India
Models often need to be constrained to a certain size for them to be considered interpretable. For example, a decision tree of depth 5 is much easier to understand than one of depth 50. Limiting
model size, however, often reduces accuracy. We suggest a practical technique that minimizes this trade-off between interpretability and classification accuracy. This enables an arbitrary learning
algorithm to produce highly accurate small-sized models. Our technique identifies the training data distribution to learn from that leads to the highest accuracy for a model of a given size. We
represent the training distribution as a combination of sampling schemes. Each scheme is defined by a parameterized probability mass function applied to the segmentation produced by a decision tree.
An Infinite Mixture Model with Beta components is used to represent a combination of such schemes. The mixture model parameters are learned using Bayesian Optimization. Under simplistic assumptions,
we would need to optimize for O(d) variables for a distribution over a d-dimensional input space, which is cumbersome for most real-world data. However, we show that our technique significantly
reduces this number to a fixed set of eight variables at the cost of relatively cheap preprocessing. The proposed technique is flexible: it is model-agnostic, i.e., it may be applied to the learning
algorithm for any model family, and it admits a general notion of model size. We demonstrate its effectiveness using multiple real-world datasets to construct decision trees, linear probability
models and gradient boosted models with different sizes. We observe significant improvements in the F1-score in most instances, exceeding an improvement of 100% in some cases.
1. Introduction
As Machine Learning (ML) becomes pervasive in our daily lives, there is an increased desire to know how models reach specific decisions. In certain contexts this might not be important as long as the
ML model itself works well, e.g., in product or movie recommendations. But for certain others, such as medicine and healthcare (Caruana et al., 2015; Ustun and Rudin, 2016), banking^1, defense
applications^2, and law enforcement^3 model transparency is an important concern. Very soon, regulations governing digital interactions might necessitate interpretability (Goodman and Flaxman, 2017).
All these factors have generated a lot of interest around “model understanding.” Approaches in the area may be broadly divided into two categories:
1. Interpretability: build models that are inherently easy to interpret, e.g., rule lists (Letham et al., 2013; Angelino et al., 2017), decision trees (Breiman et al., 1984; Quinlan, 1993, 2004),
sparse linear models (Ustun and Rudin, 2016), decision sets (Lakkaraju et al., 2016), pairwise interaction models that may be linear (Lim and Hastie, 2015), or additive (Lou et al., 2013).
2. Explainability: build tools and techniques that allow for explaining black box models, e.g., locally interpretable models such as LIME, Anchors (Ribeiro et al., 2016, 2018), visual explanations
for Convolutional Neural Networks such as Grad-CAM (Selvaraju et al., 2017), influence functions (Koh and Liang, 2017), feature attribution based on Shapley values (Lundberg and Lee, 2017; Ancona et
al., 2019).
Our work addresses the problem of interpretability by providing a way to increase accuracy of existing models that are considered interpretable.
Interpretable models are preferably small in size: this is referred to as low explanation complexity in Herman (2017), is seen as a form of simulability in Lipton (2018), is a motivation for
shrinkage methods (Hastie et al., 2009, section 3.4), and is often otherwise listed as a desirable property for interpretable models (Lakkaraju et al., 2016; Ribeiro et al., 2016; Angelino et al.,
2017). For instance, a decision tree of depth = 5 is easier to understand than one of depth = 50. Similarly, a linear model with 10 non-zero terms might be easier to comprehend than one with 50
non-zero terms. This indicates an obvious problem: an interpretable model is often small in its size, and since model size is usually inversely proportional to the bias, a model often sacrifices
accuracy for interpretability.
We propose a technique to minimize this tradeoff for any model family; thus our approach is model agnostic. Our technique adaptively samples the provided training data, and identifies a sample on
which to learn a model of a given size; the property of this sample being that it is optimal in terms of the accuracy of the constructed model. What makes this strategy practically valuable is that
the accuracy of this model may often be significantly higher than one learned on the training data as-is, especially when the model size is small.
Let:

1. accuracy(M, p) be the classification accuracy of model M on data represented by the joint distribution p(X, Y) of instances X and labels Y. We use the term "accuracy" as a generic placeholder for
a measure of model correctness. This may specifically measure F1-score, AUC, lift, etc., as needed.
2. $\mathrm{train}_{\mathcal{F}}(p, \eta)$ produce a model obtained using a specific training algorithm, e.g., CART (Breiman et al., 1984), for a given model family $\mathcal{F}$, e.g., decision trees, where the model size is fixed at η, e.g., trees with depth = 5. The training data is represented by the joint distribution p(X, Y) of instances X and labels Y.
If we are interested in learning a classifier of size η for data with distribution p(X, Y), our technique produces the optimal training distribution $p_\eta^*(X, Y)$ such that:

$p_\eta^* = \arg\max_{q}\; \mathrm{accuracy}(\mathrm{train}_{\mathcal{F}}(q, \eta), p) \qquad (1)$
Here q(X, Y) ranges over all possible distributions over the data (X, Y).
Training a model on this optimal distribution produces a model that is at least as good as training on the original distribution p:

$\mathrm{accuracy}(\mathrm{train}_{\mathcal{F}}(p, \eta), p) \le \mathrm{accuracy}(\mathrm{train}_{\mathcal{F}}(p_\eta^*, \eta), p) \qquad (2)$
Furthermore, the relationship in Equation (2) may be separated into two regimes of operation. A model trained on $p_\eta^*$ outperforms one trained on the original distribution p up to a model size η′, with both models being comparably accurate beyond this point:

$\text{For } \eta \le \eta',\quad \mathrm{accuracy}(\mathrm{train}_{\mathcal{F}}(p, \eta), p) < \mathrm{accuracy}(\mathrm{train}_{\mathcal{F}}(p_\eta^*, \eta), p) \qquad (3)$

$\text{For } \eta > \eta',\quad \mathrm{accuracy}(\mathrm{train}_{\mathcal{F}}(p, \eta), p) = \mathrm{accuracy}(\mathrm{train}_{\mathcal{F}}(p_\eta^*, \eta), p) \qquad (4)$
Our key contributions in this work are:
1. Postulating that the optimal training distribution may be different than the test distribution. This challenges the conventional wisdom that the training and test data must come from the same
distribution, as in the LHS of Equations (2), (3), and (4).
2. Providing a model-agnostic and practical adaptive sampling based technique that exploits this effect to learn small models, that often possess higher accuracy compared to using the original distribution.
3. Demonstrating the effectiveness of our technique with different learning algorithms, $\mathrm{train}_{\mathcal{F}}(\cdot)$, and multiple real world datasets. Note that our benchmark is not a specific algorithm that learns small models; the value of our approach is in its being model-agnostic: it works with arbitrary learners.
4. We show that learning the distribution, $p_\eta^*$, in the d dimensions of the data, may be decomposed into a relatively cheap preprocessing step that depends on d, followed by a core optimization step independent of d: the optimization is over a fixed set of eight variables. This makes our technique scalable.
We do not impose any constraints on the specification of $\mathrm{train}_{\mathcal{F}}(\cdot)$ for it to create interpretable models; our technique may be used with any model family. But the fact that
we see increased accuracy up to a model size (η′ in Equation 3), makes the technique useful in setups where small sized models are preferred. Applications requiring interpretability are an example of
this. There may be others, such as model compression, which we have not explored, but briefly mention in section 5.2.
2. Overview
This section provides an overview of various aspects of our work: we impart some intuition for why we expect the train and test distributions to differ for small-sized models, describe where our
technique fits into a model building workflow, mention connections to previous work and establish our notation and terminology.
2.1. Intuition
Let's begin with a quick demonstration of how modifying the training distribution can be useful. We have the binary class data, shown in Figure 1, that we wish to classify with decision trees with
depth = 5.
FIGURE 1
Our training data is a subset of this data (not shown). The training data is approximately uniformly distributed in the input space—see the 2D kernel density plot in the top-left panel in Figure 2.
The bottom-left panel in the figure shows the regions a decision tree with depth = 5 learns, using the CART algorithm. The top-right panel shows a modified distribution of the data (now the
density seems to be relatively concentrated away from the edge regions of the input space), and the corresponding decision tree with depth = 5, also learned using CART, is visualized in the
bottom-right panel. Both decision trees used the same learning algorithm and possess the same depth. As we can see, the F1 scores are significantly different: 63.58 and 71.87%, respectively.
FIGURE 2
Where does this additional accuracy come from?
All classification algorithms use some heuristic to make learning tractable, e.g.,:
• Decision Trees—one step lookahead (note that the CART tree has a significantly smaller number of leaves than the possible 2^5 = 32, in our example).
• Logistic Regression—local search, e.g., Stochastic Gradient Descent (SGD).
• Artificial Neural Networks (ANN)—local search, e.g., SGD, Adam.
Increasing the size allows for offsetting the shortcomings of the heuristic by adding parameters to the model till it is satisfactorily accurate: increasing depth, terms, hidden layers, or nodes per
layer. Our hypothesis is that when a model is restricted to a small size, this potential gap between the representational and effective capacities becomes pronounced. In such cases, modifying the data distribution guides the heuristic to focus learning on regions of the input space that are valuable in terms of accuracy. We are able to empirically demonstrate this effect for DTs in section 4.2.1.
2.2. Workflow
Figure 3 shows how our sampling technique modifies the model building workflow. In the standard workflow, we feed the data into a learning algorithm, $\mathrm{train}_{\mathcal{F}}(\cdot)$, to obtain a model. In our setup, the data is presented to a system, represented by the dashed box, that comprises both the learning algorithm and our sampling technique.
FIGURE 3
This system produces the final model in an iterative fashion: the sampling technique (or sampler) produces a sample using its current distribution parameters, which is used by the learning algorithm to produce a model. This model is evaluated on a validation dataset and the validation score is conveyed back to the sampler. This information is used to modify the distribution parameters and generate a new training sample for the algorithm, and so on, until we reach a stopping criterion. The criterion we use is a specified number of iterations—we refer to this as our budget. The best model produced within the budget, as measured by the validation score, is our final model, and the corresponding distribution is presented as the ideal training distribution.
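As a minimal sketch of this loop (illustrative only: the threshold "learner", the one-parameter sampling scheme, and the random-search stand-in for suggest() are all simplifying assumptions; sections 3.1.1 and 3.1.2 describe the actual optimizer and density representation):

```python
import random

random.seed(0)

# Toy 1-D dataset: the label is 1 exactly when x > 0.6.
train = [(x, int(x > 0.6)) for x in (random.random() for _ in range(400))]
val = [(x, int(x > 0.6)) for x in (random.random() for _ in range(200))]

def train_model(sample):
    # Tiny "size-constrained" learner: pick the best threshold on a coarse grid.
    return max((i / 20 for i in range(21)),
               key=lambda t: sum(int(x > t) == y for x, y in sample))

def accuracy(t, data):
    return sum(int(x > t) == y for x, y in data) / len(data)

def sample_with(psi, data, n):
    # Sampling scheme with a single parameter psi: weight instances by
    # closeness to the "focus point" psi, then resample with replacement.
    weights = [1.0 / (1e-3 + abs(x - psi)) for x, _ in data]
    return random.choices(data, weights=weights, k=n)

best_score, best_model = -1.0, None
for step in range(30):            # the iteration budget
    psi = random.random()         # suggest(): plain random search here
    model = train_model(sample_with(psi, train, 200))
    score = accuracy(model, val)  # validation feedback to the sampler
    if score > best_score:
        best_score, best_model = score, model
```

The best model found within the budget, together with the distribution parameter that produced it, is the output of the dashed box in Figure 3.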
2.3. Previous Work
We are aware of no prior work that studies the relationship between data distribution and accuracy in the small model regime. In terms of the larger view of modifying the training distribution to
influence learning, parallels may be drawn to the following methodologies:
1. When learning on data with class imbalance, using a different train distribution compared to test via over/under-sampling (Japkowicz and Stephen, 2002), is a commonly used strategy. Seen from this
perspective, we are positing that modifying the original distribution is helpful in a wider set of circumstances, i.e., when there is no imbalance, as in Figure 1, but the model is restricted in size.
2. Among popular techniques, Active Learning (Settles, 2009; Dasgupta, 2011) probably bears the strongest resemblance to our approach. However, our problem is different in the following key respects:
(a) In active learning, we don't know the labels of most or all of the data instances, and there is an explicit label acquisition cost that must be accounted for. In contrast, our work targets the
traditional supervised learning setting where the joint distribution of instances and labels is approximately known through a fixed set of samples drawn from that distribution.
(b) Because there is a label acquisition cost, learning from a small subset of the data such that the resulting model approximates one learned on the complete dataset, is strongly incentivized. This
economy in sample size is possibly the most common metric used to evaluate the utility of an active learner. This is different from our objective, where we are not interested in minimizing training
data size, but in learning small-sized models. Further, we are interested in outperforming a model learned on the complete data.
3. Coreset construction techniques (Bachem et al., 2017; Munteanu and Schwiegelshohn, 2018) seek to create a “summary” weighted sample of a dataset with the property that a model learned on this
dataset approximates one learned on the complete dataset. Here too, the difference in objectives is that we focus on small models, ignore training data size, and are interested in outperforming a
model learned on the complete data.
This is not to say that the tools of analysis from the areas of active learning or coreset identification cannot be adapted here; but current techniques in these areas do not solve for our objective.
2.4. Terminology and Notation
Let's begin with the notion of “model size.” Even though there is no standard notion of size across model families, or even within a model family, we assume the term informally denotes model
attribute(s) with the following properties:
1. size ∝ bias^−1
2. Smaller the size of a model, easier it is to interpret.
As mentioned earlier, only property 1 is strictly required for our technique to be applicable; property 2 is needed for interpretability.
Some examples of model size are depth of decision trees, number of non-zero terms in a linear model and number of rules in a rule set.
In practice, a model family may have multiple notions of size depending upon the modeler, e.g., depth of a tree or the number of leaves. The size might even be determined by multiple attributes in
conjunction, e.g., maximum depth of each tree and number of boosting rounds in the case of a gradient boosted model (GBM). It is also possible that while users of a model might agree on a definition
of size they might disagree on the value for the size up to which the model stays interpretable. For example, are decision trees interpretable up to a depth of 5 or 10? Clearly, the definition of
size and its admissible values might be subjective. Regardless, the discussion in this paper remains valid as long as the notion of size exhibits the properties above. With this general notion in
mind, we say that interpretable models are typically small.
Here are the notations we use:
1. The matrix X ∈ ℝ^N×d represents an ordered collection of N input feature vectors, each of which has d dimensions. We assume individual feature vectors $x_i \in \mathbb{R}^{d \times 1}$ to be column vectors, and hence the ith row of X represents $x_i^T$. We occasionally treat X as a set and write x[i] ∈ X to denote the feature vector x[i] is part of the collection X.
An ordered collection of N labels is represented by the vector Y ∈ ℝ^N.
We represent a dataset with N instances with the tuple (X, Y), where X ∈ ℝ^N×d, Y ∈ ℝ^N, and the label for x[i] is Y[i], where 1 ≤ i ≤ N.
2. The element at the pth row and qth column indices of a matrix A is denoted by [A][pq].
3. We refer to the joint distribution p(X, Y) from which a given dataset was sampled, as the original distribution. In the context of learning a model and predicting on a held-out dataset, we
distinguish between the train, validation, and test distributions. In this work, the train distribution may or may not be identical to the original distribution, which would be made clear by the
context, but the validation and test distributions are always identical to the original distribution.
4. The terms pdf and pmf denote probability density function and probability mass function, respectively. The term “probability distribution” may refer to either, and is made clear by the context. A
distribution p, parameterized by θ, defined over the variable x, is denoted by p(x; θ).
5. We use the following terms introduced before:
• accuracy(M, p) is the classification accuracy of model M on data represented by the joint distribution p(X, Y) of instances X and labels Y. We often overload this term to use a dataset instead of
distribution. In this case, we write accuracy(M, (X, Y)) where (X, Y) is the dataset.
• $\mathrm{train}_{\mathcal{F}}(p, \eta)$ produces a model obtained using a specific training algorithm for a model family $\mathcal{F}$, where the model size is fixed at η. This may also be overloaded to use a dataset, and we write: $\mathrm{train}_{\mathcal{F}}((X, Y), \eta)$.
6. We denote the depth of a tree T by the function depth(T).
7. ℝ, ℤ, and ℕ denote the sets of reals, integers, and natural numbers, respectively.
The rest of the paper is organized as follows: in section 3 we describe in detail two formulations of the problem of learning the optimal density. Section 4 reports experiments we have conducted to
evaluate our technique. It also presents our analysis of the results. We conclude with section 5 where we discuss some of the algorithm design choices and possible extensions of our technique.
3. Methodology
In this section we describe our sampling technique. We begin with an intuitive formulation of the problem in section 3.1 to illustrate challenges with a simple approach. This also allows us to introduce the relevant mathematical tools. Based on our understanding here, we propose a much more efficient approach in section 3.3.
3.1. A Naive Formulation
We phrase the problem of finding the ideal density (for the learning algorithm) as an optimization problem. We represent the density over the input space with the pdf p(x; Ψ), where Ψ is a parameter
vector. Our optimization algorithm runs for a budget of T time steps. Algorithm 1 lists the execution steps.
In Algorithm 1:
1. suggest() is a call to the optimizer at time t, that accepts past validation scores s[t−1], …s[1] and values of the density parameter Ψ[t−1], …, Ψ[1]. These values are randomly initialized for t =
1. Note that not all optimizers require this information, but we refer to a generic form of optimization that makes use of the entire history.
2. In Line 4, a sampled dataset (X[t], Y[t]) comprises instances x[i] ∈ X[train] and their corresponding labels y[i] ∈ Y[train]. Denoting the sampling weight of an instance x[i] as w(x[i]), we use w(x[i]) ∝ p(x[i]; Ψ[t]), ∀x[i] ∈ X[train].
The sampling in Line 10 is analogous.
3. Although the training happens on a sample drawn based on Ψ[t], the validation dataset (X[val], Y[val]) isn't modified by the algorithm and always reflects the original distribution. Hence, s[t]
represents the accuracy of a model on the original distribution.
4. In the interest of keeping the algorithm simple to focus on the salient steps/challenges, we defer a discussion of the sample size N[s] to our improved formulation in section 3.3.
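For concreteness, the weighted sampling in Line 4 (and Line 10) can be written with the standard library; the triangular density below is an arbitrary placeholder for p(x; Ψ[t]), and the tiny dataset is hypothetical:

```python
import random

random.seed(42)

# Hypothetical 1-D training set with labels.
X_train = [0.1, 0.3, 0.5, 0.7, 0.9]
Y_train = [0, 0, 1, 1, 1]

def pdf(x):
    # Placeholder for p(x; Psi_t): a triangular density peaking at x = 0.5.
    return max(0.0, 1.0 - 2.0 * abs(x - 0.5))

# w(x_i) proportional to p(x_i; Psi_t); random.choices normalizes the
# weights and samples with replacement, yielding the dataset (X_t, Y_t).
weights = [pdf(x) for x in X_train]
idx = random.choices(range(len(X_train)), weights=weights, k=100)
X_t = [X_train[i] for i in idx]
Y_t = [Y_train[i] for i in idx]
```

Instances near the density's mode dominate the sample, which is exactly how the sampler steers what the learner sees.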
Algorithm 1 represents a general framework to discover the optimal density within a time budget T. We refer to this as a “naive” algorithm, since within our larger philosophy of discovering the
optimal distribution, this is the most direct way to do so. It uses accuracy() as both the objective and fitness function, where the score s[t] is the fitness value for current parameters Ψ[t]. It is
easy to see here what makes our technique model-agnostic: the arbitrary learner $\mathrm{train}_{\mathcal{F}}(\cdot)$ helps define the fitness function but there are no assumptions made about its
form. While conceptually simple, clearly the following key implementation aspects dictate its usefulness in practice:
1. The optimizer to use for suggest().
2. The precise representation of the pdf p(x; Ψ).
We look at these next.
3.1.1. Optimization
The fact that our objective function is not only a black-box, but is also noisy, makes our optimization problem hard to solve, especially within a budget T. The quality of the optimizer suggest()
critically influences the utility of Algorithm 1.
We list below the characteristics we need our optimizer to possess:
1. Requirement 1: It should be able to work with a black-box objective function. Our objective function is accuracy(), which depends on a model produced by $\mathrm{train}_{\mathcal{F}}(\cdot)$. The latter is an input to the algorithm and we make no assumptions about its form. The cost of this generality is that accuracy() is a black-box function and our optimizer needs to work without knowing its smoothness, amenability to gradient estimation etc.
2. Requirement 2: Should be robust against noise. Results of accuracy() may be noisy. There are multiple possible sources of noise, e.g.,:
(a) The model itself is learned on a sample (X[t], y[t]).
(b) The classifier might use a local search method like SGD whose final value for a given training dataset depends on various factors like initialization, order of points, etc.
3. Requirement 3: Minimizes calls to the objective function. The acquisition cost of a fitness value s[t] for a solution Ψ[t] is high: this requires a call to accuracy(), which in turn calls $\mathrm{train}_{\mathcal{F}}(\cdot)$. Hence, we want the optimizer to minimize such calls, instead shifting the burden of computation to the optimization strategy. The number of allowed calls to accuracy() is often referred to as the fitness evaluation budget.
Some optimization algorithms that satisfy the above properties to varying degrees are the class of Bayesian Optimization (BO) (Brochu et al., 2010; Shahriari et al., 2016) algorithms; evolutionary
algorithms such as Covariance Matrix Adaptation Evolution Strategy (CMA-ES) (Hansen and Ostermeier, 2001; Hansen and Kern, 2004) and Particle Swarm Optimization (PSO) (Kennedy and Eberhart, 1995;
Parsopoulos and Vrahatis, 2001); heuristics based algorithms such as Simulated Annealing (Kirkpatrick et al., 1983; Gelfand and Mitter, 1989; Gutjahr and Pflug, 1996); bandit-based algorithms such as
Parallel Optimistic Optimization (Grill et al., 2015) and Hyperband (Li L. et al., 2017).
We use BO here since it has enjoyed substantial success in the area of hyperparameter optimization (e.g., Bergstra et al., 2011; Snoek et al., 2012; Perrone et al., 2018; Dai et al., 2019), where the
challenges are similar to ours.
While a detailed discussion of BO techniques is beyond the scope of this paper (refer to Brochu et al., 2010; Shahriari et al., 2016 for an overview), we briefly describe why they meet our
requirements: BO techniques build their own model of the response surface over multiple evaluations of the objective function; this model serves as a surrogate (whose form is known) for the actual
black-box objective function. The BO algorithm relies on the surrogate alone for optimization, bypassing the challenges in directly working with a black-box function (Requirement 1 above). The
surrogate representation is also probabilistic; this helps in quantifying uncertainties in evaluations, possibly arising due to noise, making for robust optimization (Requirement 2). Since every call
to suggest() is informed by this model, the BO algorithm methodically focuses on only the most promising regions in the search space, making prudent use of its fitness evaluation budget (Requirement 3).
The family of BO algorithms is fairly large and continues to grow (Bergstra et al., 2011; Hutter et al., 2011; Snoek et al., 2012, 2015; Wang et al., 2013; Gelbart et al., 2014; Hernández-Lobato et
al., 2016; Levesque et al., 2017; Li C. et al., 2017; Rana et al., 2017; Malkomes and Garnett, 2018; Perrone et al., 2018; Alvi et al., 2019; Dai et al., 2019; Letham et al., 2019; Nayebi et al.,
2019). We use the Tree Structured Parzen Estimator (TPE) algorithm (Bergstra et al., 2011) since it scales linearly with the number of evaluations (the runtime complexity of a naive BO algorithm is
cubic in the number of evaluations; see Shahriari et al., 2016) and has a popular and mature library: Hyperopt (Bergstra et al., 2013).
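To make the suggest() interface concrete, the sketch below uses a crude exploit-or-explore heuristic over the (Ψ, s) history. This is not TPE, which instead builds density models of high- versus low-scoring parameter values from the same history, and the noisy one-dimensional fitness function is a stand-in for accuracy():

```python
import random

random.seed(1)

def suggest(history):
    # history: list of (psi, score) pairs from earlier iterations.
    # Exploit the best solution so far by perturbing it, or explore
    # uniformly at random.
    if not history or random.random() < 0.3:
        return random.random()                      # explore
    best_psi, _ = max(history, key=lambda h: h[1])  # exploit
    return min(1.0, max(0.0, best_psi + random.gauss(0.0, 0.05)))

def fitness(psi):
    # Noisy black-box stand-in for accuracy(); its true optimum is psi = 0.7.
    return -(psi - 0.7) ** 2 + random.gauss(0.0, 0.01)

history = []
for t in range(60):                 # fitness evaluation budget
    psi = suggest(history)
    history.append((psi, fitness(psi)))

best_psi = max(history, key=lambda h: h[1])[0]
```

The contract is the same one Algorithm 1 assumes: suggest() consumes the full history of (parameters, score) pairs and proposes the next candidate, while every fitness evaluation is treated as expensive.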
3.1.2. Density Representation
The representation of the pdf, p(x; Ψ) is the other key ingredient in Algorithm 1. The characteristics we are interested in are:
1. Requirement 1: It must be able to represent an arbitrary density function. This is an obvious requirement since we want to discover the optimal density.
2. Requirement 2: It must have a fixed set of parameters. This is for convenience of optimization, since most optimizers cannot handle the conditional parameter spaces that some pdf representations
use. A common example of the latter is the popular Gaussian Mixture Model (GMM), where the number of parameters increases linearly with the number of mixture components.
This algorithm design choice allows for a larger scope of being able to use different optimizers in Algorithm 1; there are many more optimizers that can handle fixed compared to conditional parameter
spaces. And an optimizer that works with the latter, can work with a fixed parameter space as well^4.
The Infinite Gaussian Mixture Model (IGMM) (Rasmussen, 1999), a non-parametric Bayesian extension to the standard GMM, satisfies these criteria. It side-steps the problem of explicitly denoting the
number of components by representing it using a Dirichlet Process (DP). The DP is characterized by a concentration parameter α, which determines both the number of components (also known as
partitions or clusters) and association of a data point to a specific component. The parameters for these components are not directly learned, but are instead drawn from prior distributions; the
parameters of these prior distributions comprises our fixed set of variables (Requirement 2). We make the parameter α part of our optimization search space, so that the appropriate number of
components maybe discovered; this makes our pdf flexible (Requirement 1).
We make a few modifications to the IGMM for it to better fit our problem. This doesn't change its compatibility to our requirements. Our modifications are:
1. Since our data is limited to a “bounding box” within ℝ^d (this region is easily found by determining the min and max values across instances in the provided dataset, for each dimension, ignoring
outliers if needed), we replace the Gaussian mixture components with a multivariate generalization of the Beta distribution. We pick Beta since it naturally supports bounded intervals. In fact, we
may treat the data as lying within the unit hypercube [0, 1]^d without loss of generality, and with the understanding that the features of an instance are suitably scaled in the actual
Using a bounded interval distribution provides the additional benefit that we don't need to worry about infeasible solution regions in our optimization.
2. Further, we assume independence across the d dimensions as a starting point. We do this to minimize the number of parameters, similar to using a diagonal covariance matrix in GMMs.
Thus, our d-dimensional generalization of the Beta is essentially a set of d Beta distributions, and every component in the mixture is associated with such a set. For k mixture components, we have k×d Beta distributions in all, as against k d-dimensional Gaussians in an IGMM.
3. A Beta distribution uses two positive valued shape parameters. Recall that we don't want to learn these parameters for each of the k×d Beta distributions (which would defeat our objective of a
fixed parameter space); instead we sample these from prior distributions. We use Beta distributions for our priors too: each shape parameter is drawn from a corresponding prior Beta distribution.
Since we have assumed that the dimensions are independent, we have two prior Betas for the shape parameters per dimension. We obtain the parameters {A[j], B[j]} of a Beta for dimension j, 1 ≤ j ≤ d, by drawing A[j] ~ Beta(a[j], b[j]) and B[j] ~ Beta(a′[j], b′[j]), where {a[j], b[j]} and {a′[j], b′[j]} are the shape parameters of the priors.
There are a total of 4d prior parameters, with 4 prior parameters {a[j], b[j], a′[j], b′[j]} per dimension j, 1 ≤ j ≤ d.
We refer to this mixture model as an Infinite Beta Mixture Model (IBMM)^5. For d-dimensional data, we have Ψ = {α, a[1], b[1], a′[1], b′[1], …, a[d], b[d], a′[d], b′[d]}. This is a total of 4d + 1 parameters.
Algorithm 2 shows how we sample N[t] points from (X, Y) using the IBMM.
We first determine the partitioning of the number N[s], induced by the DP (line 2). We use Blackwell-MacQueen sampling (Blackwell and MacQueen, 1973) for this step. This gives us k components,
denoted by c[i], 1 ≤ i ≤ k, and the corresponding number of points n[i], 1 ≤ i ≤ k to be assigned to each component. We then sample points one component at a time: we draw the Beta parameters per
dimension—A[ij], B[ij]—from the priors (lines 4–6), followed by constructing sampling weights p(x[l]|c[i]), ∀x[l] ∈ X assuming independent dimensions (line 9).
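The partitioning step (line 2 of Algorithm 2) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and seed handling are ours, and we use the Chinese-restaurant-process form of Blackwell-MacQueen sampling, which yields the k components and their sizes n[i] directly.

```python
import random

def crp_partition(n_points, alpha, seed=0):
    """Sample a partition of n_points induced by a Dirichlet Process with
    concentration alpha (Chinese-restaurant-process view of
    Blackwell-MacQueen sampling)."""
    rng = random.Random(seed)
    counts = []  # counts[i] = number of points assigned to component c_i
    for n in range(n_points):
        # An existing component i is joined with probability counts[i]/(n+alpha);
        # a new component is created with probability alpha/(n+alpha).
        r = rng.random() * (n + alpha)
        acc = 0.0
        for i, c in enumerate(counts):
            acc += c
            if r < acc:
                counts[i] += 1
                break
        else:
            counts.append(1)
    return counts  # k components with sizes n_1, ..., n_k

sizes = crp_partition(1000, alpha=2.0)
```

Larger α tends to produce more components, which is why α alone controls the effective number of partitions in the search.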
We emphasize here that we use the IBMM purely for representational convenience. All the 4d + 1 parameters are learned by the optimizer, and we ignore the standard associated machinery for estimation
or inference. These parameters cannot be learned from the data since our fundamental hypothesis is that the optimal distribution is different from the original distribution.
3.2. Challenges
The primary challenge with this formulation is the size of the search space. We have successfully tried out Algorithm 1 on small toy datasets as proof-of-concept, but for most real world datasets,
optimizing over 4d + 1 variables leads to an impractically high run-time even using a fast optimizer like TPE.
One could also question the independence assumption for dimensions, but that doesn't address the problem of the number of variables: learning a pdf directly in d dimensions would require at least O(d) optimization variables. In fact, a richer assumption makes the problem worse, with O(d^2) variables to represent inter-dimension interactions.
3.3. An Efficient Approach Using Decision Trees
We begin by asking if we can prune the search space in some fashion. Note that we are solving a classification problem, measured by accuracy(); however, the IBMM only indirectly achieves this goal by
searching the complete space Ψ. The search presumably passes through distributions with points from only one class, distributions with no points close to most of the class boundary regions, etc.; distributions that decidedly result in poor fitness scores. Is there a way to exclude such “bad” configuration values from the search space?
One strategy would be to first determine where the class boundaries lie, and penalize any density Ψ[t] that doesn't have at least some overlap with them. This is a common optimization strategy used
to steer the search trajectory away from bad solutions. However, implementation-wise, this leads to a new set of challenges:
1. How do we determine, and then represent, the location of class boundaries?
2. What metric do we use to appropriately capture our notion of overlap of Ψ[t] and these locations?
3. How do we efficiently execute the previous steps? After all, our goal is to either (a) reduce the number of optimization variables OR (b) significantly reduce the size of the search space for the
current O(d) variables.
We offer a novel resolution to these challenges that leads to an efficient algorithm by making the optimization “class boundary sensitive.”
Our key insight is an interesting property of decision trees (DT). A DT fragments its input space into axis-parallel rectangles. Figure 4 shows what this looks like when we learn a tree using CART on
the dataset from Figure 1. Leaf regions are shown with the rectangles with the black edges.
FIGURE 4
Note how regions with relatively small areas almost always occur near boundaries. This happens here since none of the class boundaries are axis-parallel, and the DT, in being constrained in
representation to axis-parallel rectangles, must use multiple small rectangles to approximate the curvature of the boundary. This is essentially piecewise linear approximation in high dimensions,
with the additional constraint that the “linear pieces” be axis-parallel. Figure 5 shows a magnified view of the interaction of leaf edges with a curved boundary. The first panel shows how
hypothetical trapezoid leaves might closely approximate boundary curvature. However, since the DT may only use axis-parallel rectangles, we are led to multiple small rectangles as an approximation,
as shown in the second panel.
FIGURE 5
We exploit this geometrical property; in general, leaf regions with relatively small areas (volumes, in higher dimensions) produced by a DT, represent regions close to the boundary. Instead of
directly determining an optimal pdf on the input space, we now do the following:
1. Learn a DT, with no size restrictions, on the data (X[train], Y[train]). Assume the tree produces m leaves, where the region encompassed by a leaf is denoted by R[i], 1 ≤ i ≤ m.
2. Define a pmf over the leaves that assigns mass to a leaf in inverse proportion to its volume. Let L ∈ {1, 2, …, m} be a random variable denoting a leaf. Our pmf is P[L](i) = P(L = i) = f(R[i]), where f(R[i]) ∝ vol(R[i])^{−1}.
The probability of sampling outside any R[i] is set to 0.
3. To sample a point, sample a leaf first, based on the above pmf, and then sample a point from within this leaf assuming a uniform distribution:
(a) Sample a leaf, i ~ P[L].
(b) Sample a point within this leaf, x ~ U(R[i]).
(c) Since leaves are characterized by low entropy of the label distribution, we assign the majority label of leaf i, denoted by label(i), to the sampled point x.
Assuming we have k unique labels, label(i) is calculated as follows:
Let S[i] = {y[j] : y[j] ∈ Y[train], x[j] ∈ X[train], x[j] ∈ R[i]}. Then,
label(i) = argmax[k] p̂[ik]   (5)
where p̂[ik] = (1/|S[i]|) Σ_{S[i]} I(y[j] = k)   (6)
Note here that because of using U(R[i]) we may generate points x ∉ X[train]. Also, since a point x ∈ R[i] ∩ X[train] gets assigned label(i), the conditional distribution of labels approximately equals the original distribution:
p(Y[t] | X[t]) ≈ p(Y[train] | X[train])   (7)
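Steps (a)–(c) can be sketched as follows, assuming the leaf hyperrectangles R[i] and their majority labels have already been extracted from a fitted tree. The boxes and labels below are made up purely for illustration; only the sampling logic mirrors the scheme above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical leaves of a density tree on [0, 1]^2: axis-parallel boxes
# given by (low corner, high corner), each with a majority label.
leaf_lo = np.array([[0.0, 0.0], [0.5, 0.0], [0.45, 0.4]])
leaf_hi = np.array([[0.45, 1.0], [1.0, 0.4], [0.55, 1.0]])
leaf_label = np.array([0, 1, 1])

# pmf over leaves: mass in inverse proportion to volume, P_L(i) ∝ vol(R_i)^-1.
inv_vol = 1.0 / np.prod(leaf_hi - leaf_lo, axis=1)
pmf = inv_vol / inv_vol.sum()

def sample_point():
    i = rng.choice(len(pmf), p=pmf)           # (a) sample a leaf, i ~ P_L
    x = rng.uniform(leaf_lo[i], leaf_hi[i])   # (b) sample x ~ U(R_i)
    return x, leaf_label[i]                   # (c) assign the majority label

points, labels = zip(*(sample_point() for _ in range(500)))
```

Note how the thinnest box, which stands in for a near-boundary leaf, receives far more probability mass than its volume share.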
We call such a DT a density tree^6 which we formally define as follows.
Definition 3.1. We refer to a DT as a density tree if (a) it is learned on (X[train], Y[train]) with no size restrictions, and (b) there is a pmf defined over its leaves s.t. P[L](i) = P(L = i) = f(R[i]), where f(R[i]) ∝ vol(R[i])^{−1}.
Referring back to our desiderata, it should be clear how we address some of the challenges:
1. The locations of class boundaries are naturally produced by DTs, in the form of (typically) low-volume leaf regions.
2. Instead of penalizing the lack of overlap with such boundary regions, we sample points in a way that favors points close to class boundaries.
Note that in relation to Equation (3) (reproduced below), q no longer ranges over all possible distributions; but over a restricted set relevant to the problem:
p[η]* = argmax[q] accuracy(train[F](q, η), p)   (8)
We visit the issue of efficiency toward the end of this section.
This simple scheme represents our approach at a high-level. However, this in itself is not sufficient to build a robust and efficient algorithm. We consider the following refinements to our approach:
1. pmf at the leaf level. What function f must we use to construct our pmf? One could just use f(R[i]) = c · vol(R[i])^{−1}, where c is the normalization constant c = 1/Σ_{i=1}^{m} vol(R[i])^{−1}. However, this quantity changes rapidly with volume. Consider a hypercube with edge-length a in d dimensions; the ratio of the (non-normalized) mass between this and another hypercube with edge-length a/2 is 2^d. Not only is this change drastic, but it also has potential for numeric underflow.
An alternative is to use a function that changes more slowly, like the inverse of the length of the diagonal: f(R[i]) = c · diag(R[i])^{−1}, where c = 1/Σ_{i=1}^{m} diag(R[i])^{−1}. Since DT leaves are axis-parallel hyperrectangles, diag(R[i]) is always well defined. In our hypercube example, the probability masses are ∝ 1/(a√d) and ∝ 1/(a√d/2) when the edge-lengths are a and a/2, respectively; the ratio of the non-normalized masses between the two cubes is now 2.
This begs the question: is there yet another pmf we can use that is optimal in some sense? Instead of looking for such an optimal pmf, we adopt the more pragmatic approach of starting with a “base” pmf—we use the inverse of the diagonal length—and then allowing the algorithm to modify it, via smoothing, to adapt it to the data.
2. Smoothing. Our algorithm may perform smoothing over the base pmf as part of the optimization. We use Laplace smoothing (Jurafsky and Martin, 2019, section 3.4), with λ as the smoothing
coefficient. This modifies our pmf thus:
f′(R[i]) = c (f(R[i]) + λ/m)   (9)
Here, c is the normalization constant. The optimizer discovers the ideal value for λ.
We pick Laplace smoothing because it is fast. Our framework, however, is general enough to admit a wide variety of options (discussed in section 5.2).
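The base pmf and its Laplace-smoothed version (Equation 9) can be sketched as follows. The function name and the example leaf widths are ours; the behavior at the two extremes of λ is what matters.

```python
import numpy as np

def leaf_pmf(widths, lam):
    """Base pmf over m leaves using the inverse diagonal length,
    followed by Laplace smoothing with coefficient lam (Equation 9).
    `widths` is an (m, d) array of leaf edge-lengths."""
    diag = np.sqrt((widths ** 2).sum(axis=1))  # diagonal of each hyperrectangle
    base = 1.0 / diag
    base /= base.sum()                         # base pmf f(R_i)
    m = len(base)
    smoothed = base + lam / m                  # f(R_i) + lambda/m
    return smoothed / smoothed.sum()           # normalization constant c

widths = np.array([[1.0, 1.0], [0.5, 0.5], [0.1, 0.1]])
p_sharp = leaf_pmf(widths, lam=0.0)     # favors the smallest leaf
p_flat = leaf_pmf(widths, lam=1000.0)   # heavy smoothing -> near-uniform
```

With λ = 0 the pmf concentrates on small (near-boundary) leaves; as λ grows, it flattens toward uniform over all leaves, which is the adaptation the optimizer exploits.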
3. Axis-aligned boundaries. A shortcoming of our geometric view is that if a boundary is axis-aligned, there are no small-volume leaf regions along it. This foils our sampling strategy. An
easy way to address this problem is to transform the data by rotating or shearing it, and then construct a decision tree (see Figure 6). The image on the left shows a DT with two leaves constructed
on the data that has an axis-parallel boundary. The image on the right shows multiple leaves around the boundary region, after the data is transformed (the transformation may be noticed at the top
left and bottom right regions).
The idea of transforming data by rotation is not new (Rodriguez et al., 2006; Blaser and Fryzlewicz, 2016). However, a couple of significant differences in our setup are:
(a) We don't require rotation per se as our specific transformation; any transformation that produces small leaf regions near the boundary works for us.
(b) Since interpretability in the original input space is our goal, we need to transform back our sample. This would not be required, say, if our only goal is to increase classification accuracy.
The need to undo the transformation introduces an additional challenge: we cannot drastically transform the data since sampled points in the transformed space might be outliers in the original space.
Figure 7 illustrates this idea, using the same data as in Figure 6.
The first panel shows leaves learned on the data in the transformed space. Note how the overall region covered by the leaves is defined by the extremities—the top-right and bottom-left corners—of the
region occupied by the transformed data. Any point within this rectangle is part of some leaf in a DT learned in this space. Consider point P—it is valid for our sampler to pick this. The second
panel shows what the training data and leaf-regions look like when they are transformed back to the original space. Clearly, the leaves from the transformed space may not create a tight envelope
around the data in the original space, and here, P becomes an outlier.
Sampling a significant number of outliers is problematic because:
(a) The validation and test sets do not have these points and hence learning a model on a training dataset with a lot of outliers would lead to sub-optimal accuracies.
(b) There is no way to selectively ignore points like P in their leaf, since we uniformly sample within the entire leaf region. The only way to avoid sampling P is to ignore the leaf containing it
(using an appropriate pmf); which is not desirable since it also forces us to ignore the non-outlier points within the leaf.
Note that we also cannot transform the leaves back to the original space first and then sample from them, since (1) we lose the convenience and low runtime of uniform sampling U(R[i]): the leaves are not simple hyperrectangles any more; (2) for leaves not contained within the data bounding box in the original space, we cannot sample from the entire leaf region without risking obtaining outliers again (see point Q in rectangle ABCD, in Figure 7).
A simple and efficient solution to this problem is to only slightly transform the data, so that we obtain the small volume leaves at class boundaries (in the transformed space), but also, all valid
samples are less likely to be outliers. This may be achieved by restricting the extent of transformation using a “near identity” matrix A ∈ ℝ^d×d:
[A][pq] = 1, if p = q   (10)
[A][pq] ~ U([0, ϵ]), if p ≠ q, where ϵ ∈ ℝ[>0] is a small number.   (11)
With this transformation, we would still be sampling outliers, but:
(a) Their numbers are not significant now.
(b) The outliers themselves are close to the data bounding box in the original space.
These substantially weaken their negative impact on our technique.
The tree is constructed on AX, where X is the original data, and samples from the leaves, X′[t], are transformed back with A^{−1}X′[t]. Figure 6 is actually an example of such a near-identity transformation.
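The near-identity transform of Equations (10)–(11) and its inverse can be sketched as below. The function name is ours; since we store points as rows here, constructing the tree on AX corresponds to multiplying by A^T on the right.

```python
import numpy as np

def near_identity(d, eps=0.2, seed=0):
    """Near-identity transform A (Equations 10-11): ones on the diagonal,
    off-diagonal entries drawn from U([0, eps])."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(0.0, eps, size=(d, d))
    np.fill_diagonal(A, 1.0)
    return A

A = near_identity(d=3)
X = np.random.default_rng(1).random((100, 3))  # original data, rows = points
X_transformed = X @ A.T                        # grow the density tree on this
X_back = X_transformed @ np.linalg.inv(A).T    # map samples back with A^{-1}
```

For small ϵ, A is well-conditioned, so inverting it to map samples back is numerically safe, and the transformed data stays close to the original bounding box.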
A relevant question here is how do we know when to transform our data, i.e., when do we know we have axis-aligned boundaries? Since this is computationally expensive to determine, we always create
multiple trees, each on a transformed version of the data (with different transformation matrices), and uniformly sample from the different trees. It is highly unlikely that all trees in this bagging
step would have axis-aligned boundaries in their respective transformed spaces. Bagging also provides the additional benefit of low variance.
We denote this bag of trees and their corresponding transformations by B. Algorithm 3 details how B is created. Our process is not too sensitive to the choice of ϵ, hence we set ϵ = 0.2 for our experiments.
4. Selective Generalization. Since we rely on geometric properties alone to define our pmf, all boundary regions receive a high probability mass irrespective of their contribution to classification
accuracy. This is not desirable when the classifier is small and must focus on a few high impact regions. In other words, we prioritize all boundaries, but not all of them are valuable for
classification; our algorithm needs a mechanism to ignore some of them. We refer to this desired ability of the algorithm as selective generalization.
Figure 8 illustrates the problem and suggests a solution. The data shown has a small green region, shown with a dashed blue circle in the first panel, which we may want to ignore if we had to pick
between learning its boundary or the relatively significant vertical boundary. The figure shows two trees of different depths learned on the data—leaf boundaries are indicated with solid black lines.
A small tree, shown on the left, automatically ignores the circle boundary, while a larger tree, on the right, identifies leaves around it.
Thus, one way to enable selective generalization is to allow our technique to pick a density tree of appropriate depth.
But a shallow density tree is already part of a deeper density tree: we can just sample at the depth we need. Instead of constructing density trees with different depths, we learn a “depth distribution” over fully grown density trees; drawing a sample from this tells us what fraction of the tree to consider.
Figure 9A illustrates this idea. The depth distribution is visualized vertically and adjacent to a tree. We sample r ∈ [0, 1] from the distribution, and scale and discretize it to reflect a valid
value for the depth. Let depth[T]() be the scaling/discretizing function for a tree T. Taking the tree in the figure as our example, r = 0 implies we sample our data instances from the nodes at depth
[T](r) = 0 i.e. at the root, and r = 0.5 implies we must sample from the nodes at depth[T](r) = 1. We refer to the pmf for the nodes at a depth to be the sampling scheme at that depth. T has 4
sampling schemes—each capturing class boundary information at a different granularity, ranging from the root with no information and the leaves with the most information.
We use an IBMM for the depth distribution. Similar to the one previously discussed in section 3.1.2, the depth-distribution has a parameter α for the DP and parameters {a, b, a′, b′} for its Beta
priors. The significant difference is we have just one dimension now: the depth. The IBMM is shared across all trees in the bag; Algorithm 4 provides details at the end of this section.
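The depth sampling can be sketched as follows. For brevity, a fixed two-component Beta mixture stands in for the IBMM depth distribution; in the actual method the components and weights come from the DP and the Beta priors, and depth[T]() scales and discretizes r.

```python
import random

def sample_depth(components, weights, max_depth, rng):
    """Draw r in [0, 1] from a Beta mixture standing in for the IBMM
    depth distribution, then scale/discretize it to a valid depth:
    depth_T(r) = round(r * max_depth)."""
    i = rng.choices(range(len(weights)), weights=weights)[0]
    a, b = components[i]
    r = rng.betavariate(a, b)
    return round(r * max_depth)

rng = random.Random(0)
# One hypothetical component favoring shallow depths (coarse sampling
# schemes), one favoring depths near the leaves (fine-grained schemes).
comps, w = [(1.0, 5.0), (5.0, 1.0)], [0.5, 0.5]
depths = [sample_depth(comps, w, max_depth=3, rng=rng) for _ in range(200)]
```

A mixture skewed toward 0 reproduces the small-tree behavior of Figure 8 (left); one skewed toward 1 samples mostly from the leaves.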
5. Revisiting label entropy. When we sampled only from the leaves of a density tree, we could assign the majority label to the samples owing to the low label entropy. However, this is not true for
nodes at intermediate levels—which the depth distribution might lead us to sample from. We deal with this change by defining an entropy threshold E. If the label distribution at a node has entropy ≤
E, we sample uniformly from the region encompassed by the node (which may be a leaf or an internal node) and use the majority label. However, if the entropy > E, we sample only among the training
data instances that the node covers. Like ϵ, our technique is not very sensitive to a specific value of E (and therefore, it need not be learned), as long as it is reasonably low: we use E = 0.15 in our experiments.
6. Incomplete trees. Since we use CART to learn our density trees, we have binary trees that are always full, but not necessarily complete, i.e., the nodes at a certain depth alone might not
represent the entire input space. To sample at such depths, we “back up” to the nodes at the closest depth. Figure 9B shows this: at depth = 0 and depth = 1, we can construct our pmf with only nodes
available at these depths, {A} and {B, C}, respectively, and still cover the whole input space. But for depth = 2 and depth = 3, we consider nodes {B, D, E} and {B, D, F, G}, respectively. The dotted
red line connects the nodes that contribute to the sampling scheme for a certain depth.
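The “back up” rule for incomplete trees can be sketched as a simple traversal. This is a minimal illustration using a plain dict for the tree (a representation of our own choosing), matching the example of Figure 9B.

```python
def sampling_scheme(children, root, depth):
    """Nodes forming the sampling scheme at `depth` for a possibly
    incomplete binary tree (cf. Figure 9B): nodes at that depth, plus
    leaves that bottom out earlier, so the entire input space stays
    covered. `children` maps an internal node to its (left, right) pair."""
    frontier = []

    def walk(node, d):
        if d == depth or node not in children:  # reached depth, or a leaf
            frontier.append(node)
        else:
            left, right = children[node]
            walk(left, d + 1)
            walk(right, d + 1)

    walk(root, 0)
    return frontier

# The incomplete tree of Figure 9B: A -> (B, C), C -> (D, E), E -> (F, G).
tree = {"A": ("B", "C"), "C": ("D", "E"), "E": ("F", "G")}
```

For this tree the schemes are {A}, {B, C}, {B, D, E}, and {B, D, F, G} at depths 0 through 3, reproducing the red dotted frontiers in the figure.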
Algorithm 4 shows how sampling from B works.
FIGURE 6
Figure 6. (Left) Axis parallel boundaries don't create small regions. (Right) This can be addressed by transforming the data. We see an increase in depth and the number of leaves of the density tree
in the latter case.
FIGURE 7
Figure 7. (Left) Transformed data. (Right) The leaves in the inverse transformation contain regions outside the bounding box of the original dataset. See text for a description of points P and Q.
FIGURE 8
Figure 8. A region of low impact is shown in the first panel with a dashed blue circle. The first tree ignores this while a second, larger, tree creates a leaf for it.
FIGURE 9
Figure 9. (A) The set of nodes at a depth have an associated pmf to sample from (not shown). A depth is picked based on the IBMM. (B) In case of an incomplete binary tree, we use the last available
nodes closest to the depth being sampled from, so that the entire input space is represented. The red dotted lines show the nodes comprising the sampling scheme for different depths.
Figure 10 illustrates some of the distributions we obtain using our mechanism. Figure 10A shows our data—note, we only have axis-aligned boundaries. In Figures 10B–D, we show the depth distribution
at the top, going from favoring the root in Figure 10B, to nodes halfway along the height of the tree in Figure 10C, finally to the leaves in Figure 10D. The contour plot visualizes the
distributions, where a lighter color indicates relatively higher sample density. We see that in Figure 10B, we sample everywhere in the data bounding box. In Figure 10C, the larger boundary is
identified. In Figure 10D, the smaller boundary is also identified. A bag of size 5 was used and the smoothing coefficient λ was held constant at a small value.
FIGURE 10
Figure 10. (A) Shows our dataset, while (B–D) show how the sample distribution varies with change of the depth distribution.
This completes the discussion of the salient details of our sampling technique. The optimization variables are summarized below:
1. λ, the Laplace smoothing coefficient.
2. α, the DP parameter.
3. {a, b, a′, b′}, the parameters of the Beta priors for the IBMM depth distribution. A component/partition i is characterized by the distribution Beta(A[i], B[i]), where A[i] ~ Beta(a, b) and B[i] ~ Beta(a′, b′).
The IBMM and its parameters, {α, a, b, a′, b′}, are shared across all trees in the bag B, and λ is shared across all sampling schemes.
We also introduced two additional parameters: ϵ and E. As mentioned previously, we do not include them in our optimization since our process is largely insensitive to their precise values as long as
these are reasonably small. We use ϵ = 0.2 and E = 0.15 for our experiments.
The above parameters exclusively determine how the sampler works. In addition, we propose the following parameters:
4. N[s] ∈ ℕ, sample size. The sample size can have a significant effect on model performance. We let the optimizer determine the best sample size to learn from. We constrain N[s] to be larger than
the minimum number of points needed for statistically significant results.
Note that we can allow N[s] > |X[train]|. This larger sample will be created either by repeatedly sampling points (at nodes where the label entropy > E) or by generating synthetic points (when the entropy ≤ E).
5. p[o] ∈ [0, 1]—proportion of the sample from the original distribution. Given a value for N[s], we sample (1−p[o])N[s] points from the density tree(s) and p[o]N[s] points (stratified) from our
training data (X[train], Y[train]).
Recall that our hypothesis is that learning a distribution helps until a size η′ (Equation 3). Beyond this size, we need to provide a way for the sampler to reproduce the original distribution. While
it is possible the optimizer finds a Ψ[t] that corresponds to this distribution, we want to make this easier: now the optimizer can simply set p[o] = 1. Essentially, p[o] is a way to “short-circuit” the discovery of the original distribution.
This variable provides the additional benefit that observing a transition p[o] = 0 → 1, as the model size increases, would empirically validate our hypothesis.
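The bookkeeping implied by N[s] and p[o] can be sketched as follows; the function name and rounding choices are ours, and only the budget split is shown, not the actual sampling.

```python
import numpy as np

def sample_budget(n_s, p_o, y_train):
    """Split the sample budget N_s: (1 - p_o) * N_s synthetic points come
    from the density trees; p_o * N_s points are drawn, stratified by
    label, from the original training data."""
    n_orig = int(round(p_o * n_s))
    n_tree = n_s - n_orig
    labels, counts = np.unique(y_train, return_counts=True)
    # Stratified: the original-data quota is split by class frequency.
    quota = np.round(counts / counts.sum() * n_orig).astype(int)
    return n_tree, dict(zip(labels.tolist(), quota.tolist()))

n_tree, quota = sample_budget(1000, 0.25, np.array([0] * 60 + [1] * 40))
```

Setting p[o] = 1 routes the entire budget to the original distribution, which is exactly the short-circuit described above.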
We have a total of eight optimization variables in this technique. The variables that influence the sampling behavior are collectively denoted by Ψ = {α, a, b, a′, b′}. The complete set of variables
is denoted by Φ = {Ψ, N[s], λ, p[o]}.
This is a welcome departure from our naive solution: the number of optimization variables does not depend on the dimensionality d at all! Creating density trees as a preprocessing step gives us a
fixed set of 8 optimization variables for any data. This makes the algorithm much more efficient than before, and makes it practical to use for real world data.
Algorithm 5 shows how we modify our naive solution to incorporate the new sampler.
As before, we discover the optimal Φ using TPE as the optimizer and accuracy() as the fitness function. We begin by constructing our bag of density trees, B, on transformed versions of (X[train], Y[
train]), as described in Algorithm 3. At each iteration in the optimization, based on the current value p[o_t], we sample data from B and (X[train], Y[train]), train our model on it, and evaluate it
on (X[val], Y[val]). In our implementation, lines 7–11 are repeated (thrice, in our experiments) and the accuracies are averaged to obtain a stable estimate for s[t].
Additional details pertaining to Algorithm 5:
1. We use a train : val : test split ratio of 60 : 20 : 20.
2. The training step to build model M[t] in line 10, takes into account class imbalance: it either balances the data by sampling (this is the case with a Linear Probability Model), or it uses an
appropriate cost function or instance weighting, to simulate balanced classes (this is case with DTs or Gradient Boosted Models).
However, it is important to note that both (X[val], Y[val]) and (X[test], Y[test]) represent the original distribution, and thus indeed test the efficacy of our technique on data with varying degrees of class imbalance.
4. Experiments
This section discusses experiments that validate our technique and demonstrate its practical utility. We describe our experimental setup in section 4.1 and present our observations and analysis in
section 4.2.
4.1. Setup
We evaluate Algorithm 5 using 3 different learning algorithms, i.e., train[F](), on 13 real world datasets. We construct models for a wide range of sizes, η, to comprehensively understand the behavior of the algorithm. For each combination of dataset, learning algorithm, and model size, we record the percentage relative improvement in the F1 (macro) score on (X[test], Y[test]) compared to the baseline of training the model on the original distribution:
δF1 = 100 × (F1[new] − F1[baseline]) / F1[baseline]
We specifically choose the F1 macro metric as it accounts for class imbalance, e.g., it penalizes the score even if the model performs well on a majority class but poorly on a minority class.
Since the original distribution is part of the optimization search space, i.e., when p[o] = 1, the lowest improvement we report is 0%, i.e., δF1 ∈ [0, ∞). All reported values of δF1 are averaged over
five runs of Algorithm 5. As mentioned before, in each such run, lines 7–11 in the algorithm are repeated thrice to obtain a robust estimate for accuracy(), and thus, s[t].
We also perform upper-tailed paired sample t-tests, with a p-value threshold of 0.1, to assess if the mean of the F1[new] scores is higher than the mean of the F1[baseline] scores in a statistically significant way.
4.1.1. Data
We use a variety of real-world datasets, with different dimensionalities, number of classes and different class distributions to test the generality of our approach. The datasets were obtained from
the LIBSVM website (Chang and Lin, 2011), and are described in Table 1. The column “Label Entropy,” quantifies the extent of class imbalance, and is computed for a dataset with C classes in the
following way:
Label Entropy = Σ_{j ∈ {1, 2, …, C}} −p[j] log[C] p[j], where p[j] = |{x[i] | y[i] = j}| / N   (12)
Values close to 1 imply classes are nearly balanced in the dataset, while values close to 0 represent relative imbalance.
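Equation (12) amounts to the Shannon entropy of the class proportions, taken with the number of classes C as the logarithm base so that the value always lies in [0, 1]. A minimal sketch (function name ours):

```python
import math
from collections import Counter

def label_entropy(labels):
    """Label entropy of Equation (12): sum_j -p_j log_C p_j, with C the
    number of distinct classes; ~1 for balanced classes, ~0 for imbalance."""
    counts = Counter(labels)
    n, C = len(labels), len(counts)
    if C < 2:
        return 0.0
    return sum(-(c / n) * math.log(c / n, C) for c in counts.values())

balanced = label_entropy([0, 1] * 50)   # exactly balanced -> 1.0
skewed = label_entropy([0] * 99 + [1])  # heavy imbalance -> near 0
```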
TABLE 1
Table 1. Datasets: we use the dataset versions available on the LIBSVM website (Chang and Lin, 2011). However, we have mentioned the original source in the “Description” column.
4.1.2. Models
We use the following model families, F, and learning algorithms, train[F](), in our experiments:
1. Decision Trees: We use the implementation of CART in the scikit-learn library (Pedregosa et al., 2011). Our notion of size here is the depth of the tree.
Sizes: For a dataset, we first learn an optimal tree T[opt] based on the F1-score, without any size constraints. Denote the depth of this tree by depth(T[opt]). We then try our algorithm for these
settings of CART's max_depth parameter: {1, 2, …, min(depth(T[opt]), 15)}, i.e., we experiment only up to a model size of 15, stopping early if we encounter the optimal tree size. Stopping early
makes sense since the model has attained the size needed to capture all patterns in the data; changing the input distribution is not going to help beyond this point.
Note that while our notion of size is the actual depth of the tree produced, the parameter we vary is max_depth; this is because decision tree libraries do not allow specification of an exact tree depth. This is important to remember since CART produces trees with actual depth at most the specified max_depth, and therefore we might not see actual tree depths take all values in {1, 2, …, min(depth(T[opt]), 15)}. For example, max_depth = 5 might give us a tree with depth = 5, max_depth = 6 might also result in a tree with depth = 5, but max_depth = 7 might give us a tree with depth = 7. We report relative improvements at actual depths.
2. Linear Probability Model (LPM) (Mood, 2010): This is a linear classifier. Our notion of size is the number of terms in the model, i.e., features from the original data with non-zero coefficients.
We use our own implementation based on scikit-learn. Since LPMs inherently handle only binary class data, for a multiclass problem we construct a one-vs-rest model comprising as many binary
classifiers as there are distinct labels. The given size is enforced for each binary classifier. For instance, if we have a 3-class problem, and we specify a size of 10, then we construct 3 binary
classifiers, each with 10 terms. We did not use the more common Logistic Regression classifier because: (1) from the perspective of interpretability, LPMs provide a better sense of variable
importance (Mood, 2010) (2) we believe our effect is equally well illustrated by either linear classifier.
We use the Least Angle Regression (Efron et al., 2004) algorithm, that grows the model one term at a time, to enforce the size constraint.
Sizes: For a dataset with dimensionality d, we construct models of sizes: {1, 2, …, min(d, 15)}. Here, the early stopping for LPM happens only for the dataset cod-rna, which has d = 8. All other
datasets have d > 15 (see Table 1).
3. Gradient Boosted Model (GBM): We use decision trees as our base classifier in the boosting. Our notion of size is the number of trees in the boosted forest for a fixed maximum depth of the base
classifiers. We use the LightGBM library (Ke et al., 2017) for our experiments.
We run two sets of experiments with the GBM, with maximum depths fixed at 2 and 5. This helps us compare the impact of our technique when the model family F inherently differs in its effective
capacity, e.g., we would expect a GBM with 10 trees and a maximum depth of 5 to be more accurate than a GBM with 10 trees and a maximum depth of 2.
Sizes: If the optimal number of boosting rounds for a dataset is r[opt], we explore the model size range: {1, 2, …, min(r[opt], 10)}. We run two sets of experiments with GBM—one using base
classification trees with max_depth = 2, and another with max_depth = 5. Both experiments use the same range for size/boosting rounds.
The density trees themselves use the CART implementation in scikit-learn. We use the Beta distribution implementation provided by the SciPy package (Jones et al., 2001).
4.1.3. Parameter Settings
Since TPE performs optimization with box constraints, we need to specify our search space for the various parameters in Algorithm 5:
1. λ: this is varied in the log-space such that log[10]λ ∈ [−3, 3].
2. p[o]: We want to allow the algorithm to arbitrarily mix samples from B and (X[train], Y[train]). Hence, we set p[o] ∈ [0, 1].
3. N[s]: We set N[s] ∈ [1000, 10000]. The lower bound ensures that we have statistically significant results. The upper bound is set to a reasonably large value.
4. α: For a DP, α ∈ ℝ[>0]. We use a lower bound of 0.1.
We rely on the general properties of a DP to estimate an upper bound, α[max]. Given α, for N points, the expected number of components k is given by:

$E[k \mid \alpha] = O(\alpha H_N)$ (13)

$E[k \mid \alpha] \le \alpha H_N$ (14)

$\alpha \ge \frac{E[k \mid \alpha]}{H_N}$ (15)

Here, H[N] is the Nth harmonic sum (see Blei, 2007).
Since our distribution is over the depth of a density tree, we already know the maximum number of components possible, k[max] = 1 + depth of density tree. We use N = 1,000, since this is the lower bound of N[s], and we are interested in the upper bound of α (note that H[N] increases with N—see section 1.3). We set k[max] = 100 (this is greater than any of the density tree depths in our experiments) to obtain a liberal upper bound, α[max] = 100/H[1000] ≈ 13.4. Rounding up, we set α ∈ [0.1, 14]^7.
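This bound is easy to reproduce numerically; a minimal stdlib-only check of H[1000] and the resulting α[max]:

```python
# Reproduce the upper bound on alpha: alpha_max = k_max / H_N with
# k_max = 100 and N = 1000, where H_N is the Nth harmonic number.
k_max = 100
N = 1000
H_N = sum(1.0 / n for n in range(1, N + 1))  # H_1000 ~ 7.485
alpha_max = k_max / H_N                       # rounded up to 14 in the text
print(round(alpha_max, 1))  # → 13.4
```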
We draw a sample from the IBMM using Blackwell-MacQueen sampling (Blackwell and MacQueen, 1973).
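A minimal sketch of Blackwell-MacQueen (Pólya urn) sampling, assuming a Beta base distribution as in the IBMM's components; the parameter values here are illustrative:

```python
# Blackwell-MacQueen (Polya urn) sampling from a Dirichlet process:
# the draw after i existing draws repeats one of them (chosen uniformly,
# so atoms are weighted by their counts) with probability i/(i + alpha),
# and draws a fresh atom from the base distribution G0 otherwise.
import random

def blackwell_macqueen(n, alpha, base_sampler, seed=0):
    rng = random.Random(seed)
    draws = []
    for i in range(n):
        if draws and rng.random() < i / (i + alpha):
            draws.append(rng.choice(draws))   # reuse an existing atom
        else:
            draws.append(base_sampler(rng))   # new atom from G0
    return draws

# Base distribution: Beta(2, 5), matching the Beta components of the IBMM.
sample = blackwell_macqueen(1000, alpha=2.0,
                            base_sampler=lambda r: r.betavariate(2, 5))
n_atoms = len(set(sample))
print(n_atoms)  # number of distinct components, on the order of alpha * H_N
```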
5. {a, b, a′, b′}: Each of these parameters are allowed a range [0.1, 10] to admit various shapes for the Beta distributions.
We need to provide a budget of T iterations for the TPE to run. For DT, GBM, and binary-class problems with LPM, T = 3,000. Since multiclass problems with LPM require learning multiple classifiers, leading to high running times, we use a lower value of T = 1,000. We arrived at these budgets by trial and error: high enough not to lead to inconclusive results, yet not so high as to make running our experiments unreasonable.
4.2. Observations and Analysis
We break down our results and discussion by the $train_{\mathcal{F}}(\cdot)$ used.
4.2.1. DT Results
The DT results are shown in Table 2. A series of unavailable scores, denoted by “-,” toward the right end of the table for a dataset indicates that we have already reached its optimal size. For example, in Table 2, cod-rna has an optimal size of 10.
TABLE 2
For each dataset, the best improvement across different sizes is shown underlined. As mentioned before, we perform upper-tailed paired sample t-tests at a p-value threshold of 0.1, to compare the
original and new F1 scores. Table 2 shows the statistically significant entries in bold, and entries that are not statistically significant are shown in regular font. The horizontal line separates
binary datasets from multiclass datasets.
This data is also visualized in Figure 11. The x-axis shows a scaled version of the actual tree depths for easy comparison: if the largest actual tree depth explored is η[max] for a dataset, then a
size η is represented by η/η[max]. This allows us to compare a dataset like cod-rna, which only has models up to a size of 10, with covtype, where model sizes go all the way up to 15.
FIGURE 11
Figure 11. Improvement in F1 score on test with increasing size. Data in Table 2.
We observe significant improvements in the F1-score for at least one model size for the majority of the datasets. The best improvements themselves vary widely, ranging from 0.70% for phishing to 181.33% for connect-4. Moreover, these improvements tend to occur at small sizes: only one best score—for covtype.binary—appears in the right half of Table 2. This is in line with Equations (3) and (4): beyond a model size η′, δF1 = 0%.
It also seems that we do much better with multiclass data than with binary classes. Because of the large variance in improvements, this is hard to observe in Figure 11. However, if we separate the binary and multiclass results, as in Figure 12, we note that there are improvements in both cases, and the magnitudes in the multiclass case are typically higher (note the y-axes). We surmise this happens because, in general, DTs of a fixed depth have a harder problem to solve when the data is multiclass, providing our algorithm with an easier baseline to beat.
FIGURE 12
Figure 12. Performance on binary vs. multi-class classification problems using CART. This is an elaboration of Figure 11.
Class imbalance itself doesn't seem to play a role. As per Table 1, the datasets with the most imbalance are ijcnn1, covtype, connect-4, for which we see best improvements of 12.96, 101.80, and 181.33%, respectively.
Most of the statistically significant results occur at small model sizes (note how most bold entries are on the left half of the table), reinforcing the validity of our technique. Since some of the models grow up to (or close to) the optimal model size—the last column in Table 2 for these datasets is either empty or tending to 0 (all datasets except ijcnn1, a1a, covtype, connect-4 satisfy this condition)—a significant δF1 is also not expected.
Figure 13 shows the behavior of p[o], only for the datasets where our models have grown close to the optimal size; thus, we exclude ijcnn1, a1a, covtype, connect-4. We observe that indeed p[o] → 1 as our model grows to the optimal size. This empirically validates our hypothesis from section 2.1: smaller models prefer a distribution different from the original distribution to learn from, but the latter is optimal for larger models, and we gradually transition to it as model size increases.
FIGURE 13
Demonstrating this effect is a key contribution of our work.
We are also interested in knowing what the depth-distribution IBMM looks like. This is challenging to visualize for multiple datasets in one plot, since we have an optimal IBMM learned by our
optimizer, for each model size setting. We summarize this information for a dataset in the following manner:
1. Pick a sample size of N points to use.
2. We allocate points to sample from the IBMM for a particular model size, in proportion of δF1. For instance, if we have experimented with three model sizes, and δF1 are 7, 11, and 2%, we sample
0.35, 0.55, and 0.1N points, respectively from the corresponding IBMMs.
3. We fit a Kernel Density Estimator (KDE) over these N points, and plot the KDE curve. This plot represents the IBMM across model sizes for a dataset weighted by the improvement seen for a size.
N should be large enough that the visualization is robust to sample variances. We use N = 10, 000.
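The three-step summary above can be sketched as follows. The δF1 values match the example in step 2, while the per-size "IBMM samples" are stand-in Beta draws, since the fitted mixtures themselves are not reproduced here:

```python
# Sketch of the depth-distribution summary: allocate N points across the
# per-size IBMMs in proportion to dF1, pool the samples, and fit a KDE.
# The per-size "IBMM samples" below are stand-in Beta draws; in the real
# pipeline they would come from the optimized mixture for each model size.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
N = 10_000
dF1 = np.array([7.0, 11.0, 2.0])           # improvement (%) per model size
weights = dF1 / dF1.sum()                   # 0.35, 0.55, 0.10
counts = np.round(weights * N).astype(int)  # points to draw per IBMM

pooled = np.concatenate([
    rng.beta(a, b, size=c)                  # stand-in for an IBMM sample
    for (a, b), c in zip([(2, 8), (8, 2), (5, 5)], counts)
])
kde = gaussian_kde(pooled)                  # density over normalized depth
density = kde(np.linspace(0, 1, 101))       # the curve that gets plotted
print(counts.tolist())  # → [3500, 5500, 1000]
```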
Figure 14 shows such a plot for DTs. The x-axis represents the depth of the density tree normalized to [0, 1]. The smoothing by the KDE causes some spillover beyond these bounds.
FIGURE 14
Figure 14. Distribution over levels in density tree(s). Aggregate of distribution over different model sizes.
We observe that, in general, the depth distribution is concentrated either near the root of a density tree, where we have little or no information about class boundaries and the distribution is
nearly identical to the original distribution, or at the leaves, where we have complete information of the class boundaries. An intermediate depth is relatively less used. This pattern in the depth
distribution is surprisingly consistent across all the models and datasets we have experimented with. We hypothesize this might be because of the following reasons:
1. The information provided at an intermediate depth—where we have moved away from the original distribution, but have not yet completely discovered the class boundaries—might be too noisy to be useful.
2. The model can selectively generalize well enough from the complete class boundary information at the leaves.
Note that while fewer samples are drawn at intermediate depths, the number is not always insignificant—as an example, see pendigits in Figure 14; hence using a distribution across the height of the
density tree is still a useful strategy.
4.2.2. LPM Results
The results for LPM are shown in Table 3. The improvements look different from what we observed for DT, which is to be expected across different model families. Notably, compared to DTs, there is no
prominent disparity in the improvements between binary class and multiclass datasets. Since the LPM builds one-vs.-rest binary classifiers in the multiclass case, and the size restriction—number of
terms—applies to each individually, this intuitively makes sense. This is unlike DTs where the size constraint was applied to a single multiclass classifier. However, much like DTs, we still observe
the pattern of the greatest improvements occurring at relatively smaller model sizes.
TABLE 3
Figure 15 shows the plots for improvement in the F1-score and the weighted depth distribution. The depth distribution plot displays concentration near the root and the leaves, similar to the case of
the DT in Figure 14.
FIGURE 15
Figure 15. Linear Probability Model: improvements and the distribution over depths of the density trees.
Note that unlike the case of the DT, we haven't determined how many terms the optimal model for a dataset has; we explore up to min(d, 15). Nevertheless, as in the case of DTs, we note the pattern that the best improvements typically occur at smaller sizes: only higgs exhibits its largest improvements at a relatively large model size in Table 3. Here too, class imbalance doesn't seem to play a role (the datasets with the most imbalance—ijcnn1, covtype, connect-4—show best improvements of 17.9, 27.84, and 76.68%, respectively), and most results at small model sizes are statistically significant.
4.2.3. GBM Results
An interesting question to ask is how, if at all, the bias of the model family $\mathcal{F}$ in Algorithm 5 influences the improvements in accuracy. We cannot directly compare DTs with LPMs since we don't know how to order models from different families: we cannot decide how large a DT to compare to an LPM with, say, 4 non-zero terms.
To answer this question we look at GBMs where we identify two levers to control the model size. We consider two different GBM models—with the max_depth of base classifier trees as 2 and 5,
respectively. The number of boosting rounds is taken as the size of the classifier and is varied from 1 to 10. We refer to the GBMs with base classifiers with max_depth = 2 and max_depth = 5 as
representing weak and strong model families, respectively.
We recognize that qualitatively there are two opposing factors at play:
1. A weak model family implies it might not learn sufficiently well from the samples our technique produces. Hence, we expect to see smaller improvements than when using a stronger model family.
2. A weak model family implies there is a lower baseline to beat. Hence, we expect to see larger improvements.
We present an abridged version of the GBM results in Table 4 in the interest of space. The complete results are made available in Table A1 in the Appendix. We present both the improvement in the F1
score, δF1, and its new value, F1[new].
TABLE 4
Figures 16, 17 show the improvement and depth distribution plots for the GBMs with max_depth = 2 and max_depth = 5, respectively.
FIGURE 16
FIGURE 17
The cells highlighted in blue in Table 4 are where the GBM with max_depth = 2 showed a larger improvement than a GBM with max_depth = 5 for the same number of boosting rounds. The cells highlighted in red exhibit the opposite case. Clearly, both factors manifest themselves. Comparing the relative improvement plots in Figures 16, 17, we see that improvements continue up to larger sizes when max_depth = 2 (also evident from Table 4). This is not surprising: we expect a stronger model to extract patterns from data at relatively smaller sizes, compared to a weaker model.
Observe that in Table 4, for the same number of boosting rounds, the new scores F1[new] for the weaker GBMs are at best as large (within some margin of error) as those for the stronger GBMs. This is to be expected, since our sampling technique diminishes the gap between representational and effective capacities (when such a gap exists); it does not improve the representational capacity itself. Hence a weak classifier using our method is not expected to outperform a strong classifier that is also using our method.
The depth distribution plots for the GBMs show a familiar pattern: high concentration at the root or the leaves. Also, similar to DTs and LPMs, the greatest improvements for a dataset mostly occur at
relatively smaller model sizes (see Table A1).
4.2.4. Summary
Summarizing our analysis above:
1. We see significant improvements in the F1 score across different combinations of model families, model sizes and datasets with different dimensionalities and label distributions.
2. Since in the DT experiments, we have multiple datasets for which we reached the optimal tree size, we were able to empirically validate the following related key hypotheses:
(a) With larger model sizes, the optimal distribution tends toward the original distribution. This is conveniently indicated by p[o] → 1 as η increases.
(b) There is a model size η′ beyond which δF1 ≈ 0%.
3. For all the model families experimented with—DTs, LPMs, GBMs (results in Table A1)—the greatest improvements are seen for relatively smaller model sizes.
4. In the case of DTs, the improvements are, in general, higher with multiclass than binary datasets. We do not see this disparity for LPMs. We believe this happens because of our subjective notion
of size: in the case of DTs there is a single tree to which the size constraint applies, making the baseline easier to beat for multiclass problems; while for LPMs it applies to each one-vs.-rest
linear model.
It's harder to characterize the behavior of the GBMs in this regard, since while the base classifiers are DTs, each of which is a multiclass classifier, a GBM may be composed of multiple DTs.
5. The GBM experiments give us the opportunity to study the effect of using model families, ${F}$, of different strengths. We make the following observations:
(a) We see both these factors at work: (1) a weaker model family has an easier baseline to beat, which may lead to higher δF1 scores relative to using a stronger model family; (2) a stronger model family is likely to make better use of the optimal distribution, which may lead to higher δF1 scores relative to using a weaker model family.
(b) For a stronger model family, the benefit of using our algorithm diminishes quickly as model size grows.
(c) While the improvement δF1 for a weaker family may exceed that of a stronger family, the improved score F1[new] may, at best, match it.
6. The depth distribution seems to favor either nodes near the root or the leaves, and this pattern is consistent across learning algorithms and datasets.
Given our observations, we would recommend using our approach as a pre-processing step for any size-limited learning, regardless of whether the size is small enough for our technique to be useful. If the size is large, our method effectively returns the original sample anyway.
5. Discussion
In addition to empirically validating our algorithm, the previous section also provided us with an idea of the kind of results we might expect of it. Using that as a foundation, we revisit our
algorithm in this section, to consider some of our design choices and possible extensions.
5.1. Algorithm Design Choices
Conceptually, Algorithm 5 consists of quite a few building blocks. Although we have justified our implementation choices for them in section 3, it is instructive to look at some reasonable alternatives:
1. Since we use our depth distribution to identify the value of a depth ∈ ℤ[≥0], a valid question is why not use a discrete distribution, e.g., a multinomial? Our reason for using a continuous distribution is that we can use a fixed number of optimization variables to characterize a density tree of any depth, with just an additional step of discretization. Also, recall that the depth distribution applies to all density trees in the forest B, each of which may have a different depth. A continuous distribution affords us the convenience of not having to deal with them individually.
2. A good candidate for the depth distribution is the Pitman-Yor process (Pitman and Yor, 1997)—a two-parameter generalization of the DP (recall, this has one parameter: α). Considering our results in Figures 14–17, where most depth distributions seem to have up to two dominant modes, we did not see a strong reason to use a more flexible distribution at the cost of introducing an additional optimization variable.
3. We considered using the Kumaraswamy distribution (Kumaraswamy, 1980) instead of the Beta for the mixture components. The advantage of the former is that its cumulative distribution function may be expressed as a simple formula, which leads to fast sampling. However, our tests with a Python implementation of the function showed us no significant benefit over the Beta in the SciPy package for our use case: the depth distribution is in one dimension, and we draw samples in batches (all samples for a component are drawn simultaneously). Consequently, we decided to stick to the more conventional Beta distribution^8.
5.2. Extensions and Applications
Our algorithm is reasonably abstracted from low level details, which enables various extensions and applications. We list some of these below:
1. Smoothing: We had hinted at alternatives to Laplace smoothing in section 3.3. We discuss one possibility here. Assuming our density tree has n nodes, we let S ∈ ℝ^n×n denote a pairwise similarity matrix for these nodes, i.e., [S][ij] is the similarity score between nodes i and j. Let P ∈ ℝ^1×n denote the base (i.e., before smoothing) probability masses for the nodes. Normalizing $P \times S^k, k \in \mathbb{Z}_{\ge 0}$ gives us a smoothed pmf that is determined by our view of similarity between nodes. Analogous to transition matrices, the exponent k determines how diffuse the similarity is; this can replace λ as an optimization variable.
The ability to incorporate a node similarity matrix opens up a wide range of possibilities, e.g., S might be based on the Wu-Palmer distance (Wu and Palmer, 1994), SimRank (Jeh and Widom, 2002), or
Random Walk with Restart (RWR) (Pan et al., 2004).
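This similarity-based smoothing can be sketched as follows; the 3-node similarity matrix here is a toy example, not derived from any of the cited measures:

```python
# Sketch of similarity-matrix smoothing: normalize P @ S^k to obtain a
# smoothed pmf over density-tree nodes. S here is a toy similarity
# matrix; in practice it could come from Wu-Palmer, SimRank, or RWR.
import numpy as np

P = np.array([[0.7, 0.3, 0.0]])          # base masses, one row-vector pmf
S = np.array([[1.0, 0.5, 0.1],
              [0.5, 1.0, 0.5],
              [0.1, 0.5, 1.0]])          # pairwise node similarities

def smooth(P, S, k):
    """Diffuse P through k applications of S, then renormalize to a pmf."""
    M = P @ np.linalg.matrix_power(S, k)
    return M / M.sum()

print(smooth(P, S, 0))  # k = 0: S^0 is the identity, pmf is unchanged
print(smooth(P, S, 2))  # larger k: mass diffuses toward similar nodes
```

Larger exponents k make the smoothing more diffuse, mirroring how higher powers of a transition matrix spread probability mass.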
2. Categorical variables: We have not explicitly discussed the case of categorical features. There are a couple of ways to handle data with such features:
(a) The density tree may directly deal with categorical variables. When sampling uniformly from a node that is defined by conditions on both continuous and categorical variables, we need to combine
the outputs of a continuous uniform sampler (which we use now) and a discrete uniform sampler (i.e., multinomial with equal masses) for the respective feature types.
(b) We could create a version of the data with one-hot encoded categorical features for constructing the density tree. For input to $train_{\mathcal{F}}(\cdot)$ at each iteration, we transform back the sampled data by identifying values for the categorical features to be the maximums in their corresponding sub-vectors. Since the optimizer already assumes a black-box $train_{\mathcal{F}}(\cdot)$ function, this transformation would be modeled as a part of it.
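Option (a) above can be sketched as follows; the node description (bounds for continuous features, admitted values for categorical ones) is hypothetical:

```python
# Sketch of uniform sampling from a density-tree node whose region is
# defined by conditions on both continuous and categorical features.
# Continuous features use a continuous uniform over the node's bounds;
# categorical features use a discrete uniform (equal-mass multinomial)
# over the values admitted at the node. The node spec is hypothetical.
import random

def sample_from_node(node, rng):
    point = {}
    for feat, (lo, hi) in node["continuous"].items():
        point[feat] = rng.uniform(lo, hi)      # continuous uniform
    for feat, values in node["categorical"].items():
        point[feat] = rng.choice(values)       # discrete uniform
    return point

node = {
    "continuous": {"x1": (0.0, 2.5), "x2": (-1.0, 1.0)},
    "categorical": {"color": ["red", "blue"]},
}
rng = random.Random(0)
pt = sample_from_node(node, rng)
print(pt)
```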
3. Model compression: An interesting possible use-case is model compression. Consider the column boosting rounds = 1 for the senseit_sei dataset in Table 4. Assuming the base classifiers have grown to their max_depths, the memory footprints in terms of nodes for the GBMs with max_depth = 2 and max_depth = 5 are 2^2 + 1 = 5 and 2^5 + 1 = 33, respectively.
Replacing the second model (larger) with the first (smaller) in a memory-constrained system reduces the footprint by (33−5)/33 = 85%, at the cost of changing the F1 score by only (0.60−0.62)/0.62 = −3.2%.
Such a proposition becomes particularly attractive if we look at the baseline scores, i.e., accuracies on the original distribution. For the larger model, F1[baseline] = F1[new]/(1 + δF1/100) = 0.62/
(1 + 1.8046) = 0.22. If we replace this model with the smaller model enhanced by our algorithm, we not only reduce the footprint but actually improve the F1 score by (0.60−0.22)/0.22 = 173.7%!
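The arithmetic above can be checked directly with the rounded values quoted in the text (the quoted 173.7% uses the unrounded F1 scores, so it is not re-derived here):

```python
# Reproduce the model-compression arithmetic using the rounded values
# quoted in the text for senseit_sei at boosting round 1.
nodes_small, nodes_large = 2**2 + 1, 2**5 + 1           # 5 and 33 nodes
footprint_reduction = (nodes_large - nodes_small) / nodes_large
print(round(footprint_reduction * 100))                  # → 85

f1_new_small, f1_new_large, dF1_large = 0.60, 0.62, 180.46
f1_change = (f1_new_small - f1_new_large) / f1_new_large
print(round(f1_change * 100, 1))                         # → -3.2

f1_baseline_large = f1_new_large / (1 + dF1_large / 100)
print(round(f1_baseline_large, 2))                       # → 0.22
```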
We precisely state this application thus: our algorithm may be used to identify a model size η[e] (subscript “e” for “equivalent”) in relation to a size η > η[e] such that:
$accuracy(train_{\mathcal{F}}(p^{*}_{\eta_e}, \eta_e), p) \approx accuracy(train_{\mathcal{F}}(p, \eta), p)$ (16)
4. Segment analysis: Our sampling operates within the bounding box U ⊂ ℝ^d; in previous sections, U was defined by the entire input data. However, this is not necessary: we may use our algorithm on a
subset of the data V ⊂ U, as long as V is a hyperrectangle in ℝ^d′, d′ ≤ d. This makes our algorithm useful for applications like cohort analysis, common in marketing studies, where the objective is
to study the behavior of a segment—say, based on age and income—within a larger population. Our algorithm is especially appropriate since traditionally such analyses have emphasized interpretability.
5. Multidimensional size: The notion of size need not be a scalar. Our GBM experiments touch upon this possibility. The definition of size only influences how the call to $train_{\mathcal{F}}(\cdot)$ internally executes; Algorithm 5 itself is agnostic to this detail. This makes our technique fairly flexible. For example, it is easy in our setup to vary both max_depth and the number of boosting rounds for GBMs.
6. Different optimizers: As mentioned in section 3.1.2, the fact that our search space has no special structure implies the workings of the optimizer are decoupled from the larger sampling framework. This makes it easy to experiment with different optimizers. For example, an interesting exercise might be to study the effect of the hybrid optimizer Bayesian Optimization with Hyperband (BOHB) (Falkner et al., 2018) when $train_{\mathcal{F}}(\cdot)$ is an iterative learner; BOHB uses an early stopping strategy in tandem with Bayesian Optimization.
7. Over/Under-sampling: As the range of the sample size parameter N[s] is set by the user, the possibility of over/under-sampling is subsumed by our algorithm. For instance, if our dataset has 500 points, and we believe that sampling up to 4 times might help, we can simply set N[s] ∈ [500, 2000]. Over/under-sampling need not be explored as a separate strategy.
6. Conclusion
Our work addresses the trade-off between interpretability and accuracy. The approach we take is to identify an optimal training distribution that often dramatically improves model accuracy for an
arbitrary model family, especially when the model size is small. We believe this is the first such technique proposed. We have framed the problem of identifying this distribution as an optimization
problem, and have provided a technique that is empirically shown to be useful across multiple learning algorithms and datasets. In addition to its practical utility, we believe this work is valuable
in that it challenges the conventional wisdom that the optimal training distribution is the test distribution.
A unique property of our technique is that, beyond a pre-processing step of constructing a DT (which we refer to as a density tree), the number of variables in the core optimization step does not depend on the dimensionality of the data; it uses a fixed set of eight variables. The density tree is used to determine a feasible space of distributions to search through, making the optimization efficient. Our choice of using DTs is innovative since, while all classifiers implicitly identify boundaries, only a few classifiers, such as DTs and rules, can explicitly indicate their locations in the feature space. We have also discussed how our algorithm may be extended in some useful ways.
We hope that the results presented here would motivate a larger discussion around the effect of training distributions on model accuracy.
Data Availability Statement
The datasets analyzed for this study can be found on the LIBSVM website at https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/multiclass.html and https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/
Author Contributions
AG and BR have jointly formulated the problem, worked on certain aspects of the representation and designed experiments. AG has additionally worked on practical aspects of the representation, and
executed the experiments.
Conflict of Interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Supplementary Material
The Supplementary Material for this article can be found online at: https://www.frontiersin.org/articles/10.3389/frai.2020.00003/full#supplementary-material
1. ^https://blogs.wsj.com/cio/2018/05/11/bank-of-america-confronts-ais-black-box-with-fraud-detection-effort/
2. ^https://www.darpa.mil/program/explainable-artificial-intelligence
3. ^https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, https://www.propublica.org/article/how-we-analyzed-the-compas-recidivism-algorithm
4. ^The optimizer we use, TPE, can handle conditional spaces. However, as mentioned, our goal is flexibility in implementation.
5. ^We justify this name by noting that there is more than one multivariate generalization of the Beta: the Dirichlet distribution is a popular one, but there are others (e.g., Olkin and Trikalinos,
6. ^We use this term since this helps us define a pdf over the input space ℝ^d. We don't abbreviate this term to avoid confusion with “DT.” DT always refers to a decision tree in this work, and the
term “density tree” is used as-is.
7. ^We later observe from our experiments that this upper bound is sufficient since nearly all depth distributions have at most 2 dominant components (see Figures 14–17).
8. ^Interestingly, another recent paper on interpretability does use the Kumaraswamy distribution (Bastings et al., 2019).
Alimoglu, F., and Alpaydin, E. (1996). “Methods of combining multiple classifiers based on different representations for pen-based handwritten digit recognition,” in Proceedings of the Fifth Turkish
Artificial Intelligence and Artificial Neural Networks Symposium (TAINN 96) (Istanbul).
Alvi, A., Ru, B., Calliess, J.-P., Roberts, S., and Osborne, M. A. (2019). “Asynchronous batch Bayesian optimisation with improved local penalisation,” in Proceedings of the 36th International
Conference on Machine Learning, Vol. 97 of Proceedings of Machine Learning Research, eds K. Chaudhuri and R. Salakhutdinov (Long Beach, CA), 253–262.
Ancona, M., Oztireli, C., and Gross, M. (2019). “Explaining deep neural networks with a polynomial time algorithm for shapley value approximation,” in Proceedings of the 36th International Conference
on Machine Learning, Vol. 97 of Proceedings of Machine Learning Research, eds K. Chaudhuri and R. Salakhutdinov (Long Beach, CA), 272–281.
Angelino, E., Larus-Stone, N., Alabi, D., Seltzer, M., and Rudin, C. (2017). “Learning certifiably optimal rule lists,” in Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge
Discovery and Data Mining, KDD '17 (New York, NY), 35–44.
Baldi, P., Sadowski, P., and Whiteson, D. (2014). Searching for exotic particles in high-energy physics with deep learning. Nat. Commun. 5:4308. doi: 10.1038/ncomms5308
Bastings, J., Aziz, W., and Titov, I. (2019). “Interpretable neural predictions with differentiable binary variables,” in Proceedings of the 57th Annual Meeting of the Association for Computational
Linguistics (Florence: Association for Computational Linguistics), 2963–2977.
Bergstra, J., Bardenet, R., Bengio, Y., and Kégl, B. (2011). “Algorithms for hyper-parameter optimization,” in Proceedings of the 24th International Conference on Neural Information Processing
Systems, NIPS'11 (Stockholm: Curran Associates Inc.), 2546–2554.
Bergstra, J., Yamins, D., and Cox, D. D. (2013). “Making a science of model search: Hyperparameter optimization in hundreds of dimensions for vision architectures,” in Proceedings of the 30th
International Conference on International Conference on Machine Learning - Volume 28, ICML'13, I-115–I-123 (Atlanta, GA).
Blackwell, D., and MacQueen, J. B. (1973). Ferguson distributions via polya urn schemes. Ann. Stat. 1, 353–355.
Blaser, R., and Fryzlewicz, P. (2016). Random rotation ensembles. J. Mach. Learn. Res. 17, 1–26.
Breiman, L., Friedman, J. H., Olshen, R. A., and Stone, C. J. (1984). Classification and Regression Trees. New York, NY: Chapman & Hall.
Brochu, E., Cora, V. M., and de Freitas, N. (2010). A tutorial on bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning.
CoRR abs/1012.2599. [Preprint].
Caruana, R., Lou, Y., Gehrke, J., Koch, P., Sturm, M., and Elhadad, N. (2015). “Intelligible models for healthcare: predicting pneumonia risk and hospital 30-day readmission,” in Proceedings of the
21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD '15 (New York, NY), 1721–1730.
Chang, C.-C., and Lin, C.-J. (2001). “IJCNN 2001 challenge: generalization ability and text decoding,” in Proceedings of IJCNN. IEEE (Washington, DC), 1031–1036.
Chang, C.-C., and Lin, C.-J. (2011). LIBSVM: A library for support vector machines. ACM Trans. Intell. Syst. Technol. 2, 27:1–27:27. doi: 10.1145/1961189.1961199
Collobert, R., Bengio, S., and Bengio, Y. (2002). “A parallel mixture of svms for very large scale problems,” in Advances in Neural Information Processing Systems 14, eds T. G. Dietterich, S. Becker,
and Z. Ghahramani (Montreal, QC: MIT Press), 633–640.
Dai, Z., Yu, H., Low, B. K. H., and Jaillet, P. (2019). “Bayesian optimization meets Bayesian optimal stopping,” in Proceedings of the 36th International Conference on Machine Learning, Vol. 97 of
Proceedings of Machine Learning Research, eds K. Chaudhuri and R. Salakhutdinov (Long Beach, CA), 1496–1506.
Dasgupta, S. (2011). Two faces of active learning. Theor. Comput. Sci. 412, 1767–1781. doi: 10.1016/j.tcs.2010.12.054
Dean, D. J., and Blackard, J. A. (1998). Comparison of Neural Networks and Discriminant Analysis in Predicting Forest Cover Types, Ft Collins, CO: Colorado State University. doi: 10.5555/928509
Duarte, M. F., and Hu, Y. H. (2004). Vehicle classification in distributed sensor networks. J. Parallel Distrib. Comput. 64, 826–838. doi: 10.1016/j.jpdc.2004.03.020
Efron, B., Hastie, T., Johnstone, I., and Tibshirani, R. (2004). Least angle regression. Ann. Stat. 32, 407–499. doi: 10.1214/009053604000000067
Keywords: ML, interpretable machine learning, Bayesian optimization, infinite mixture models, density estimation
Citation: Ghose A and Ravindran B (2020) Interpretability With Accurate Small Models. Front. Artif. Intell. 3:3. doi: 10.3389/frai.2020.00003
Received: 24 October 2019; Accepted: 29 January 2020;
Published: 25 February 2020.
Copyright © 2020 Ghose and Ravindran. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other
forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice.
No use, distribution or reproduction is permitted which does not comply with these terms.
*Correspondence: Abhishek Ghose, abhishek.ghose.82@gmail.com | {"url":"https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2020.00003/full","timestamp":"2024-11-03T10:17:59Z","content_type":"text/html","content_length":"920800","record_id":"<urn:uuid:06bc5721-8734-4bd9-ad8d-d9ef07ffc05a>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00201.warc.gz"} |
The Outlet lets water flow downstream with a prescribed flow rate. It is similar to the Pump, except that water only flows down, by gravity.
When PID controlled, the Outlet must point towards the controlled Basin in terms of edges.
1 Tables
1.1 Static
column type unit restriction
node_id Int32 - sorted
control_state String - (optional) sorted per node_id
active Bool - (optional, default true)
flow_rate Float64 \(\text{m}^3/\text{s}\) non-negative
min_flow_rate Float64 \(\text{m}^3/\text{s}\) (optional, default 0.0)
max_flow_rate Float64 \(\text{m}^3/\text{s}\) (optional)
min_upstream_level Float64 \(\text{m}\) (optional)
max_downstream_level Float64 \(\text{m}\) (optional)
2 Equations
The Outlet is very similar to the Pump, but it has an extra reduction factor for physical constraints:
\[ Q = \mathrm{clamp}(\phi Q_\text{set}, Q_{\min}, Q_{\max}) \]
• \(Q\) is the realized Outlet flow rate.
• \(Q_\text{set}\) is the Outlet’s target flow_rate.
• \(Q_{\min}\) and \(Q_{\max}\) are the Outlet min_flow_rate and max_flow_rate.
• \(\phi\) is the reduction factor, which smoothly reduces flow based on all of these criteria:
□ The upstream volume is below \(10\ \text{m}^3\).
□ The upstream level is less than \(0.02\ \text{m}\) above the downstream level.
□ The upstream level is below min_upstream_level + \(0.02\ \text{m}\).
□ The downstream level is above max_downstream_level - \(0.02\ \text{m}\).
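As an illustrative sketch of the equation above — not Ribasim's actual implementation (Ribasim uses smooth reduction ramps, and the function names here are hypothetical) — the clamped, reduced flow can be written as:

```python
def reduction_factor(x: float, threshold: float) -> float:
    """Linear ramp from 0 to 1 as x goes from 0 to threshold.
    Stands in for Ribasim's smooth reduction factor."""
    if x <= 0.0:
        return 0.0
    if x >= threshold:
        return 1.0
    return x / threshold

def outlet_flow(q_set, q_min, q_max, upstream_volume, upstream_level,
                downstream_level, min_upstream_level=None,
                max_downstream_level=None):
    """Q = clamp(phi * Q_set, Q_min, Q_max), with phi the product of
    the reduction factors for each physical criterion listed above."""
    phi = reduction_factor(upstream_volume, 10.0)                  # low upstream volume
    phi *= reduction_factor(upstream_level - downstream_level, 0.02)  # small level difference
    if min_upstream_level is not None:
        phi *= reduction_factor(upstream_level - min_upstream_level, 0.02)
    if max_downstream_level is not None:
        phi *= reduction_factor(max_downstream_level - downstream_level, 0.02)
    return max(q_min, min(phi * q_set, q_max))
```

With a full upstream basin and a clear level difference, the target flow rate is realized unchanged; as the upstream volume or head difference approaches zero, the realized flow is smoothly reduced toward zero.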
Basic Sciences Applications
Given a family of linear constraints and a linear objective function one can consider whether to apply a Linear Programming (LP) algorithm or use a Linear Superiorization (LinSup) algorithm on this
data. In the LP methodology one aims at finding a point that fulfills the constraints and has the minimal value of the objective function …
What is a Percent?
The percent (%) symbol is used in mathematics and programs like Microsoft Excel to represent a fraction of a whole number (i.e., 60% is 6/10 or 3/5 or .60).
Where is the percent key on the keyboard?
Below is an overview of a computer keyboard with the percent key highlighted in blue.
How to create the % symbol
Creating the % symbol on a U.S. keyboard
To create a percent symbol on a U.S. keyboard, hold down Shift and press the 5 key at the top of the keyboard (Shift+5).

Using the Alt code Alt+37 also produces a percent symbol.
Creating the % symbol on a smartphone or tablet
To create a percent on a smartphone or tablet, open the keyboard and go to the numbers section (123). Then, tap the (#+=) or symbols (sym) section, and press your finger on the % symbol.
Examples of using the percent on a computer
• Represent a percentage (i.e., 60% is 6/10 or 3/5 or .60).
• Variable; for further information, see our %1, \1, and $1 definition.
• In programming languages like Perl, a % denotes a hash. In Python, % is an operator that returns the remainder of a division.
• In Microsoft Windows, a percent is used for an environment variable.
• In Microsoft Excel and other spreadsheet programs, Percentage is a number format that allows numbers to be formatted as percentages.
• The percent key may be used in a keyboard shortcut, like Ctrl+Shift+%.
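For example, Python's % (modulo) operator returns the remainder left over after dividing one number by another:

```python
# The % operator in Python returns the remainder of a division.
print(10 % 3)    # 10 / 3 is 3 remainder 1, so this prints 1
print(7 % 2)     # odd numbers leave a remainder of 1 when divided by 2
print(100 % 25)  # 25 divides 100 evenly, so the remainder is 0
```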
Percentage format
In Microsoft Excel and other spreadsheet programs, Percentage is a number format that allows numbers to be formatted as a percentage.
Use the keyboard shortcut Ctrl+Shift+5 to change a cell format to percentage.
Using a % as a wildcard in a database
A percent symbol can also be used as a wildcard, representing any sequence of zero or more characters (letters, numbers, hyphens, or other special characters).

For example, searching for %Army% in Oracle systems returns any record containing the word "Army," regardless of where it appears. Using the percent symbol as a wildcard helps find search results when a user is unsure exactly what records they are looking for. If the search returns too many results, the user can narrow it by combining the percent symbol with additional characters or words.
SQL example
SELECT full_name
FROM employees
WHERE full_name LIKE 'Bryan%'
In SQL (Structured Query Language), this example would result in a list of all employee names that start with "Bryan."
How to find a percent of currency or another number
To find what percentage one number is of another, divide the part (typically the smaller value) by the whole (the larger value), then multiply the result by 100. For example, 1180 / 1454 = 0.81155... Multiplying that result by 100 (0.81155 * 100) gives you 81.16 (81.16%).
To find the percentage of currency, multiply the decimal percentage value by the total currency. For example, 20% (0.20 in decimal) of $100.00 (0.20 * 100) is 20 or $20.00.
These same formulas can be used in Microsoft Excel and other spreadsheets to find percentages of numbers in a cell. For example, if cell A1 contains "1180" and cell B1 contains "1454," entering the formula "=A1/B1" in cell C1 gives the decimal value. Formatting cell C1 as a percentage would make it show 81.16%, or 81% if the decimal places are set to 0.
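The same arithmetic can be sketched in Python (the function names here are illustrative):

```python
def percent_of_total(part, whole):
    """What percentage `part` is of `whole` (e.g., 1180 of 1454)."""
    return part / whole * 100

def percentage_of(amount, percent):
    """`percent` percent of `amount` (e.g., 20% of $100.00)."""
    return amount * percent / 100

print(round(percent_of_total(1180, 1454), 2))  # → 81.16
print(percentage_of(100.00, 20))               # → 20.0
```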
%1, Keyboard terms, Number key, Shift+5, Spreadsheet terms, Typography terms | {"url":"http://hkci.net/percent.html","timestamp":"2024-11-05T13:42:36Z","content_type":"text/html","content_length":"15745","record_id":"<urn:uuid:d82ca406-02a0-4dfd-aa69-9c6821af361c>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00682.warc.gz"} |