Cm To Feet Conversion Chart Printable
Centimeter. Definition: A centimeter (symbol: cm) is a unit of length in the International System of Units (SI), the current form of the metric system. It is defined as 1/100 meters. History/origin:
A centimeter is based on the SI unit meter, and as the prefix "centi" indicates, is equal to one hundredth of a meter. Metric prefixes range from factors of 10^-18 to 10^18 based on a decimal system.
Abangkuraden's Blog Convert feet and inches to cm
1 cm = 0.032808398950131 ft. To convert 140 centimeters into feet we have to multiply 140 by the conversion factor in order to get the length amount from centimeters to feet. We can also form a
simple proportion to calculate the result: 1 cm → 0.032808398950131 ft. 140 cm → L (ft)
Feet to centimeters conversion calculator (feet to cm) Mycalcu
How to Convert 140 Centimeters to Feet and Inches. You can convert 140 centimeters to feet easily using a height converter or manually by following a few easy steps. Step One: divide the height in centimeters by 2.54 to convert to inches. 140 cm ÷ 2.54 = 55.12 in. Step Two: divide the height in inches by 12 to find the number of feet. Take the whole number as the number of feet and note the remainder for the inches.
Measuring Your Childs Feet SchoolDays.ie
To convert from cm to feet and inches, use the following two conversion equations: 1 cm = 0.3937 inches, and 1 foot = 12 inches. Can you show the process to calculate it? For example, 180 cm in feet. The equation is: 180 cm = 180 × 0.3937 inches = 70.86614 inches. Then 70.86614 ÷ 12 = 5 with a remainder of 10.86614, so 180 cm ≈ 5 feet 10.87 inches.
What is 140 CM in Feet and Inches?
How tall is 140 centimeters? 140 centimeters is equal to 4 feet and 7 inches. This Height Converter is an accurate, error-free, simple and quick tool. You don't have to worry about trying to write down the correct formula to perform the calculation and then whether or not you made mistakes in doing it.
140 Feet To Centimeters Converter 140 ft To cm Converter
To convert 140 cm to feet and inches you have to follow the steps from the Conversion Formula section above 1) [integer-feet] = floor(140 * 0.032808398950131) = floor(4.5931758530184) = 4 feet
How to Convert Human Height in Centimeters to Feet (with Unit Converter)
If we want to calculate how many Feet are 140 Centimeters we have to multiply 140 by 25 and divide the product by 762. So for 140 we have: (140 × 25) ÷ 762 = 3500 ÷ 762 = 4.5931758530184 Feet. So
finally 140 cm = 4.5931758530184 ft.
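The same two-step arithmetic is easy to script. Here is a minimal Python sketch (the function name is ours, for illustration):

```python
def cm_to_feet_inches(cm: float) -> tuple[int, float]:
    """Convert centimeters to (whole feet, remaining inches),
    using the exact definitions 1 in = 2.54 cm and 1 ft = 12 in."""
    inches = cm / 2.54
    feet = int(inches // 12)
    return feet, inches - feet * 12

print(cm_to_feet_inches(140))  # (4, 7.118...) -> about 4 ft 7.1 in
```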
What is 140 cm in feet and inches? Calculatio
How many feet and inches is 140 cm? Answer: 140 cm = 4ft 7.12in. 140 Centimeters is equal to 4 Feet 7.12 Inches. It Is Also:
How many Centimeters equal one Foot? A foot is defined as 0.3048 of a meter, so 30.48 centimeters equal one foot, since 1 cm = 0.01 meters. The symbol for centimeters is "cm", while "ft" is used for feet. Sometimes a single quote does the same job, e.g. 6' tall means 6 feet tall. Difference between Centimeters and Feet: both feet and centimeters are units of length, but they belong to different measurement systems.
Feet In Cm
How far is 140 centimeters in feet? 140 cm to ft conversion: 140 centimeters ≈ 4.5931759 feet (result rounded). 140 centimeters is equal to about 4 feet and 7.1 inches.
How tall is 140 cm in feet and inches? How high is 140 cm? Use this easy calculator to convert centimeters to feet and inches.
How to convert feet to cm. To convert feet to centimeters, multiply your feet figure by 30.48 (cm = feet × 30.48). To convert from feet and inches, convert your feet figure to inches first (multiply
it by 12), add on the inches remainder and then multiply the result by 2.54.
How to Measure your Feet
To calculate your height in centimeters from inches, multiply your height figure by 2.54. If your height is in feet and inches, multiply the feet figure by 12 first, before adding on the extra inches and multiplying by 2.54. Let's look at an example conversion. Example: Mabel is 5 ft 4 inches tall (5' 4"). To convert that figure to centimeters: 5 × 12 + 4 = 64 inches, and 64 × 2.54 = 162.56 cm.
Convert 140 CM to Inches
Use this easy and mobile-friendly calculator to convert between centimeters and feet. Just type the number of centimeters into the box and hit the Calculate button.
140 Cm In Feet
Task: Convert 45 centimeters to feet (show work) Formula: cm ÷ 30.48 = ft Calculations: 45 cm ÷ 30.48 = 1.47637795 ft Result: 45 cm is equal to 1.47637795 ft. Conversion Table. For quick reference
purposes, below is a conversion table that you can use to convert from cm to feet. Centimeters to Feet Conversion Chart.
Centimeter to feet conversion (cm to ft) helps you to calculate how many feet are in a centimeter length, and also lists a cm to ft conversion table:
140 cm = 4.5931758530184 ft
145 cm = 4.757217847769 ft
150 cm = 4.9212598425197 ft
155 cm = 5.0853018372703 ft
160 cm = 5.249343832021 ft
165 cm = 5.4133858267717 ft
sports analytics Archives – DataDuel.co
Analysis & Narrative Written by Jacob Matson & Matt Levine, February 2024. [pdf]
Executive Summary
• The variant of “Super Bowl Squares” that we analyzed is one in which the entrant is assigned a digit (0-9) for Team A’s final score to end with and a digit for Team B’s final score to end with ^1
• We compiled the final game scores from the 30 most recent NFL seasons to determine the frequency that each of the 100 potential “Squares” has been scored a winner
• We then compared these frequencies with the publicly available betting odds offered on the ‘Super Bowl Squares – Final Result’ market by DraftKings Sportsbook to ascertain the expected value (EV)
of each square
• The analysis determined that all 100 of the available squares carried a negative expected value ranging from [-4.0% to -95.2%], and that buying all 100 squares would carry a negative expected
value of approximately [-39.7%]
Our Methodology
• We collected final game scores data from Pro Football Reference for the last 30 full NFL seasons, as well as the current NFL season through the completion of Week 17. We also included all Super
Bowl games that took place prior to 30 seasons ago
• Games that ended in a tie were excluded since that is not a potential outcome for the Super Bowl
• We calculated raw frequencies for each of the 100 available squares, and then weighted the Niners’ digit 55% to the digit represented by the winner of the historical games, and 45% to the digit
represented by the loser of the historical games. The [55% / 45%] weighting is reflective of the estimated win probability implied by the de-vigged Pinnacle Super Bowl Winner odds of ‘-129 /
+117’ ^2 ^3
• The weighted frequencies were then multiplied by the gross payouts implied by DraftKings Sportsbook Super Bowl Squares – Final Result odds ^2 (a code sketch of this weighting and expected-value step follows below)
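The weighting and expected-value steps described above can be sketched in a few lines of Python. This is our illustrative reconstruction, not the authors' code; the sample scores and odds are hypothetical:

```python
from collections import Counter

def weighted_square_freqs(final_scores, p_win=0.55):
    """final_scores: (winner_pts, loser_pts) pairs from historical games.
    Each game contributes to two squares: one where the favored team
    (here, the Niners, at 55%) takes the winner's digit, and one where
    it takes the loser's digit (45%)."""
    freq = Counter()
    for w, l in final_scores:
        freq[(w % 10, l % 10)] += p_win       # favored team wins
        freq[(l % 10, w % 10)] += 1 - p_win   # favored team loses
    n = len(final_scores)
    return {square: count / n for square, count in freq.items()}

def expected_value(prob, decimal_odds):
    """EV of a 1-unit stake: probability times gross payout, minus the stake."""
    return prob * decimal_odds - 1

freqs = weighted_square_freqs([(31, 20), (23, 17), (27, 24)])     # toy sample
print(expected_value(freqs.get((0, 7), 0.0), decimal_odds=25.0))  # hypothetical odds
```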
Findings & Results
Raw Frequencies
Sample Size: n = 8,162 games
• Most frequent digit for losing team is ‘0’, occurring ~20.5% of the time
• Most frequent digit for winning team is ‘7’, occurring ~15.5% of the time
Losing Digit Winning Digit Frequency
7 0 3.99%
0 3 3.97%
4 7 3.47%
0 7 3.32%
0 4 3.11%
Top 5 most frequent winning squares
Weighted Frequencies
Sample Size: n = 8,162 games
• Most frequent digit for Niners is ‘7’, occurring ~16.9% of the time
• Most frequent digit for Chiefs is ‘0’, occurring ~17.4% of the time
Niners Digit Chiefs Digit Frequency
0 7 3.69%
7 0 3.62%
7 4 3.30%
3 0 3.29%
4 7 3.27%
Top 5 most frequent winning squares
Expected Value by Square^5
Niners Digit Chiefs Digit Expected Value^6
0 7 (4.04%)
3 0 (4.55%)
7 0 (5.79%)
Top 3 Best Expected Value Squares
Niners Digit Chiefs Digit Expected Value
2 2 (95.19%)
5 5 (77.80%)
2 5 (74.08%)
Top 3 Worst Expected Value Squares
Raw Frequencies for Total Points u47.5
Sample Size: n = 5,127 games
• Most frequent digit for losing team is ‘0’, occurring ~25.2% of the time
• Most frequent digit for winning team is ‘4’, occurring ~16.3% of the time
Loser Digit Winner Digit Frequency
0 3 5.68%
7 0 5.09%
0 7 4.76%
0 4 3.98%
7 4 3.39%
Top 5 most frequent winning squares
Raw Frequencies for Total Points o47.5
Sample Size: n = 3,035 games
• Most frequent digit for losing team is ‘4’, occurring ~20.5% of the time
• Most frequent digit for winning team is ‘1’, occurring ~17.9% of the time
Loser Digit Winner Digit Frequency
4 7 6.10%
7 1 4.09%
4 1 3.39%
1 4 2.97%
0 8 2.93%
Top 5 most frequent winning squares
Selected Conclusions
• Participating in the “Super Bowl Squares – Final Result” market on DraftKings Sportsbook has a substantially negative overall expected value, and likely has a negative expected value for every
single one of the 100 available squares
□ This conclusion is consistent with the fact that the probabilities implied by DraftKings’ available odds sum to a total of ~165.9%; the market has substantial “juice” or “vig”
□ The available odds on relatively common squares (e.g., [0:7], [3:0], [7:0]) are much closer to “fair” vs. the rarest square outcomes (e.g., [2:2], [5:5], [2:5])
☆ This strategy by DraftKings entices bettors to place a substantial dollar volume of wagers on the “almost fair” squares that have a reasonable chance of winning
☆ Secondarily, it mitigates the negative financial impact to DraftKings that could arise in the event of a “black swan” final game score, such as [15 – 5] or [22 – 12]
□ A participant who has a bias towards a “high-scoring” vs. “low-scoring” game would place materially different value on certain square outcomes. Among the most pronounced examples:
□ If one believes the game will be “low-scoring”, he should greatly value the losing team’s digit ‘0’, which occurs in 25.2% of low-scoring games in the dataset, but only in 12.7% of
high-scoring games in the dataset
□ If one believes the game will be “high-scoring”, he should greatly value the winning team’s digit ‘1’, which occurs in 17.9% of high-scoring games in the dataset, but only in 9.8% of
low-scoring games in the dataset
Areas for Research Expansion
• The most substantial limitation in our analysis is that the square frequencies are derived solely from historical game logs, as opposed to a Monte Carlo simulation model of this year’s Super Bowl
□ As such, an analyst of this data is forced to balance (i) choosing the subset of games that are most comparable to the game being predicted, and (ii) leaving a sufficiently large number of
games in the dataset to mitigate the impact of outlier game results
• The variant of Super Bowl Squares that we analyzed (“Final Result”) is one of several commonly played variants, each of which has its quirks that would impact the analysis. Perhaps the most
common is the variant in which winning squares are determined by the digits in the score at the end of ANY quarter (as opposed to only at the end of the game)
• Further analysis could yield interesting insights regarding how the value of a given square changes as the game progresses. As an example, say that a team scores a safety (worth two points) in
the 1st quarter of the game. Which final square results would see the greatest increase in estimated probability? Which would see the greatest decrease? Are there any squares that would only be
minimally impacted?
Appendix A: Winning Criteria
• The variant of “Super Bowl Squares” that we analyzed is settled based on the final digit of each team’s score once the game has been completed
• Both teams’ digits must match for a square to be deemed a winner. As such, there are 100 potential outcomes, and there will always be exactly 1 victorious square out of these 100 potential outcomes
• A partial set of the final scores that would result in victory for an entrant with the square “Chiefs 7 – Niners 3” are as follows:
Chiefs 7 / Niners 3 Chiefs 7 / Niners 13 Chiefs 7 / Niners 23 Chiefs 7 / Niners 33
Chiefs 17 / Niners 3 Chiefs 17 / Niners 13 Chiefs 17 / Niners 23 Chiefs 17 / Niners 33
Chiefs 27 / Niners 3 Chiefs 27 / Niners 13 Chiefs 27 / Niners 23 Chiefs 27 / Niners 33
Appendix B: Weighted Square Value
Weighting is reflective of the estimated win probability implied by the de-vigged Pinnacle Super Bowl Winner odds of ‘-129 / +117’ [55% / 45%]
Key Insight: If the winner is known, the square “Winner 1:0 Loser” increases from 1.2% to 2.2% probability, roughly doubling.
Appendix C: DraftKings Sportsbook Available Odds
Terminating Decimal – Definition, Examples | How to Determine if it is a Terminating Decimal?
Seek help regarding the Terminating Decimals concept from this webpage. Get to know all about Terminating Decimals such as Definitions, How to Determine if it is a Terminating Decimal or not by going
through this entire article. We have also listed the steps on how to convert a terminating decimal to a repeating decimal here. Find Solved Examples on Terminating Decimals available here so as to
understand the concept clearly.
Terminating Decimal – Definition
A Terminating Decimal is a decimal that ends. A Decimal Number that contains a finite number of digits next to the decimal point is called a Terminating Decimal. In fact, we can rewrite the
terminating decimals as fractions.
Example: 0.52, 0.625, 0.78, etc.
How to know if it is a Terminating Decimal?
While expressing a fraction in decimal form, when we perform division we observe that the division is complete after a few steps, i.e., we get a remainder of zero. The decimal quotient obtained is known as the terminating decimal. In fact, such decimals will have a finite number of digits after the decimal point.
Terminating Decimals can be expressed as repeating decimals by simply placing zeros that are never-ending. For Example, 12.32 can be written as 12.3200000000…….
Solved Examples on Terminating Decimals
1. Express 1/4 in decimal form and check if it is terminating decimal?
Perform Division Operation and check if the decimal quotient has a finite number of digits or not. If it has a finite number of digits after the decimal point then it is a terminating decimal.
1/4 = 0.25. Since 0.25 has a finite number of digits next to the decimal point, it is considered a terminating decimal.
2. Express 15/4 in Decimal Form and check if it is terminating decimal?
Perform Division Operation and check if the decimal quotient has a finite number of digits or not. If it has a finite number of digits after the decimal point then it is a terminating decimal.
15/4 = 3.75. Since 3.75 has a finite number of digits next to the decimal point, it is considered a terminating decimal.
3. Express 23/5 in Decimal Form?
Perform Division Operation and check if the decimal quotient has a finite number of digits or not. If it has a finite number of digits after the decimal point then it is a terminating decimal.
23/5 = 4.6. Since the decimal quotient, 4.6, has a finite number of decimal places next to the decimal point, it is called a terminating decimal.
FAQs on Terminating Decimal
1. What is a terminating decimal?
Terminating Decimal is a Decimal Number that contains a finite number of digits next to the decimal point.
2. Is 2.3 a terminating decimal?
Yes, 2.3 is a terminating decimal as it has a finite number of digits after the decimal point.
3. How do you find terminating decimals without actual division?
After reducing the fraction to lowest terms, the prime factorization of the denominator should contain only the factors 2 and/or 5 for the decimal to terminate. Any prime factor other than these gives a non-terminating decimal.
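That divisibility test translates directly into code. A short Python sketch (our own illustration of the rule above):

```python
from math import gcd

def is_terminating(numerator: int, denominator: int) -> bool:
    """A fraction terminates in base 10 iff, after reducing to lowest
    terms, its denominator has no prime factors other than 2 and 5."""
    d = denominator // gcd(numerator, denominator)
    for p in (2, 5):
        while d % p == 0:
            d //= p
    return d == 1

print(is_terminating(1, 4), is_terminating(23, 5), is_terminating(1, 3))
# True True False
```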
D (data language specification)
D is a set of prescriptions for what Christopher J. Date and Hugh Darwen believe a relational database management system ought to be like. It is proposed in their paper The Third Manifesto, first
published in 1994 and elaborated on in several books since then.
D by itself is an abstract language specification. It does not specify language syntax. Instead, it specifies desirable and undesirable language characteristics in terms of prescriptions and
proscriptions. Thus, D is not a language but a family of both implemented and future languages. A "valid D" must have a certain set of features, and exclude a different set of features which Date and
Darwen consider unwise and contrary to the relational model proposed by E. F. Codd in 1970. A valid D may have additional features which are outside the scope of relational databases.
Tutorial D
Tutorial D is a specific D which is defined and used for illustration in The Third Manifesto. Implementations of D need not have the same syntax as Tutorial D. The purpose of Tutorial D is both
educational and to show what a D might be like. Rel is an implementation of Tutorial D.
There are numerous implementations of D, with varying degrees of maturity and compliance.
• D's first implementation is D4, written in C#. D4 is the flagship language of Alphora's Dataphor.
• Rel is the most complete implementation of Tutorial D (including the Inheritance Model), and is heavily used in teaching.
• Andl is a relational programming language with SQLite or PostgreSQL backend and Thrift interfaces.
• Alf - Relational Algebra at your Fingertips, a Ruby implementation of relational algebra inspired by Tutorial D.
• Project:M36 - a mathematically-coherent relational algebra database management system written in Haskell.
• Dee makes Python relational.
• SIRA_PRISE stands for Straightforward Implementation of a Relational Algebra - Prototype of a Relational Information Storage Engine.
• TclRal - Tcl Relational Algebra Library, TclRal is an implementation of relational algebra, based on concepts in The Third Manifesto, as an extension of the Tcl language.
• C. J. Date and Hugh Darwen (2007, Addison-Wesley) Databases, Types, and the Relational Model: The Third Manifesto, a third edition superseding first and second editions that are the two books
listed below. ISBN 0-321-39942-0
• Date, C. J.; Darwen, Hugh (1998). Foundation for object/relational databases: The Third Manifesto: a detailed study of the impact of objects and type theory on the relational model of data
including a comprehensive proposal for type inheritance (1st ed.). Reading, MA: Addison-Wesley. xxi, 496. ISBN 0-201-30978-5. LCCN 98010364. OCLC 38431501. LCC QA76.9.D3 D15994 1998.
• Date, C. J.; Darwen, Hugh (2000). Foundation for Future Database Systems: The Third Manifesto: a detailed study of the impact of type theory on the relational model of data, including a
comprehensive model of type inheritance (2nd ed.). Reading, MA: Addison-Wesley Professional. xxiii, 547. ISBN 0-201-70928-7. LCCN 00035527. OCLC 43662285. LCC QA76.9.D3 D3683 2000.
Source: Wikipedia.org
[Fig. 1] Linear regression analysis of relationships of optic nerve head parameters, radius of scan circle, and peripapillary retinal nerve fiber layer (RNFL) thickness to axial length before and after adjustment for ocular magnification effect. (A) Rim area, (B) disc area, (C) cup area, (D) radius of disc, (E) disc-scan circle distance, and (F) RNFL thickness.
[Fig. 2] Linear regression analysis of relationships of optic nerve head parameters, radius of scan circle, and peripapillary RNFL thickness to spherical equivalent before and after adjustment for ocular magnification effect. (A) Rim area, (B) disc area, (C) cup area, (D) radius of disc, (E) disc-scan circle distance, and (F) RNFL thickness. D = diopters.
[Table 1] ONH parameters and peripapillary RNFL thickness by Cirrus OCT
[Table 2] Bivariate and partial correlations of AL and SE with disc parameters and peripapillary RNFL thickness before adjustment for ocular magnification effect
[Table 3] Bivariate and partial correlations of AL and SE with disc parameters and peripapillary RNFL thickness after adjustment for ocular magnification effect by AL method
[Table 4] Correlation and regression analyses of RNFL thickness, disc size, and scan circle
Korean Journal of Ophthalmology (Korean J Ophthalmol; KJO), ISSN 1011-8942. The Korean Ophthalmological Society. doi: 10.3341/kjo.2016.30.5.335. Original Article.
Influence of Myopia on Size of Optic Nerve Head and Retinal Nerve Fiber Layer Thickness Measured by Spectral Domain Optical Coherence Tomography
Seok Hyun Bae,1 Shin Hee Kang,2 Chi Shian Feng,3,4 Joohyun Park,5 Jae Hoon Jeong,6,7 Kayoung Yi1
1 Department of Ophthalmology, Kangnam Sacred Heart Hospital, Hallym University College of Medicine, Seoul, Korea. 2 Bundang Clean Eye Clinic, Seongnam, Korea. 3 Somang Ophthalmic Clinic, Incheon, Korea. 4 Haenam Kim's Eye Clinic, Haenam, Korea. 5 Dain Eye Clinic, Incheon, Korea. 6 Department of Ophthalmology, Armed Forces Capital Hospital, Seongnam, Korea. 7 Department of Ophthalmology, Konyang University College of Medicine, Daejeon, Korea.
Corresponding Author: Kayoung Yi, MD, PhD. Department of Ophthalmology, Kangnam Sacred Heart Hospital, Hallym University College of Medicine, #1 Singil-ro, Yeongdeungpo-gu, Seoul 07441, Korea. Tel: 82-2-829-5196, Fax: 82-2-829-4638, eyeyoung@hallym.or.kr
Korean J Ophthalmol 2016;30(5):335-343. Received September 14, 2015; accepted November 17, 2015.
© 2016 The Korean Ophthalmological Society. This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0/) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract
Purpose: To investigate optic nerve head size and retinal nerve fiber layer (RNFL) thickness according to refractive status and axial length.
Methods: In a cross-sectional study, 252 eyes of 252 healthy volunteers underwent ocular biometry measurement as well as optic nerve head and RNFL imaging by spectral-domain optical coherence tomography.
Correlation and linear regression analyses were performed for all subjects. The magnification effect was adjusted by the modified axial length method.
Results: Disc area and spherical equivalent were positively correlated (r = 0.225, r^2 = 0.051, p = 0.000). RNFL thickness showed significant correlations with spherical equivalent (r = 0.359, r^2 = 0.129, p
= 0.000), axial length (r = -0.262, r^2 = 0.069, p = 0.000), disc radius (r = 0.359, r^2 = 0.129, p = 0.000), and radius of the scan circle (r = -0.262, r^2 = 0.069, p = 0.000). After adjustment for
the magnification effect, those relationships were reversed; RNFL thickness showed negative correlation with spherical equivalent and disc radius, and positive correlation with axial length and
radius of the scan circle. The distance between the disc margin and the scan circle was closely correlated with RNFL thickness (r = -0.359, r^2 = 0.129, p = 0.000), which showed a negative
correlation with axial length (r = -0.262, r^2 = 0.069, p = 0.000).
Conclusions: Optic disc radius and RNFL thickness decreased in more severely myopic eyes, but they increased after adjustment for magnification effect. The error due to the magnification effect and optic nerve
head size difference might be factors that should be considered when interpreting optical coherence tomography results.
Keywords: Myopia, Optic nerve, Optical coherence tomography
Morphologic evaluation of the optic nerve head (ONH) is an essential step in the proper diagnosis of optic nerve diseases, including glaucoma. For example, the ONH of primary open-angle glaucoma
patients is characterized by a large and deep cup with a narrow neuroretinal rim, resulting from the loss of ganglion cell axons [1,2]. Unfortunately, the size and shape of the ONH and cup not only vary with specific optic nerve diseases, but also differ widely in healthy eyes by ethnicity, sex, and refractive status; differences exist even between eyes of the same individual [3-7]. This
large variation in healthy ONH appearance makes precise detection of ONH morphologic abnormalities very difficult despite recently developed advanced imaging systems such as Heidelberg retina
tomography and optical coherence tomography (OCT).
Notably, myopia has been widely reported to affect the size and shape of the optic disc and peripapillary RNFL [8-11]. Diagnosis of glaucoma in myopic patients is thus
very challenging. Thorough and accurate understanding of the relationship between myopia and the anatomic structures of the ONH and RNFL is important, particularly in light of the two to three times
greater risk of glaucoma in myopic individuals compared with nonmyopic individuals [1213]. However, the influence of myopia on the shape and size of the ONH and peripapillary RNFL is still uncertain
[811141516]. Moreover, most of the previous studies on ONH and RNFL appearance in myopia were conducted with subject groups heterogeneous in age, sex, and ethnicity. As emphasized above, factors such
as age, sex, and ethnicity are known to affect ONH and peripapillary RNFL morphology. This signifies the importance of subject homogeneity with regard to age, sex, and ethnicity in any effort to
elucidate the relationship between myopia and ONH/RNFL morphology. In this regard, for the purposes of the present study, healthy volunteers matched by age (young), sex (male), and ethnicity (Korean)
were recruited. Data regarding ONH and RNFL structure and refractive errors were collected and analyzed in relation to the degree of myopia.
Materials and Methods
Soldiers stationed in Gyeonggi province were invited to participate in the study, which was conducted between December 2008 and April 2009. The study met the ethical standards of the Declaration of
Helsinki and was approved by the Armed Forces Capital Hospital institutional review board. In addition, informed consent was obtained from each participant, and individuals with any abnormal ocular
findings or history of certain diseases were excluded. The specific exclusion criteria were as follows: (1) ocular hypertension (IOP >21 mmHg) or glaucoma; (2) evidence of reproducible visual field
abnormality (defined as pattern standard deviation significant at p < 5% level, abnormal glaucoma hemifield test result, or any other pattern of loss consistent with ocular disease) in either eye;
(3) history of ocular surgery; (4) best-corrected visual acuity worse than 20 / 32 on Early Treatment of Diabetic Retinopathy Study scale; (5) evidence of vitreoretinal disease; (6) evidence of optic
nerve or RNFL abnormality; and (7) history of diabetes or other systemic disease.
All subjects underwent comprehensive ophthalmologic examinations on both eyes; these examinations included best-corrected visual acuity, intraocular pressure with Goldmann applanation tonometry,
automated refraction (RK-F1; Topcon, Tokyo, Japan), axial length (IOLMaster; Carl Zeiss Meditec, Dublin, CA, USA), slit-lamp examination, red-free fundus photography (CF-60UVi; Canon, Tokyo, Japan)
with mydriasis, standard automated perimetry (Swedish Interactive Threshold Algorithm standard C24-2 program, Humphrey Field Analyzer II 750; Carl Zeiss Meditec), ONH parameter measurement (rim area,
disc area, average cup-to-disc [C/D] ratio, vertical C/D ratio, and cup volume), and peripapillary RNFL thickness measurement by spectral-domain OCT (Cirrus OCT, Carl Zeiss Meditec). The data for one
randomly selected eye were selected for analysis.
The refractive error was measured five times by autorefractometry (R-F10, Canon) without cycloplegia, and the result was subsequently converted to spherical equivalent. The average of three median
values, after discarding the upper and lower values, was used in the analysis. The axial length was measured five times by partial coherence interferometry (IOLMaster, Carl Zeiss Meditec), and the
average was calculated using the same process as that used for refractive error.
The ONH parameters and peripapillary RNFL thicknesses were measured after the red-free fundus photography by spectral-domain OCT with the optic disc cube 200 × 200 scan protocol under pupil
dilatation and dim illumination by two expert examiners. Scanned images of signal strength lower than 8 were discarded. Also excluded were individuals with an extent of peripapillary atrophy that
expanded across the 3.46 mm scan circle centered on the optic disc. Clock-hour RNFL thickness was recorded based on the right-eye orientation. The optic disc margin measured by spectral-domain OCT
was sometimes different from the actual disc margin because spectral-domain OCT determines disc margin based on the retinal pigment epithelium. However, it did not have a significant impact on the
analysis, so we included all data if peripapillary atrophy did not expand past the scan circle radius.
The ONH average radius was calculated from the OCT-measured disc area by the following equation: disc area = π · r², and therefore r = √(disc area / π).
With this method, the distance from the disc margin to the scan circle (distance = radius of scan circle - radius of disc), which is suspected to influence the OCT peripapillary RNFL thickness
measurement, was determined.
Adjustment for the ocular magnification effect was performed in the same way as in our previous work with the modified axial length method [17]. The relationship between the size of an object on the
fundus as measured on a fundus photograph and the actual size of the object on the fundus can be expressed as t = p · q · s, where "t" is the actual size, "s" is the size measured on the fundus
photograph, "p" is the magnification factor related to the camera, and "q" is the magnification factor related to the eye [18]. The magnification factor related to the fundus camera, "p," can be
expressed as a constant of 3.382 in the telecentric system of Stratus and Cirrus OCT [19]. The ocular magnification factor related to the eye, "q," was calculated by the modified axial length method
proposed by Bennett et al. [20] [q = 0.01306 · {axial length (mm) - 1.82}]. t = p · q · s = 3.3820 · 0.01306 · [axial length (mm) - 1.82] · s
Adjustment for average RNFL thickness was performed in the same way as in our previous work [17]. It was presumed that the same number of retinal nerve fibers crossed the scan circle of a 1.73 mm
radius and the magnified scan circle at the same time. Accordingly, the cylindrical cross-sectional RNFL area under the 1.73 mm radius scan circle and the magnified scan circle should be the same.
The radius of the magnified scan circle was calculated using the aforementioned modified axial length method. From this value, the adjusted average RNFL thickness under the 1.73 mm scan circle was
calculated using the equation (r, radius of scan circle = 1.73 mm; r', magnified radius of scan circle). Cross-sectional RNFL area = adjusted RNFL thickness · 2π · r = measured RNFL thickness · 2π ·
r' = measured RNFL thickness · 2π · 3.3820 · 0.01306 · [axial length (mm) - 1.82] · r
Therefore, Adjusted RNFL thickness = measured RNFL thickness · 3.3820 · 0.01306 · [axial length (mm) - 1.82]
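The final adjustment formula reduces to a single scaling factor per eye. A minimal Python sketch (the function name and the sample numbers are ours, not the authors'):

```python
def adjusted_rnfl_thickness(measured_um: float, axial_length_mm: float) -> float:
    """Rescale a measured RNFL thickness to the nominal 1.73 mm scan circle
    using the modified axial length method:
    adjusted = measured * 3.3820 * 0.01306 * (AL - 1.82)."""
    return measured_um * 3.3820 * 0.01306 * (axial_length_mm - 1.82)

print(adjusted_rnfl_thickness(95.0, 26.0))  # ~101.5 um for a long (myopic) eye
```

Note that the combined factor p · q equals 1 at an axial length of about 24.46 mm, so the adjustment leaves near-average eyes essentially unchanged.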
Statistical analysis was carried out using the SPSS ver. 12.0 (SPSS Inc., Chicago, IL, USA). The spectral-domain OCT-measured ONH parameters and the calculated mean radius of the disc were compared.
Bivariate and partial correlation analyses were performed to investigate the relationship between axial length or spherical equivalent and ONH parameters or peripapillary RNFL thickness. In addition,
a partial correlation analysis for the same variables, which controlled for spherical equivalent, was performed to investigate the shear influence of axial length on ONH parameters and peripapillary
RNFL thickness, apart from the effect of refractive status. Correlation and linear regression analyses of the ONH parameters, peripapillary RNFL thickness, calculated disc radius, distance from disc
margin to scan circle, axial length, and refractive error were performed for all subjects. The p-values less than 0.01 were considered statistically significant.
Results
A total of 258 subjects were enrolled in this study. Among them, six were excluded because of extended peripapillary atrophy across the 1.73 mm radius scan circle or unacceptable OCT scans, leaving
252 eyes of 252 subjects for further analysis. The mean age of the 252 subjects was 21.06 ± 1.64 years (19 to 26). The average axial length was 24.74 ± 1.25 mm (21.38 to 28.59), and the mean
refractive error was -2.51 ± 2.37 (-11.0 to +4.13) diopters (D). The ONH parameters and peripapillary RNFL thicknesses measured by spectral-domain OCT are listed in Table 1. The average disc area for
the entire subject group was 1.985 ± 0.403 mm^2, the rim area was 1.308 ± 0.264 mm^2, and the average C/D ratio was 0.539 ± 0.149. A correlation analysis of the axial length with disc area, rim
area, cup area, C/D ratio, and peripapillary RNFL thickness showed negative results. The distance from the disc margin to the scan circle (1.73-disc radius) showed positive correlation with axial
length. A partial correlation analysis of the same variables, which controlled for the spherical equivalent, showed negative correlation with axial length, whereas the distance from the disc margin
to the scan circle showed a positive correlation. A correlation analysis of the spherical equivalent with the disc area, rim area, cup area, C/D ratio, and RNFL thickness showed positive
correlations, while that of the spherical equivalent with the distance from the disc margin to the scan circle showed a negative correlation. A partial correlation analysis that controlled for the
axial length also revealed positive correlations of the spherical equivalent with the ONH parameters and peripapillary RNFL thickness and a negative correlation with the distance from the disc margin
to the scan circle (Table 2). The adjusted values of the disc area, rim area, radius of the scan circle, and average RNFL thickness all showed significant positive correlation with the axial
length, regardless of the spherical equivalent. Adjusted cup area correlated negatively with the axial length, but this was without statistical significance. All of the adjusted parameters showed
negative correlation with the spherical equivalent (Table 3). The same relationships were observed in the results of a linear regression analysis. Specifically, as the axial length increased, the
disc area, disc radius, and average RNFL thickness decreased, whereas the distance from the disc margin to the scan circle (of 1.73-mm radius) increased significantly. Also, as the spherical
equivalent increased, the disc area, disc radius, and average RNFL thickness increased, and the distance from the disc margin to the scan circle decreased significantly. After adjustment for the
ocular magnification effect, as the axial length increased, the disc area, disc radius, and average RNFL thickness increased, and the distance from the disc margin to the scan circle also increased
significantly; as the spherical equivalent increased, the disc area, disc radius, average RNFL thickness, and the distance from the disc margin to the scan circle decreased (Figs. 1A-1F and 2A-2F).
The measured RNFL thickness without the adjusted ocular magnification effect was analyzed using correlation and simple linear regression analyses to investigate the relationship between disc size and
scan circle radius. It was found that the measured RNFL thickness was larger in the eyes with a longer disc radius both before and after adjustment for the ocular magnification effect. In contrast,
with the adjusted scan circle, the measured RNFL thickness was smaller in the eyes with a longer distance from the disc margin to the scan circle. Finally, the eyes with the longer axial length and
larger myopic refractive error showed smaller peripapillary RNFL thickness as measured by spectral-domain OCT (Table 4).
Discussion
Many studies have suggested that optic disc size is influenced by axial length and refractive error [5,8,9,10,11,16,21]; however, the results of these studies are conflicting. For example, Cheung et al.
[22] reported that the optic disc is small in myopic eyes, whereas Jonas [23] claimed that it was large. In the present study, optic disc size, axial length, and refractive error were measured in
healthy volunteers, and the results were subjected to correlation analyses. For these purposes, spectral-domain OCT was employed, and the ocular magnification effect was corrected using the modified
axial length method of Bennett et al. [20]. Although the accuracy of this correction method is uncertain, it is known to be both simpler and more accurate than other modalities currently available
[24]; in fact, several recent studies have used it to correct for the ocular magnification effect of ophthalmologic imaging devices [16,17,21]. In any case, the results of the present study were
similar to those obtained in prior studies reported by Leung et al. [16] and Savini et al. [21], who utilized the same correction method for the ocular magnification effect. Specifically, after
correction for the ocular magnification effect, the optic disc size and peripapillary RNFL thickness were both larger in more severely myopic eyes. These findings are highly consistent with those of
the histological study carried out by Jonas et al. [25].
Considering both the homogeneous characteristics of the subjects enrolled in this study and the fact that their respective findings did not significantly diverge from those of prior studies on
subjects of varying age, sex, and ethnicity, it was concluded that ONH size, peripapillary RNFL thickness, and myopia might be independent of such factors. If this is true, an ophthalmologist, when
interpreting data obtained with imaging devices such as OCT or fundus photography, could apply the same considerations with regard to disc size, the effect of ocular magnification, and the degree of
myopia for any age, gender, or ethnicity. Nonetheless, in the clinical setting, whereas peripapillary RNFL thickness evaluation provides some of the most important information regarding glaucomatous
optic nerve damage, it is impossible to correct for the ocular magnification effect in every OCT scan result. Therefore, the present study analyzed the influence of several parameters on scanned
peripapillary RNFL thickness without correction for the ocular magnification effect. According to our study, the adjusted peripapillary RNFL thickness showed a negative correlation with spherical
equivalent without statistical significance and a positive correlation with axial length, results similar to those of a previous study [17]. However, the partial correlation analysis of the spherical
equivalent with the ocular magnification effect as adjusted for the parameters controlling the axial length could not be completed, because the adjustment for the ocular magnification effect and the
partial correlation analysis used the same variables simultaneously. Furthermore, the peripapillary RNFL thickness was larger in cases where the radius of the optic disc was large, the radius of the
scan circle was reduced because of the ocular magnification effect, and the distance from the disc margin to the scan circle was short because of a large disc or small scan circle or both. Another
interesting finding was that the coefficient of determination of the distance from the disc margin to the scan circle for the peripapillary RNFL thickness was higher than any others. This suggests
that peripapillary RNFL thickness is influenced more by the distance from the disc margin to the scan circle than by other factors including the radius of the scan circle, which is the distance from
the center of disc to the scan circle. If the same number of nerve fibers enters the eye through the ONH, the RNFL thickness measured at the same location should not differ, regardless of the disc
size. Therefore, the correlations of peripapillary RNFL thickness with the distance from the disc margin to the scan circle and with the distance from the disc center to the scan circle should be
very similar. However, in this study, the measured peripapillary RNFL thickness was more closely correlated with the distance from the disc margin to the scan circle. The possible explanations of
this finding are as follows. First, more redundant non-neural tissues such as glial cells or capillaries could exist within the RNFL adjacent to the disc. However, confirmation of this hypothesis
requires additional histological evidence not yet available. Second, the RNFL might contain a larger number of nerve fibers in eyes with a short distance between the disc margin and the scan circle.
As noted above, a large disc radius or reduced scan circle radius because of hyperopia could shorten that distance. However, according to the present results, hyperopic eyes had both a short axial
length and a small disc radius, suggesting that the small radius of the optic disc might compensate for the decreased radius of the scan circle. Therefore, whether a large optic disc contains more
nerve fibers might have a crucial effect on peripapillary RNFL thickness measurement. Verification of this effect will have to await further histological study.
The study was conducted only in a limited sample of healthy young Korean males. Therefore, these features may limit the application of these data to subjects of other age groups or ethnicities.
In conclusion, the peripapillary RNFL thickness was most strongly influenced by the distance from the disc margin to the scan circle. Disc radius and RNFL thickness decreased in more severely myopic
eyes, but they increased after adjustment for the magnification effect. Based on this, the ONH size and RNFL measurements were influenced by the magnification effect. Although the error due to the
magnification effect and the ONH size difference were clinically negligible because of the low coefficient of determination and extremely small optic disc size change according to the degree of
myopia, they might remain as factors that should be considered.
Conflict of Interest: No potential conflict of interest relevant to this article was reported.
References
1. Hayreh SS. Optic disc changes in glaucoma. 1972;56:175-185. PMID: 4624382.
2. Lee KH, Park KH, Kim DM, Youn DH. Relationship between optic nerve head parameters of Heidelberg Retina Tomograph and visual field defects in primary open-angle glaucoma. 1996;10:24-28. PMID: 8755198.
3. Chi T, Ritch R, Stickler D. Racial differences in optic nerve head parameters. 1989;107:836-839. PMID: 2730402.
4. Mansour AM. Racial variation of optic disc size. 1991;23:67-72. PMID: 1870843.
5. Varma R, Tielsch JM, Quigley HA. Race-, age-, gender-, and refractive error-related differences in the normal optic disc. 1994;112:1068-1076. PMID: 8053821.
6. Tsai CS, Zangwill L, Gonzalez C. Ethnic differences in optic nerve head topography. 1995;4:248-257. PMID: 19920682.
7. Marsh BC, Cantor LB, WuDunn D. Optic nerve head (ONH) topographic analysis by stratus OCT in normal subjects: correlation to disc size, age, and ethnicity. 2010;19:310-318. PMID: 19855299.
8. Tomlinson A, Phillips CI. Ratio of optic cup to optic disc: in relation to axial length of eyeball and refraction. 1969;53:765-768. PMID: 5358522.
9. Jonas JB, Gusek GC, Naumann GO. Optic disk morphometry in high myopia. 1988;226:587-590. PMID: 3209086.
10. Hyung SM, Kim DM, Hong C, Youn DH. Optic disc of the myopic eye: relationship between refractive errors and morphometric characteristics. 1992;6:32-35. PMID: 1434043.
11. Samarawickrama C, Wang XY, Huynh SC. Effects of refraction and axial length on childhood optic disk parameters measured by optical coherence tomography. 2007;144:459-466. PMID: 17765432.
12. Hoh ST, Lim MC, Seah SK. Peripapillary retinal nerve fiber layer thickness variations with myopia. 2006;113:773-776. PMID: 16650672.
13. Melo GB, Libera RD, Barbosa AS. Comparison of optic disk and retinal nerve fiber layer thickness in nonglaucomatous and glaucomatous patients with high myopia. 2006;142:858-866. PMID: 17056370.
14. Uchida H, Yamamoto T, Araie M. Topographic characteristics of the optic nerve head measured with scanning laser tomography in normal Japanese subjects. 2005;49:469-476. PMID: 16365792.
15. Tay E, Seah SK, Chan SP. Optic disk ovality as an index of tilt and its relationship to myopia and perimetry. 2005;139:247-253. PMID: 15733984.
16. Leung CK, Cheng AC, Chong KK. Optic disc measurements in myopia with optical coherence tomography and confocal scanning laser ophthalmoscopy. 2007;48:3178-3183. PMID: 17591887.
17. Kang SH, Hong SW, Im SK. Effect of myopia on the thickness of the retinal nerve fiber layer measured by Cirrus HD optical coherence tomography. 2010;51:4075-4080. PMID: 20237247.
18. Littmann H. Determination of the real size of an object on the fundus of the living eye. 1982;180:286-290. PMID: 7087358.
19. Bengtsson B, Krakau CE. Correction of optic disc measurements on fundus photographs. 1992;230:24-28. PMID: 1547963.
20. Bennett AG, Rudnicka AR, Edgar DF. Improvements on Littmann's method of determining the size of retinal features by fundus photography. 1994;232:361-367. PMID: 8082844.
21. Savini G, Barboni P, Parisi V, Carbonelli M. The influence of axial length on retinal nerve fibre layer thickness and optic-disc size measurements by spectral-domain OCT. 2012;96:57-61. PMID: 21349942.
22. Cheung CY, Chen D, Wong TY. Determinants of quantitative optic nerve measurements using spectral domain optical coherence tomography in a population-based sample of non-glaucomatous subjects. 2011;52:9629-9635. PMID: 22039236.
23. Jonas JB. Optic disk size correlated with refractive error. 2005;139:346-347. PMID: 15734000.
24. Garway-Heath DF, Rudnicka AR, Lowe T. Measurement of optic disc size: equivalence of methods to correct for ocular magnification. 1998;82:643-648. PMID: 9797665.
25. Jonas JB, Berenshtein E, Holbach L. Lamina cribrosa thickness and spatial relationships between intraocular space and cerebrospinal fluid space in highly myopic eyes. 2004;45:2660-2665. PMID: 15277489.
Values are presented as mean ± standard deviation; The RNFL thicknesses were converted according to the right-eye orientation. ONH = optic nerve head; RNFL = retinal nerve fiber layer; OCT = optical
coherence tomography; C/D = cup-to-disc.
The figures in parentheses are p-values.
AL = axial length; SE = spherical equivalent; RNFL = retinal nerve fiber layer; D = diopters; C/D = cup-to-disc.
The figures in parentheses are the p-values.
AL = axial length; SE = spherical equivalent; RNFL = retinal nerve fiber layer; D = diopters.
RNFL = retinal nerve fiber layer; r = correlation coefficient; r^2 = coefficient of determination; β = regression constant.
Independent random samples, each containing 60 observations, were selected from two populations. The samples from populations 1 and 2 produced 42 and 30 successes, respectively.
Test H0:(p1−p2)=0 against Ha:(p1−p2)≠0. Use α=0.09
(a) The test statistic is
(b) The P-value is
(c) The final conclusion is
A. We can reject the null hypothesis that (p1−p2)=0 and accept that (p1−p2)≠0.
B. There is not sufficient evidence to reject the null hypothesis that (p1−p2)=0.
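One way to check the numbers is the pooled two-proportion z-test, which matches this setup. A Python sketch using SciPy:

```python
from math import sqrt
from scipy.stats import norm

x1, x2, n = 42, 30, 60
p1, p2 = x1 / n, x2 / n
p_pool = (x1 + x2) / (2 * n)                       # pooled proportion under H0
se = sqrt(p_pool * (1 - p_pool) * (1 / n + 1 / n))
z = (p1 - p2) / se                                 # (a) test statistic, ~2.236
p_value = 2 * (1 - norm.cdf(abs(z)))               # (b) two-sided p-value, ~0.0253
print(z, p_value, p_value < 0.09)                  # (c) p < alpha, so conclusion A
```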
How to use a for loop to perform Pearson correlation in R?
Here is an example of how to use a for loop to perform Pearson correlation in R:
# Create a matrix with some sample data
data <- matrix(rnorm(100), ncol = 5)

# Create an empty matrix to store the correlation values
cor_matrix <- matrix(NA, ncol(data), ncol(data))

# Use a for loop to calculate Pearson correlation for each pair of columns
for(i in 1:ncol(data)) {
  for(j in 1:ncol(data)) {
    cor_matrix[i, j] <- cor(data[,i], data[,j])
  }
}

# Print the correlation matrix
print(cor_matrix)
In this example, we first create a matrix with some sample data. We then create an empty matrix cor_matrix to store the correlation values. We use a nested for loop to calculate the Pearson
correlation for each pair of columns in the data matrix and store the results in the cor_matrix. Finally, we print the correlation matrix.
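As a side note, base R computes the same matrix in one vectorized call, cor(data), since cor() applied to a matrix correlates its columns pairwise; the explicit loop is mainly useful when you need per-pair control, for example different correlation methods or accompanying p-values for each pair.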
Performance - Exaloop
Unlock the Power of Speed
Discover Exaloop's Game-Changing Performance Benefits
Welcome to Exaloop, the ultimate platform for data science and big data analytics. Exaloop redefines speed and efficiency, delivering unparalleled performance that will transform your data-driven
Experience Lightning-Fast Computations
With Exaloop, say goodbye to sluggish data processing. Harness the true power of native machine code, and witness speeds 10-100 times faster than traditional Python. Execute complex,
compute-intensive workloads in a fraction of the time, unleashing your productivity like never before.
Multithreading and Parallel Processing
Unlock the Potential of Your Hardware
Efficiently utilize multiple cores and parallel processing capabilities to skyrocket your data analysis. Maximize your hardware’s potential and conquer massive datasets effortlessly. Parallelize your
workloads with zero effort.
Accelerate Machine Learning and Beyond
Empower your AI models and other compute-intensive tasks with GPU programming support. Leverage the immense computational power of GPUs to reach solutions faster and take your projects to the next
level. Zero CUDA or low-level programming required.
Seamless and Supercharged
Exaloop’s optimized, fully-compiled implementations of your favorite libraries enable order-of-magnitude speedups, parallel processing, GPU acceleration and more. Eliminate bottlenecks, enhance your
algorithms, and supercharge your analytics, all while enjoying the user-friendly Python experience.
You don’t need to be an expert to tap into Exaloop’s performance benefits. Exaloop is designed to be accessible to anyone with a basic knowledge of Python. Embrace the ease of Python and unlock
unparalleled speed without complex learning curves.
Discover the transformative power of speed with Exaloop. Whether you're a data scientist, analyst, or developer, Exaloop’s performance-driven platform will redefine what's possible in your
data-driven endeavors.
Empower Your Data Journey with Exaloop's Parallelism and Multithreading
Unleash Unprecedented Efficiency
Break Free from Bottlenecks
Experience a new era of data processing with Exaloop’s Parallelism and Multithreading. Eliminate waiting times and overcome bottlenecks by executing multiple tasks in parallel. Embrace unparalleled
efficiency that drives your data-driven projects to success.
Maximize Hardware Potential
Harness the Power of Multiple Cores
Tap into the full potential of your hardware. Exaloop’s Parallelism and Multithreading enable you to utilize multiple CPU cores simultaneously. Transform your system into a data-processing powerhouse
and achieve optimal performance.
Efficiently Scale with Big Data
Scale your data analytics effortlessly with Exaloop’s Parallelism and Multithreading. Seamlessly handle massive datasets and extract valuable insights without compromise.
Don’t Waste Your Time Reasoning About Complex Code
Integrating parallelism and multithreading has never been easier. Exaloop seamlessly integrates these powerful features, allowing you to focus on your data, not on complicated implementations. Enjoy
elevated performance without unnecessary complexity.
Embrace Parallelism and Multithreading with Exaloop
Embrace Unprecedented Speed with GPU Acceleration
Welcome to Exaloop’s GPU Acceleration: Unleash the True Power of Your Hardware
Experience Blazing-Fast Computation
Turbocharge Your Workflows
Harness the raw power of Graphics Processing Units (GPUs) and witness a quantum leap in performance. Exaloop’s GPU acceleration empowers your data science and analytics projects with lightning-fast
computations, delivering results at unparalleled speeds. Get ready to revolutionize your data-driven endeavors.
With Exaloop’s GPU acceleration, complex AI algorithms become faster and more efficient. Experience shorter training times, iterate faster, and fine-tune models with ease. Let your AI innovations
take center stage with Exaloop’s GPU support.
Conquer Big Data with Ease
Tackle Large Datasets Head-On
Gone are the days of dauntingly large datasets. Tackle big data challenges effortlessly with Exaloop’s GPU acceleration. Perform computations on massive datasets without slowdowns, unlocking insights
that were once beyond reach. Conquer big data, one GPU-accelerated task at a time.
No CUDA or Low-Level Programming Required
Integrating GPU acceleration into your projects has never been easier. Exaloop seamlessly integrates GPU programming capabilities with the simplicity of Python, so you can unlock the power of GPUs
with minimal effort. Write concise, high-level code and let Exaloop handle the rest.
Transform Your Data Journey with Exaloop
Unleash the Full Potential of your Data Science Libraries with Exaloop's Turbocharged Implementations
Welcome to Exaloop’s Optimized Libraries: Your Gateway to Unparalleled Performance
Library-Specific Optimizations
A Performance Boost like Never Before
Whether you’re using NumPy or Pandas, experience data science libraries like never before with Exaloop. Exaloop’s fully-compiled software stack applies library-specific compiler optimizations to
eliminate unnecessary computations and memory overhead.
Combine Exaloop’s optimized libraries with GPU support to elevate your data science capabilities to new heights. Whether you’re crunching numbers, training models, or handling vast datasets,
Exaloop’s GPU integration works harmoniously across the entire software stack.
Integrate with Parallelism and Multithreading
Efficiency is the cornerstone of Exaloop’s design. Exaloop’s libraries integrate seamlessly with the platform’s parallelism and multithreading capabilities. Speed up analyses while fully utilizing
your hardware.
Streamlined Performance, No Expertise Required
Exaloop’s simple Python interface ensures that these performance benefits are accessible to all, regardless of expertise level. No more re-engineering and code rewrites to get performance.
Performance Beyond Expectations
Redefine Your Data Science Experience
Cutting-edge performance that revolutionizes data science workflows. Elevate your projects and deliver results faster than ever before.
Are you ready to unlock the full potential of your data science libraries?
Data set for "The full phase space dynamics of a magnetically levitated electromagnetic vibration harvester"
Frequency response polynomial approximations.zip (287.81 kB)
Frequency response real force.zip (65.13 kB)
Phase diagrams complete set.zip (4.1 GB)
Phase diagrams high resolution.zip (2.69 GB)
Data set for "The full phase space dynamics of a magnetically levitated electromagnetic vibration harvester"
This file explains what data is in the different folders
- Frequency response polynomial approximations
Contains the frequency response data from the polynomial approximations of the real force
The polynomial degree and whether gravity was included or not is in the file names
The data contains the following variables:
- A: An array of the free magnet peak to peak amplitude at the end of the simulation in the forward direction
- P1: An array of the average generated power in the top coil at the end of the simulation
- P2: An array of the average generated power in the bottom coil at the end of the simulation
- w: An array of the driving angular frequencies used in the simulation
- F_profile: A 2D array of the real force profile; the first coordinate is position and the second is force
- fit_interval: The interval within which the fitting to the real force was carried out
- fit: The resulting fitting parameters
- input: A structure of all the values used for the simulation. It corresponds largely to Table 1
All variables ending in "_back" are for the backwards (high to low) frequency sweep.
The counterparts without the "_back" ending are for the forward (low to high) direction.
- Frequency response real force
Contains the frequency response data using the real force
Whether gravity was included or not is in the file names
The data contains the following variables:
- A: An array of the free magnet peak to peak amplitude at the end of the simulation
- P1: An array of the average generated power in the top coil at the end of the simulation
- P2: An array of the average generated power in the bottom coil at the end of the simulation
- w: An array of the driving angular frequencies used in the simulation
- F_profile: A 2D array of the real force profile; the first coordinate is position and the second is force
- input: A structure of all the values used for the simulation. It corresponds largely to Table 1
All variables ending in "_back" are for the backwards (high to low) frequency sweep.
The counterparts without the "_back" ending are for the forward (low to high) direction.
- Phase diagrams complete set
Contains the data for the phase diagrams
The resolution in initial position and speed, as well as whether gravity was included or not, is in the file names
The data contains the following variables
- A([p],[v],[w]): A matrix of the free magnet peak to peak amplitude at the end of each simulation
- P1([p],[v],[w]): A matrix of the average generated power in the top coil at the end of each simulation
- P2([p],[v],[w]): A matrix of the average generated power in the bottom coil at the end of each simulation
- p: An array of the initial positions of the free magnet
- v: An array of the initial speeds of the free magnet
- w: An array of the driving angular frequencies used in the simulations
- Conv([p],[v],[w]): A matrix recording the cycle at which the convergence criterion was passed and the simulation was ended. 1000 is the max, so simulations with this value did not converge.
- Diff([cycle],[p],[v],[w]): A matrix containing the convergence-criterion cycle comparisons
- Final_p_v([p/v],[p],[v],[w]): A matrix containing the final position and speed of the free magnet. The first index, labeled [p/v], can be either 1 for position or 2 for speed
- Phase diagrams high resolution
Contains the data for the high resolution phase diagrams
Whether gravity was included or not, is in the file names.
The data contains the following variables
- A([p],[v]): A matrix of the free magnet peak to peak amplitude at the end of each simulation
- P1([p],[v]): A matrix of the average generated power in the top coil at the end of each simulation
- P2([p],[v]): A matrix of the average generated power in the bottom coil at the end of each simulation
- p: An array of the initial positions of the free magnet
- v: An array of the initial speeds of the free magnet
- w: The driving angular frequency used for the simulations
- Conv([p],[v]): A matrix recording the cycle at which the convergence criterion was passed and the simulation was ended. 1000 is the max, so simulations with this value did not converge.
- Diff([cycle],[p],[v]): A matrix containing the convergence-criterion cycle comparisons
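The archive format is not stated here, but since the variable names follow MATLAB conventions, a plausible loading sketch in Python uses scipy.io.loadmat. The file name below is hypothetical, and the .mat assumption is mine, not the dataset's:

from scipy.io import loadmat
import numpy as np

d = loadmat("phase_diagram_example.mat")          # hypothetical file name
A = d["A"]                                        # amplitude over the (p, v) grid
p, v = d["p"].ravel(), d["v"].ravel()             # initial positions and speeds
print(A.shape, p.shape, v.shape)
print(np.count_nonzero(d["Conv"] == 1000), "simulations did not converge")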
Electromagnetic energy harvesting using 2D magnetic levitation
Danish Agency for Science and Higher Education
Find out more... | {"url":"https://data.dtu.dk/articles/dataset/Data_set_for_The_full_phase_space_dynamics_of_a_magnetically_levitated_electromagnetic_vibration_harvester_/12967082/1","timestamp":"2024-11-03T04:27:12Z","content_type":"text/html","content_length":"189787","record_id":"<urn:uuid:d08150bb-5284-40b3-91ce-df8afea866e1>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00653.warc.gz"} |
School District Measures November 7th 2023
As analyses are completed, summary results will be provided on this page.
• Summary
• Bond summary
• Levy summary
• Taxpayer cost impact estimate calculators for the levies
• Taxpayer cost impact estimate calculators for the bonds
• Enrichment levy amounts charts
• Capital levy amounts charts
• Bond amounts chart
• Bond schedules
• County voters’ pamphlet rules summary
• Pro/Con committees for the voters’ pamphlet summary
• Analyses of the bond cost analyses presented by districts
• Analyses of the enrichment levy cost analyses presented by districts
• An analysis of the King County Tax Transparency Tool (TTT)
• Calculator methodology
For the November 7th, 2023 election, across the state:
• There are 7 bond measures.
• There is 1 enrichment levy.
• There are 2 capital levies.
Bond summary
Levy summary
In progress
Taxpayer cost impact estimate calculators for the levies
Calculator for the Kent SD’s Enrichment and Capital Levy Measures
Calculator for the Skykomish SD’s Capital Levy Measure
Calculator for the Steilacoom SD’s Capital Levy Measure
Taxpayer cost impact estimate calculators for the bonds
Calculator for the Enumclaw SD’s Bond Measure
Calculator for the Fife SD’s Bond Measure
Calculator for the Hood Canal SD’s Bond Measure
Calculator for the Napavine SD’s Bond Measure
Calculator for the South Kitsap SD’s Bond Measure
Calculator for the South Whidbey SD’s Bond Measure
Calculator for the South Whidbey Parks and Recreation’s Bond Measure
Calculator for the Union Gap SD’s Bond Measure
Enrichment levy amounts charts
Capital levy amounts charts
Bond amounts charts
Bond schedules
In progress
County voters’ pamphlet rules summary
In progress
Pro/Con committees for the voters’ pamphlet summary
In progress
Note: Pro/con committee member selection, along with measure resolutions, by school districts were due to their respective county auditor on August 1st, 2023.
Analyses of the bond cost analyses presented by districts
In progress
An analysis of the King County Tax Transparency Tool (TTT)
An analysis of the King County Tax Transparency Tool (TTT)
Calculator methodology
For these property tax impact estimate calculators, a Proportional Obligation Factor (POF) method was used.
Using the 2023 Total District Assessed Value (AV), the POF of the sample parcel was calculated:
POF = (2023 Sample Parcel AV) / (2023 Total District AV)
The POF was then multiplied by the total amount that the district expects to collect each year for the duration of the measure in question (bond or levy) to obtain the estimated taxes for the sample
parcel for the measure.
Using this methodology, it doesn’t matter if all properties increase in AV by 10% or all decrease in AV by 10% (which could happen in a recession). The tax collection schedule shown in the charts and
tables would still apply for the sample parcel. The sample parcel’s proportion of obligation for the bond debt or levy remains the same over the bond payback or levy period. Tax rates, however, would
change. If all properties increase in AV by 10%, the tax rate for the measures would decrease by approximately 10%. If all properties decrease in AV by 10%, the tax rate for the measures would
increase by approximately 10%.
Note for the POF annual change parameter for the enhanced calculator versions:
For parcels that are increasing in POF (Proportional Obligation Factor), a positive POF annual change will give more accurate results. For parcels that are decreasing in POF due to rapid new
construction or for other reasons, a negative POF annual change will give more accurate results. However a value of 0 will generally be slightly conservative and will generate estimates that are
usually within 5% of actual costs.
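The methodology is straightforward to reproduce in code. Here is a minimal sketch (mine, not the site's calculators); the numbers match the Centralia SD example given below:

def pof_tax(parcel_av, district_av, collections_by_year):
    # Proportional Obligation Factor: the parcel's share of the district AV
    pof = parcel_av / district_av
    # the parcel pays its share of each year's planned collection
    return {year: round(pof * amount)
            for year, amount in collections_by_year.items()}

print(pof_tax(350_000, 4_131_948_094, {2024: 6_700_000, 2025: 7_600_000}))
# {2024: 568, 2025: 644}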
It is the author’s opinion that county assessors (in all 39 WA counties) should be providing these calculators for their constituents.
Example calculator calculation:
Centralia SD
Total District AV for 2023: $4,131,948,094
Sample parcel AV for 2023: $350,000
POF for 2023 assumed to be the same for years 2023 - 2025: $350,000 / $4,131,948,094 = 8.471E-05
Enrichment Levy amount to collect in 2024: $6,700,000
Sample parcel’s 2024 tax for the Enrichment Levy: POF * (Enrichment Levy amount to collect in 2024) = 8.471E-05 * $6,700,000 = $568
Similarly for the year 2025
Enrichment Levy amount to collect in 2025: $7,600,000
2025 tax = 8.471E-05 * $7,600,000 = $644 | {"url":"https://schooldataproject.com/report_levies_20231107","timestamp":"2024-11-06T15:31:43Z","content_type":"text/html","content_length":"14851","record_id":"<urn:uuid:bd9be30e-f558-4a3c-b732-6a17e47a8f1f>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00241.warc.gz"} |
Check If a Tuple Is Empty in Python - Data Science Parichay
In this tutorial, we will look at how to check if a tuple is empty or not in Python with the help of some examples.
How to check if a tuple is empty?
Tuples are ordered collections of items (that are immutable) in Python. A tuple is empty if it does not contain any item. You can use the following methods to check if a tuple is empty or not in Python:
• Using the len() function.
• Comparing the tuple with an empty tuple.
• Using the tuple in a boolean context.
Let’s now take a look at each of the above methods with the help of some examples.
Using the len() function
The length of an empty tuple is zero.
You can use the Python len() function to calculate the length of the tuple and then compare it with zero to check if it is empty or not. Here’s an example –
# create two tuples - t1 is empty and t2 is non-empty
t1 = ()
t2 = (1, 2, 3)
# check if tuple is empty
print(len(t1) == 0)
print(len(t2) == 0)
We get True as the output for the tuple t1 as it is empty and False for the tuple t2 because it’s not empty (has non-zero length).
Comparing the tuple with an empty tuple
You can also check if a tuple is empty or not by comparing it with an empty tuple using the == operator. Let’s look at an example.
# create two tuples - t1 is empty and t2 is non-empty
t1 = ()
t2 = (1, 2, 3)
# check if tuple is empty
print(t1 == ())
print(t2 == ())
We get the same results as above.
Note that here we use () without any elements to represent an empty tuple. You can also use the tuple() constructor with default parameters to create an empty tuple.
Tuple in a boolean context
If you use a tuple in a boolean context, it will evaluate to True if it has any elements and it will evaluate to False if the tuple is empty. Thus, you can use the expression not t to check if the
tuple t is empty or not.
Here’s an example.
# create two tuples - t1 is empty and t2 is non-empty
t1 = ()
t2 = (1, 2, 3)
# check if tuple is empty
print(not t1)
print(not t2)
We get the same result as above. True for the tuple t1 as it’s empty and False for the tuple t2 as it’s not empty (t2 has three elements).
In this tutorial, we looked at three methods to check if a tuple is empty or not. Use the method that you are the most comfortable with. | {"url":"https://datascienceparichay.com/article/python-check-if-tuple-is-empty/","timestamp":"2024-11-07T20:03:23Z","content_type":"text/html","content_length":"258109","record_id":"<urn:uuid:e5a3fb57-4813-4181-8a77-b0d3f0eb8d1b>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00380.warc.gz"}
toy example
type in bit size for randomization (e.g. 32 or 128):
secret key and public key
type in secret prime number keys p and q, or click for random prime numbers (DO type in primes if you prefer to use your own p and q - or else..)
Generate an e with gcd(e, (p-1)*(q-1)) = 1 (relatively prime to phi), where N = p*q and phi = (p-1)*(q-1)
Public keys are N and e - to be published
type in message
m:(just numbers in this version)
calculate encrypted message c:
calculate d = e^-1 mod phi
calculate message: c^d mod N:
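The steps above translate into a few lines of Python. This is a toy sketch with tiny, well-known textbook primes; it is not the site's implementation, and values this small are never secure:

p, q = 61, 53                 # toy primes; never use sizes like this in practice
N = p * q                     # public modulus
phi = (p - 1) * (q - 1)       # phi = (p-1)*(q-1)
e = 17                        # public exponent with gcd(e, phi) == 1
d = pow(e, -1, phi)           # d = e^-1 mod phi (requires Python 3.8+)

m = 42                        # the message, just a number as in this toy
c = pow(m, e, N)              # encrypt: c = m^e mod N
print(pow(c, d, N))           # decrypt: c^d mod N, prints 42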
Based on statistically valid primes, and on the hope that the primes will be found before the end of the universe or before the end of the life of the computer - whatever comes first. Can use up to 999-bit Miller-Rabin-tested probable primes
Randomness by RNGCryptoServiceProvider.
be sure to fill out required fields - no exception handling here (yet)
disclaimer: Not suitable for professional use, not optimized for industrial strength, only for educational and illustrative purposes
contact: c h r i s t e l @ c h r i s t e l . d k | {"url":"http://christelbach.com/RSAtoy.aspx","timestamp":"2024-11-02T08:45:46Z","content_type":"application/xhtml+xml","content_length":"5975","record_id":"<urn:uuid:ed8a9c10-6c51-424b-9684-423a513604b4>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00019.warc.gz"} |
CCC '02 S1 - The Students' Council Breakfast
View as PDF
Canadian Computing Competition: 2002 Stage 1, Junior #3, Senior #1
The students council in your school wants to organize a charity breakfast, and since older students are both wiser and richer, the members of the council decide that the price of each ticket will be
based on how many years you have been in the school. A first year student will buy a PINK ticket, a second year student will buy a GREEN ticket, a third year student will buy a RED ticket, and a
fourth year student will buy an ORANGE ticket.
Assume that all tickets are sold. Each colour of ticket is uniquely priced.
Input Specification
Input the cost of a PINK, GREEN, RED, and ORANGE ticket (in that exact order), followed by the exact amount of money to be raised by selling tickets.
Output Specification
Output all combinations of tickets that produce exactly the desired amount to be raised. The combinations may appear in any order. Output the total number of combinations found. Output the smallest
number of tickets to print to raise the desired amount so that printing cost is minimized.
Sample Input
Sample Output
# of PINK is 0 # of GREEN is 0 # of RED is 1 # of ORANGE is 0
# of PINK is 1 # of GREEN is 1 # of RED is 0 # of ORANGE is 0
# of PINK is 3 # of GREEN is 0 # of RED is 0 # of ORANGE is 0
Total combinations is 3.
Minimum number of tickets to print is 1.
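Since the ticket counts are small non-negative integers, a brute-force search over all combinations suffices. The sketch below is not an official solution; it assumes all four prices are positive integers, and it prints combinations in nested-loop order (any order is allowed):

def breakfast(pink, green, red, orange, total):
    combos, best = 0, None
    for a in range(total // pink + 1):
        for b in range((total - a * pink) // green + 1):
            for c in range((total - a * pink - b * green) // red + 1):
                rem = total - a * pink - b * green - c * red
                if rem % orange == 0:
                    d = rem // orange
                    print(f"# of PINK is {a} # of GREEN is {b} "
                          f"# of RED is {c} # of ORANGE is {d}")
                    combos += 1
                    if best is None or a + b + c + d < best:
                        best = a + b + c + d
    # best stays None if no combination exists
    print(f"Total combinations is {combos}.")
    print(f"Minimum number of tickets to print is {best}.")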
Note from the comments (Oct. 16, 2023): make sure to add periods at the end of the last 2 sentences! | {"url":"https://dmoj.ca/problem/ccc02s1","timestamp":"2024-11-05T11:53:57Z","content_type":"text/html","content_length":"21760","record_id":"<urn:uuid:92b06956-0c44-41cf-b9a5-3e3e2ae54669>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00708.warc.gz"}
11.2: Quantum Numbers of Multielectron Atoms
The quantum numbers of atoms (and ions) correspond to quantized energy states, called microstates, that depend on the electron configuration. Electrons within a subshell or orbital can adopt
different configurations of their electrons, with different individual quantum numbers. These different electron configurations are microstates, and they can be grouped into terms and ordered
according to their relative energies. Since electronic transitions occur between terms of different energies, knowledge of the terms for a given atom or ion can aid in the interpretation of
electronic spectra.
Transition metal ions can give rise to a spectrum of beautifully colored complexes. The colors are often caused by absorption of visible light due to electronic transitions involving metal
d-orbitals. These electronic transitions are not only attractive to the eye, they are useful spectroscopic signals because the transitions occur between quantized electronic energy states, called
microstates. Spectroscopy, coupled with knowledge of the possible microstates and their energies can yield clues about molecular structure.
To learn what a microstate is, let's use the simple example of a carbon atom with a \(1s^2 2s^2 2p^2\) electron configuration. The \(s\) subshell is full, and there is only one way to put two
electrons into an \(s\) subshell. Both electrons must occupy the orbital having \(m_l=0\), and they must have opposite spins: \(m_s =+\frac{1}{2}\) and \(m_s =-\frac{1}{2}\). On the other hand,
the \(p\) subshell of carbon has two electrons that can occupy any of three orbitals (Figure \(\PageIndex{1}\)). This leaves room for different orientations of the electrons within the \(p\)
subshell. Each different electron configuration is a microstate, and each has an energy that can be distinguished using electronic spectroscopy (UV-vis for example). In the case of a carbon atom with
a two \(p\) electrons, there are 15 different possible microstates, as illustrated in Figure \(\PageIndex{1}\).
Figure \(\PageIndex{1}\): Fifteen possible microstates for two electrons in a \(p\) subshell. These microstates are organized according to their total spin (\(M_s\)) and total magnetic (\(M_l\))
quantum numbers. Arrows pointing upward represent electrons with \(m_s = +\frac{1}{2}\), and those pointing downward represnt \(m_s = -\frac{1}{2}\). Each horizontal line represents a \(p\) orbital,
and \(m_l\) values are indicated under each orbital. (CC-BY-NC; Kathryn Haas)
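The count of 15 is easy to verify. Here is a short sketch (mine, not part of the original text) that enumerates the microstates of a \(p^2\) configuration and tallies them by \((M_l, M_s)\), reproducing the grid in Figure \(\PageIndex{1}\):

from itertools import combinations
from collections import Counter

# one spin-orbital per (m_l, m_s) pair; for a p subshell, m_l is -1, 0, or +1
spin_orbitals = [(ml, ms) for ml in (-1, 0, 1) for ms in (0.5, -0.5)]

# Pauli exclusion: a microstate is a set of two distinct spin-orbitals
microstates = list(combinations(spin_orbitals, 2))
print(len(microstates))  # 15

tally = Counter((sum(ml for ml, _ in m), sum(ms for _, ms in m))
                for m in microstates)
for (Ml, Ms), count in sorted(tally.items()):
    print(f"M_l={Ml:+d}, M_s={Ms:+.1f}: {count} microstate(s)")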
Just as individual electrons have quantum numbers (\(n, l, m_l, m_s\)), the electronic states of atoms have quantum numbers (\(L, M_l, M_s, S, J\)). Each individual microstate in Figure \(\PageIndex
{1}\) can be described by quantum numbers \(M_l\) and \( M_s\). The values of total angular momentum (\(L\)), and total intrinsic spin (\(S\)) arise from sets of microstates, and are related to
quantum numbers \(M_l\) and \(M_s\) for individual microstates. The relative energies of the terms for 3d metals and other light atoms can be predicted (roughly) by Hund's Rules, according to values
of S and L. These atomic quantum numbers and Hund's rules are described below.
A closer look at electronic spectra
Let us take a closer look at optical absorption spectra, also called electronic spectra, of coordination compounds. We have previously argued that ligand field theory can predict and explain the
electronic spectra. However, ligand field theory (LFT) is sufficient to explain the spectra in only a few cases. For example the \(\ce{[Ni(H2O)6]^2+}\) ion is an octahedral \(d^8\)-complex ion.
According to LFT, the metal \(d\)-orbitals in an octahedral field are the \(t_{2g}\) and the \(e_g\)–orbitals (Figure \(\PageIndex{2}\)). Six electrons are in the \(t_{2g}\) orbitals, and two
electrons are in the \(e_{g}\) orbitals (Figure \(\PageIndex{2}\)). Ligand field theory (LFT) would predict that there is one electron transition possible, namely the promotion of an electron from a
\(t_{2g}\) into an \(e_{g}\) orbital. This process would be triggered by the absorption of light whereby the wavelength of the light would depend on the \(\Delta_o\) between the \(t_{2g}\) and the \
(e_{g}\) orbitals. Overall, this should lead to a single absorption band in the absorption spectrum of the complex. We can check this prediction by experimentally recording the absorption spectrum of
the complex (Figure \(\PageIndex{2}\)).
Figure \(\PageIndex{2}\). Electron transition according to LFT and actual absorption spectrum of the \(\ce{[Ni(H2O)6]^2+}\) complex ion. Attribution: E.R. Schofield.
What we find is that the absorption spectrum is far more complex than expected. Instead of just a single absorption band there are multiple ones. Obviously, LFT is unable to explain this spectrum.
The question is: why? The answer is that LFT assumes there are no electron-electron interactions. In reality, there is repulsion between electrons in d-orbitals, and this affects their energy. Electrons within the d-subshell, for example, repel each other to different extents depending on the relative orientations of their orbitals in space.
To illustrate how electrons in different orbitals might have different interactions, let's consider the case of a \(d^2\) excited state in an octahedral ligand field (Figure \(\PageIndex{3}\)).
Figure \(\PageIndex{3}\): Two microstates with different energies for a \(d^2\) ion in an octahedral ligand field. Electron configurations of two different microstates are shown on the top, and the
orbitals associated with each shown underneath each configuration. Left: An excited microstate with one electron in the \(d_{xy}\) orbital (shown in green) and one electron in the \(d_{z^2}\) (shown
in red). Right: An excited microstate with one electron in the \(d_{xz}\) orbital (shown in green) and one electron in the \(d_{z^2}\) (shown in red).
According to LFT, both electrons would be in the \(t_{2g}\) orbitals in the ground state. For instance, they could be in the microstate where one electron is in \(d_{xy}\), and the other is in the \
(d_{xz}\) orbital (not pictured). This configuration is called a microstate of a state because there are other combinations of orbitals possible. For example, the ground state would also be realized
by a microstate in which the electrons were in the \(d_{xz}\) and the \(d_{yz}\) orbitals. Upon absorption of light, the electron in the \(d_{xy}\) orbital could be excited into one of the
higher-energy \(e_{g}\) orbitals. That excited electron could occupy either the \(d_{z^2}\) or the \(d_{x^2-y^2}\) orbitals. Those two possibilities reflect two different microstates associated with
the excited state. For a moment, let's assume that the excited electron goes into the \(d_{z^2}\) orbital. In this microstate one electron would be in the \(d_{xz}\) orbital and the other one in the
\(d_{z^2}\) orbital (Figure \(\PageIndex{3}\), right). There is another possibility for how to excite an electron from the ground state. We could assume that instead of the \(d_{xy}\) electron being
promoted, the \(d_{xz}\) electron gets promoted. In this case, we would realize a microstate in which one electron is in the \(d_{xy}\) orbital and the other in the \(d_{z^2}\) orbital (Figure \(\
PageIndex{3}\), left).
Now let us compare the two cases shown in Figure \(\PageIndex{3}\). Ligand field theory would argue that both excited microstates have the same energy. However, in fact they do not. Why? It is
because the electrons in the first excited microstate interact differently than those of the second excited microstate. This difference becomes plausible when considering the orbital shapes and
orientations (Figure \(\PageIndex{3}\), bottom). The \(d_{xz}\) orbital has electron density on the z-axis, while the \(d_{xy}\) orbital is perpendicular to \(d_{z^2}\). The different relative
orientations of \(d_{xy}\) and \(d_{xz}\) with respect to \(d_{z^2}\) cause electrons in the \(d_{z^2}\) orbital to interact differently with an electron in a \(d_{xy}\) than in a \(d_{xz}\). As a
result, the two excited microstates do not have the same energy. In other words, to achieve either of these different excited microstates from the ground state, we need different amounts of energy.
Thus, the complex would absorb light with different energies (or wavelengths). This is in contrast to what LFT predicts.
If LFT cannot predict the number of electronic transitions, then how can we correctly predict how many absorption bands we get? The answer is, we must find all possible microstates for the \(d^2\)
electron configuration and group together those with the same energy. A group of microstates with the same energy is called a term. The number of electron transitions can then be predicted from the
number of terms.
Quantum Numbers
The table below summarizes the quantum numbers of atoms (individual microstates and sets of microstates). You may wish to review the quantum numbers for individual electrons and then refer to this
table as you read this and the next sections. Recall the meanings of the quantum numbers for individual electrons (\(n, l, m_l, m_s\) that were described in a previous section (Section 2.2.2).
• \(L\), Total Orbital Angular Momentum. Allowed range: \(l_1 + l_2,\ l_1 + l_2 - 1,\ \ldots,\ |l_1 - l_2|\). The total orbital angular momentum of a collection of microstates, designated as S, P, D, F, etc.
• \(S\), Total Intrinsic Spin. Allowed range: \(s_1 + s_2,\ s_1 + s_2 - 1,\ \ldots,\ |s_1 - s_2|\). The total spin of a collection of microstates.
• \(M_l\), Magnetic Quantum Number. Allowed range: \(L, L-1, L-2, \ldots, -L\). The direction of the total angular momentum for an individual microstate.
• \(M_s\), Spin Magnetic Quantum Number. Allowed range: \(S, S-1, S-2, \ldots, -S\). The total spin of the electrons for an individual microstate.
• \(J\), Total Angular Momentum. Allowed range: \(L+S, L+S-1, \ldots, |L-S|\). The total angular momentum of the atom.
Quantum Numbers S and L for sets of microstates
The quantum numbers \(S\) and \(L\) represent sets of microstates. Once these values are found, they can be ordered in terms of relative energies, first according to the value of \(S\), then \(L\).
Total Orbital Angular Momentum Quantum Number: L
\(L\) gives the total sum of the orbital angular momentum vectors in a multielectron atom. The possible values of \(L\) can be found from the values of \(l\) for the individual electrons in the system. For example, in a system with two electrons having orbital quantum numbers \(l_1\) and \(l_2\), the possible values of \(L\) are:
\[ L = l_1 + l_2,\; l_1 + l_2 - 1,\; \ldots,\; |l_1 - l_2| \]
The absolute value symbols indicate that \(L\) cannot be a negative value. The values of \(L\) correspond to different energy levels for groups of microstates. Microstates with values of \(L=0, 1, 2,
3...\) correspond to symbols \(S, P, D, F...\) respectively. This is analogous to the relationship between the electron quantum number, \(l\), and the \(s, p, d, f..\) orbital subshells. However, the
capital letters used for microstates do not indicate orbital or subshell assignments for the electrons.
The Total Intrinsic Spin Quantum Number: \(S\)
The sum total of the spin vectors of all of the electrons is called \(S\). The values of \(S\) are computed from the individual spin quantum numbers in a manner very similar to how \(L\) is computed from \(l\). Because \(S\) measures the magnitude of a vector, it cannot be negative. For a system with two electrons, each with spin \(s_1 = s_2 = \frac{1}{2}\), the possible values of \(S\) are given below.
\[S = s_1 + s_2,\; s_1 + s_2 - 1,\; \ldots,\; |s_1 - s_2| \label{spin}\]
The possible values of \(S\) fall into a series that depend on whether there are an odd or even number of electrons.
• Odd number of electrons: \(S=\frac{1}{2}, \frac{3}{2}, \frac{5}{2},...\)
• Even number of electrons: \(S = 0, 1, 2, 3,...\)
What are the possible \(L\) values for the electrons in the \(1s^2 2s^2 2p^2\) configuration of carbon?
Both electrons (i.e., the 2p electrons) are \(l = 1\). The possible combinations are 0, 1, and 2 corresponding to symbols S, P, and D, respectively.
What are the possible \(L\) values for the electrons in the \([Xe]6s^2 4f^1 5d^1\) ?
We can ignore the electrons in the \([Xe]\) core and the electrons in the \(6s\) block. So all we have to consider is the lone \(f\) electron (\(l=3\)) and the lone \(d\) electron (\(l=2\)).
The two extremes for possible \(L\) values are \(3+2 = 5 \) and \(3 ‐ 2 = 1\).
Thus, possible values of \(L\) for this Xe atom are 5(H), 4(G), 3(F), 2(D), and 1(P).
Find \(S\) for \(1s^1\).
\(S\) must be \(\frac{1}{2}\), since that’s the spin of a single electron and there’s only one electron.
Find S for \(1s^2 2s^1 2p^1\).
\(S = 1, 0\)
Find \(S\) for carbon atoms with the \(1s^2 2s^2 2p^2\) electron configuration.
\(S=1,0\) This is the same as the previous problem. Notice that \(S\) is not affected by which orbitals are occupied by electrons. \(S\) depends only on the number of unpaired electrons. These are
usually the electrons in partially-filled subshells (i.e., unpaired electrons in open shells).
Find \(S\) for nitrogen atoms with the \(1s^2 2s^2 2p^3\) electron configuration.
\(S\) can be \(S=\frac{3}{2}, \frac{1}{2}\).
Quantum numbers \(M_l\) and \(M_s\) for individual microstates
Once S and L are found, the allowed values of \(M_l\) and \(M_s\) can be calculated.
The Total Magnetic Quantum Number: \(M_l\)
The Total Magnetic Quantum Number \(M_l\) is the total \(z\)-component of all of the relevant electrons’ orbital momentum. While \(L\) describes the total angular momentum in the system, \(M_l\)
tells you which direction it is pointing. \(L\) can be assigned to a collection of microstates, but \(M_l\) is unique to a specific microstate in that group. Unlike \(L\), \(M_l\) is allowed to have
negative values. The possible values of \(M_l\) are integer values ranging from the largest positive sum to the most negative sum of possible \(m_l\) values:
\[M_l = L, L-1, L-2, ..., -L \]
For the \(p^2\) case, the \(m_l\) values of \(p\) orbitals are \(-1, 0, +1\). The largest value of \(M_l\) would come from a state where both electrons occupy \(m_l=+1\), thus the maximum is \(M_l=L=
2\). Likewise, the minimum value of \(M_l\) comes from both electrons occupying \(m_l=-1\) to give \(M_l=-L=-2\). The possible \(M_l\) values for \(p^2\) are the series of integer values \(+2, +1, 0,
-1, -2\).
It is worth noting that some values of \(M_l\) are forbidden by the Pauli exclusion principle. For example, in a \(p^3\) configuration the value \(M_l=+3\) could only come from three electrons occupying the \(m_l=+1\) orbital. That is impossible because more than one electron would possess the same set of electron quantum numbers.
The Total Spin Magnetic Quantum Number: \(M_s\)
\(M_s\) is the sum total of the z-components of the electrons’ inherent spin in an individual microstate. The difference between \(S\) and \(M_s\) is subtle, but important. \(M_s\) indicates the
total z-component of the electrons’ spins, while \(S\) indicates the entire resultant vector. It is also distinct from \(M_l\), which is the sum total of the z-component of the orbital angular
momentum. \(M_s\) can be computed from \(S\), as shown below. Note that while \(S\) must be positive, \(M_s\) can have negative values.
\[M_s = S, S-1, S-2, ...-S \label{Ms}\]
For the \(p^2\) case, the possible values of \(M_s\) are \(M_s = +1, 0, -1\). The value \(M_s = +1\) comes from the sum of two electrons with spin "up", \(m_s=+\frac{1}{2}\). The value \(M_s = -1\)
comes from the sum of two electrons with spin "down", \(m_s=-\frac{1}{2}\). And the value \(M_s = 0\) comes from the sum of one electron with \(m_s=+\frac{1}{2}\) and the other with \(m_s=-\frac{1}{2}\).
There are some values of \(M_s\) that will be forbidden, but not in the case of \(p^2\). However, in the case of a \(p^4\) configuration, for example, the value \(M_s=2\) is forbidden because there is no way to put
four "up" electrons in three orbitals without violating the Pauli exclusion principle.
What are the possible values \(M_l\) of a zirconium atom with the \([Kr] 5s^2 4d^2\) electron configuration?
Both open-shell electrons (i.e., the 4d electrons) are \(l=2\), so the values are 4, 3, 2, 1, 0, -1, -2, -3, -4.
What are the \(M_s\) values for \(1s^2 2s^2 2p^2\) ?
This system is paramagnetic, with two unpaired electrons in the \(2p\) subshell. Each electron has \(m_s = \pm\frac{1}{2}\). The maximum value, \(M_s=+1\), comes from both electrons having \(m_{s_1}=m_{s_2}=+\frac{1}{2}\); the minimum, \(M_s=-1\), comes from both having \(m_s=-\frac{1}{2}\). Thus, the possible values for the total spin magnetic quantum number are \(M_s = 1, 0, -1\), where \(M_s=0\) comes from a configuration with \(m_{s_1}=+\frac{1}{2}\) and \(m_{s_2}=-\frac{1}{2}\).
What are the \(M_s\) values for \(1s^2 2s^2 2p^3\) ?
\(M_s = +\frac{3}{2}, \; +\frac{1}{2}, \; -\frac{1}{2}, \; -\frac{3}{2}\) | {"url":"https://chem.libretexts.org/Bookshelves/Inorganic_Chemistry/Inorganic_Chemistry_(LibreTexts)/11%3A_Coordination_Chemistry_III_-_Electronic_Spectra/11.02%3A_Quantum_Numbers_of_Multielectron_Atoms","timestamp":"2024-11-02T00:04:40Z","content_type":"text/html","content_length":"156819","record_id":"<urn:uuid:7ddc5b45-0ed3-4927-977e-d78e88a62222>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00578.warc.gz"} |
Convert erg/cm² (Surface tension)
Convert erg/cm²
Direct link to this calculator:
Convert erg/cm² (Surface tension)
1. Choose the right category from the selection list, in this case 'Surface tension'.
2. Next enter the value you want to convert. The basic operations of arithmetic: addition (+), subtraction (-), multiplication (*, x), division (/, :, ÷), exponent (^), square root (√), brackets and
π (pi) are all permitted at this point.
3. From the selection list, choose the unit that corresponds to the value you want to convert, in this case 'erg/cm²'.
4. The value will then be converted into all units of measurement the calculator is familiar with.
5. Then, when the result appears, there is still the possibility of rounding it to a specific number of decimal places, whenever it makes sense to do so.
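Under the hood, a converter like this just multiplies by a factor into a common base unit and divides by the factor of the target unit. Here is a minimal sketch for surface tension (the factors are exact SI relations; the function is mine, not the site's code):

# 1 erg/cm² = 10⁻⁷ J / 10⁻⁴ m² = 10⁻³ J/m² = 10⁻³ N/m
TO_NEWTON_PER_METER = {
    "erg/cm2": 1e-3,
    "dyn/cm":  1e-3,   # identical to erg/cm²
    "mN/m":    1e-3,
    "N/m":     1.0,
}

def convert(value, src, dst):
    return value * TO_NEWTON_PER_METER[src] / TO_NEWTON_PER_METER[dst]

print(convert(315, "erg/cm2", "N/m"))  # 0.315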
Utilize the full range of performance for this units calculator
With this calculator, it is possible to enter the value to be converted together with the original measurement unit; for example, '315 erg/cm2'. In so doing, either the full name of the unit or its
abbreviation can be used. Then, the calculator determines the category of the measurement unit to be converted, in this case 'Surface tension'. After that, it converts the entered
value into all of the appropriate units known to it. In the resulting list, you will be sure also to find the conversion you originally sought. Regardless which of these possibilities one uses, it
saves one the cumbersome search for the appropriate listing in long selection lists with myriad categories and countless supported units. All of that is taken over for us by the calculator and it
gets the job done in a fraction of a second.
Furthermore, the calculator makes it possible to use mathematical expressions. As a result, not only can numbers be reckoned with one another, such as, for example, '(98 * 75) erg/cm2'. But different
units of measurement can also be coupled with one another directly in the conversion. That could, for example, look like this: '45 erg/cm2 + 22 erg/cm2' or '52mm x 29cm x 6dm = ? cm^3'. The units of
measure combined in this way naturally have to fit together and make sense in the combination in question.
The mathematical functions sin, cos, tan and sqrt can also be used. Example: sin(π/2), cos(pi/2), tan(90°), sin(90) or sqrt(4).
If a check mark has been placed next to 'Numbers in scientific notation', the answer will appear as an exponential. For example, 9.851 722 132 571 5×10²¹. For this form of presentation, the number
will be segmented into an exponent, here 21, and the actual number, here 9.851 722 132 571 5. For devices on which the possibilities for displaying numbers are limited, such as for example, pocket
calculators, one also finds the way of writing numbers as 9.851 722 132 571 5E+21. In particular, this makes very large and very small numbers easier to read. If a check mark has not been placed at
this spot, then the result is given in the customary way of writing numbers. For the above example, it would then look like this: 9 851 722 132 571 500 000 000. Independent of the presentation of the
results, the maximum precision of this calculator is 14 places. That should be precise enough for most applications. | {"url":"https://www.convert-measurement-units.com/convert+erg+cm2.php","timestamp":"2024-11-09T00:26:57Z","content_type":"text/html","content_length":"50736","record_id":"<urn:uuid:a03b89a1-f851-46e3-9992-a427f7c9af29>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00010.warc.gz"} |
Data Science. Measures
Many engineers haven't had direct exposure to statistics or Data Science. Yet, when building data pipelines or translating Data Scientist prototypes into robust, maintainable code, engineering
complexities often arise. For Data/ML engineers and new Data Scientists, I've put together this series of posts.
I'll explain core Data Science approaches in simple terms, building from basic concepts to more complex ones.
In statistics, our goal isn't just to describe our data but to understand what it tells us about the larger group (population) it represents. To evaluate and describe data characteristics accurately,
we need two things:
1. Knowing which values are typical for the distribution.
2. Understanding how typical these values are compared to other potential values.
The first part is addressed by measures of central tendency; the second, by measures of variation.
Measures of Central Tendency
To explore these central tendency measures, let's use a simple but relatable scenario:
Imagine a bar with five patrons, each making $40,000 a year. So, the average income in this bar is also $40,000.
data = [40, 40, 40, 40, 40]
With this data, let's jump into the theory.
In probability theory, the mean (also known as the expected value or average) represents the center point of a set of values. Calculating the mean is a way to generalize a dataset into a single
value, which often helps in decision-making. For example, knowing the average income in a company helps us budget for growth, and knowing the average check at a restaurant informs our spending
The mean is a simple measure and has the following formula:
$$ mean = \sum\limits_{i = 1}^n {x_i p_i} $$
Here, $# x_i #$ represents random variables, and $# p_i #$ is the probability of each variable.
As can be seen from the formula, the average value of a random variable is a weighted sum of values where weights are equal to the corresponding probabilities.
For example, if you calculate the mean value of the sum of points when throwing two dice, you get the number 7. But here we know exactly all possible accepted values and their probabilities. And what
if there is no such information? There are only the results of some observations. What's to be done? The answer comes from statistics, which allows us to get an approximate value of the mean, to
estimate it from the available data.
Mathematical statistics provides several options for estimating the mean. The main one is the arithmetic mean, which has some useful properties. For example, the arithmetic mean is an unbiased estimator, i.e., its expected value equals the population mean being estimated.
In most cases, we don't have exact probabilities for each value, so we approximate using the arithmetic mean:
$$ mean = \frac{1}{n} \sum\limits_{i = 1}^n {x_i} $$
where $#x_i#$ represents values in the dataset, and n is the total number of values.
def mean(x):
    return sum(x) / len(x)
mean(data) # 40
The mean is highly sensitive to outliers. If our imaginary bar had one more person join with the same income, the mean wouldn't change.
data1 = data + [40]
mean(data1) # 40
But what if Jeff Bezos entered the bar with, say, a $10 million income (the 10000 below, in thousands of dollars)? Suddenly, the average income skyrockets to $1.7 million, even though most people are still making $40,000.
data2 = data + [10000]
mean(data2) # 1700
This is why we sometimes use other measures, like the truncated mean, mode, and median.
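The truncated (trimmed) mean mentioned above drops a fraction of the smallest and largest values before averaging, which blunts the effect of outliers. A minimal sketch, assuming a trim fraction of 20% per tail:

def truncated_mean(x, trim=0.2):
    k = int(len(x) * trim)            # values to drop at each end
    trimmed = sorted(x)[k:len(x) - k]
    return sum(trimmed) / len(trimmed)

truncated_mean(data2, trim=0.2) # 40.0, the outlier is trimmed away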
Another simple measure: the mode is simply the most frequent value in a dataset. The mode is especially useful for data where a specific value or category is more common than others.
There may be several modes, and the presence of multiple modes is itself an informative characteristic of the data. It indicates that the data has some internal structure: there may be subgroups that are qualitatively different from each other, and perhaps it makes sense not to look at the distribution as a whole but to divide it into subgroups and look at them separately.
from collections import Counter

def mode(x):
    """returns a list, might be more than one mode"""
    counts = Counter(x)
    max_count = max(counts.values())
    # .items() replaces the Python 2 .iteritems() used in the original
    return [x_i for x_i, count in counts.items() if count == max_count]
mode(data) # [40]
Mode is indispensable for qualitative variables and is of little use for quantitative ones. It also helps us to estimate the most typical value of the data sample.
The central tendency can be considered not only as a value with zero total deviation (arithmetic mean) or maximum frequency (mode), but also as a certain mark (a certain level of the analyzed characteristic) dividing the ordered data into two equal parts. That is, half of the data values are less than this mark and half are bigger. This is the median.
For datasets with outliers, the median is often more representative than the mean.
def median(v):
    """finds the 'middle-most' value of v"""
    n = len(v)
    sorted_v = sorted(v)
    midpoint = n // 2
    if n % 2 == 1:
        # if odd, return the middle value
        return sorted_v[midpoint]
    else:
        # if even, return the average of the two middle values
        lo = midpoint - 1
        hi = midpoint
        return (sorted_v[lo] + sorted_v[hi]) / 2
median(data) # 40
median(data2) # 40
In a perfectly symmetrical distribution, the mean, median, and mode coincide. However, real-world data is rarely symmetrical, which is why these different measures can each be useful.
Measures of Variability (Dispersion)
To understand how a sample behaves, it is not enough to know the mean, i.e., the typical values of its characteristics. We must also know how much the other values differ from the mean, how spread out the values that are not like it are. For that, we have the measures of variation.
Let's get back to our imaginary situation. Imagine that we now have two bars:
data1 = [40, 40, 40, 40, 40]
data2 = [80, 40, 15, 25, 40]
mean(data1) # 40
mean(data2) # 40
median(data1) # 40
median(data2) # 40
mode(data1) # [40]
mode(data2) # [40]
They appear identical on the measures we have looked at so far, but the data are actually different.
The range is the simplest measure of variability, calculated as the difference between the maximum and minimum values in the data. Although it's quick to calculate, the range can be heavily
influenced by outliers.
def data_range(x):
    return max(x) - min(x)
data_range(data1) # 0
data_range(data2) # 65
On the one hand, the range can be very informative and useful. For example, the difference between the maximum and minimum price of an apartment in a city, the difference between the maximum and
minimum wages in a region, and so on. On the other hand, the range can be very large and carry no practical meaning.
This measure shows how much the values in the sample vary, but it does not tell us anything about the distribution itself.
While the mean reflects the center of a random variable, the variance characterizes the spread of the data around that center and takes the influence of every value into account.
The formula for the (sample) variance is the following:
$$ s^2 =\frac{1}{n-1}\sum\limits_{i = 1}^n {\left( {x_i - \bar x} \right)^2 } $$
where x – random variables, $# \bar x #$ – mean value, n – number of values.
For each value, we take its deviation from the mean, square it, sum the squares, and divide by the number of values in the sample minus one.
Why do we square it up?
The sum of the negative and positive deviations is always zero, because they mutually cancel each other. To avoid this mutual cancelation, the deviations in the numerator are squared. As for the denominator: dividing by n gives the population variance, but when estimating from a sample, using n - 1 instead (Bessel's correction) eliminates the bias of the estimate, which is why the code below divides by n - 1.
def de_mean(x):
    """translate x by subtracting its mean (so the result has mean 0)"""
    x_bar = mean(x)
    return [x_i - x_bar for x_i in x]

def sum_of_squares(y):
    """the sum of squared values in y"""
    return sum(v ** 2 for v in y)

def variance(x):
    """assumes x has at least two elements"""
    n = len(x)
    deviations = de_mean(x)
    return sum_of_squares(deviations) / (n - 1)
variance(data1) # 0
variance(data2) # 612.5
Thus, we take into account each deviation, and the sum divided by the number of objects gives us an estimate of variability.
What's the problem here?
Squaring changes the units. If we measure salaries in thousands of dollars, the variance comes out in millions (thousands squared), and this becomes much harder to relate to the specific wages that people in the organization get.
To bring the variance back to the original units, so it can be used for more practical purposes, we take its square root. The result is the so-called standard deviation.
$$ s = \sqrt {\frac{1}{n-1}\sum\limits_{i = 1}^n {\left( {x_i - \bar x} \right)^2 } } $$
import math

def standard_deviation(x):
    return math.sqrt(variance(x))
standard_deviation(data1) # 0
standard_deviation(data2) # 24.7
Standard deviation also characterizes the measure of variability, but now (as opposed to variance) it can be compared with the original data because they have the same units of measurement (this is
clear from the calculation formula).
For example, there is the three-sigma rule, which states that normally distributed data have about 997 values out of 1000 within ±3 standard deviations of the mean. Standard deviation, as a measure of
uncertainty, is also involved in many statistical calculations. It can be used to determine the accuracy of various estimates and forecasts. If the deviation is very large, then the standard
deviation will also be large, so the forecast will also be inaccurate, which will be expressed, for example, in very wide confidence intervals.
These foundational concepts — mean, median, mode, variance, and standard deviation — are essential for data engineers, helping you understand data behavior at a high level. We'll build on these in
future sections, where we'll tackle more complex measures and methods. | {"url":"https://luminousmen.com/post/data-science-measures","timestamp":"2024-11-03T00:31:31Z","content_type":"text/html","content_length":"28682","record_id":"<urn:uuid:f3577e3c-f844-4196-a3b1-d1084abdde70>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00368.warc.gz"} |
Number Bonds Worksheets Free
A number bond is a mental picture of the relationship between a number and the parts that combine to make it. Being able to understand the number bonds is vital in the math development of students.
Number bonds worksheets 5 15 the activity sheets below will introduce number bonds from 5 to 15.
Number bonds worksheets free. That doesn't mean a guy can't make some more worksheets for her to do though, right? Number bonds, like times tables, are something which a child should know instantly without needing to think, and that takes lots of practice.
Number bonds are simply pairs of numbers that add up to a given number. For example, the number bonds for 10 are 0 + 10, 1 + 9, 2 + 8, and so on. Number bonds are the process of adding two numbers together to get an answer (total).
Great supports for teaching part part whole in a kindergarten classroom. Choose one or more our of number bond worksheet categories. This worksheet theme works well in the fall or spring.
For example, the number bonds for 9 are 9 + 0, 8 + 1, 7 + 2, 6 + 3, and 5 + 4, plus these with the two numbers switched. Number bonds provides math-related worksheets for activities; these free printable activity worksheets focus on number bonds. Number bond worksheets are good for students to master the addition sums up to 10.
These dynamic worksheets make understanding basic number relationships exciting and will help students develop critical thinking skills at a young age. Explore the various puzzles and worksheets to
find the perfect. These number bonds worksheets are great for testing children in their ability to solve number bonds problems for a given sum.
If you have a 1st grade student you probably already know how to use the worksheets. A number bond is just a maths fact that a pair of numbers when added together make a given total. Free number
bonds practice worksheet.
This is another good way to help students understand addition and composing or decomposing numbers. I'm glad you agree. The pairs 2 + 8, 3 + 7, 4 + 6, and 5 + 5 all make 10, etc., and the number bonds for 20 are 0 + 20, 1 + 19, 2 + 18, etc.
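For completeness, generating the bonds for any target is a one-liner; this small sketch is illustrative only:

def number_bonds(total):
    return [(a, total - a) for a in range(total // 2 + 1)]

print(number_bonds(10))
# [(0, 10), (1, 9), (2, 8), (3, 7), (4, 6), (5, 5)]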
The concept of number bonds is very basic and an important foundation for understanding how numbers work. Number bonds to 20 include 12 + 8. These kindergarten number bond worksheets, which address students of different levels in your classroom from grades 1 to 3, include practice for addition number bonds to 10 with missing parts.
Three addends addition tree and number bond templates. Number bonds are a fascinating way for students to make number connections and learn basic math facts. Our printable number bond worksheets for
children in kindergarten through grade 3 include simple addition of two addends.
Number bonds worksheets printable number bonds worksheets. Number bonds worksheets number bond is a special concept to teach addition and subtraction. Adding three and four digit numbers.
Students can color the leaves fall colors or spring colors depending on the season. All of our number bond primary school math worksheets are printable, so click on a picture to make it larger, then print it out and enjoy solving them. Printable math worksheets for number bonds activity games.
Number bonds are very basic, and they are an important part of understanding how numbers work. | {"url":"https://thekidsworksheet.com/number-bonds-worksheets-free/","timestamp":"2024-11-05T04:15:13Z","content_type":"text/html","content_length":"136574","record_id":"<urn:uuid:a29b50fd-bc91-4ba9-88eb-9569e2a03662>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00028.warc.gz"}
Difference Between Ohmic and Non Ohmic Conductors: JEE Main 2024
What is Ohmic and Non Ohmic Conductors: Introduction
Ohmic and non ohmic conductors are two categories of materials that exhibit different behaviors when an electric current passes through them.
Ohmic conductors, also known as ohmic resistors or linear resistors, follow Ohm's Law, which states that the current flowing through a conductor is directly proportional to the voltage applied across
it. On the other hand, non ohmic conductors, also called non-linear resistors, do not obey Ohm's Law. The resistance of non ohmic conductors varies with the applied voltage or current, often
exhibiting nonlinear behavior such as increasing or decreasing resistance as the voltage changes. This distinction between ohmic and non ohmic conductors is essential in understanding and analyzing
electrical circuits and the behavior of different materials in response to electric currents. Read on for more detail about each.
Defining Ohmic Conductors
Ohmic conductors, also known as ohmic resistors or linear resistors, are materials that exhibit a consistent and linear relationship between the current passing through them and the voltage applied
across them. In other words, ohmic conductors obey Ohm's Law, which states that the current flowing through a conductor is directly proportional to the voltage applied. The resistance of an ohmic
conductor remains constant regardless of the magnitude of the applied voltage. This predictable behavior allows for easy calculation and analysis of electrical circuits containing ohmic conductors.
Examples of ohmic conductors include most metals, such as copper and aluminum, which have a relatively constant resistance over a wide range of voltages and currents. The characteristics of ohmic
conductors are:
• Linear Relationship: Ohmic conductors exhibit a linear relationship between the current flowing through them and the voltage applied across them, adhering to Ohm's Law.
• Constant Resistance: The resistance of an ohmic conductor remains constant over a wide range of applied voltages and currents. This allows for straightforward calculations and predictions in
electrical circuits.
• Predictable Behavior: Ohmic conductors follow predictable patterns and can be easily modeled and analyzed using mathematical equations.
• Stable Electrical Properties: Ohmic conductors maintain their resistance characteristics over time, exhibiting consistent behavior under different operating conditions.
• Low Temperature Dependency: Ohmic conductors typically have a relatively low temperature dependency, meaning their resistance doesn't vary significantly with changes in temperature.
• Wide Application Range: Ohmic conductors, such as most metals, are widely used in electrical and electronic devices due to their stable and predictable behavior.
Defining Non Ohmic Conductors
Non ohmic conductors, also known as non-linear resistors, are materials that do not follow Ohm's Law. Unlike ohmic conductors, the current-voltage relationship in non ohmic conductors is not linear.
Instead, their resistance changes as the applied voltage or current varies. This non-linearity often results in a non-linear IV curve, where the resistance may increase or decrease with changing
voltage or current. Non ohmic conductors can exhibit various behaviors such as negative temperature coefficient (NTC), positive temperature coefficient (PTC), or even voltage-dependent resistance.
Examples of non ohmic conductors include semiconductor materials, thermistors, and certain electrolytes. Understanding the behavior of non ohmic conductors is important for designing circuits that
involve these materials and for analyzing their responses to varying electrical conditions. The characteristics of non ohmic conductors are:
• Non-Linear Relationship: Non ohmic conductors do not follow a linear relationship between the current and voltage. Their resistance varies with changes in applied voltage or current.
• Variable Resistance: The resistance of non ohmic conductors changes as the voltage or current changes. It can increase, decrease, or exhibit complex non-linear patterns.
• Temperature Dependency: Non ohmic conductors often show a significant temperature dependency, meaning their resistance can change with variations in temperature.
• Non-Ohmic Behavior: Non ohmic conductors can exhibit behaviors like negative temperature coefficient (NTC), where resistance decreases with increasing temperature, or positive temperature
coefficient (PTC), where resistance increases with increasing temperature.
• Application-Specific: Non ohmic conductors are utilized in various applications, such as thermistors for temperature sensing, varistors for surge protection, and semiconductor devices for
electronic circuits.
• Non-Linear IV Curve: Non ohmic conductors have non-linear current-voltage (IV) curves, reflecting their non-linear behavior and resistance variations.
Differentiate Between Ohmic and Non Ohmic Conductors
The following comparison provides a concise overview of the main differences between ohmic and non ohmic conductors:
• Current-voltage relationship: linear and obeys Ohm's Law for ohmic conductors; non-linear and does not obey Ohm's Law for non ohmic conductors.
• Resistance behavior: constant over a wide range of voltages and currents for ohmic conductors; varies with the applied voltage or current for non ohmic conductors.
• IV curve: a straight line through the origin for ohmic conductors; a curved, non-linear shape for non ohmic conductors.
• Temperature dependency: typically low for ohmic conductors; often significant for non ohmic conductors, including NTC and PTC behavior.
• Examples: metals such as copper and aluminum for ohmic conductors; semiconductors, thermistors, diodes, and certain electrolytes for non ohmic conductors.
• Typical applications: wiring and fixed resistors for ohmic conductors; temperature sensing, surge protection, rectification, and amplification for non ohmic conductors.
Ohmic and non ohmic conductors are two types of materials that exhibit different behaviors when an electric current passes through them. Ohmic conductors, also known as linear conductors, obey Ohm's
Law, which states that the current passing through the conductor is directly proportional to the applied voltage. Non ohmic conductors, also known as non-linear conductors, on the other hand, do not obey Ohm's
Law. The resistance of ohmic conductors remains constant regardless of the applied voltage or current, while the resistance of non ohmic conductors is not constant and can vary with changes in
voltage or current.
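To make the contrast concrete, here is a small illustrative Java sketch (not from the article; the class name IvCurves and the diode parameters are assumed textbook-style values) comparing a linear ohmic resistor with a non-linear element modeled by the Shockley diode equation:

    public class IvCurves {
        // Ohmic resistor: I = V / R, so the ratio V / I (the resistance) is constant.
        static double resistorCurrent(double v, double r) {
            return v / r;
        }

        // Non ohmic element (Shockley diode model): I = Is * (exp(V / (n * VT)) - 1),
        // so the effective resistance V / I changes as the applied voltage changes.
        static double diodeCurrent(double v) {
            double is = 1e-12;   // saturation current in amperes (assumed value)
            double n  = 1.0;     // ideality factor (assumed value)
            double vt = 0.02585; // thermal voltage at room temperature, about 25.85 mV
            return is * Math.expm1(v / (n * vt)); // expm1 computes exp(x) - 1 accurately
        }

        public static void main(String[] args) {
            System.out.println(resistorCurrent(0.6, 100.0)); // current scales linearly with V
            System.out.println(diodeCurrent(0.6));           // current grows exponentially with V
        }
    }

Doubling the voltage doubles the resistor current but multiplies the diode current many times over, which is exactly the non-linear IV behavior described above.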
FAQs on Difference Between Ohmic and Non Ohmic Conductors for JEE Main 2024
1. Do ohmic conductors follow Ohm's Law?
Yes, Ohmic conductors do follow Ohm's Law. Ohm's Law states that the current flowing through a conductor is directly proportional to the voltage applied across it, given that the temperature and
other physical conditions remain constant. In ohmic conductors, such as most metals, the resistance remains constant over a wide range of applied voltages and currents. This means that as the voltage
across an ohmic conductor increases, the current through it also increases proportionally. Similarly, if the voltage decreases, the current decreases accordingly.
2. What are the applications of non ohmic conductors?
Non ohmic conductors find various applications in electrical and electronic devices. One significant application is in semiconductor devices such as diodes and transistors. Diodes allow the flow of
current in one direction and are used in rectifiers, voltage regulators, and signal processing circuits. Transistors, which are key components in amplifiers, switches, and logic circuits, control the
flow of current based on input signals. Gas discharge tubes, another example of non ohmic conductors, are used for voltage regulation, surge protection, and in lighting applications like neon signs.
3. How does the resistance of a non ohmic conductor behave with changes in voltage or current?
The resistance of a non ohmic conductor does not remain constant with changes in voltage or current. Unlike ohmic conductors, where resistance is constant, non ohmic conductors exhibit varying
resistance. The resistance can increase or decrease with changes in voltage or current. This behavior is often attributed to factors such as temperature, material properties, or the presence of
functional groups in the conductor. In some cases, the resistance may increase as the voltage or current increases (positive temperature coefficient), while in other cases, it may decrease (negative
temperature coefficient).
4. Can ohmic conductors exhibit non-linear behavior?
No, Ohmic conductors do not exhibit non-linear behavior. Ohmic conductors follow Ohm's Law, which states that the current flowing through a conductor is directly proportional to the voltage applied
across it. This linear relationship between current and voltage implies that the resistance of ohmic conductors remains constant over a wide range of voltages and currents. Ohmic conductors, such as
most metals, exhibit a linear response and maintain a constant resistance under normal operating conditions.
5. Give an example of a non ohmic conductor.
An example of a non ohmic conductor is a semiconductor material. Semiconductors, such as silicon or germanium, do not follow Ohm's Law and exhibit non-linear behavior. The resistance of
semiconductors can be influenced by factors like temperature, doping levels, and applied electric fields, making them non ohmic conductors, in contrast to the linear behavior of ohmic conductors like metals.
Show ALL WORK to get full credit. (Write the pledge on top of your work and sign under it.) Note: In your submission, you must do the following: • You can use statistical software to get the two-way ANOVA table. Besides the ANOVA table, perform the calculations manually and SHOW ALL STEPS of your calculations to receive full credit. Write down your answers in your own words by yourself. Round all test statistics to four decimal places.

Problem 1: The results of a comparison of four popular minivans are reported in the following table. One of the features the researchers compared was the distance (in feet) required for the minivan to come to a complete stop when traveling at a speed of 60 miles per hour (braking distance). Suppose the braking distances were measured for five minivans of each type with the following results.

Braking Distances (Feet)
Minivan A: 150, 152, 151, 149, 153
Minivan B: 153, 150, 156, 151, 155
Minivan C: 155, 150, 157, 158, 155
Minivan D: 167, 164, 169, 162, 173

a) The researcher wished to perform an F-test to compare the average braking distances for the four minivan models. What assumptions must the researcher make to apply this test? Do the data appear to satisfy these assumptions? Explain.
b) Using the F-test, can the researcher conclude at α = 0.10 that there is a difference among average braking distances for the four minivan models?

Problem 2: Solve Problem 1 using Tukey's method. Compare the results to the results obtained by using the F-test.

Problem 3: Solve Problem 1 using the Bonferroni method. Compare the results to the results obtained by using Tukey's method.

Problem 4: An Internet service provider is considering four different servers for purchase. Potentially, the company would be purchasing hundreds of these servers, so it wants to make sure it is making the best decision. Initially, five of each type of server are borrowed, and each is randomly assigned to one of the 20 technicians (all technicians are similar in skill). Each server is then put through a series of tasks and rated using a standardized test. The higher the score on the test, the better the performance of the server. The data are as follows.

Server Test Scores
Server 1: 48.5, 46.5, 52.4, 54.1, 58.9
Server 2: 56.4, 68.2, 68.5, 64.2, 60.1
Server 3: 52.1, 56.3, 48.3, 52.2, 54.8
Server 4: 64.3, 68.3, 72.2, 70.6, 56.5

Perform a Kruskal-Wallis test on these data using α = 0.10. Are there differences between the servers?

Problem 5: The following table gives the survival times (in hours) for animals in an experiment whose design consisted of three poisons, four treatments, and four observations per cell.

Poison I. Treatment A: 3.1, 4.5, 4.6, 4.3; Treatment B: 8.2, 11.0, 8.8, 7.2; Treatment C: 4.3, 4.5, 6.3, 7.6; Treatment D: 4.5, 7.1, 6.6, 6.2
Poison II. Treatment A: 3.6, 2.9, 4.0, 2.3; Treatment B: 9.2, 6.1, 4.9, 12.4; Treatment C: 4.4, 3.5, 3.1, 4.0; Treatment D: 5.6, 10.0, 7.1, 3.8
Poison III. Treatment A: 2.2, 2.1, 1.8, 2.3; Treatment B: 3.0, 3.7, 3.8, 2.9; Treatment C: 2.3, 2.5, 2.4, 2.2; Treatment D: 3.0, 3.6, 3.1, 3.3

Conduct a two-way analysis of variance to test the effects of the two main factors and their interaction. Use α = 0.10.

Problem 6: A banana grower has three fertilizers from which to choose. He would like to determine which fertilizer produces banana trees with the largest yield (measured in pounds of bananas produced). The banana grower has noticed that there is a difference in the average yields of the banana trees depending on which side of the farm they are planted (South Side, North Side, West Side, or East Side). Because of the variation in yields among the areas on the farm, the farmer has decided to randomly select three trees within each area and then randomly assign the fertilizers to the trees. After harvesting the bananas, he calculates the yields of the trees within each of the areas. The results are as follows.

Banana Yields (Pounds)
South Side: Fertilizer A = 53, Fertilizer B = 51, Fertilizer C = 58
North Side: Fertilizer A = 48, Fertilizer B = 47, Fertilizer C = 53
West Side: Fertilizer A = 50, Fertilizer B = 48, Fertilizer C = 56
East Side: Fertilizer A = 50, Fertilizer B = 47, Fertilizer C = 54

a) Do you think a randomized block design is appropriate for the banana grower's study? What assumptions must the banana grower make to apply this test? Do the data appear to satisfy these assumptions? Explain.
b) Perform a two-way ANOVA using a randomized block design. Use α = 0.10.

Problem 7: Solve Problem 6 using Friedman's test. Compare the results to the results obtained in Problem 6.
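As a cross-check for Problem 1 (a sketch only, not a substitute for the required manual calculations), the one-way ANOVA F statistic for the braking-distance data as laid out above can be computed in Java:

    public class OneWayAnova {
        public static void main(String[] args) {
            double[][] groups = {
                {150, 152, 151, 149, 153},  // Minivan A
                {153, 150, 156, 151, 155},  // Minivan B
                {155, 150, 157, 158, 155},  // Minivan C
                {167, 164, 169, 162, 173},  // Minivan D
            };
            int k = groups.length, n = 0;
            double grand = 0;
            for (double[] g : groups) for (double x : g) { grand += x; n++; }
            grand /= n;
            double ssb = 0, ssw = 0; // between-group and within-group sums of squares
            for (double[] g : groups) {
                double mean = 0;
                for (double x : g) mean += x;
                mean /= g.length;
                ssb += g.length * (mean - grand) * (mean - grand);
                for (double x : g) ssw += (x - mean) * (x - mean);
            }
            double f = (ssb / (k - 1)) / (ssw / (n - k));
            System.out.printf("F = %.4f with df = (%d, %d)%n", f, k - 1, n - k);
        }
    }

Compare the resulting F statistic with the upper-tail critical value F(0.10; 3, 16) to answer part b).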
Class StrictMath
public final class StrictMath extends Object
The class StrictMath contains methods for performing basic numeric operations such as the elementary exponential, logarithm, square root, and trigonometric functions.
To help ensure portability of Java programs, the definitions of some of the numeric functions in this package require that they produce the same results as certain published algorithms. These
algorithms are available from the well-known network library netlib as the package "Freely Distributable Math Library," fdlibm. These algorithms, which are written in the C programming language, are
then to be understood to be transliterated into Java and executed with all floating-point and integer operations following the rules of Java arithmetic. The following transformations are used in the transliteration:
• Extraction and setting of the high and low halves of a 64-bit double in C is expressed using Java platform methods that perform bit-wise conversions from double to long and long to double.
• Unsigned int values in C are mapped to signed int values in Java with updates to operations to replicate unsigned semantics where the results on the same textual operation would differ. For
example, >> shifts on unsigned C values are replaced with >>> shifts on signed Java values. Sized comparisons on unsigned C values (<, <=, >, >=) are replaced with semantically equivalent calls
to compareUnsigned.
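For illustration (this snippet is not part of the original specification text), here is the kind of mapping described above:

    int bits = 0x80000004;      // as an unsigned C int this is 2147483652
    int arith = bits >> 1;      // sign-extending shift: 0xC0000002
    int logical = bits >>> 1;   // zero-filling shift, replicating unsigned C: 0x40000002
    // An unsigned C comparison (bits < 1) becomes:
    boolean less = Integer.compareUnsigned(bits, 1) < 0; // false: 0x80000004 is large unsigned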
The Java math library is defined with respect to fdlibm version 5.3. Where fdlibm provides more than one definition for a function (such as acos), use the "IEEE 754 core function" version (residing
in a file whose name begins with the letter e). The methods which require fdlibm semantics are sin, cos, tan, asin, acos, atan, exp, log, log10, cbrt, atan2, pow, sinh, cosh, tanh, hypot, expm1, and log1p.
The platform uses signed two's complement integer arithmetic with int and long primitive types. The developer should choose the primitive type to ensure that arithmetic operations consistently
produce correct results, which in some cases means the operations will not overflow the range of values of the computation. The best practice is to choose the primitive type and algorithm to avoid
overflow. In cases where the size is int or long and overflow errors need to be detected, the methods whose names end with Exact throw an ArithmeticException when the results overflow.
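A brief illustrative contrast between wrapping arithmetic and the Exact methods:

    int wrapped = Integer.MAX_VALUE + 1;       // silently wraps to Integer.MIN_VALUE
    try {
        int checked = StrictMath.addExact(Integer.MAX_VALUE, 1);
    } catch (ArithmeticException e) {
        System.out.println("overflow detected"); // reported instead of wrapping
    }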
The Math class discusses how the shared quality of implementation criteria for selected Math and StrictMath methods relate to the IEEE 754 recommended operations.
See Also:
• Field Summary
Modifier and Type
static final double E
The double value that is closer than any other to e, the base of the natural logarithms.
static final double PI
The double value that is closer than any other to pi (π), the ratio of the circumference of a circle to its diameter.
static final double TAU
The double value that is closer than any other to tau (τ), the ratio of the circumference of a circle to its radius.
• Method Summary
Methods declared in class java.lang.Object
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
• Field Details
□ E
public static final double E
The double value that is closer than any other to e, the base of the natural logarithms.
See Also:
□ PI
public static final double PI
The double value that is closer than any other to pi (π), the ratio of the circumference of a circle to its diameter.
See Also:
□ TAU
public static final double TAU
The double value that is closer than any other to tau (τ), the ratio of the circumference of a circle to its radius.
API Note:
The value of pi is one half that of tau; in other words, tau is double pi .
See Also:
• Method Details
□ sin
public static double sin(double a)
Returns the trigonometric sine of an angle. Special cases:
☆ If the argument is NaN or an infinity, then the result is NaN.
☆ If the argument is zero, then the result is a zero with the same sign as the argument.
a - an angle, in radians.
the sine of the argument.
□ cos
public static double cos(double a)
Returns the trigonometric cosine of an angle. Special cases:
☆ If the argument is NaN or an infinity, then the result is NaN.
☆ If the argument is zero, then the result is 1.0.
a - an angle, in radians.
the cosine of the argument.
□ tan
public static double tan(double a)
Returns the trigonometric tangent of an angle. Special cases:
☆ If the argument is NaN or an infinity, then the result is NaN.
☆ If the argument is zero, then the result is a zero with the same sign as the argument.
a - an angle, in radians.
the tangent of the argument.
□ asin
public static double asin(double a)
Returns the arc sine of a value; the returned angle is in the range -pi/2 through pi/2. Special cases:
☆ If the argument is NaN or its absolute value is greater than 1, then the result is NaN.
☆ If the argument is zero, then the result is a zero with the same sign as the argument.
a - the value whose arc sine is to be returned.
the arc sine of the argument.
□ acos
public static double acos(double a)
Returns the arc cosine of a value; the returned angle is in the range 0.0 through pi. Special cases:
☆ If the argument is NaN or its absolute value is greater than 1, then the result is NaN.
☆ If the argument is 1.0, the result is positive zero.
a - the value whose arc cosine is to be returned.
the arc cosine of the argument.
□ atan
public static double atan(double a)
Returns the arc tangent of a value; the returned angle is in the range -pi/2 through pi/2. Special cases:
☆ If the argument is NaN, then the result is NaN.
☆ If the argument is zero, then the result is a zero with the same sign as the argument.
☆ If the argument is infinite, then the result is the closest value to pi/2 with the same sign as the input.
a - the value whose arc tangent is to be returned.
the arc tangent of the argument.
□ toRadians
public static double toRadians(double angdeg)
Converts an angle measured in degrees to an approximately equivalent angle measured in radians. The conversion from degrees to radians is generally inexact.
angdeg - an angle, in degrees
the measurement of the angle angdeg in radians.
□ toDegrees
public static double toDegrees(double angrad)
Converts an angle measured in radians to an approximately equivalent angle measured in degrees. The conversion from radians to degrees is generally inexact; users should not expect cos(toRadians(90.0)) to exactly equal 0.0.
angrad - an angle, in radians
the measurement of the angle angrad in degrees.
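A short illustrative demonstration of this inexactness:

    double quarterTurn = StrictMath.toRadians(90.0);
    System.out.println(StrictMath.cos(quarterTurn));
    // prints a tiny nonzero value on the order of 6.1e-17, not 0.0
    System.out.println(StrictMath.toDegrees(StrictMath.toRadians(30.0)));
    // generally close to, but not guaranteed to be exactly, 30.0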
□ exp
public static double exp(double a)
Returns Euler's number e raised to the power of a double value. Special cases:
☆ If the argument is NaN, the result is NaN.
☆ If the argument is positive infinity, then the result is positive infinity.
☆ If the argument is negative infinity, then the result is positive zero.
☆ If the argument is zero, then the result is 1.0.
a - the exponent to raise e to.
the value e^a, where e is the base of the natural logarithms.
□ log
public static double log(double a)
Returns the natural logarithm (base e) of a double value. Special cases:
☆ If the argument is NaN or less than zero, then the result is NaN.
☆ If the argument is positive infinity, then the result is positive infinity.
☆ If the argument is positive zero or negative zero, then the result is negative infinity.
☆ If the argument is 1.0, then the result is positive zero.
a - a value
the value ln a, the natural logarithm of a.
□ log10
public static double log10(double a)
Returns the base 10 logarithm of a double value. Special cases:
☆ If the argument is NaN or less than zero, then the result is NaN.
☆ If the argument is positive infinity, then the result is positive infinity.
☆ If the argument is positive zero or negative zero, then the result is negative infinity.
☆ If the argument is equal to 10^n for integer n, then the result is n. In particular, if the argument is 1.0 (10^0), then the result is positive zero.
a - a value
the base 10 logarithm of a.
□ sqrt
public static double sqrt(double a)
Returns the correctly rounded positive square root of a double value. Special cases:
☆ If the argument is NaN or less than zero, then the result is NaN.
☆ If the argument is positive infinity, then the result is positive infinity.
☆ If the argument is positive zero or negative zero, then the result is the same as the argument.
Otherwise, the result is the double value closest to the true mathematical square root of the argument value.
a - a value.
the positive square root of a.
□ cbrt
public static double cbrt(double a)
Returns the cube root of a double value. For positive finite x, cbrt(-x) == -cbrt(x); that is, the cube root of a negative value is the negative of the cube root of that value's magnitude. Special cases:
☆ If the argument is NaN, then the result is NaN.
☆ If the argument is infinite, then the result is an infinity with the same sign as the argument.
☆ If the argument is zero, then the result is a zero with the same sign as the argument.
a - a value.
the cube root of a.
□ IEEEremainder
public static double IEEEremainder(double f1, double f2)
Computes the remainder operation on two arguments as prescribed by the IEEE 754 standard. The remainder value is mathematically equal to f1 - f2 × n, where n is the mathematical integer closest to the exact mathematical value of the quotient f1/f2, and if two mathematical integers are equally close to f1/f2, then n is the integer that is even. If the remainder is zero, its sign is the same as the sign of the first argument. Special cases:
☆ If either argument is NaN, or the first argument is infinite, or the second argument is positive zero or negative zero, then the result is NaN.
☆ If the first argument is finite and the second argument is infinite, then the result is the same as the first argument.
f1 - the dividend.
f2 - the divisor.
the remainder when f1 is divided by f2.
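An illustrative comparison with the % operator, which truncates the quotient toward zero rather than rounding it to the nearest integer:

    System.out.println(StrictMath.IEEEremainder(5.0, 3.0));  // -1.0 (nearest quotient is 2)
    System.out.println(5.0 % 3.0);                           //  2.0 (truncated quotient is 1)
    System.out.println(StrictMath.IEEEremainder(-5.0, 3.0)); //  1.0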
□ ceil
public static double ceil(double a)
Returns the smallest (closest to negative infinity) double value that is greater than or equal to the argument and is equal to a mathematical integer. Special cases:
☆ If the argument value is already equal to a mathematical integer, then the result is the same as the argument.
☆ If the argument is NaN or an infinity or positive zero or negative zero, then the result is the same as the argument.
☆ If the argument value is less than zero but greater than -1.0, then the result is negative zero.
Note that the value of StrictMath.ceil(x) is exactly the value of -StrictMath.floor(-x).
a - a value.
the smallest (closest to negative infinity) floating-point value that is greater than or equal to the argument and is equal to a mathematical integer.
□ floor
public static double floor(double a)
Returns the largest (closest to positive infinity) double value that is less than or equal to the argument and is equal to a mathematical integer. Special cases:
☆ If the argument value is already equal to a mathematical integer, then the result is the same as the argument.
☆ If the argument is NaN or an infinity or positive zero or negative zero, then the result is the same as the argument.
a - a value.
the largest (closest to positive infinity) floating-point value that less than or equal to the argument and is equal to a mathematical integer.
□ rint
public static double rint(double a)
Returns the double value that is closest in value to the argument and is equal to a mathematical integer. If two double values that are mathematical integers are equally close to the value of the argument, the result is the integer value that is even. Special cases:
☆ If the argument value is already equal to a mathematical integer, then the result is the same as the argument.
☆ If the argument is NaN or an infinity or positive zero or negative zero, then the result is the same as the argument.
a - a value.
the closest floating-point value to a that is equal to a mathematical integer.
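A brief illustration of the round-half-even rule, contrasted with round, which resolves ties toward positive infinity:

    System.out.println(StrictMath.rint(2.5));   // 2.0, the tie goes to the even neighbor
    System.out.println(StrictMath.rint(3.5));   // 4.0
    System.out.println(StrictMath.rint(-2.5));  // -2.0
    System.out.println(StrictMath.round(2.5));  // 3, ties round toward positive infinity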
□ atan2
public static double atan2(double y, double x)
Returns the angle theta from the conversion of rectangular coordinates (x, y) to polar coordinates (r, theta). This method computes the phase theta by computing an arc tangent of y/x in the range of -pi to pi. Special cases:
☆ If either argument is NaN, then the result is NaN.
☆ If the first argument is positive zero and the second argument is positive, or the first argument is positive and finite and the second argument is positive infinity, then the result is
positive zero.
☆ If the first argument is negative zero and the second argument is positive, or the first argument is negative and finite and the second argument is positive infinity, then the result is
negative zero.
☆ If the first argument is positive zero and the second argument is negative, or the first argument is positive and finite and the second argument is negative infinity, then the result is
the double value closest to pi.
☆ If the first argument is negative zero and the second argument is negative, or the first argument is negative and finite and the second argument is negative infinity, then the result is
the double value closest to -pi.
☆ If the first argument is positive and the second argument is positive zero or negative zero, or the first argument is positive infinity and the second argument is finite, then the result
is the double value closest to pi/2.
☆ If the first argument is negative and the second argument is positive zero or negative zero, or the first argument is negative infinity and the second argument is finite, then the result
is the double value closest to -pi/2.
☆ If both arguments are positive infinity, then the result is the double value closest to pi/4.
☆ If the first argument is positive infinity and the second argument is negative infinity, then the result is the double value closest to 3*pi/4.
☆ If the first argument is negative infinity and the second argument is positive infinity, then the result is the double value closest to -pi/4.
☆ If both arguments are negative infinity, then the result is the double value closest to -3*pi/4.
API Note:
For y with a positive sign and finite nonzero x, the exact mathematical value of atan2 is equal to:
○ If x > 0, atan(abs(y/x))
○ If x < 0, π - atan(abs(y/x))
y - the ordinate coordinate
x - the abscissa coordinate
the theta component of the point (r, theta) in polar coordinates that corresponds to the point (x, y) in Cartesian coordinates.
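An illustrative look at the quadrant-aware behavior that distinguishes atan2 from atan:

    System.out.println(StrictMath.atan2(1.0, 1.0));    //  pi/4, about 0.7853981633974483
    System.out.println(StrictMath.atan2(1.0, -1.0));   //  3*pi/4
    System.out.println(StrictMath.atan2(-1.0, -1.0));  // -3*pi/4
    System.out.println(StrictMath.atan(-1.0 / -1.0));  //  pi/4; plain atan loses the quadrant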
□ pow
public static double pow(double a, double b)
Returns the value of the first argument raised to the power of the second argument. Special cases:
☆ If the second argument is positive or negative zero, then the result is 1.0.
☆ If the second argument is 1.0, then the result is the same as the first argument.
☆ If the second argument is NaN, then the result is NaN.
☆ If the first argument is NaN and the second argument is nonzero, then the result is NaN.
☆ If
○ the absolute value of the first argument is greater than 1 and the second argument is positive infinity, or
○ the absolute value of the first argument is less than 1 and the second argument is negative infinity,
then the result is positive infinity.
☆ If
○ the absolute value of the first argument is greater than 1 and the second argument is negative infinity, or
○ the absolute value of the first argument is less than 1 and the second argument is positive infinity,
then the result is positive zero.
☆ If the absolute value of the first argument equals 1 and the second argument is infinite, then the result is NaN.
☆ If
○ the first argument is positive zero and the second argument is greater than zero, or
○ the first argument is positive infinity and the second argument is less than zero,
then the result is positive zero.
☆ If
○ the first argument is positive zero and the second argument is less than zero, or
○ the first argument is positive infinity and the second argument is greater than zero,
then the result is positive infinity.
☆ If
○ the first argument is negative zero and the second argument is greater than zero but not a finite odd integer, or
○ the first argument is negative infinity and the second argument is less than zero but not a finite odd integer,
then the result is positive zero.
☆ If
○ the first argument is negative zero and the second argument is a positive finite odd integer, or
○ the first argument is negative infinity and the second argument is a negative finite odd integer,
then the result is negative zero.
☆ If
○ the first argument is negative zero and the second argument is less than zero but not a finite odd integer, or
○ the first argument is negative infinity and the second argument is greater than zero but not a finite odd integer,
then the result is positive infinity.
☆ If
○ the first argument is negative zero and the second argument is a negative finite odd integer, or
○ the first argument is negative infinity and the second argument is a positive finite odd integer,
then the result is negative infinity.
☆ If the first argument is finite and less than zero
○ if the second argument is a finite even integer, the result is equal to the result of raising the absolute value of the first argument to the power of the second argument
○ if the second argument is a finite odd integer, the result is equal to the negative of the result of raising the absolute value of the first argument to the power of the second argument
○ if the second argument is finite and not an integer, then the result is NaN.
☆ If both arguments are integers, then the result is exactly equal to the mathematical result of raising the first argument to the power of the second argument if that result can in fact be
represented exactly as a double value.
(In the foregoing descriptions, a floating-point value is considered to be an integer if and only if it is finite and a fixed point of the method ceil or, equivalently, a fixed point of the
method floor. A value is a fixed point of a one-argument method if and only if the result of applying the method to the value is equal to the value.)
API Note:
The special cases definitions of this method differ from the special case definitions of the IEEE 754 recommended pow operation for ±1.0 raised to an infinite power. This method treats
such cases as indeterminate and specifies a NaN is returned. The IEEE 754 specification treats the infinite power as a large integer (large-magnitude floating-point numbers are
numerically integers, specifically even integers) and therefore specifies 1.0 be returned.
a - base.
b - the exponent.
the value a^b.
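A quick illustration of the NaN special case called out in the API note, alongside an exactly representable integer power:

    System.out.println(StrictMath.pow(-1.0, Double.POSITIVE_INFINITY)); // NaN (IEEE 754 pow would yield 1.0)
    System.out.println(StrictMath.pow(2.0, 10.0));                      // 1024.0, exactly representable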
□ round
public static int round(float a)
Returns the closest int to the argument, with ties rounding to positive infinity.
Special cases:
☆ If the argument is NaN, the result is 0.
☆ If the argument is negative infinity or any value less than or equal to the value of Integer.MIN_VALUE, the result is equal to the value of Integer.MIN_VALUE.
☆ If the argument is positive infinity or any value greater than or equal to the value of Integer.MAX_VALUE, the result is equal to the value of Integer.MAX_VALUE.
a - a floating-point value to be rounded to an integer.
the value of the argument rounded to the nearest int value.
See Also:
□ round
public static long round(double a)
Returns the closest long to the argument, with ties rounding to positive infinity.
Special cases:
☆ If the argument is NaN, the result is 0.
☆ If the argument is negative infinity or any value less than or equal to the value of Long.MIN_VALUE, the result is equal to the value of Long.MIN_VALUE.
☆ If the argument is positive infinity or any value greater than or equal to the value of Long.MAX_VALUE, the result is equal to the value of Long.MAX_VALUE.
a - a floating-point value to be rounded to a long.
the value of the argument rounded to the nearest long value.
See Also:
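A short illustration of the tie-breaking and saturating behavior:

    System.out.println(StrictMath.round(-2.5));        // -2, ties round toward positive infinity
    System.out.println(StrictMath.round(Double.NaN));  // 0
    System.out.println(StrictMath.round(1.0e300));     // Long.MAX_VALUE, the result saturates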
□ random
public static double random()
Returns a double value with a positive sign, greater than or equal to 0.0 and less than 1.0. Returned values are chosen pseudorandomly with (approximately) uniform distribution from that range.
When this method is first called, it creates a single new pseudorandom-number generator, exactly as if by the expression
new java.util.Random()
This new pseudorandom-number generator is used thereafter for all calls to this method and is used nowhere else.
This method is properly synchronized to allow correct use by more than one thread. However, if many threads need to generate pseudorandom numbers at a great rate, it may reduce contention for
each thread to have its own pseudorandom-number generator.
a pseudorandom double greater than or equal to 0.0 and less than 1.0.
See Also:
□ addExact
public static int addExact(int x, int y)
Returns the sum of its arguments, throwing an exception if the result overflows an int.
x - the first value
y - the second value
the result
ArithmeticException - if the result overflows an int
See Also:
□ addExact
public static long addExact(long x, long y)
Returns the sum of its arguments, throwing an exception if the result overflows a long.
x - the first value
y - the second value
the result
ArithmeticException - if the result overflows a long
See Also:
□ subtractExact
public static int subtractExact(int x, int y)
Returns the difference of the arguments, throwing an exception if the result overflows an int.
x - the first value
y - the second value to subtract from the first
the result
ArithmeticException - if the result overflows an int
See Also:
□ subtractExact
public static long subtractExact(long x, long y)
Returns the difference of the arguments, throwing an exception if the result overflows a long.
x - the first value
y - the second value to subtract from the first
the result
ArithmeticException - if the result overflows a long
See Also:
□ multiplyExact
public static int multiplyExact(int x, int y)
Returns the product of the arguments, throwing an exception if the result overflows an int.
x - the first value
y - the second value
the result
ArithmeticException - if the result overflows an int
See Also:
□ multiplyExact
public static long multiplyExact(long x, int y)
Returns the product of the arguments, throwing an exception if the result overflows a long.
x - the first value
y - the second value
the result
ArithmeticException - if the result overflows a long
See Also:
□ multiplyExact
public static long multiplyExact(long x, long y)
Returns the product of the arguments, throwing an exception if the result overflows a long.
x - the first value
y - the second value
the result
ArithmeticException - if the result overflows a long
See Also:
□ divideExact
public static int divideExact(int x, int y)
Returns the quotient of the arguments, throwing an exception if the result overflows an int. Such overflow occurs in this method if x is Integer.MIN_VALUE and y is -1. In contrast, if Integer.MIN_VALUE / -1 were evaluated directly, the result would be Integer.MIN_VALUE and no exception would be thrown.
If y is zero, an ArithmeticException is thrown (JLS 15.17.2).
The built-in remainder operator "%" is a suitable counterpart both for this method and for the built-in division operator "/".
x - the dividend
y - the divisor
the quotient x / y
ArithmeticException - if y is zero or the quotient overflows an int
See Java Language Specification:
See Also:
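The single overflow case, made concrete (divideExact is available since Java 18):

    System.out.println(Integer.MIN_VALUE / -1);         // -2147483648, the overflow is ignored
    try {
        StrictMath.divideExact(Integer.MIN_VALUE, -1);
    } catch (ArithmeticException e) {
        System.out.println("overflow detected");        // thrown instead of wrapping
    }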
□ divideExact
public static long divideExact(long x, long y)
Returns the quotient of the arguments, throwing an exception if the result overflows a long. Such overflow occurs in this method if x is Long.MIN_VALUE and y is -1. In contrast, if Long.MIN_VALUE / -1 were evaluated directly, the result would be Long.MIN_VALUE and no exception would be thrown.
If y is zero, an ArithmeticException is thrown (JLS 15.17.2).
The built-in remainder operator "%" is a suitable counterpart both for this method and for the built-in division operator "/".
x - the dividend
y - the divisor
the quotient x / y
ArithmeticException - if y is zero or the quotient overflows a long
See Java Language Specification:
See Also:
□ floorDivExact
public static int floorDivExact(int x, int y)
Returns the largest (closest to positive infinity) int value that is less than or equal to the algebraic quotient. This method is identical to floorDiv(int, int) except that it throws an ArithmeticException when the dividend is Integer.MIN_VALUE and the divisor is -1 instead of ignoring the integer overflow and returning Integer.MIN_VALUE.
The floor modulus method floorMod(int,int) is a suitable counterpart both for this method and for the floorDiv(int,int) method.
See Math.floorDiv for examples and a comparison to the integer division / operator.
x - the dividend
y - the divisor
the largest (closest to positive infinity) int value that is less than or equal to the algebraic quotient.
ArithmeticException - if the divisor y is zero, or the dividend x is Integer.MIN_VALUE and the divisor y is -1.
See Also:
□ floorDivExact
public static long floorDivExact(long x, long y)
Returns the largest (closest to positive infinity) long value that is less than or equal to the algebraic quotient. This method is identical to floorDiv(long, long) except that it throws an ArithmeticException when the dividend is Long.MIN_VALUE and the divisor is -1 instead of ignoring the integer overflow and returning Long.MIN_VALUE.
The floor modulus method floorMod(long,long) is a suitable counterpart both for this method and for the floorDiv(long,long) method.
For examples, see Math.floorDiv.
x - the dividend
y - the divisor
the largest (closest to positive infinity) long value that is less than or equal to the algebraic quotient.
ArithmeticException - if the divisor y is zero, or the dividend x is Long.MIN_VALUE and the divisor y is -1.
See Also:
□ ceilDivExact
public static int ceilDivExact(int x, int y)
Returns the smallest (closest to negative infinity) int value that is greater than or equal to the algebraic quotient. This method is identical to ceilDiv(int, int) except that it throws an ArithmeticException when the dividend is Integer.MIN_VALUE and the divisor is -1 instead of ignoring the integer overflow and returning Integer.MIN_VALUE.
The ceil modulus method ceilMod(int,int) is a suitable counterpart both for this method and for the ceilDiv(int,int) method.
See Math.ceilDiv for examples and a comparison to the integer division / operator.
x - the dividend
y - the divisor
the smallest (closest to negative infinity) int value that is greater than or equal to the algebraic quotient.
ArithmeticException - if the divisor y is zero, or the dividend x is Integer.MIN_VALUE and the divisor y is -1.
See Also:
□ ceilDivExact
public static long ceilDivExact(long x, long y)
Returns the smallest (closest to negative infinity) long value that is greater than or equal to the algebraic quotient. This method is identical to ceilDiv(long, long) except that it throws an ArithmeticException when the dividend is Long.MIN_VALUE and the divisor is -1 instead of ignoring the integer overflow and returning Long.MIN_VALUE.
The ceil modulus method ceilMod(long,long) is a suitable counterpart both for this method and for the ceilDiv(long,long) method.
For examples, see Math.ceilDiv.
x - the dividend
y - the divisor
the smallest (closest to negative infinity) long value that is greater than or equal to the algebraic quotient.
ArithmeticException - if the divisor y is zero, or the dividend x is Long.MIN_VALUE and the divisor y is -1.
See Also:
□ incrementExact
public static int incrementExact(int a)
Returns the argument incremented by one, throwing an exception if the result overflows an int. The overflow only occurs for the maximum value, Integer.MAX_VALUE.
a - the value to increment
the result
ArithmeticException - if the result overflows an int
See Also:
□ incrementExact
public static long incrementExact(long a)
Returns the argument incremented by one, throwing an exception if the result overflows a long. The overflow only occurs for the maximum value, Long.MAX_VALUE.
a - the value to increment
the result
ArithmeticException - if the result overflows a long
See Also:
□ decrementExact
public static int decrementExact(int a)
Returns the argument decremented by one, throwing an exception if the result overflows an int. The overflow only occurs for the minimum value, Integer.MIN_VALUE.
a - the value to decrement
the result
ArithmeticException - if the result overflows an int
See Also:
□ decrementExact
public static long decrementExact(long a)
Returns the argument decremented by one, throwing an exception if the result overflows a long. The overflow only occurs for the minimum value, Long.MIN_VALUE.
a - the value to decrement
the result
ArithmeticException - if the result overflows a long
See Also:
□ negateExact
public static int negateExact(int a)
Returns the negation of the argument, throwing an exception if the result overflows an int. The overflow only occurs for the minimum value, Integer.MIN_VALUE.
a - the value to negate
the result
ArithmeticException - if the result overflows an int
See Also:
□ negateExact
public static long negateExact(long a)
Returns the negation of the argument, throwing an exception if the result overflows a long. The overflow only occurs for the minimum value, Long.MIN_VALUE.
a - the value to negate
the result
ArithmeticException - if the result overflows a long
See Also:
□ toIntExact
public static int toIntExact(long value)
Returns the value of the long argument, throwing an exception if the value overflows an int.
value - the long value
the argument as an int
ArithmeticException - if the argument overflows an int
See Also:
□ multiplyFull
public static long multiplyFull(int x, int y)
Returns the exact mathematical product of the arguments.
x - the first value
y - the second value
the result
See Also:
□ multiplyHigh
public static long multiplyHigh(long x, long y)
Returns as a long the most significant 64 bits of the 128-bit product of two 64-bit factors.
x - the first value
y - the second value
the result
See Also:
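Combined with the ordinary * operator, which yields the low 64 bits, multiplyHigh recovers the full 128-bit product; an illustrative sketch (the operand values are arbitrary):

    long x = 1_000_000_007L, y = 998_244_353L;
    long hi = StrictMath.multiplyHigh(x, y); // most significant 64 bits of the product
    long lo = x * y;                         // least significant 64 bits (wrapping multiply)
    // Together (hi, lo) represent the exact 128-bit signed product of x and y.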
□ unsignedMultiplyHigh
public static long unsignedMultiplyHigh(long x, long y)
Returns as a long the most significant 64 bits of the unsigned 128-bit product of two unsigned 64-bit factors.
x - the first value
y - the second value
the result
See Also:
□ floorDiv
public static int floorDiv(int x, int y)
Returns the largest (closest to positive infinity) int value that is less than or equal to the algebraic quotient. There is one special case: if the dividend is Integer.MIN_VALUE and the divisor is -1, then integer overflow occurs and the result is equal to Integer.MIN_VALUE.
See Math.floorDiv for examples and a comparison to the integer division / operator.
x - the dividend
y - the divisor
the largest (closest to positive infinity) int value that is less than or equal to the algebraic quotient.
ArithmeticException - if the divisor y is zero
See Also:
□ floorDiv
public static long floorDiv(long x, int y)
Returns the largest (closest to positive infinity) long value that is less than or equal to the algebraic quotient. There is one special case: if the dividend is Long.MIN_VALUE and the divisor is -1, then integer overflow occurs and the result is equal to Long.MIN_VALUE.
See Math.floorDiv for examples and a comparison to the integer division / operator.
x - the dividend
y - the divisor
the largest (closest to positive infinity) long value that is less than or equal to the algebraic quotient.
ArithmeticException - if the divisor y is zero
See Also:
□ floorDiv
public static long floorDiv(long x, long y)
Returns the largest (closest to positive infinity) long value that is less than or equal to the algebraic quotient. There is one special case: if the dividend is Long.MIN_VALUE and the divisor is -1, then integer overflow occurs and the result is equal to Long.MIN_VALUE.
See Math.floorDiv for examples and a comparison to the integer division / operator.
x - the dividend
y - the divisor
the largest (closest to positive infinity) long value that is less than or equal to the algebraic quotient.
ArithmeticException - if the divisor y is zero
See Also:
□ floorMod
public static int floorMod(int x, int y)
Returns the floor modulus of the int arguments.
The floor modulus is r = x - (floorDiv(x, y) * y), has the same sign as the divisor y or is zero, and is in the range of -abs(y) < r < +abs(y).
The relationship between floorDiv and floorMod is such that:
☆ floorDiv(x, y) * y + floorMod(x, y) == x
See Math.floorMod for examples and a comparison to the % operator.
x - the dividend
y - the divisor
the floor modulus x - (floorDiv(x, y) * y)
ArithmeticException - if the divisor y is zero
See Also:
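A compact illustrative comparison with the truncating / and % operators:

    System.out.println(-7 / 2);                      // -3, truncated toward zero
    System.out.println(StrictMath.floorDiv(-7, 2));  // -4, rounded toward negative infinity
    System.out.println(-7 % 2);                      // -1, sign follows the dividend
    System.out.println(StrictMath.floorMod(-7, 2));  //  1, sign follows the divisor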
□ floorMod
public static int floorMod(long x, int y)
Returns the floor modulus of the long and int arguments.
The floor modulus is r = x - (floorDiv(x, y) * y), has the same sign as the divisor y or is zero, and is in the range of -abs(y) < r < +abs(y).
The relationship between floorDiv and floorMod is such that:
☆ floorDiv(x, y) * y + floorMod(x, y) == x
See Math.floorMod for examples and a comparison to the % operator.
x - the dividend
y - the divisor
the floor modulus x - (floorDiv(x, y) * y)
ArithmeticException - if the divisor y is zero
See Also:
□ floorMod
public static long floorMod(long x, long y)
Returns the floor modulus of the long arguments.
The floor modulus is r = x - (floorDiv(x, y) * y), has the same sign as the divisor y or is zero, and is in the range of -abs(y) < r < +abs(y).
The relationship between floorDiv and floorMod is such that:
☆ floorDiv(x, y) * y + floorMod(x, y) == x
See Math.floorMod for examples and a comparison to the % operator.
x - the dividend
y - the divisor
the floor modulus x - (floorDiv(x, y) * y)
ArithmeticException - if the divisor y is zero
See Also:
□ ceilDiv
public static int ceilDiv(int x, int y)
Returns the smallest (closest to negative infinity) int value that is greater than or equal to the algebraic quotient. There is one special case: if the dividend is Integer.MIN_VALUE and the divisor is -1, then integer overflow occurs and the result is equal to Integer.MIN_VALUE.
See Math.ceilDiv for examples and a comparison to the integer division / operator.
x - the dividend
y - the divisor
the smallest (closest to negative infinity) int value that is greater than or equal to the algebraic quotient.
ArithmeticException - if the divisor y is zero
See Also:
□ ceilDiv
public static long ceilDiv(long x, int y)
Returns the smallest (closest to negative infinity) long value that is greater than or equal to the algebraic quotient. There is one special case: if the dividend is Long.MIN_VALUE and the divisor is -1, then integer overflow occurs and the result is equal to Long.MIN_VALUE.
See Math.ceilDiv for examples and a comparison to the integer division / operator.
x - the dividend
y - the divisor
the smallest (closest to negative infinity) long value that is greater than or equal to the algebraic quotient.
ArithmeticException - if the divisor y is zero
See Also:
□ ceilDiv
public static long ceilDiv(long x, long y)
Returns the smallest (closest to negative infinity) long value that is greater than or equal to the algebraic quotient. There is one special case: if the dividend is Long.MIN_VALUE and the divisor is -1, then integer overflow occurs and the result is equal to Long.MIN_VALUE.
See Math.ceilDiv for examples and a comparison to the integer division / operator.
x - the dividend
y - the divisor
the smallest (closest to negative infinity) long value that is greater than or equal to the algebraic quotient.
ArithmeticException - if the divisor y is zero
See Also:
□ ceilMod
public static int ceilMod(int x, int y)
Returns the ceiling modulus of the int arguments.
The ceiling modulus is r = x - (ceilDiv(x, y) * y), has the opposite sign as the divisor y or is zero, and is in the range of -abs(y) < r < +abs(y).
The relationship between ceilDiv and ceilMod is such that:
☆ ceilDiv(x, y) * y + ceilMod(x, y) == x
See Math.ceilMod for examples and a comparison to the % operator.
x - the dividend
y - the divisor
the ceiling modulus x - (ceilDiv(x, y) * y)
ArithmeticException - if the divisor y is zero
See Also:
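The mirror-image illustration for the ceiling family (ceilDiv and ceilMod are available since Java 18):

    System.out.println(7 / 2);                     // 3, truncated toward zero
    System.out.println(StrictMath.ceilDiv(7, 2));  // 4, rounded toward positive infinity
    System.out.println(StrictMath.ceilMod(7, 2));  // -1 = 7 - 4 * 2; opposite sign to the divisor, or zero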
□ ceilMod
public static int ceilMod(long x, int y)
Returns the ceiling modulus of the long and int arguments.
The ceiling modulus is r = x - (ceilDiv(x, y) * y), has the opposite sign as the divisor y or is zero, and is in the range of -abs(y) < r < +abs(y).
The relationship between ceilDiv and ceilMod is such that:
☆ ceilDiv(x, y) * y + ceilMod(x, y) == x
See Math.ceilMod for examples and a comparison to the % operator.
x - the dividend
y - the divisor
the ceiling modulus x - (ceilDiv(x, y) * y)
ArithmeticException - if the divisor y is zero
See Also:
□ ceilMod
public static long ceilMod(long x, long y)
Returns the ceiling modulus of the long arguments.
The ceiling modulus is r = x - (ceilDiv(x, y) * y), has the opposite sign as the divisor y or is zero, and is in the range of -abs(y) < r < +abs(y).
The relationship between ceilDiv and ceilMod is such that:
☆ ceilDiv(x, y) * y + ceilMod(x, y) == x
See Math.ceilMod for examples and a comparison to the % operator.
x - the dividend
y - the divisor
the ceiling modulus x - (ceilDiv(x, y) * y)
ArithmeticException - if the divisor y is zero
See Also:
□ abs
public static int abs(int a)
Returns the absolute value of an int value. If the argument is not negative, the argument is returned. If the argument is negative, the negation of the argument is returned.
Note that if the argument is equal to the value of Integer.MIN_VALUE, the most negative representable int value, the result is that same value, which is negative. In contrast, the absExact
(int) method throws an ArithmeticException for this value.
a - the argument whose absolute value is to be determined.
the absolute value of the argument.
See Also:
□ absExact
public static int absExact(int a)
Returns the mathematical absolute value of an int value if it is exactly representable as an int, throwing ArithmeticException if the result overflows the positive int range.
Since the range of two's complement integers is asymmetric with one additional negative value (JLS 4.2.1), the mathematical absolute value of Integer.MIN_VALUE overflows the positive int
range, so an exception is thrown for that argument.
a - the argument whose absolute value is to be determined
the absolute value of the argument, unless overflow occurs
ArithmeticException - if the argument is Integer.MIN_VALUE
See Also:
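The asymmetry, made concrete (absExact is available since Java 15):

    System.out.println(StrictMath.abs(Integer.MIN_VALUE)); // -2147483648, still negative
    try {
        StrictMath.absExact(Integer.MIN_VALUE);
    } catch (ArithmeticException e) {
        System.out.println("no positive int can represent |Integer.MIN_VALUE|");
    }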
□ abs
public static long abs(long a)
Returns the absolute value of a long value. If the argument is not negative, the argument is returned. If the argument is negative, the negation of the argument is returned.
Note that if the argument is equal to the value of Long.MIN_VALUE, the most negative representable long value, the result is that same value, which is negative. In contrast, the absExact
(long) method throws an ArithmeticException for this value.
a - the argument whose absolute value is to be determined.
the absolute value of the argument.
See Also:
□ absExact
public static long absExact(long a)
Returns the mathematical absolute value of a long value if it is exactly representable as a long, throwing ArithmeticException if the result overflows the positive long range.
Since the range of two's complement integers is asymmetric with one additional negative value (JLS 4.2.1), the mathematical absolute value of Long.MIN_VALUE overflows the positive long range,
so an exception is thrown for that argument.
a - the argument whose absolute value is to be determined
the absolute value of the argument, unless overflow occurs
ArithmeticException - if the argument is Long.MIN_VALUE
See Also:
□ abs
public static float abs(float a)
Returns the absolute value of a float value. If the argument is not negative, the argument is returned. If the argument is negative, the negation of the argument is returned. Special cases:
☆ If the argument is positive zero or negative zero, the result is positive zero.
☆ If the argument is infinite, the result is positive infinity.
☆ If the argument is NaN, the result is NaN.
API Note:
As implied by the above, one valid implementation of this method is given by the expression below which computes a float with the same exponent and significand as the argument but with a
guaranteed zero sign bit indicating a positive value:
Float.intBitsToFloat(0x7fffffff & Float.floatToRawIntBits(a))
a - the argument whose absolute value is to be determined
the absolute value of the argument.
□ abs
public static double abs(double a)
Returns the absolute value of a double value. If the argument is not negative, the argument is returned. If the argument is negative, the negation of the argument is returned. Special cases:
☆ If the argument is positive zero or negative zero, the result is positive zero.
☆ If the argument is infinite, the result is positive infinity.
☆ If the argument is NaN, the result is NaN.
API Note:
As implied by the above, one valid implementation of this method is given by the expression below which computes a double with the same exponent and significand as the argument but with a guaranteed zero sign bit indicating a positive value:
Double.longBitsToDouble((Double.doubleToRawLongBits(a)<<1)>>>1)
a - the argument whose absolute value is to be determined
the absolute value of the argument.
□ max
public static int max(int a, int b)
Returns the greater of two int values. That is, the result is the argument closer to the value of Integer.MAX_VALUE. If the arguments have the same value, the result is that same value.
a - an argument.
b - another argument.
the larger of a and b.
□ max
public static long max(long a, long b)
Returns the greater of two long values. That is, the result is the argument closer to the value of Long.MAX_VALUE. If the arguments have the same value, the result is that same value.
a - an argument.
b - another argument.
the larger of a and b.
□ max
public static float max(float a, float b)
Returns the greater of two float values. That is, the result is the argument closer to positive infinity. If the arguments have the same value, the result is that same value. If either value
is NaN, then the result is NaN. Unlike the numerical comparison operators, this method considers negative zero to be strictly smaller than positive zero. If one argument is positive zero and
the other negative zero, the result is positive zero.
a - an argument.
b - another argument.
the larger of a and b.
□ max
public static double max(double a, double b)
Returns the greater of two double values. That is, the result is the argument closer to positive infinity. If the arguments have the same value, the result is that same value. If either value
is NaN, then the result is NaN. Unlike the numerical comparison operators, this method considers negative zero to be strictly smaller than positive zero. If one argument is positive zero and
the other negative zero, the result is positive zero.
a - an argument.
b - another argument.
the larger of a and b.
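To see the negative-zero rule in action, here is a brief sketch (the class name is invented; it relies only on the documented semantics above). Dividing by the result exposes the sign of a zero:

public class MaxZeroDemo {
    public static void main(String[] args) {
        double neg = -0.0, pos = 0.0;
        System.out.println(neg == pos);                     // true: numerically equal
        System.out.println(1.0 / StrictMath.max(neg, pos)); // Infinity, so max is +0.0
        System.out.println(1.0 / StrictMath.min(neg, pos)); // -Infinity, so min is -0.0
    }
}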
□ min
public static int min(int a, int b)
Returns the smaller of two int values. That is, the result is the argument closer to the value of Integer.MIN_VALUE. If the arguments have the same value, the result is that same value.
a - an argument.
b - another argument.
the smaller of a and b.
□ min
public static long min(long a, long b)
Returns the smaller of two long values. That is, the result is the argument closer to the value of Long.MIN_VALUE. If the arguments have the same value, the result is that same value.
a - an argument.
b - another argument.
the smaller of a and b.
□ min
public static float min(float a, float b)
Returns the smaller of two float values. That is, the result is the value closer to negative infinity. If the arguments have the same value, the result is that same value. If either value is
NaN, then the result is NaN. Unlike the numerical comparison operators, this method considers negative zero to be strictly smaller than positive zero. If one argument is positive zero and the
other is negative zero, the result is negative zero.
a - an argument.
b - another argument.
the smaller of a and b.
□ min
public static double min(double a, double b)
Returns the smaller of two double values. That is, the result is the value closer to negative infinity. If the arguments have the same value, the result is that same value. If either value is
NaN, then the result is NaN. Unlike the numerical comparison operators, this method considers negative zero to be strictly smaller than positive zero. If one argument is positive zero and the
other is negative zero, the result is negative zero.
a - an argument.
b - another argument.
the smaller of a and b.
□ clamp
public static int clamp(long value, int min, int max)
Clamps the value to fit between min and max. If the value is less than min, then min is returned. If the value is greater than max, then max is returned. Otherwise, the original value is returned.
While the original value of type long may not fit into the int type, the bounds have the int type, so the result always fits the int type. This allows the method to be used to safely cast a long value to an int with saturation.
value - value to clamp
min - minimal allowed value
max - maximal allowed value
a clamped value that fits into min..max interval
IllegalArgumentException - if min > max
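As a sketch of the saturating-cast use mentioned above (this assumes a JDK of version 21 or later, where clamp was added; the class name is invented):

public class ClampCastDemo {
    public static void main(String[] args) {
        long big = 5_000_000_000L;               // does not fit in an int
        int saturated = StrictMath.clamp(big, Integer.MIN_VALUE, Integer.MAX_VALUE);
        System.out.println(saturated);           // 2147483647 (saturates at the bound)
        System.out.println((int) big);           // 705032704 (a plain cast wraps around)
        System.out.println(StrictMath.clamp(-7L, 0, 100)); // 0
    }
}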
□ clamp
public static long clamp(long value, long min, long max)
Clamps the value to fit between min and max. If the value is less than min, then min is returned. If the value is greater than max, then max is returned. Otherwise, the original value is returned.
value - value to clamp
min - minimal allowed value
max - maximal allowed value
a clamped value that fits into min..max interval
IllegalArgumentException - if min > max
□ clamp
public static double clamp(double value, double min, double max)
Clamps the value to fit between min and max. If the value is less than min, then min is returned. If the value is greater than max, then max is returned. Otherwise, the original value is returned. If value is NaN, the result is also NaN.
Unlike the numerical comparison operators, this method considers negative zero to be strictly smaller than positive zero. E.g., clamp(-0.0, 0.0, 1.0) returns 0.0.
value - value to clamp
min - minimal allowed value
max - maximal allowed value
a clamped value that fits into min..max interval
IllegalArgumentException - if either of min and max arguments is NaN, or min > max, or min is +0.0, and max is -0.0.
□ clamp
public static float clamp(float value, float min, float max)
Clamps the value to fit between min and max. If the value is less than min, then min is returned. If the value is greater than max, then max is returned. Otherwise, the original value is returned. If value is NaN, the result is also NaN.
Unlike the numerical comparison operators, this method considers negative zero to be strictly smaller than positive zero. E.g., clamp(-0.0f, 0.0f, 1.0f) returns 0.0f.
value - value to clamp
min - minimal allowed value
max - maximal allowed value
a clamped value that fits into min..max interval
IllegalArgumentException - if either of min and max arguments is NaN, or min > max, or min is +0.0f, and max is -0.0f.
□ fma
public static double fma(double a, double b, double c)
Returns the fused multiply add of the three arguments; that is, returns the exact product of the first two arguments summed with the third argument and then rounded once to the nearest double. The rounding is done using the round to nearest even rounding mode. In contrast, if a * b + c is evaluated as a regular floating-point expression, two rounding errors are involved, the first for the multiply operation, the second for the addition operation.
Special cases:
☆ If any argument is NaN, the result is NaN.
☆ If one of the first two arguments is infinite and the other is zero, the result is NaN.
☆ If the exact product of the first two arguments is infinite (in other words, at least one of the arguments is infinite and the other is neither zero nor NaN) and the third argument is an
infinity of the opposite sign, the result is NaN.
Note that fma(a, 1.0, c) returns the same result as (a + c). However, fma(a, b, +0.0) does not always return the same result as (a * b) since fma(-0.0, +0.0, +0.0) is +0.0 while (-0.0 * +0.0) is -0.0; fma(a, b, -0.0) is equivalent to (a * b) however.
API Note:
This method corresponds to the fusedMultiplyAdd operation defined in IEEE 754-2008.
a - a value
b - a value
c - a value
(a × b + c) computed, as if with unlimited range and precision, and rounded once to the nearest double value
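Both the signed-zero caveat and the single rounding can be observed directly. The sketch below is illustrative (the class name is invented); the residual trick in the second half is a standard application of fma, not something stated by this documentation:

public class FmaDemo {
    public static void main(String[] args) {
        System.out.println(StrictMath.fma(-0.0, +0.0, +0.0)); // 0.0
        System.out.println(-0.0 * +0.0);                      // -0.0

        // fma(x, x, -rounded) recovers the rounding error of x * x,
        // because the product inside fma is formed exactly before rounding.
        double x = 1.0 + StrictMath.ulp(1.0);   // 1 + 2^-52
        double rounded = x * x;                 // the 2^-104 term is rounded away
        System.out.println(StrictMath.fma(x, x, -rounded)); // 2^-104, not 0.0
    }
}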
□ fma
public static float fma(float a, float b, float c)
Returns the fused multiply add of the three arguments; that is, returns the exact product of the first two arguments summed with the third argument and then rounded once to the nearest float. The rounding is done using the round to nearest even rounding mode. In contrast, if a * b + c is evaluated as a regular floating-point expression, two rounding errors are involved, the first for the multiply operation, the second for the addition operation.
Special cases:
☆ If any argument is NaN, the result is NaN.
☆ If one of the first two arguments is infinite and the other is zero, the result is NaN.
☆ If the exact product of the first two arguments is infinite (in other words, at least one of the arguments is infinite and the other is neither zero nor NaN) and the third argument is an
infinity of the opposite sign, the result is NaN.
Note that fma(a, 1.0f, c) returns the same result as (a + c). However, fma(a, b, +0.0f) does not always return the same result as (a * b) since fma(-0.0f, +0.0f, +0.0f) is +0.0f while (-0.0f
* +0.0f) is -0.0f; fma(a, b, -0.0f) is equivalent to (a * b) however.
API Note:
This method corresponds to the fusedMultiplyAdd operation defined in IEEE 754-2008.
a - a value
b - a value
c - a value
(a × b + c) computed, as if with unlimited range and precision, and rounded once to the nearest float value
□ ulp
public static double ulp(double d)
Returns the size of an ulp of the argument. An ulp, unit in the last place, of a double value is the positive distance between this floating-point value and the double value next larger in magnitude. Note that for non-NaN x, ulp(-x) == ulp(x).
Special Cases:
☆ If the argument is NaN, then the result is NaN.
☆ If the argument is positive or negative infinity, then the result is positive infinity.
☆ If the argument is positive or negative zero, then the result is Double.MIN_VALUE.
☆ If the argument is ±Double.MAX_VALUE, then the result is equal to 2^971.
d - the floating-point value whose ulp is to be returned
the size of an ulp of the argument
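A few spot checks of these rules (an illustrative sketch; the class name is invented):

public class UlpDemo {
    public static void main(String[] args) {
        System.out.println(StrictMath.ulp(1.0));  // 2.220446049250313E-16 = 2^-52
        System.out.println(StrictMath.ulp(-1.0) == StrictMath.ulp(1.0)); // true
        System.out.println(StrictMath.ulp(0.0) == Double.MIN_VALUE);     // true
        // The spacing grows with magnitude: at Double.MAX_VALUE an ulp is 2^971.
        System.out.println(StrictMath.ulp(Double.MAX_VALUE)
                == StrictMath.scalb(1.0, 971));                          // true
    }
}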
□ ulp
public static float ulp(float f)
Returns the size of an ulp of the argument. An ulp, unit in the last place, of a float value is the positive distance between this floating-point value and the float value next larger in magnitude. Note that for non-NaN x, ulp(-x) == ulp(x).
Special Cases:
☆ If the argument is NaN, then the result is NaN.
☆ If the argument is positive or negative infinity, then the result is positive infinity.
☆ If the argument is positive or negative zero, then the result is Float.MIN_VALUE.
☆ If the argument is ±Float.MAX_VALUE, then the result is equal to 2^104.
f - the floating-point value whose ulp is to be returned
the size of an ulp of the argument
□ signum
public static double signum(double d)
Returns the signum function of the argument; zero if the argument is zero, 1.0 if the argument is greater than zero, -1.0 if the argument is less than zero.
Special Cases:
☆ If the argument is NaN, then the result is NaN.
☆ If the argument is positive zero or negative zero, then the result is the same as the argument.
d - the floating-point value whose signum is to be returned
the signum function of the argument
□ signum
public static float signum(float f)
Returns the signum function of the argument; zero if the argument is zero, 1.0f if the argument is greater than zero, -1.0f if the argument is less than zero.
Special Cases:
☆ If the argument is NaN, then the result is NaN.
☆ If the argument is positive zero or negative zero, then the result is the same as the argument.
f - the floating-point value whose signum is to be returned
the signum function of the argument
□ sinh
public static double sinh(double x)
Returns the hyperbolic sine of a double value. The hyperbolic sine of x is defined to be (e^x - e^-x)/2 where e is Euler's number.
Special cases:
☆ If the argument is NaN, then the result is NaN.
☆ If the argument is infinite, then the result is an infinity with the same sign as the argument.
☆ If the argument is zero, then the result is a zero with the same sign as the argument.
x - The number whose hyperbolic sine is to be returned.
The hyperbolic sine of x.
□ cosh
public static double cosh(double x)
Returns the hyperbolic cosine of a double value. The hyperbolic cosine of x is defined to be (e^x + e^-x)/2 where e is Euler's number.
Special cases:
☆ If the argument is NaN, then the result is NaN.
☆ If the argument is infinite, then the result is positive infinity.
☆ If the argument is zero, then the result is 1.0.
x - The number whose hyperbolic cosine is to be returned.
The hyperbolic cosine of x.
□ tanh
public static double tanh(double x)
Returns the hyperbolic tangent of a double value. The hyperbolic tangent of x is defined to be (e^x - e^-x)/(e^x + e^-x), in other words, sinh(x)/cosh(x). Note that the absolute value of the exact tanh is always less than 1.
Special cases:
☆ If the argument is NaN, then the result is NaN.
☆ If the argument is zero, then the result is a zero with the same sign as the argument.
☆ If the argument is positive infinity, then the result is +1.0.
☆ If the argument is negative infinity, then the result is -1.0.
x - The number whose hyperbolic tangent is to be returned.
The hyperbolic tangent of x.
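The three definitions are easy to verify numerically at moderate arguments. A small sketch (the class name is invented; for large |x| both sides overflow, so this is only a sanity check):

public class HyperbolicDemo {
    public static void main(String[] args) {
        double x = 0.75;
        double ex = StrictMath.exp(x), emx = StrictMath.exp(-x);
        System.out.println(StrictMath.sinh(x) + " vs " + (ex - emx) / 2);
        System.out.println(StrictMath.cosh(x) + " vs " + (ex + emx) / 2);
        System.out.println(StrictMath.tanh(x) + " vs " + (ex - emx) / (ex + emx));
        System.out.println(StrictMath.tanh(Double.POSITIVE_INFINITY)); // 1.0
    }
}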
□ hypot
public static double hypot(double x, double y)
Returns sqrt(x^2 + y^2) without intermediate overflow or underflow.
Special cases:
☆ If either argument is infinite, then the result is positive infinity.
☆ If either argument is NaN and neither argument is infinite, then the result is NaN.
☆ If both arguments are zero, the result is positive zero.
x - a value
y - a value
sqrt(x^2 +y^2) without intermediate overflow or underflow
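The phrase "without intermediate overflow" is the whole point of hypot: a naive translation of the formula fails where hypot succeeds. An illustrative sketch (the class name is invented):

public class HypotDemo {
    public static void main(String[] args) {
        double x = 3e200, y = 4e200;
        System.out.println(StrictMath.sqrt(x * x + y * y)); // Infinity: x*x overflows
        System.out.println(StrictMath.hypot(x, y));         // 5.0E200
    }
}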
□ expm1
public static double expm1(double x)
Returns e^x - 1. Note that for values of x near 0, the exact sum of expm1(x) + 1 is much closer to the true result of e^x than exp(x).
Special cases:
☆ If the argument is NaN, the result is NaN.
☆ If the argument is positive infinity, then the result is positive infinity.
☆ If the argument is negative infinity, then the result is -1.0.
☆ If the argument is zero, then the result is a zero with the same sign as the argument.
x - the exponent to raise e to in the computation of e^x -1.
the value e^x - 1.
□ log1p
public static double log1p(double x)
Returns the natural logarithm of the sum of the argument and 1. Note that for small values x, the result of log1p(x) is much closer to the true result of ln(1 + x) than the floating-point evaluation of log(1.0 + x).
Special cases:
☆ If the argument is NaN or less than -1, then the result is NaN.
☆ If the argument is positive infinity, then the result is positive infinity.
☆ If the argument is negative one, then the result is negative infinity.
☆ If the argument is zero, then the result is a zero with the same sign as the argument.
x - a value
the value ln(x + 1), the natural log of x + 1
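The accuracy advantage of expm1 and log1p near zero is easy to demonstrate; in the naive forms, most significant digits are lost to cancellation. A sketch (the class name is invented; printed values are approximate):

public class Expm1Log1pDemo {
    public static void main(String[] args) {
        double x = 1e-10;
        System.out.println(StrictMath.exp(x) - 1.0); // ~1.00000008e-10: noise after 8 digits
        System.out.println(StrictMath.log(1.0 + x)); // similarly inaccurate
        System.out.println(StrictMath.expm1(x));     // ~1.00000000005e-10: full precision
        System.out.println(StrictMath.log1p(x));     // ~9.9999999995e-11: full precision
    }
}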
□ copySign
public static double copySign(double magnitude, double sign)
Returns the first floating-point argument with the sign of the second floating-point argument. For this method, a NaN sign argument is always treated as if it were positive.
magnitude - the parameter providing the magnitude of the result
sign - the parameter providing the sign of the result
a value with the magnitude of magnitude and the sign of sign.
□ copySign
public static float copySign(float magnitude, float sign)
Returns the first floating-point argument with the sign of the second floating-point argument. For this method, a NaN sign argument is always treated as if it were positive.
magnitude - the parameter providing the magnitude of the result
sign - the parameter providing the sign of the result
a value with the magnitude of magnitude and the sign of sign.
□ getExponent
public static int getExponent(float f)
Returns the unbiased exponent used in the representation of a float. Special cases:
☆ If the argument is NaN or infinite, then the result is Float.MAX_EXPONENT + 1.
☆ If the argument is zero or subnormal, then the result is Float.MIN_EXPONENT -1.
f - a float value
the unbiased exponent of the argument
□ getExponent
public static int getExponent(double d)
Returns the unbiased exponent used in the representation of a double. Special cases:
☆ If the argument is NaN or infinite, then the result is Double.MAX_EXPONENT + 1.
☆ If the argument is zero or subnormal, then the result is Double.MIN_EXPONENT -1.
d - a double value
the unbiased exponent of the argument
□ nextAfter
public static double nextAfter(double start, double direction)
Returns the floating-point number adjacent to the first argument in the direction of the second argument. If both arguments compare as equal the second argument is returned.
Special cases:
☆ If either argument is a NaN, then NaN is returned.
☆ If both arguments are signed zeros, direction is returned unchanged (as implied by the requirement of returning the second argument if the arguments compare as equal).
☆ If start is ±Double.MIN_VALUE and direction has a value such that the result should have a smaller magnitude, then a zero with the same sign as start is returned.
☆ If start is infinite and direction has a value such that the result should have a smaller magnitude, Double.MAX_VALUE with the same sign as start is returned.
☆ If start is equal to ± Double.MAX_VALUE and direction has a value such that the result should have a larger magnitude, an infinity with same sign as start is returned.
start - starting floating-point value
direction - value indicating which of start's neighbors or start should be returned
The floating-point number adjacent to start in the direction of direction.
□ nextAfter
public static float nextAfter(float start, double direction)
Returns the floating-point number adjacent to the first argument in the direction of the second argument. If both arguments compare as equal, a value equivalent to the second argument is returned.
Special cases:
☆ If either argument is a NaN, then NaN is returned.
☆ If both arguments are signed zeros, a value equivalent to direction is returned.
☆ If start is ±Float.MIN_VALUE and direction has a value such that the result should have a smaller magnitude, then a zero with the same sign as start is returned.
☆ If start is infinite and direction has a value such that the result should have a smaller magnitude, Float.MAX_VALUE with the same sign as start is returned.
☆ If start is equal to ± Float.MAX_VALUE and direction has a value such that the result should have a larger magnitude, an infinity with same sign as start is returned.
start - starting floating-point value
direction - value indicating which of start's neighbors or start should be returned
The floating-point number adjacent to start in the direction of direction.
□ nextUp
public static double nextUp(double d)
Returns the floating-point value adjacent to d in the direction of positive infinity. This method is semantically equivalent to nextAfter(d, Double.POSITIVE_INFINITY); however, a nextUp implementation may run faster than its equivalent nextAfter call.
Special Cases:
☆ If the argument is NaN, the result is NaN.
☆ If the argument is positive infinity, the result is positive infinity.
☆ If the argument is zero, the result is Double.MIN_VALUE
d - starting floating-point value
The adjacent floating-point value closer to positive infinity.
□ nextUp
public static float nextUp(float f)
Returns the floating-point value adjacent to f in the direction of positive infinity. This method is semantically equivalent to nextAfter(f, Float.POSITIVE_INFINITY); however, a nextUp implementation may run faster than its equivalent nextAfter call.
Special Cases:
☆ If the argument is NaN, the result is NaN.
☆ If the argument is positive infinity, the result is positive infinity.
☆ If the argument is zero, the result is Float.MIN_VALUE
f - starting floating-point value
The adjacent floating-point value closer to positive infinity.
□ nextDown
public static double nextDown(double d)
Returns the floating-point value adjacent to d in the direction of negative infinity. This method is semantically equivalent to nextAfter(d, Double.NEGATIVE_INFINITY); however, a nextDown implementation may run faster than its equivalent nextAfter call.
Special Cases:
☆ If the argument is NaN, the result is NaN.
☆ If the argument is negative infinity, the result is negative infinity.
☆ If the argument is zero, the result is -Double.MIN_VALUE
d - starting floating-point value
The adjacent floating-point value closer to negative infinity.
□ nextDown
public static float nextDown(float f)
Returns the floating-point value adjacent to f in the direction of negative infinity. This method is semantically equivalent to nextAfter(f, Float.NEGATIVE_INFINITY); however, a nextDown implementation may run faster than its equivalent nextAfter call.
Special Cases:
☆ If the argument is NaN, the result is NaN.
☆ If the argument is negative infinity, the result is negative infinity.
☆ If the argument is zero, the result is -Float.MIN_VALUE
f - starting floating-point value
The adjacent floating-point value closer to negative infinity.
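The equivalences claimed above, and the way floating-point spacing halves at a power of two, can be checked directly (an illustrative sketch; the class name is invented):

public class AdjacentDemo {
    public static void main(String[] args) {
        double d = 1.0;
        double up = StrictMath.nextUp(d), down = StrictMath.nextDown(d);
        System.out.println(up - d);   // 2.220446049250313E-16 = 2^-52, one ulp of 1.0
        System.out.println(d - down); // 1.1102230246251565E-16 = 2^-53: spacing halves below 1.0
        System.out.println(up == StrictMath.nextAfter(d, Double.POSITIVE_INFINITY));   // true
        System.out.println(down == StrictMath.nextAfter(d, Double.NEGATIVE_INFINITY)); // true
    }
}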
□ scalb
public static double scalb(double d, int scaleFactor)
Returns d × 2^scaleFactor rounded as if performed by a single correctly rounded floating-point multiply. If the exponent of the result is between Double.MIN_EXPONENT and Double.MAX_EXPONENT, the answer is calculated exactly. If the exponent of the result would be larger than Double.MAX_EXPONENT, an infinity is returned. Note that if the result is subnormal, precision may be lost; that is, when scalb(x, n) is subnormal, scalb(scalb(x, n), -n) may not equal x. When the result is non-NaN, the result has the same sign as d.
Special cases:
☆ If the first argument is NaN, NaN is returned.
☆ If the first argument is infinite, then an infinity of the same sign is returned.
☆ If the first argument is zero, then a zero of the same sign is returned.
d - number to be scaled by a power of two.
scaleFactor - power of 2 used to scale d
d × 2^scaleFactor
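Both the exact scaling and the subnormal precision loss mentioned above can be seen in a few lines (an illustrative sketch; the class name is invented):

public class ScalbDemo {
    public static void main(String[] args) {
        System.out.println(StrictMath.scalb(3.0, 4));    // 48.0 == 3 * 2^4, exact
        System.out.println(StrictMath.scalb(1.0, 1024)); // Infinity: exponent too large

        // When the result is subnormal, low-order bits are shed and
        // scaling back does not recover the original value.
        double x = 1.0 + StrictMath.ulp(1.0);            // needs all 53 significant bits
        double tiny = StrictMath.scalb(x, -1074);        // subnormal
        System.out.println(StrictMath.scalb(tiny, 1074) == x); // false
    }
}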
□ scalb
public static float scalb(float f, int scaleFactor)
Returns f × 2^scaleFactor rounded as if performed by a single correctly rounded floating-point multiply. If the exponent of the result is between Float.MIN_EXPONENT and Float.MAX_EXPONENT, the answer is calculated exactly. If the exponent of the result would be larger than Float.MAX_EXPONENT, an infinity is returned. Note that if the result is subnormal, precision may be lost; that is, when scalb(x, n) is subnormal, scalb(scalb(x, n), -n) may not equal x. When the result is non-NaN, the result has the same sign as f.
Special cases:
☆ If the first argument is NaN, NaN is returned.
☆ If the first argument is infinite, then an infinity of the same sign is returned.
☆ If the first argument is zero, then a zero of the same sign is returned.
f - number to be scaled by a power of two.
scaleFactor - power of 2 used to scale f
f × 2^scaleFactor | {"url":"https://docs.oracle.com/en/java/javase/23/docs/api/java.base/java/lang/StrictMath.html","timestamp":"2024-11-07T21:49:19Z","content_type":"text/html","content_length":"220113","record_id":"<urn:uuid:c712ea4d-b1ed-4583-9d95-2491dac02cee>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00286.warc.gz"} |
! ! ! Urgent Help Need ! ! ! ! ! ! ! !
In OptionStation 2000i I faced this problem:
HOW they use Volatility of ASSET to calculate Theor.value of OPTION ??????
The BS model uses the PRICE of Asset and not the VOLATILITY of asset ! ! ! ! !
please help me ! ! !
PS: IS IT EVER POSSIBLE TO CALCULATE THEOR VALUE USING VOLATILITY OF ASSET ???
#1 Aug 7, 2003
• 1
OK, since you need an answer so urgently, I'll give it a try.
The Black-Scholes option pricing model relies on both the price of the stock and its volatility. The volatility is expressed in terms of the standard deviation of returns.
Check out the following for more information:
#2 Aug 7, 2003
• 4
Here is how linnsoft.com explains volatility .
Historical Volatility reflects how far an instrument's price has deviated from its average price (mean) in the past. On a yearly basis, this number represents the one-standard-deviation % price change expected in the year ahead. In other words, if a stock is trading at 100 and has a volatility of 0.20 (20%), then there is a 68% probability (1 standard deviation = 68% probability) that the price will be in the range 80 to 120 a year from now. Similarly, there is a 95% probability that the price will be between 60 and 140 a year from now (2 standard deviations). The higher the volatility number, the more volatile the instrument.
If that is the case, how can you get option value from a stock vol value?
#3 Aug 7, 2003
• 0
Forget about it and take a few days off. If it's that
!!!!!!!! URGENT !!!!!!!!! then it's probably not going to be a good thing anyway.
#4 Aug 7, 2003
You can see at the link you gave me that the BS doesn't use VOLATILITY of Stock, just Stock PRICE!!
For example let`s look into BS formula in OptionStation2000i:
it has only THESE inputs:
Price of asset;
Strike of option;
Volatility of option;
Type of option;
ExpirationDate of option
So, NO ASSET VOLATILITY ! !
So, my question AGAIN:
How one can use asset volatility in calculating Theor.value of Option??
#5 Aug 8, 2003
"Volatility of option" - u would only need this if u were pricing an option on an option... the Vol of the underlying asset is used for pricing vanillas...
#6 Aug 8, 2003
So, Kap,
one doesn't need Asset volatility for pricing ordinary options, only exotic ones?
#7 Aug 8, 2003
You need the volatility for the underlying asset to price the option. For something more exotic like an option2 on an option1, u would need to price option1 from the underlying asset in order to
get the spot to price option2 and the Volatility of option1, but I wouldn't worry about the latter.
You must have the underlying assets volatility to price any option using BS
#8 Aug 8, 2003
• 5
Tender Andy,
In order to find the theo. value of an option you need to input some kind of "volatility" number. This volatility number can either be the underlying asset's "historical volatility", or an
"implied" or "future" volatility that you come up with. So you must either calculate the asset's volatility on your own, and plug that into the model, or come up with your own estimate of
volatility (which could be based off the asset's volatility) and then plug it into the model.
It seems to me that you are confused with the terms of "asset's volatility". There is not a total consensus on which type of volatility to use when finding the option's theo. value. Some like to
use historical volatility, and others (mostly floor traders) will use implied volatility. I've been a trader on and off the floor, and I've used both.
Bottom line, you need to input some form of volatility into the model to get a theo. value. If you don't put a volatility number in, the formula won't work (or you just won't get a valid result).
Good luck
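To make the point concrete: volatility is an explicit input alongside the stock price. Below is a minimal, illustrative Black-Scholes sketch (not code from OptionStation or from any poster here; the normal CDF uses the Abramowitz-Stegun polynomial approximation since the JDK has no built-in erf, and all class/method names are invented):

public class BlackScholesSketch {
    // Standard normal CDF via the Zelen & Severo polynomial approximation
    // (Abramowitz & Stegun 26.2.17), accurate to roughly 7.5e-8.
    static double phi(double x) {
        if (x < 0) return 1.0 - phi(-x);
        double t = 1.0 / (1.0 + 0.2316419 * x);
        double poly = t * (0.319381530 + t * (-0.356563782
                + t * (1.781477937 + t * (-1.821255978 + t * 1.330274429))));
        return 1.0 - Math.exp(-x * x / 2) / Math.sqrt(2 * Math.PI) * poly;
    }

    // European call: S = spot price, K = strike, r = risk-free rate,
    // sigma = annualized volatility of the UNDERLYING, T = years to expiry.
    static double call(double S, double K, double r, double sigma, double T) {
        double sqrtT = Math.sqrt(T);
        double d1 = (Math.log(S / K) + (r + sigma * sigma / 2) * T) / (sigma * sqrtT);
        double d2 = d1 - sigma * sqrtT;
        return S * phi(d1) - K * Math.exp(-r * T) * phi(d2);
    }

    public static void main(String[] args) {
        // Same stock price, strike, rate and expiry; only volatility differs.
        System.out.println(call(100, 100, 0.05, 0.20, 1.0)); // ~10.45
        System.out.println(call(100, 100, 0.05, 0.40, 1.0)); // ~18.02
    }
}

Same spot, same strike, same rate and expiry - only the volatility input differs, and the theoretical values differ accordingly.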
#9 Aug 8, 2003 | {"url":"https://www.elitetrader.com/et/threads/urgent-help-need.20839/","timestamp":"2024-11-04T08:03:20Z","content_type":"text/html","content_length":"50838","record_id":"<urn:uuid:eb3666a5-3fc0-46a7-8dee-07cd18a5aac4>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00771.warc.gz"} |
Aerodynamics Online Test - Sanfoundry
Aerodynamics Questions and Answers – Three Dimensional Doublet
This set of Aerodynamics online test focuses on “Three Dimensional Doublet”.
1. Are the source and sink equal in strength?
a) True
b) False
View Answer
Answer: a
Explanation: The source and sink have equal but opposite strengths and are located at points O and A, as sketched. The distance between the source and sink is l. Consider an arbitrary point P located a distance r from the sink and some distance from the source; the source and sink are equal in magnitude but opposite in strength.
2. Do the strengths of the source and sink become infinite?
a) False
b) True
View Answer
Answer: b
Explanation: As the source and the sink approach each other, their strengths become infinite, and the flow field produced by the equation is a three-dimensional doublet. The strength of the three-dimensional doublet is defined by analogy with its two-dimensional counterpart.
3. Is the three-dimensional doublet defined in terms of the strength of the doublet?
a) False
b) True
View Answer
Answer: b
Explanation: The three-dimensional doublet is defined by the strength of the doublet; compare the equation with its two-dimensional counterpart given earlier. Note that the three-dimensional effects lead to an inverse-square variation.
4. Do three-dimensional effects lead to an inverse-square variation?
a) True
b) False
View Answer
Answer: a
Explanation: The three-dimensional effects lead to an inverse-square variation, in contrast to the two-dimensional case. The streamlines of this velocity field are sketched in the figure as the streamlines in the planes.
5. Is the flow induced by the three-dimensional doublet a series of stream surfaces?
a) True
b) False
View Answer
Answer: a
Explanation: The flow induced by the three-dimensional doublet is a series of stream surfaces generated by revolving the streamlines about the r axis. Compare these streamlines with the two-dimensional case illustrated earlier; they are qualitatively different.
6. Is the flow defined as axisymmetric flow?
a) True
b) False
View Answer
Answer: a
Explanation: The flow is independent of the azimuthal angle; indeed, the equation clearly shows that the velocity field depends only on r. Such a flow is defined as axisymmetric flow. Once again, we have a flow with two independent variables.
7. Is axisymmetric flow sometimes labeled as two-dimensional flow?
a) True
b) False
View Answer
Answer: a
Explanation: Axisymmetric flow is sometimes labeled as two-dimensional flow. However, it is quite different from the two-dimensional planar flows discussed earlier. In reality, axisymmetric flow is a degenerate three-dimensional flow.
8. Does axisymmetric flow have two independent variables?
a) True
b) False
View Answer
Answer: a
Explanation: Axisymmetric flow is a degenerate three-dimensional flow, and it is somewhat misleading to refer to it as two-dimensional. Axisymmetric flow has only two independent variables, but it exhibits some of the same physical characteristics as three-dimensional flow.
9. Is the doublet a combination of a source and a sink?
a) True
b) False
View Answer
Answer: a
Explanation: The degenerate case of a source-sink pair leads to a singularity called a doublet. The doublet is frequently used in the theory of incompressible flow; the purpose of this section is to describe its properties.
10. Do the absolute magnitudes of the source and sink strengths increase as the distance between them approaches zero?
a) True
b) False
View Answer
Answer: a
Explanation: As the distance approaches zero, the absolute magnitudes of the strengths of the source and sink increase in such a fashion that their product remains constant. Through this limiting process we obtain a special flow pattern defined as a doublet.
anyone good at solving nth roots? - QuestionCove
OpenStudy (anonymous):
anyone good at solving nth roots?
OpenStudy (anonymous):
OpenStudy (inkyvoyd):
don't ask us to find the nth root of a number if it's n is a convoluted complex number ;)
OpenStudy (anonymous):
³√88, ³√(-222), -⁴√0.34, ⁵√500
OpenStudy (anonymous):
confused...draw it
OpenStudy (anonymous):
\[\sqrt[3]{88} \] \[\sqrt[3]{-222}\] \[-\sqrt[4]{0.34}\] \[\sqrt[5]{500}\]
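For anyone wanting numerical values for these, a small illustrative sketch (not from the thread; the class and method names are invented). Math.pow alone returns NaN for a negative base with a fractional exponent, so odd roots of negative numbers need their sign handled explicitly:

public class NthRoots {
    // Real n-th root, with the sign of a negative radicand
    // handled separately for odd n.
    static double nthRoot(double x, int n) {
        if (x < 0 && n % 2 == 1) return -Math.pow(-x, 1.0 / n);
        return Math.pow(x, 1.0 / n);
    }

    public static void main(String[] args) {
        System.out.println(nthRoot(88, 3));    // ~ 4.448
        System.out.println(nthRoot(-222, 3));  // ~ -6.055
        System.out.println(-nthRoot(0.34, 4)); // ~ -0.7636
        System.out.println(nthRoot(500, 5));   // ~ 3.4657
    }
}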
OpenStudy (anonymous):
@jim_thompson5910 , @satellite73 , @zepdrix help anyone?
OpenStudy (anonymous):
i dont even know what are they? i mean i used to did it earlier dont rememebr...but i can help u if u tell me the topic
OpenStudy (inkyvoyd):
@Best_Mathematician , it's de moivre's theorem. I have a bit of a headache right now so I'm letting up on the thinking - http://en.wikipedia.org/wiki/Root_of_unity is what you are looking for.
OpenStudy (anonymous):
blows my mind...haha...never took tht class sorry
OpenStudy (anonymous):
its ok its just my alg 2 homework @Best_Mathematician
OpenStudy (anonymous):
really i cant remember algebra lol...
OpenStudy (anonymous):
OpenStudy (anonymous):
oh i remember this...but tbh i gotta go and search for my question
Richard Gill, who is very fond of describing himself as a "Quantum c****pot", did not take Fred's advice and is now paying a heavy price for it.
His seriously muddled paper has been comprehensively discredited on PubPeer:
https://pubpeer.com/publications/D985B4 ... 22#fb25059
Re: What is wrong with this argument?
Zen wrote: One question: have you changed your mind about this?
Do you agree that in all simulations (nicely written, by the way; long live to R!) discussed in this forum we are either making the distribution of the hidden variable depend on the detector
settings or we are playing with the detectors efficiencies (standard detection loopholes)?
Another question: do you believe that the description of the macroscopic parts of Aspect's apparatus cannot be made using good old Euclidean space?
- No change of mind
- The simulations can be interpreted in two ways. Suppose we do N runs and we observe some "no shows", non-detections, so we only get say n < N pairs of both detected particles. There are two options:
Option 1: We can imagine that we are doing an N run experiment with ternary outcome (-1, 0, 1). Bell-CHSH is an inequality for binary outcomes. We need the CGLMP inequality for a ternary outcome
experiment, or we can use CHSH after merging two of the three outcomes to one. Either of these kinds of inequalities are what we call "generalized Bell inequalities" and these two kinds are the only
generalized Bell inequalities for a 2 party, 2 setting, 3 outcome experiment. See the section of my paper on generalized Bell inequalities http://arxiv.org/abs/1207.5103 (the final revision just came
out on arXiv today). None of the generalized Bell inequalities for a 2x2x3 experiment are violated, because the experiment satisfies "local realism" with no conspiracy loophole.
Option 2: there really are only n runs. The probability distribution of the local hidden variables in the model is the conditional distribution given both particles are accepted by the detectors. In
order to pick a value of the hidden variable in the model, we need to know the settings (in effect, we are using rejection sampling: we just keep on picking a value, discard if it is rejected by the
settings and try again). We now have n runs of a local realistic 2x2x2 experiment in which the hidden variables are chosen knowing in advance what the settings will be. Or you could call it not
conspiracy, realist it is for sure, but non local. First there is communication from the settings to the source. Then the hidden variable is created. Then the particles fly to the detectors, knowing
in advance what the settings will be.
- Good old Aspect's experiment operated at very very low detector efficiency and moreover with settings chosen according to a periodic deterministic scheme (with the ratio of the periods in the two
arms of the experiment not close to small integers). So it did not prove anything. It is easy to give a local realist simulation which generates similar statistics. Weihs' experiment is a whole lot
better but still far too low detector efficiency. It is easy to give a local realist simulation which generates similar statistics. We probably have to wait another five years for an experiment which
*cannot* be simulated in a local realist way.
Re: What is wrong with this argument?
Zen wrote:
gill1109 wrote:
Zen wrote:We really can make the spreadsheet in our computer lab!
Of course you can make the spreadsheet in your computer lab! That's obvious. But if you do that, the simulation has nothing to say about the kind of physical theory that I described above. The
relation between the Monte Carlo simulation and Physics is not your concern? Your theorem is correct, Richard. But it amazes me that you don't want to think about the kind of theory that is
potentially ruled out by the theorem.
I do think about all kinds of theories, Zen. And I am glad that you agree that my theory applies to computer simulation experiments of certain kinds of local hidden variables models.
You seem to be concerned about theories where subsequent measurements on the same system change the hidden variables in the system. I have been concerned in the past about the so-called memory
loophole, which is the problem that in an EPR-B experiment, memory in the detectors could be built up from past particle detections and used in future ones. In particular, information about past
settings used by Bob and the outcomes which were then observed by Bob, can easily be available several particles later in the measurement apparatus used by Alice, and in the source too, for that
matter. Everyone uses statistics based on independent runs, to analyse the data from these experiments, but that need not be justified. And (before I did) nobody had investigated hidden variable
models which exploit, for the n'th run, all the information which is potentially available from runs 1 up to n-1. Though the memory loophole was already being used successfully to give local realist
explanations e.g. for the two slit experiment.
Please take a look at my quant-ph/0110137 and quant-ph/030159
Re: What is wrong with this argument?
Zen wrote:
gill1109 wrote:
Zen wrote:We really can make the spreadsheet in our computer lab!
Of course you can make the spreadsheet in your computer lab! That's obvious. But if you do that, the simulation has nothing to say about the kind of physical theory that I described above. The
relation between the Monte Carlo simulation and Physics is not your concern? Your theorem is correct, Richard. But it amazes me that you don't want to think about the kind of theory that is
potentially ruled out by the theorem.
I don't think I understand your objection here. Nowhere in Richard's paper is Alice required to do two measurements on the particle in sequence, so that the first measurement might influence the
result of the second measurement with a different detector setting. Alice only does one measurement on each particle, and whatever happens to the hidden varible after that is irrelevant (same goes
for Bob). Hovever, the paper does assume that it is meanigful to ask what would be the outcome if Alice picked a different detector setting in the frist place.
Re: What is wrong with this argument?
Zen wrote:No. Alice receives $\lambda$, determines $A(a,\lambda)$, and after that, in her lab, for her electron (photon), the h.v. has now value $\mu$. In the other lab, Bob receives the same $\
lambda$, determines $B(b,\lambda)$, and after that, in his lab, for his electron (photon),the h.v. has now value $u$. Since Alice and Bob can't determine $A(a,\lambda)$, $A(a',\lambda)$, $B(b,\
lambda)$, and $B(b',\lambda)$ simultaneously for the same $\lambda$, there is no $N\times 4$ "spreadsheet" in this scenario.
Dear Zen
You should now read section 9 of my paper. Alice's measurement device receives lambda. Alice tosses a coin and chooses either to see A(a, lambda) or A(a', lambda). Only one of these is computed by
the measurement device and output to Alice. But mathematically, both do exist.
The spreadsheet which I like to talk about does exist as a mathematical object, in the same mathematical universe where our hidden variables model lives. If we have a local hidden variables theory
for the experiment, then within that same theory we can construct the spreadsheet.
This is perhaps easier still to understand if you imagine a computer simulation of the model. Clone Alice's measurement computer. The source computer outputs lambda (it's contained in an email file attachment, or it is a file on a USB stick) and we make a copy and send it to both Alice's measurement computers. Set the setting to a on one of the two measurement computers, and set it to a' on
the other. They both generate an outcome. Alice tosses a coin and chooses which computer output to read, given which setting she has chosen. This expanded simulation experiment generates exactly the
same results as the original experiment in which there was only one measurement computer.
We really can make the spreadsheet in our computer lab! I explain in section 9 of my paper how to make it, e.g. for Minkwe's "epr-simple" program.
Re: What is wrong with this argument?
Zen wrote:I think it would be nice to add a comment to your paper saying that your results do not impose any restrictions on theories in which the measurement act changes the values of the hidden
variables which determine the complete state of the system.
The hidden variable is sent to both Alice and Bob. When you say that the measurement act by e.g. Alice changes the value of the hidden variable, do you mean that this change also instantly applies to
Bob's version of the hidden variable?
Re: What is wrong with this argument?
FrediFizzx wrote: I would have advised not to publish until you fix your mistakes. We have tried to explain to you why you are wrong but nothing seems to work, so we should just drop it as it is just going around in circles.
So are you telling me my theorem is wrong? If so, where is the error in the proof?
Re: What is wrong with this argument?
I would have advised not to publish until you fix your mistakes. We have tried to explain to you why you are wrong but nothing seems to work, so we should just drop it as it is just going around in circles.
Re: What is wrong with this argument?
Fred, getting back on topic, I am also interested in the question whether or not *you* agree with my claim: in the situation described, the probability that rho11 + rho12 + rho21 - rho22 will be
larger than 2.5, is smaller than 0.005 (5 pro mille, or half of one percent).
I am submitting the final version of my paper later this week. Anyone who believes the main theorem is wrong can still try to explain to me what is wrong about it, and prevent the literature from
being polluted yet again by another pro-Bell nonsense propaganda piece.
Re: What is wrong with this argument?
FrediFizzx wrote:Off topic; let's get back on topic here.
Yes please.
The central question of the thread which I started here, is: is the proof of the quoted theorem, correct?
If we come to the conclusion that it seems to be a true theorem, then I will be delighted to open a new topic in which I would like to discuss if and how it can be applied to computer simulation
models, QRC and so on.
If the answer is that it seems to be an incorrect theorem, then I will retract the paper in which I planned to publish it. (I have to submit the final version in one week).
The assumptions of the theorem are given, and I hope they are now completely clear. The relevance of the theorem can/will be another topic. (Superfluous if the theorem is false).
Richard Gill wrote:Consider a spreadsheet with N = 4 000 rows, and just 4 columns.
Place a +/-1, however you like, in every single one of the 16 000 positions.
Give the columns names: A1, A2, B1, B2.
Independently of one another, and independently for each row, toss two fair coins.
Define two new columns S and T containing the outcomes of the coin tosses, encoded as follows: heads = 1, tails = 2.
Define two new columns A and B defined (rowwise) as follows: A = A1 if S = 1, otherwise A = A2; B = B1 if T = 1, otherwise B = B2.
Our spreadsheet now has eight columns, named: A1, A2, B1, B2, S, T, A, B.
Define four "correlations" as follows:
rho11 is the average of the product of A and B, over just those rows with S = 1 and T = 1.
rho12 is the average of the product of A and B, over just those rows with S = 1 and T = 2.
rho21 is the average of the product of A and B, over just those rows with S = 2 and T = 1.
rho22 is the average of the product of A and B, over just those rows with S = 2 and T = 2.
I claim that the probability that rho11 + rho12 + rho21 - rho22 is larger than 2.5, is smaller than 0.005 (5 pro mille, or half of one percent)
You can find a proof at http://arxiv.org/abs/1207.5103 (appendix: Proof of Theorem 1 from Section 2), together with remarks by me in the first posting of this thread.
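A self-contained sketch of exactly this experiment follows (this is illustrative code written for this page, not Gill's R scripts; the class name is invented). Every row of the sheet is filled so that its A1*B1 + A1*B2 + A2*B1 - A2*B2 equals +2, the best any single row can achieve, which makes it a worst case for the bound:

import java.util.Random;

public class SpreadsheetCHSH {
    public static void main(String[] args) {
        int N = 4000, trials = 10000, exceed = 0;
        Random rng = new Random(2014);

        int[][] sheet = new int[N][4]; // columns A1, A2, B1, B2
        for (int[] row : sheet) {
            for (int j = 0; j < 4; j++) row[j] = rng.nextBoolean() ? 1 : -1;
            // Force A1*B1 + A1*B2 + A2*B1 - A2*B2 == +2 for this row.
            if (row[0] == row[1]) row[2] = row[0]; else row[3] = row[0];
        }

        for (int t = 0; t < trials; t++) {
            double[] sum = new double[4];
            int[] cnt = new int[4];
            for (int[] row : sheet) {
                int s = rng.nextInt(2), u = rng.nextInt(2); // the two fair coins
                int cell = 2 * s + u;                       // rho11, rho12, rho21, rho22
                sum[cell] += row[s] * row[2 + u];           // observed A * B
                cnt[cell]++;
            }
            double chsh = sum[0] / cnt[0] + sum[1] / cnt[1]
                        + sum[2] / cnt[2] - sum[3] / cnt[3];
            if (chsh > 2.5) exceed++;
        }
        System.out.printf("fraction of trials with CHSH > 2.5: %.5f%n",
                          (double) exceed / trials);
    }
}

With other fillings of the sheet the observed fraction stays at or below the claimed 0.005 as well, in line with the theorem.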
Re: What is wrong with this argument?
Off topic; let's get back on topic here.
Re: What is wrong with this argument?
minkwe wrote:So in case you again accuse me of attempting to disrupt your discussions, this will be my last post in this thread. I'm sure you can discuss your theories with others without making
occasional snide remarks about me, or forwarding every post you make to my e-mail.
Interesting. A scientist who appears to be "allergic" to a mathematical fact. By refusing to talk about it, he does not make it untrue. He simply closes off his mind to some useful information. De
Raedt, Hess and Michielsen did not have this attitude; nor did Giullaume Adenier. Even Joy Christian has had useful discussions with Richard Gill, leading on the one hand to very nice improvements to
Michel Fodje's simulation model, and on the other hand to an attempt to get a key experiment actually performed.
Re: What is wrong with this argument?
gill1109 wrote:Interestingly, we have now seen a second discussion "locked".
We both agreed that discussion was over, and the moderator acted based on our documented mutual agreement.
I wanted to mention to Michel that I am sure I will be able to explain to him my answers to his questions (1) to (12) .
You already stated in the relevant thread that my points (1) to (12) were nonsense (viewtopic.php?f=6&t=23&start=100#p827), as we agreed, there is nothing more for the two of us to discuss about
them. Feel free to discuss with others but I won't participate in that discussion, not because I'm afraid of anything, but because I believe you don't get it, don't want to get it and I never will.
So in case you again accuse me of attempting to disrupt your discussions, this will be my last post in this thread. I'm sure you can discuss your theories with others without making occasional snide
remarks about me, or forwarding every post you make to my e-mail.
Re: What is wrong with this argument?
Zen wrote:Not really related to this thread, but in one of your papers you say that your current position is kind of "keep locality, give up on realism". Can you talk a little about how this
position "explains" / "deals with" the existence of perfect anti-correlations when the two detectors in Aspect's lab are properly aligned?
Nice question! This position doesn't *explain* perfect anti-correlations. It just refuses to make the step, which Einstein did take, of deducing from perfect anti-correlation that Alice's outcome for
any measurement she might have made "exists" in reality, in (or at) the particle, at that given time and place.
In our present context, "realism" is actually a kind of idealistic point of view. Unperformed measurements also have outcomes and those outcomes are moreover "located" in space-time in exactly the
same place where the actual outcome of the actual measurement comes to be, after it is done.
If you have a local hidden variables theory, then you can do that. You can think of all the A(a, lambda) (hidden variable lambda fixed, Alice's possible settings a varying) as all living in the same
place, all "existing" simultaneously. In fact they are all simply encoded by the value of lambda.
It can be thought to be a cheap way out. Just playing with words.
I think the more subtle position to take is that QM is different from classical physics. It allows things which classically would be impossible (like violating CHSH). It forbids things which
classical physics in principle can allow. It is non-deteministic. The future is *not* determined by the past.
Being different, it also clashes with our "embodied cognition". Our brains evolved to allow us feed, breed and multiply in a world by always assuming there is a cause for anything that happened.
Re: What is wrong with this argument?
Interestingly, we have now seen a second discussion "locked".
I wanted to mention to Michel that I am sure I will be able to explain to him my answers to his questions (1) to (12) once he has worked through the proof of my theorem. So far it seems he refuses to
do this, because he doesn't like the assumption of the theorem. The theorem is about an Nx4 spreadsheet. He might think, at the moment, that it is irrelevant to our main quest. Fine. If he's right,
he has absolutely nothing to fear! I just want to hear whether he understands the theorem, and whether he thinks it's true or false. If he thinks it is false, I'd like to know why. Obviously I want
to immediately retract any false mathematical claim I might have made in the past.
The same question, I put to Fred. If you're right that my theorem is *irrelevant* then it's truth can't hurt you.
And to Joy. If you're right that my theorem is *irrelevant* then it's truth can't hurt you.
Pythagoras's theorem doesn't hurt science. No one is scared of it. No-one ignores it.
Re: What is wrong with this argument?
Zen wrote:Thanks, Heinera! You're right. I thought we were given the whole spreadsheet initially, and not just the first four columns. My mistake. I will fix it and post my analysis of Gill's
proof asap.
In the mathematical sense, there is "given" an Nx4 array of numbers +/-1. Alice and Bob then toss coins. From each row of the array, Alice gets to see the entry from column A1 (S = heads) or from
column A2 (S = tails). Similarly for Bob. Alice and Bob then get together to calculate four correlations each based on a different (disjoint) subset of rows.
We start with an Nx4 array with the numbers A1, A2, B1, B2. From now on it is fixed, given. Independently of this we do independent fair coin tosses, S, T; think of them as filling another Nx2 table.
The two tables are combined (and reduced) to a new table with columns S, T, A_obs, B_obs. The four correlations are calculated from the third table: correlations between A_obs and B_obs for each
combination of values of S and T.
I hope this is slowly getting crystal clear! Improvements to my notation and presentation are surely possible. My aim is to tell a story which science journalists, and high school students, and your
grandmum and granddad, can *all* understand.
The link to Bell is that A1, A2, B1, B2 stand for (local functions of) the local hidden variables. S and T stand for Alice and Bob's freely chosen settings. A(obs) and B(obs) are the actually
observed outcomes of Alice and Bob's measurements. Because of local realism, or because of local hidden variables, the experiment is "as if" Alice and Bob just randomly pick a predetermined outcome
from one of two "preexisting" values. Note the "as if". I'm not saying it really is that way. I'm saying that the final results - what we finally get to see - are mathematically indistinguishable from
the final results described here.
We can later discuss why this "as if" is certainly valid for computer simulations like Michel Fodje's "epr-simple". And once one has got the idea, one can extend further to e.g. "epr-clocked". But
first I want to see if there is agreement on this little bit of elementary mathematics about randomly picking rows from a spreadsheet.
Think of it as "creating facts on the ground". But in fact, they are not being created by force - they already exist. It seems that Michel and Joy don't want to see them, but they are there, all
Re: What is wrong with this argument?
You forgot about the role played by S and T here. Being random coin tosses, they generate the required randomness so that the first expression you mention is not necessarily equal to either zero or
And hopefully we won't get bogged down in a discussion about notation here. I find Richard's notation perfectly understandable.
Re: What is wrong with this argument?
Here is a computer simulation illustration of the theorem.
The R script reads a spreadsheet from the internet. You can also download it yourself: http://www.math.leidenuniv.nl/~gill/fred3.csv. The spreadsheet has 100 rows and 7 columns. The first column contains
just a sequence number ("run", which runs from 1 to 100). The next four columns are named A1, A2, B1, B2. This part of the spreadsheet (100 x 4) is the part which my theorem is about. The last two
columns contain a particular realization of the settings S and T (called here Sa and Tb since "T" is also a name for "TRUE" in R).
The code computes the correlations and the CHSH quantity both for the example realisation and for 100 new sets of random coin tosses. You'll see that the supplied coin tosses Sa and Tb were in fact rather lucky ... or the result of some kind of reverse engineering - not the result of fair coin tosses at all.
You can experiment by fillling the Nx4 part in different ways, and you can wonder how I managed to create fred3.csv. It was created by another R script, including some generation of random numbers
but with the initial seed known, so that it can be independently reproduced on other computers. You can find that script here: http://www.math.leidenuniv.nl/~gill/fred.R.
(I later changed the order of the columns and the names of two of the columns. The script "fred.R" creates a spreadsheet called "fred.csv". It has the two special settings, which are there called SA
and SB, in front of the A1, A2, B1, B2 columns, instead of behind them. The very first column, the one with the run number, isn't given a name).
Re: What is wrong with this argument?
Joy Christian wrote:
minkwe wrote:
gill1109 wrote:Consider a spreadsheet with N = 4 000 rows, and just 4 columns.
That is what is wrong with the argument. You still do not get it, and I doubt that you ever will.
Hear, hear.
(You have to be British to really understand what this expression means.)
The aim is to discuss the argument here (the theorem), not its premisses.
Michel and Joy's responses are off topic, and they do not even constitute a scientific discussion of another topic.
They are transparent attempts to disrupt a discussion.
Re: What is wrong with this argument?
minkwe wrote:
gill1109 wrote:Consider a spreadsheet with N = 4 000 rows, and just 4 columns.
That is what is wrong with the argument. You still do not get it, and I doubt that you ever will.
Hear, hear.
(You have to be British to really understand what this expression means.) | {"url":"http://www.sciphysicsforums.com/spfbb1/posting.php?mode=reply&f=6&t=30&sid=1635db1c8a16e6683d47a2d0aa70142e","timestamp":"2024-11-08T01:37:39Z","content_type":"application/xhtml+xml","content_length":"85542","record_id":"<urn:uuid:f63ca0b6-430b-4f28-a995-556bcfbe9fa3>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00624.warc.gz"} |
An edge-colored graph $G$, where adjacent edges may have the same color, is {\it rainbow connected} if every two vertices of $G$ are connected by a path whose edges have distinct colors. A graph $G$ is {\it $k$-rainbow connected} if one can use $k$ colors to make $G$ rainbow connected. For integers $n$ and $d$ let $t(n,d)$ denote the minimum size (number of edges) of $d$-rainbow connected graphs of order $n$. Schiermeyer obtained some exact values and upper bounds for $t(n,d)$. However, he did not obtain a lower bound of $t(n,d)$ for $3\leq d<\lceil\frac{n}{2}\rceil$. In this paper, we improve his lower bound of $t(n,2)$, and obtain a lower bound of $t(n,d)$ for $3\leq d<\lceil\frac{n}{2}\rceil$. Comment: 8 pages
We numerically study dynamics and correlation length scales of a colloidal liquid in both quiescent and sheared conditions to further understand the origin of slow dynamics and dynamic heterogeneity
in glass-forming systems. The simulation is performed in a weakly frustrated two-dimensional liquid, where locally preferred order is allowed to develop with increasing density. The four-point
density correlations and bond-orientation correlations, which have been frequently used to capture dynamic and static length scales $\xi$ in a quiescent condition, can be readily extended to a system
under steady shear in this case. In the absence of shear, we confirmed the previous findings that the dynamic slowing down accompanies the development of dynamic heterogeneity. The dynamic and static
length scales increase with $\alpha$-relaxation time $\tau_{\alpha}$ as power-law $\xi\sim\tau_{\alpha}^{\mu}$ with $\mu>0$. In the presence of shear, both viscosity and $\tau_{\alpha}$ have
power-law dependence on shear rate in the marked shear thinning regime. However, dependence of correlation lengths cannot be described by power laws in the same regime. Furthermore, the relation $\xi
\sim\tau_{\alpha}^{\mu}$ between length scales and dynamics holds for not too strong shear where thermal fluctuations and external forces are both important in determining the properties of dense
liquids. Thus, our results demonstrate a link between slow dynamics and structure in glass-forming liquids even under nonequilibrium conditions. Comment: 9 pages, 17 figures. Accepted by J. Phys.: Condens. Matter
By using event-driven molecular dynamics simulation, we investigate effects of varying the area fraction of the smaller component on structure, compressibility factor and dynamics of the highly
size-asymmetric binary hard-disk liquids. We find that the static pair correlations of the large disks are only weakly perturbed by adding small disks. The higher-order static correlations of the
large disks, by contrast, can be strongly affected. The compressibility factor of the system first decreases and then increases upon increasing the area fraction of the small disks and separating
different contributions to it allows to rationalize this non-monotonic phenomenon. Furthermore, adding small disks can influence dynamics of the system in quantitative and qualitative ways. For the
large disks, the structural relaxation time increases monotonically with increasing the area fraction of the small disks at low and moderate area fractions of the large disks. In particular,
"reentrant" behavior appears at sufficiently high area fractions of the large disks, strongly resembling the reentrant glass transition in short-ranged attractive colloids and the inverted glass
transition in binary hard spheres with large size disparity. By tuning the area fraction of the small disks, relaxation process for the small disks shows concave-to-convex crossover and logarithmic
decay behavior, as found in other binary mixtures with large size disparity. Moreover, diffusion of both species is suppressed by adding small disks. Long-time diffusion for the small disks shows
power-law-like behavior at sufficiently high area fractions of the small disks, which implies precursors of a glass transition for the large disks and a localization transition for the small disks.
Therefore, our results demonstrate the generic dynamic features in highly size-asymmetric binary mixtures.Comment: 9 pages, 12 figures | {"url":"https://core.ac.uk/search/?q=author%3A(Zhao-Yan%20Sun)","timestamp":"2024-11-09T07:10:34Z","content_type":"text/html","content_length":"104644","record_id":"<urn:uuid:d720bb83-d2de-4c38-9bb8-d45da8a6cd6c>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00655.warc.gz"}
Average Weight of Lemon ( Various Types of Lemon )
How Much Does A Lemon Weigh? An average lemon weighs around 2-3 ounces (0.06-0.09 kg).
Some bigger-sized lemons can weigh 4-10 ounces (0.11-0.28 kg). At those sizes, one kilogram of lemons contains approximately 5-10 lemons, depending on their size.
Each slice (or wedge) weighs approximately 0.2 ounces (0.006 kg). You can make lemon zest from either whole or sliced lemons; zest is one of the most popular forms in which lemons are used.
One lemon yields around three teaspoons of lemon zest, with each teaspoon weighing about 0.71 ounces (0.02 kg). Still, lemon zest is sometimes measured in tablespoons, and whole lemons are pressed into juice; in those cases the weights differ.
One tablespoon of lemon zest is three times as heavy as a teaspoon of lemon zest, and each lemon pressed into juice yields around 1.33 ounces (0.04 kg).
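Taken at face value, the figures above chain together in a few lines of Python (all numbers are the approximations quoted in this article, not precise measurements):

OZ_TO_KG = 0.0283495                 # ounces to kilograms

zest_tsp_oz = 0.71                   # one teaspoon of zest, as quoted above
zest_tbsp_oz = 3 * zest_tsp_oz       # a tablespoon is three teaspoons
zest_per_lemon_oz = 3 * zest_tsp_oz  # one lemon yields about three teaspoons
juice_per_lemon_oz = 1.33            # juice pressed from one lemon

print(round(zest_tbsp_oz * OZ_TO_KG, 3))        # one tablespoon of zest, in kg
print(round(juice_per_lemon_oz * OZ_TO_KG, 3))  # juice from one lemon, in kg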
Example Types of Lemons and Their Weights
Eureka lemons weigh 0.43 pounds (0.197 kg) on average, which counts them among the big-sized lemons. Even so, their diameters of about 2.8 inches (7 cm) are among the shortest, as lemons in general tend to measure 2-4 inches (5.08-10.16 cm) across.
These weights put Eureka lemons almost on par with Meyer lemons. The lightest Meyer lemons are 5.88 ounces (0.167 kg), while the heaviest are 9 ounces (0.26 kg). This range makes Meyer lemons only slightly heavier than their Eureka counterparts.
The heaviest lemon types are the citron lemons. These lemons, which originated in India and the Himalayas, usually weigh 8-10 pounds (3.63-4.54 kg). Different varieties of citron, such as Corsican and Yemenite citrons, tend to have weight ranges close to one another.
The companies that sell lemons have the final say in how the fruit is weighed, since it all depends on how they pack it.
Take Sunkist Growers Inc., for example. They ship lemons based on the sizes of the fruit and the cartons, packing them into 2-, 3-, or 5-pound cartons (0.91, 1.36, and 2.27 kg, respectively).
In the end, there are various ways to measure how heavy lemons are, but this article can serve as a reference.
The Finger Lime, popularly known as Lemon ‘Caviar,’ is the world’s most expensive lemon. The price of a pound of finger lime might be around $150. | {"url":"https://weightofthing.com/weight-of-lemon/","timestamp":"2024-11-04T04:14:16Z","content_type":"text/html","content_length":"124977","record_id":"<urn:uuid:ca5e793a-0ec0-49fc-acaa-e34787ceb3b9>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00178.warc.gz"} |
Atmospheric dispersion
Atmospheric dispersion models implemented in ESTE are LPM and PTM.
Puff trajectory model (PTM) implementation: The PTM calculates dispersion in the atmosphere by describing horizontal diffusion with a Gaussian distribution and solving vertical dispersion with the diffusion equation.
The lower part of the atmosphere (between the terrain and the mixing boundary layer) is divided into N (e.g. 10) layers (boxes). The exchange of radioactive material between vertical layers is described by the diffusion equation, and horizontal dispersion is described by the Gaussian equation. In the ESTE model we assume that the wind speed and wind direction in each box (layer) of the atmosphere are represented by a weighted mean of the real wind speed and direction. Therefore the i-th trajectory of the puff has the same coordinates (LAT and LONG) in each layer (each box).
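To make the layered puff picture concrete, here is a minimal numerical sketch in Python. The layer count, diffusivity, and time step are illustrative assumptions, not ESTE's actual parameterization; the sketch only shows the two ingredients named above: a Gaussian horizontal distribution and a finite-difference solution of the vertical diffusion equation across the boxes.

import numpy as np

N_LAYERS = 10   # vertical boxes between terrain and the mixing layer (example value)
DZ = 100.0      # layer thickness [m] (assumed)
KZ = 5.0        # vertical eddy diffusivity [m^2/s] (assumed)
DT = 60.0       # time step [s]; explicit scheme is stable since KZ*DT/DZ**2 << 0.5

def vertical_diffusion_step(q):
    # Explicit finite-difference step for dq/dt = KZ * d^2 q / dz^2 between boxes.
    flux = KZ * np.diff(q) / DZ        # exchange between adjacent layers
    q = q.copy()
    q[:-1] += flux * DT / DZ
    q[1:] -= flux * DT / DZ
    return q                           # total mass is conserved

def horizontal_gaussian(x, y, xc, yc, sigma):
    # Gaussian horizontal distribution of a puff centered at (xc, yc).
    r2 = (x - xc) ** 2 + (y - yc) ** 2
    return np.exp(-r2 / (2.0 * sigma ** 2)) / (2.0 * np.pi * sigma ** 2)

# Example: release everything into the lowest box and diffuse upward for one hour.
q = np.zeros(N_LAYERS)
q[0] = 1.0
for _ in range(60):
    q = vertical_diffusion_step(q)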
Lagrangian particle model (LPM) implementation: The Lagrangian particle model uses a large number of particles to describe the diffusion of the pollutant in the atmosphere. Each particle represents a small air parcel that contains an amount of the pollutant. The diffusion process is simulated by modeling the trajectory of each particle independently of the movement of the other particles.
The main advantages of the LPM are:
1. As a consequence of the independent particle movement, the LPM better simulates local weather conditions.
2. The LPM is independent of the computational grid and has, in principle, arbitrarily fine resolution.
Basic equations – Our implementation of the LPM is based on the description of the FLEXPART model [see: "Stohl,A., Forster,C., Frank,A., Seibert,P. and Wotawa, G.: Technical Note: The Lagrangian particle dispersion model FLEXPART version 6.2", Atmos. Chem. Phys. 5, 2461-2474 (2005)] – a Lagrangian particle dispersion model that simulates the long-range and mesoscale transport, diffusion, dry and wet deposition, and radioactive decay of tracers released from point, line, area or volume sources.
Numerical weather prediction data – this implementation of the LPM uses meteorological data in GRIB format. The following meteorological data are used as input for the ESTE implementation of the LPM:
1. Single-level parameters (from either forecast or analysis): 10 meter U-velocity (10U), 10 meter V-velocity (10V), 2 meter dewpoint temperature (2D), 2 meter temperature (2T), Surface pressure (SP), Total cloud cover (TCC), Boundary layer height (BLH), Convective precipitation (CP), Large scale precipitation (LSP), Surface sensible heat flux (SSHF), friction velocity (ZUST), Land/sea mask (LSM), Orography (Z), Variance (or standard deviation) of sub-gridscale orography (SDOR) and others.
2. Model-level parameters (from either forecast or analysis): Specific humidity (Q), Temperature (T), U-velocity (U), V-velocity (V), Vertical velocity (W) and others.
Implementation on CUDA – The LPM is based on simulating a large number of particles, at least several hundred thousand, to represent atmospheric diffusion correctly. This requirement is compounded by the needs of emergency preparedness, where simulation results must be available within a reasonably short time.
To satisfy these requirements, our implementation uses graphics hardware for general-purpose computation, i.e. GPGPU (general-purpose computing on graphics processing units). GPGPUs are high-performance multi-processor units suited to massively parallel problems in which the same algorithm is applied to a large amount of data.
LPM algorithms can be parallelized very easily, since the movement of each particle can be simulated independently of the other particles. The implementation of the LPM is based on GPGPUs with CUDA (Compute Unified Device Architecture) capability. CUDA is a parallel computing architecture that can be used to accelerate a wide range of non-graphical applications in science.
CUDA code runs in kernels. A kernel is a function callable from the host system and executed on the CUDA device simultaneously by many threads. Our LPM implementation in the ESTE systems is divided into several kernels for modeling particle movement, for meteorological calculations and interpolations, and for dosimetry calculations.
Simulations proceed in time steps; in each time step of the calculation, the following actions are performed:
1. New particles are created if pollutant is released during this time step.
2. Meteorological parameters are interpolated in time for the whole area.
3. The meteorological situation is interpolated in space for each particle.
4. The position of each particle is advanced by integration, using the local meteorological situation to determine the turbulent movement.
5. Dry and wet deposition are calculated for each particle and, together with the pollutant concentration, mapped onto the grid.
6. Time-integrated concentration (TIC) and other radiological parameters are calculated for each grid cell.
These steps are repeated over the whole duration and the whole area of the radiological impact calculation.
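The per-particle independence of the LPM is what makes the method map well onto GPU threads. The following NumPy sketch mirrors steps 3-6 of the loop above for a single time step; in ESTE these operations run as CUDA kernels, whereas here whole-array operations stand in for the per-particle threads. The turbulence, deposition, and grid constants, as well as the wind-interpolation callback, are illustrative assumptions, not ESTE's actual values.

import numpy as np

rng = np.random.default_rng(0)
DT = 60.0            # time step [s]
SIGMA_TURB = 1.0     # std. dev. of turbulent velocity fluctuations [m/s] (assumed)
V_DEP = 0.01         # dry deposition velocity [m/s] (assumed)
H_DEP = 50.0         # depth of the surface layer used for deposition [m] (assumed)

def lpm_step(pos, mass, wind_at, dep_grid, cell=1000.0):
    # pos: (n, 3) particle positions [m]; mass: (n,) pollutant mass per particle.
    u = wind_at(pos)                                  # step 3: interpolated mean wind
    turb = rng.normal(0.0, SIGMA_TURB, pos.shape)     # stochastic turbulent component
    pos = pos + (u + turb) * DT                       # step 4: advect each particle
    pos[:, 2] = np.abs(pos[:, 2])                     # reflect particles at the ground
    near_ground = pos[:, 2] < H_DEP
    dep = np.where(near_ground, mass * V_DEP * DT / H_DEP, 0.0)  # step 5: dry deposition
    mass = mass - dep
    ix = (pos[:, 0] // cell).astype(int)              # steps 5-6: map to grid cells
    iy = (pos[:, 1] // cell).astype(int)
    ok = (ix >= 0) & (ix < dep_grid.shape[0]) & (iy >= 0) & (iy < dep_grid.shape[1])
    np.add.at(dep_grid, (ix[ok], iy[ok]), dep[ok])    # accumulate deposited mass
    return pos, mass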
At the end of the simulation, the calculated radiological parameters are exported from the grid to GIS shapefiles and presented to the user as maps of the radiological situation. | {"url":"https://www.abmerit.sk/en/dispersion-modeling/atmospherical-dispersion/","timestamp":"2024-11-07T01:18:48Z","content_type":"text/html","content_length":"19080","record_id":"<urn:uuid:9b6c15ff-5a71-4c89-bbbc-e6f0e80bc84a>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00209.warc.gz"}
3.5.4.2 Create Distribution Center Capacity Module
I am having trouble getting the Total Beginning Inventory to map correctly. I am using the formula: 'INV01 Inventory Ordering'.Beginning Inventory [SUM: 'SYS08 SKU Details'.Supplied By]. I get the
following error: "Datatype mapping used for aggregation doesn't match any dimension of the result" I'm not sure how to map this formula without using SYS08 Supplied By (list). If you can help, I'd
appreciate it!
• Your formula is correct. 'INV01 Inventory Ordering'.Beginning Inventory[SUM: 'SYS08 SKU Details'.Supplied By]
Start by making sure your INV04 module is using G3 Location: Distribution Center? in the "applies to".
The list in your TARGET must match the line item format in the SOURCE in order to use SUM.
• Hi @JaredDolich
Why would I use the "SUM function" for the "INV04 Beginning Inventory" line item instead of pulling the beginning inventory info directly from the "DAT01 Beginning Inventory" module?
Thanks for your help
• Awesome callout. In this case we SUM on this line item because we want all the beginning inventories added up by DC, since that is our target module's dimension. But, as you correctly figured out, we are bringing it in directly; we just need to make sure it's in the same dimension as our Target.
• @JaredDolich Thank you so much for explaining and your quick reply! Does that mean I would also use SUM for the "DC capacity" line item rather than using 'SYS07 Distribution Center Details'.Capacity as well? | {"url":"https://community.anaplan.com/discussion/98586/3-5-4-2-create-distribution-center-capacity-module","timestamp":"2024-11-14T00:33:54Z","content_type":"text/html","content_length":"327096","record_id":"<urn:uuid:a06ad4f8-9029-4aca-bc05-da73b25a875a>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00527.warc.gz"}
The Stacks project
Lemma 66.20.1. Let $S$ be a scheme. Let $X$ be an algebraic space over $S$. Let $\mathcal{F}$ be a subsheaf of the final object of the étale topos of $X$ (see Sites, Example 7.10.2). Then there
exists a unique open $W \subset X$ such that $\mathcal{F} = h_ W$.
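The page shows only the statement; as a reader's gloss, here is a hedged sketch of the standard argument in the lemma's notation (our own summary, not the Stacks project's proof text):

% Sketch (our gloss, not the Stacks project's proof of tag 04K8).
Since $\mathcal{F}$ is a subsheaf of the final sheaf, $\mathcal{F}(U)$ is empty
or a singleton for every $U$ étale over $X$. Let $W \subset X$ be the union of
the (open) images $\mathop{\mathrm{Im}}(U \to X)$ over all étale $U \to X$ with
$\mathcal{F}(U) \neq \emptyset$. If $\mathcal{F}(U) \neq \emptyset$, then
$U \to X$ factors through $W$ by construction. Conversely, if $U \to X$ factors
through $W$, the projections $U \times_X U_i \to U$ (taken over the $U_i$ with
$\mathcal{F}(U_i) \neq \emptyset$) form an étale covering of $U$; each
$\mathcal{F}(U \times_X U_i)$ is nonempty by functoriality, and the sheaf
condition glues these automatically compatible sections, so
$\mathcal{F}(U) \neq \emptyset$. Hence $\mathcal{F} = h_W$, and $W$ is unique
because $h_W$ determines the open $W$.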
| {"url":"https://stacks.math.columbia.edu/tag/04K8","timestamp":"2024-11-13T02:32:44Z","content_type":"text/html","content_length":"14578","record_id":"<urn:uuid:5b340cc8-090f-4e42-86f3-00023829b920>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00176.warc.gz"}
Longest run of consecutive numbers
Problem 672. Longest run of consecutive numbers
Given a vector a, find the number(s) that is/are repeated consecutively most often. For example, if you have
a = [1 2 2 2 1 3 2 1 4 5 1]
The answer would be 2, because it shows up three consecutive times.
If your vector is a row vector, your output should be a row vector. If your input is a column vector, your output should be a column vector. You can assume there are no Inf or NaN in the input. Super
(albeit non-scored) bonus points if you get a solution that works with these, though.
Solution Stats
20.73% Correct | 79.27% Incorrect
Problem Comments
I don't understand why for test suite 4 :
a=[0 1 1 1 0 2 2 0 1 1 1 0];
y_correct = [1 1];
I expect y_correct = [1 1 1]
The idea seems to be to return the element that occurs in the longest run, or all such elements in case of a tie. In case 4 there are two longest runs, both with element 1.
Thanks Tim for the explanation. So as the vector [1 1 1] appears twice in test 4 with the unique value 1, the result must be twice the unique value -> [1 1]. OK, thanks again.
I have an array in which I counted the times each value repeats. Can someone give me a command to pull out the max values with the number to the left? Any advice? The first value at the top of the right column doesn't matter, as it should always be 1. When I try the max command it only shows the max value of 3.
Is there a way to test output without submitting?
The problem should specify that if the same number repeats the longest run of consecutive numbers more than once, it must be repeated in the output.
I like this problem, it is very fun.
Should be more specific about the expected output: the solution checkers expect the numbers in the output to be listed in the order in which they occurred in the sequence, and also, if the same number repeats in the sequence (with the largest run again), it needs to be repeated in the output. Both of these criteria broke my solution.
That was the first MATLAB problem I solved by myself, I'm so excited :)
To be honest, though I solved the problem, I didn't quite understand how to improve my code. I mean, the top solutions seemed quite strange to me. Why would doing that make the size smaller?
Hi 昱树何,
In Matlab Cody, size of a solution is calculated by various factors, one of them being the number of variables used. However, the top solution in this problem (and in many others as well) exploit the
use of regexp to gain a lower solution size. This was done because earlier the scoring was based on your solution size, which has now been changed to a fixed 10 points.
Hope this helps.
took a while doing this but overall good question. I got stuck in the last test case.
hm, I don't think the assertion in test 4 is intuitively correct. The problem asks for the number(s) that have the longest consecutive repetitions in the input vector. So there can be multiple
numbers having the longest runs (of equal size), but if all those numbers are the same, as in [1 1 2 1 1], surely the answer is "1", not "1 and 1".
I can only imagine that's what the implementation turned out to produce, and instead of fixing the implementation to fit the problem, the test was introduced to check for exactly that output ;)
Which is fine, you'll always have clients in real life that say they want something and, when you roll it out, it turns out they meant something different; it just bothers me that my nice solution doesn't work because of this.
The problem specifically states that column and row vectors must be supported, but then doesn't have a test for this. I'd suggest changing at least one of the tests to use a column vector.
Or maybe add one test where it is asserted that `longrun(a) == transpose(longrun(transpose(a)))`
"Repeated consecutively" makes the problem more difficult; otherwise, we could use unique and histcounts to solve this quickly.
Not as difficult as it seems.
Managed to finally get the solution. Took some thought...
In my opinion, the description of the task is poor.
There is no mention that the order of appearance has to be correct, nor that if a number achieves the same longest run more than once, you have to output it that many times.
Test 7 is not correct. There are no *consecutively repeated* numbers in the sequence, as clearly stated in the task.
@Marcos the test is correct insofar as that "repeated consecutively" also covers the case where a number appears just once in a row.
Test 6 has y_correct = [3 2]', i.e. a transpose, which is making it harder, and I've checked test 7 now; if I output the transpose it'll create issues elsewhere. How could I do it? It's not a hard problem to start with, if it were in Python, C or C++, but here it's getting clever.
function val = longrun(a)
% Given vector
% a = [1 2 2 2 1 3 2 1 4 5 1];
maxn = 1;
skip = 2;
i = 1;
sz = size(a);
% Append a sentinel so the final run is also closed out.
if sz(1) == 1
    a = [a, max(a)+1];
elseif sz(2) == 1
    a = [a; max(a)+1];
end
max1 = -(max(a)+1);
while i < length(a)
    for j = skip:length(a)
        if a(j) ~= a(i)
            skip = j + 1;
            if j - i > maxn              % strictly longer run: start over
                max1 = a(i);
            elseif j - i == maxn         % tie: append unless this is the first run
                if i ~= 1
                    max1 = [max1, a(i)];
                else
                    max1 = a(i);
                end
            end
            maxn = max(maxn, j - i);
            i = j;                       % continue scanning from the new run
            break
        end
    end
end
val = max1;
if sz(2) == 1                            % column in, column out
    val = val';
end
The requirements for this problem are unclear and/or the tests are poor. If there are 2 equal-length runs of the same number, e.g. [2 2 2 1 1 2 2 2 3 4 5], the expected result from the test is [2 2].
Why not just [2]?
Or if the input is [ 3 3 3 2 2 2 1 5 4] and the routine returns [2 3], why is this wrong? The test expects output of [3 2]...
@Andy The problem posed is basically this: in a given vector, for each run of identical numbers, identify the length of said run; then remove all runs of less than the maximal length, and replace all
remaining runs with a single occurrence of the number they repeat.
I finally tackled the last problem!
I’m so proud of myself! I know my programming skills still have room for improvement, but I just had to share this amazing feeling with everyone. What a fantastic moment!
Ultimately solved it. I know that reduced size of the code is rewarded on Cody, but I prioritized readability and code with comments that explain what is logically happening. I didn't rely on diff
or vertcat because I wanted to understand the basics of what the code should be doing.
| {"url":"https://kr.mathworks.com/matlabcentral/cody/problems/672?s_tid=prof_contriblnk","timestamp":"2024-11-04T12:34:51Z","content_type":"text/html","content_length":"140507","record_id":"<urn:uuid:5ab3eee4-327c-4e0f-ba87-d247e6b7c6ea>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00849.warc.gz"}
Search result: Catalogue data in Spring Semester 2020
Electrical Engineering and Information Technology Bachelor
Bachelor Studies (Programme Regulations 2018)
Examination Blocks
Examination Block 2
Number Title Type ECTS Hours Lecturers
227-0013-00L Computer Engineering O 4 credits 2V + 1U + L. Thiele
Abstract: The course provides knowledge about structures and models of digital systems, assembler and compiler, control path and data path, pipelining, speculation techniques, superscalar computer architectures, memory hierarchy and virtual memory, operating system, processes and threads.
Learning objective: Logical and physical structure of computer systems. Introduction to principles in hardware design, datapath and control path, assembler programming, modern architectures (pipelining, speculation techniques, superscalar architectures, multithreading), memory hierarchy and virtual memory, software concepts.
Content: Structures and models of digital systems, abstraction and hierarchy in computer systems, assembler and compiler, control path and data path, pipelining, speculation techniques, superscalar computer architectures, memory hierarchy and virtual memory, operating system, processes and threads. Theoretical and practical exercises using a simulation-based infrastructure.
Lecture notes: Material for practical training, copies of transparencies.
Literature: D.A. Patterson, J.L. Hennessy: Computer Organization and Design: The Hardware/Software Interface. Morgan Kaufmann Publishers, Inc., San Francisco, ISBN-13: 978-0124077263, 2014.
Prerequisites / Notice: Programming skills in high level language, knowledge of digital design.
227-0046-10L Signals and Systems II O 4 credits 2V + J. Lygeros
Abstract: Continuous and discrete time linear system theory, state space methods, frequency domain methods, controllability, observability, stability.
Learning objective: Introduction to basic concepts of system theory.
Content: Modeling and classification of dynamical systems. Modeling of linear, time invariant systems by state equations. Solution of state equations by time domain and Laplace methods. Stability, controllability and observability analysis. Frequency domain description, Bode and Nyquist plots. Sampled data and discrete time systems. Advanced topics: Nonlinear systems, chaos, discrete event systems, hybrid systems.
Lecture notes: Copy of transparencies
Literature: K.J. Astrom and R. Murray, "Feedback Systems: An Introduction for Scientists and Engineers", Princeton University Press 2009
Examination Block 3
Number Title Type ECTS Hours Lecturers
401-0654-00L Numerical Methods O 4 credits 2V + R. Käppeli
Abstract: The course introduces numerical methods according to the type of problem they tackle. The tutorials will include both theoretical exercises and practical tasks.
Learning objective: This course intends to introduce students to fundamental numerical methods that form the foundation of numerical simulation in engineering. Students are to understand the principles of numerical methods, and will be taught how to assess, implement, and apply them. The focus of this class is on the numerical solution of ordinary differential equations. During the course they will become familiar with basic techniques and concepts of numerical analysis. They should be enabled to select and adapt suitable numerical methods for a particular problem.
Content: Quadrature, Newton method, initial value problems for ordinary differential equations: explicit one step methods, step length control, stability analysis and implicit methods, structure preserving methods
Literature: M. Hanke Bourgeois: Grundlagen der Numerischen Mathematik und des Wissenschaftlichen Rechnens, BG Teubner, Stuttgart, 2002. W. Dahmen, A. Reusken: Numerik für Ingenieure und Naturwissenschaftler, Springer, 2008. Extensive study of the literature is not necessary for the understanding of the lectures.
Prerequisites / Notice: Prerequisite is familiarity with basic calculus and linear algebra.
227-0052-10L Electromagnetic Fields and Waves O 4 credits 2V + L. Novotny
Abstract: This course is focused on the generation and propagation of electromagnetic fields. Based on Maxwell's equations we will derive the wave equation and its solutions. Specifically, we will discuss fields and waves in free space, refraction and reflection at plane interfaces, dipole radiation and Green functions, vector and scalar potentials, as well as gauge transformations.
Learning objective: Understanding of electromagnetic fields
227-0056-00L Semiconductor Devices O 4 credits 2V + C. Bolognesi
Abstract: The course covers the basic principles of semiconductor devices in micro-, opto-, and power electronics. It imparts knowledge both of the basic physics and on the operation principles of pn-junctions, diodes, contacts, bipolar transistors, MOS devices, solar cells, photodetectors, LEDs and laser diodes.
Learning objective: Understanding of the basic principles of semiconductor devices in micro-, opto-, and power electronics.
Content: Brief survey of the history of microelectronics. Basic physics: Crystal structure of solids, properties of silicon and other semiconductors, principles of quantum mechanics, band model, conductivity, dispersion relation, equilibrium statistics, transport equations, generation-recombination (G-R), Quasi-Fermi levels. Physical and electrical properties of the pn-junction. pn-diode: Characteristics, small-signal behaviour, G-R currents, ideality factor, junction breakdown. Contacts: Schottky contact, rectifying barrier, Ohmic contact, Heterojunctions. Bipolar transistor: Operation principles, modes of operation, characteristics, models, simulation. MOS devices: Band diagram, MOSFET operation, CV- and IV characteristics, frequency limitations and non-ideal behaviour. Optoelectronic devices: Optical absorption, solar cells, photodetector, LED, laser diode.
Lecture notes: Lecture slides.
Literature: The lecture course follows the book Neamen, Semiconductor Physics and Devices, ISBN 978-007-108902-9, Fr. 89.00
Prerequisites / Notice: Qualifications: Physics I+II
401-0604-00L Probability Theory and Statistics O 4 credits 2V + V. Tassion
Abstract: Probability models and applications, introduction to statistical estimation and statistical tests.
Learning objective: Ability to understand the covered methods and models from probability theory and to apply them in other contexts. Ability to perform basic statistical tests and to interpret the results.
Content: The concept of probability space and some classical models: the axioms of Kolmogorov, easy consequences, discrete models, densities, product spaces, relations between various models, distribution functions, transformations of probability distributions. Conditional probabilities, definition and examples, calculation of absolute probabilities from conditional probabilities, Bayes' formula, conditional distribution. Expectation of a random variable, application to coding, variance, covariance and correlation, linear estimator, law of large numbers, central limit theorem. Introduction to statistics: estimation of parameters and tests
Lecture notes: yes
Literature: Textbook: P. Brémaud: 'An Introduction to Probabilistic Modeling', Springer, 1988. | {"url":"https://www.vorlesungen.ethz.ch/Vorlesungsverzeichnis/sucheLehrangebot.view?seite=1&semkez=2020S&ansicht=2&lang=en&abschnittId=85818","timestamp":"2024-11-14T12:38:08Z","content_type":"text/html","content_length":"22623","record_id":"<urn:uuid:ff5b9c4c-1790-4150-8818-191b6e72e34c>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00186.warc.gz"}
If f(x)={sin2πx,[x],x<1x≥1, where [x] denotes the greatest i... | Filo
Question asked by Filo student
If $f(x)=\begin{cases}\sin 2\pi x, & x<1\\ [x], & x\ge 1\end{cases}$, where $[x]$ denotes the greatest integer function, then a. $f(x)$ is continuous at $x = 1$ b. $f(x)$ is discontinuous at $x = 1$ c. d.
Question Text If $f(x)=\begin{cases}\sin 2\pi x, & x<1\\ [x], & x\ge 1\end{cases}$, where $[x]$ denotes the greatest integer function, then a. $f(x)$ is continuous at $x = 1$ b. $f(x)$ is discontinuous at $x = 1$ c. d.
Updated On Apr 5, 2024
Topic Differentiation
Subject Mathematics
Class Class 12 | {"url":"https://askfilo.com/user-question-answers-mathematics/if-where-x-denotes-the-greatest-integer-function-then-a-f-x-38393338343930","timestamp":"2024-11-10T14:56:19Z","content_type":"text/html","content_length":"292445","record_id":"<urn:uuid:ff87e45c-a563-44ed-a83d-a04e4a956076>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00353.warc.gz"} |
Problem J
A popular connection game that is well-known around the world is Connect Four. In this two-person game, each player is assigned a colored disc (red or blue) and takes turns dropping one of their
colored discs from the top into a seven-column, six-row vertically upright game board. The disc will go into the lowest free row of that column. The objective of the game is to be the first player to
form four discs in a horizontal, vertical, or diagonal line.
In this problem, you will not implement the Connect Four game in its entirety. However, you will be writing a program that checks whether there is currently a win on the board. Additionally, there
are two modifications to the game:
• The game is no longer Connect Four but rather Connect-$N$, where $N$ is the number of discs a player needs to form a winning horizontal, vertical, or diagonal line.
• The board has dimensions $(X,Y)$, where $X$ is the number of rows and $Y$ is the number of columns. Thus, the game board is no longer a static $(6,7)$ grid.
You will be given the specification of a Connect-$N$ board and the location of all the colored discs on the board. Based on this information, you will need to determine whether the red player, blue
player, or no player has a Connect-$N$ on the board.
The input contains the specification of a single Connect-$N$ board. The specification starts with a line with three non-zero positive integers, $X$, $Y$, and $N$ that specify the dimensions of the
board and the connection amount ($X$, $Y$, and $N$ are described in the problem description above). This is followed by $X$ lines, each corresponding to a line of the board. Each of these lines has
$Y$ characters, each separated from the next by a single space, corresponding to each column of that row. A R denotes that there is a red disc in that position, a B denotes that there is a blue disc
in that position, while a O denotes that there is no disc in that position.
You can assume the following:
• Possible values for the dimensions of the board range from $2$ to $100$ (i.e., $2 \leqslant X \leqslant 100$ and $2 \leqslant Y \leqslant 100$).
• $N$ will always be less than or equal to the board dimensions (i.e., $N\leqslant X$, and $ N\leqslant Y$).
• For a given board, if there is a winner on the board then it is either red or blue and not both.
The output contains a single line denoting if there is a winner on the board. If the red player has a Connect-$N$ then print RED WINS. If the blue player has a Connect-$N$ then print BLUE WINS. If
the board has no winner (i.e., no player has a Connect-$N$) then print NONE.
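A brute-force scan is fast enough for the stated limits (at most a $100 \times 100$ board): for every cell and each of the four line directions, check whether $N$ equal, non-empty cells follow. The Python sketch below is one possible solution, not an official reference implementation; the samples follow it.

import sys

def winner(grid, n):
    # Return 'R' or 'B' if that player has n in a line (any direction), else None.
    rows, cols = len(grid), len(grid[0])
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]  # right, down, two diagonals
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 'O':
                continue
            for dr, dc in directions:
                end_r, end_c = r + (n - 1) * dr, c + (n - 1) * dc
                if not (0 <= end_r < rows and 0 <= end_c < cols):
                    continue
                if all(grid[r + k * dr][c + k * dc] == grid[r][c] for k in range(n)):
                    return grid[r][c]
    return None

def main():
    data = sys.stdin.read().split()
    x, y, n = int(data[0]), int(data[1]), int(data[2])
    cells = data[3:]
    grid = [cells[i * y:(i + 1) * y] for i in range(x)]
    w = winner(grid, n)
    print({'R': 'RED WINS', 'B': 'BLUE WINS'}.get(w, 'NONE'))

if __name__ == '__main__':
    main()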
Sample Input 1:
3 6 3
B O O O O O
B O O O O O
B R R O O O
Sample Output 1:
BLUE WINS

Sample Input 2:
4 4 4
O O O O
O O O O
B B B O
R R R R
Sample Output 2:
RED WINS

Sample Input 3:
2 3 2
O O O
R B O
Sample Output 3:
NONE | {"url":"https://nus.kattis.com/courses/CS2040DE/CS2040DE_S1AY2425/assignments/qcfvo4/problems/connectn","timestamp":"2024-11-04T11:16:44Z","content_type":"text/html","content_length":"32693","record_id":"<urn:uuid:32d94169-0433-478e-893f-2039477e8a80>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/WARC/CC-MAIN-20241104100555-20241104130555-00683.warc.gz"}
series parallel reliability calculator
This page collects fragments on two related topics: the reliability of series/parallel systems and the total resistance of series/parallel resistor networks.

On reliability: an article shows the derivations of the system failure rates for series and parallel configurations of constant-failure-rate components in Lambda Predict, including the series system failure rate equations and the calculation of series system reliability and the reliability of each individual component. System availability is calculated by modeling the system as an interconnection of parts in series and parallel. The following rule is used to decide whether components should be placed in series or parallel: if failure of a part leads to the combination becoming inoperable, the two parts are considered to be operating in series. In one example, units 1 and 2 are connected in series and unit 3 is connected in parallel with the first two, and a worked example calculates the reliability for such a combination of series and parallel in a system with three components. In a forum thread (started by jag53 on Apr 13, 2006), the author attempted to calculate the failure rate (FR) of a series/parallel (active redundant, without repair) reliability network using the Reliability Toolkit: Commercial Practices Edition, published by the System Reliability Center, as a guide; the Toolkit's approach to FR calculation for a single branch seemed to be very thorough.

On resistance: in a series circuit connection, the electrical elements or components are connected in sequential form, and the total resistance is found by adding up the individual resistances: Rtotal = R1 + R2 + R3, and so on. A parallel and series resistor calculator can give results for series, parallel, and any combination of the two, with a schematic drawn automatically as resistors are added to the network as a visual aid. A typical reader question: how do I calculate the total resistance of a circuit given r1 = 220 ohms, r2 = 130 ohms, r4 = 100 ohms and r5 = 270 ohms in series, with r3 = 470 ohms in parallel?
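To tie the two threads together, here is a small Python sketch of the standard formulas: constant-failure-rate components in series have failure rates that simply add, series reliability is the product of component reliabilities, active-redundant parallel reliability is one minus the product of unreliabilities, and the resistor rules are the familiar sum and reciprocal-sum. The final lines work the reader's resistor question under one possible reading (r3 in parallel with the whole series string); the original question does not specify the topology.

import math

def failure_rate_series(rates):
    # Constant-failure-rate components in series: the rates add.
    return sum(rates)

def reliability_series(rel):
    # Series system: every component must work.
    return math.prod(rel)

def reliability_parallel(rel):
    # Active-redundant parallel system: at least one component must work.
    return 1.0 - math.prod(1.0 - r for r in rel)

def resistance_series(res):
    return sum(res)

def resistance_parallel(res):
    return 1.0 / sum(1.0 / r for r in res)

# One possible reading of the question: r1, r2, r4, r5 in series,
# with r3 = 470 ohms in parallel across the whole series string.
series_string = resistance_series([220.0, 130.0, 100.0, 270.0])  # 720 ohms
print(resistance_parallel([series_string, 470.0]))               # ~284.4 ohms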
| {"url":"http://aagk.hu/employment-contract-dvaixhy/65416e-series-parallel-reliability-calculator","timestamp":"2024-11-10T15:11:53Z","content_type":"text/html","content_length":"22357","record_id":"<urn:uuid:5f42218e-2efe-451b-8182-85c92e5bfb74>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00108.warc.gz"}
Convert Pura to Hectare (pura to ha)
Pura to Hectare Converter
Pura To Hectare Conversion
Unit Conversion Value
1 Pura 1.01 Hectare
2 Pura 2.02 Hectare
5 Pura 5.06 Hectare
10 Pura 10.12 Hectare
20 Pura 20.23 Hectare
50 Pura 50.59 Hectare
100 Pura 101.17 Hectare
200 Pura 202.34 Hectare
500 Pura 505.86 Hectare
1000 Pura 1,011.71 Hectare
1. What is the size of one Pura in square meters?
On this page, one Pura is taken to equal approximately 10,117 square meters (about 1.01 Hectares), though local definitions vary.
2. How many Hectares is 100 Pura?
100 Pura is approximately equal to 101.17 Hectares, per the conversion table above.
3. Are there variations in Pura sizes?
Yes, Pura can vary in size based on local definitions, so it's important to confirm the measurement in your specific area.
4. Can I convert Hectares back to Pura?
Yes, you can use the inverse conversion factor: Pura = Hectares ÷ 1.0117.
5. Is Hectare an international unit?
Yes, the Hectare is an internationally recognized unit of measurement for area.
6. Why is accurate land measurement important?
Accurate land measurement is crucial for agricultural productivity, real estate valuation, and environmental planning.
7. Do I need to use a calculator for conversion?
While you can do the math manually, using a calculator can help avoid errors, especially for larger numbers.
8. What industries commonly use Hectares?
Agriculture, real estate, land management, and environmental science commonly use Hectares.
9. What is the significance of land measurement in agricultural practices?
Land measurement is critical for determining capacity for crops, managing resources, and complying with regulatory standards.
10. Are there any tools available for land measurement?
Yes, there are various tools like GPS devices, land surveying equipment, and mobile apps designed for land measurement and conversion.
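For readers who prefer to script the conversion, here is a minimal Python sketch using the factor implied by the table above (1 Pura ≈ 1.0117 Hectares, about 2.5 acres). Since Pura is a regional unit that varies locally, treat this factor as an assumption to confirm for your area.

PURA_TO_HECTARE = 1.0117   # factor implied by the table above; varies by region

def pura_to_hectare(pura):
    return pura * PURA_TO_HECTARE

def hectare_to_pura(hectares):
    return hectares / PURA_TO_HECTARE

print(pura_to_hectare(100))   # -> 101.17, matching the table row for 100 Pura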
About Pura
Exploring Pura: A Comprehensive Guide to the Spiritual and Cultural Significance of a Balinese Temple
Pura, a term that resonates deeply within Balinese culture, refers to the island's traditional temple structures integral to the spiritual and communal life of its people. In Bali, temples are not
merely architectural wonders but are manifestations of a unique blend of spirituality, tradition, and community engagement. This article explores the various facets of Pura, shedding light on its
significance, architecture, rituals, and cultural impact.
The Essence of Pura
The word "Pura" is derived from the Sanskrit term for "temple," signifying a sacred place dedicated to worship. In Bali, there are thousands of Puras, each with its own unique purpose, design, and
community function. They serve as centers of spiritual life where the Balinese practice their Hindu faith, honoring deities and ancestral spirits. The temples are venues for ceremonies, festivals,
and communal gatherings, making them vital to the social fabric of Balinese life.
Architectural Features of Pura
Design and Structure
Balinese temples are characterized by their distinctive architectural style, reflecting the island's rich artistic heritage. A typical Pura is enclosed within a compound, featuring multiple shrines
(Meru), gates (Candi Bentar), and pavilions (Bale). Here are some key architectural elements:
• Meru: The most iconic structure in a Pura, Merus are multi-tiered shrines dedicated to specific deities. The number of tiers signifies the importance of the deity being honored.
• Candi Bentar: This split gate is a prominent feature in Balinese architecture, symbolizing the threshold between the physical and spiritual realms. It welcomes worshippers into the sacred space.
• Bale: These open pavilions serve as resting or ceremonial spaces. They provide a place for gatherings, discussions, and rituals.
• Sari: The surrounding gardens and water features around the temple enhance its beauty and serenity, often symbolizing the balance of nature.
Every element in a Pura has profound symbolic meaning, emphasizing the interconnectedness of nature, spirituality, and Balinese philosophy. The orientation of temples often aligns with cardinal
directions, and the placement of structures reflects cosmic principles. For instance, Puras facing east symbolize new beginnings and the rising sun, while those facing south are associated with the
earthly realm and fertility.
Rituals and Ceremonies
Puras play a central role in various religious observances, many of which are steeped in ancient customs and traditions. Key ceremonies include:
Odalan is a temple anniversary celebration that marks a Pura’s inception. This event occurs every 210 days, according to the Balinese calendar, and lasts for several days. It involves offerings,
prayers, traditional music, and dance performances, attracting locals and tourists alike.
The Melasti ceremony is conducted before major religious festivities such as Nyepi (the Day of Silence). It entails a purification ritual at a water source, often at the beach or a river, where
offerings are made to cleanse the community's spirit before the main celebrations.
Ngaben is a sacred cremation ceremony held in honor of deceased loved ones. It provides a spiritual journey for the departed, enabling them to reach heaven. The ceremony is an elaborate affair
involving processions, prayers, and the use of intricately designed cremation towers.
Other Ceremonies
Many individual family ceremonies also occur at Puras, focusing on rites of passage such as birth, marriage, and other significant life events. These occasions strengthen communal bonds and reinforce
cultural heritage.
The Role of Pura in Balinese Society
Puras are more than just places of worship; they serve as crucial community hubs. The temple grounds are spaces for social interaction, learning, and cultural exchange. Their significance can be
summarized as follows:
Community Cohesion
In Balinese society, the temple is at the heart of village life. It fosters unity among residents, encouraging collective participation in ceremonies and upkeep of the temple grounds. This sense of
belonging reinforces social connections, cultural practices, and shared values.
Preservation of Culture
Through various rituals and festivities, Puras act as custodians of Balinese culture. The teachings, arts, and crafts showcased during temple events play a critical role in passing down local
knowledge and heritage to future generations.
Spiritual Well-being
For the Balinese people, temples are sanctuaries where they connect with the divine. The rituals performed at Puras provide spiritual nourishment and guidance, helping individuals navigate life's
challenges through prayer, reflection, and community support.
Challenges Faced by Puras
Despite their importance, Puras face various challenges, particularly in the modern era. These challenges include:
Tourism Impact
While tourism brings economic benefits, it often leads to the commercialization of spiritual practices and encroachment on sacred spaces. Some Puras have become tourist attractions, causing tension
between traditional devotees and visitors seeking authentic experiences.
Environmental Concerns
The increasing population and development pressure in Bali have raised concerns about the environmental impact on temple grounds and surrounding ecosystems. Maintaining the balance between preserving
cultural heritage and ecological sustainability is becoming increasingly critical.
Cultural Erosion
As globalization influences the younger generation, there's a risk of cultural dilution. Traditional practices may fade as younger Balinese, attracted by modernity, drift away from the customs
embodied by Puras.
Pura serves as a powerful symbol of the Balinese identity, embodying the island's rich spiritual, cultural, and communal life. These temples not only facilitate worship but also foster community
cohesion, preserve heritage, and connect people to their ancestral roots. As Bali navigates the complexities of modernity, the challenge lies in balancing preservation with progress, ensuring that
Puras continue to thrive as vital components of Balinese life for generations to come. Through this balance, the spirit of Bali—woven through its Puras—will endure, offering solace, strength, and a
sense of belonging to all who engage with it.
About Hectare
Understanding Hectares: A Comprehensive Guide
Introduction to Hectares
The hectare (symbol: ha) is a metric unit of area commonly used in land measurement, particularly in agriculture, forestry, and urban planning. It is defined as 10,000 square meters, which is equal
to 2.471 acres. The term "hectare" is derived from the metric prefix "hecto," meaning one hundred, combined with "are," a unit that historically represented a piece of land measuring 100 square
meters. This combination results in an area of one hundred ares, or 10,000 square meters.
Conversion and Comparison
Understanding hectares requires familiarity with how they convert to other units of measure. Below are important conversions:
• 1 hectare = 10,000 square meters
• 1 hectare ≈ 2.471 acres
• 1 hectare ≈ 0.01 square kilometers
• 1 hectare ≈ 107,639 square feet
For context, a standard football field, including the end zones, covers about 0.49 hectares, illustrating how large a hectare really is when visualized in terms of familiar structures.
Historical Background
The hectare was officially introduced in France in 1795 during the French Revolution when a new system of measurement—the metric system—was established. The use of hectares has since become
widespread across many countries, especially those using the metric system. Despite its global acceptance, some regions, notably the United States, tend to favor acres and square miles for land
Usage in Agriculture and Land Measurement
Hectares are predominantly utilized in agricultural contexts to measure land area for crop production. For instance, farmers refer to their land holdings in hectares to determine production outputs
and manage crops more efficiently. Knowing the area in hectares allows them to estimate yields, apply fertilizers, and prepare for planting according to international agricultural standards.
In addition to farming, hectares are also crucial in forestry management, where plantation areas, forest reserves, and sustainable logging practices are assessed in hectares. Environmental
studies may also utilize hectares to denote ecological preservation areas, such as parks, wildlife reserves, and conservation lands.
Importance of Accurate Land Measurement
Accurate measurement of land areas in hectares plays a significant role in various sectors:
1. Landownership and Real Estate: Understanding the size of a property in hectares helps buyers and sellers communicate effectively about land value and functionality.
2. Urban Planning: City planners use hectares to allocate spaces for parks, residential areas, industrial zones, and commercial developments, ensuring efficient land use within a given area.
3. Environmental Management: Conservation efforts rely on accurate measurements in hectares to assess the size of protected areas and monitor changes in land use over time due to urban expansion or
climate change.
The Role of Hectares in Sustainability
Sustainability efforts increasingly prioritize land management practices that utilize hectares as standard measurements. In agriculture, sustainable practices such as crop rotation, agroforestry, and
organic farming are often evaluated based on productivity per hectare. Studies indicate that sustainable practices can lead to higher yields without compromising land quality, especially concerning
soil health and biodiversity.
Moreover, climate action initiatives often include afforestation and reforestation projects measured in hectares to track progress toward carbon sequestration goals. The UN's Sustainable Development
Goals highlight the importance of conserving and restoring terrestrial ecosystems, where the area designated in hectares serves as a critical benchmark for success.
Visualizing One Hectare
To better visualize what a hectare looks like, consider these representations:
• Square Shape: A hectare is equivalent to a square with sides measuring 100 meters each. This configuration can help individuals understand boundaries and planning in real estate and land development.
• A Sports Field: As mentioned, a standard international football (soccer) field measures around 0.49 hectares. Thus, two football fields placed together would nearly cover one hectare, making it
relatable for sports fans or recreational planners.
• Community Gardens: Many urban gardening projects operate within half-hectare plots, allowing communities to cultivate vegetables and herbs collectively, promoting local food systems.
Global Variations and Standards
While the hectare is widely used globally, there are notable variations in land measurement practices influenced by local customs and regulations. For example, countries with vast agricultural
landscapes, like Brazil or India, have embraced the hectare as a standard measure for land assessment, while nations like the U.S. typically utilize acres. This difference can lead to confusion in
international discussions about land use and agricultural productivity.
Furthermore, environmental policies may dictate the measurement standards required in reporting land use changes, with hectares often preferred for their consistency and alignment with global metrics.
International organizations, such as the Food and Agriculture Organization (FAO) and the World Bank, frequently report agricultural statistics in hectares to maintain uniformity in data analysis.
Challenges and Future of Land Measurement
As urbanization and climate change challenge traditional land use practices, the hectare remains a viable unit of measurement. However, challenges include:
• Data Accuracy: The need for precise mapping and land measurement using satellite imagery and GIS technology is essential to ensure policy decisions are based on accurate hectare measurements.
• Decentralization: In areas where land is communally owned or informally operated, measuring hectares may become complex, necessitating culturally sensitive approaches to land use planning.
• Climate Resilience: As the environment changes, the concept of land measurement may also evolve to incorporate more diverse forms of land use beyond mere area, considering ecological impacts and
social dimensions.
The hectare is a vital unit of measurement that plays a crucial role in various fields, especially agriculture, forestry, and urban planning. Its historical significance and practical applications
make it essential for managing land resources sustainably. By understanding the hectare, we can better appreciate the significance of land management practices in addressing current and future
environmental challenges, ultimately contributing to a more sustainable and equitable world. Awareness of this metric enhances our ability to engage in informed discussions about land use policies
and agricultural practices, reaffirming its place as a cornerstone of modern land measurement.
| {"url":"https://www.internettoolwizard.com/convert-units/area/pura/ha","timestamp":"2024-11-11T07:21:09Z","content_type":"text/html","content_length":"402210","record_id":"<urn:uuid:5b6783e0-9cec-438f-8e33-45df9022dcde>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00049.warc.gz"}
Inverse mode problems for real and symmetric quadratic models
Many natural phenomena can be modeled by a second-order dynamical system Mÿ + Cẏ + Ky = f(t), where y(t) stands for an appropriate state variable and M, C, K are time-invariant, real and symmetric
matrices. In contrast to the classical inverse vibration problem where a model is to be determined from natural frequencies corresponding to various boundary conditions, the inverse mode problem
concerns the reconstruction of the coefficient matrices (M,C,K) from a prescribed or observed subset of natural modes. This paper sets forth a mathematical framework for the inverse mode problem and
resolves some open questions raised in the literature. In particular, it shows that given merely the desired structure of the spectrum, namely the cardinalities of the real and complex eigenvalues
but not the eigenvalues themselves, the set of eigenvectors can be completed by solving an under-determined nonlinear system of equations. This completion suffices to construct symmetric coefficient
matrices (M,C,K) while the underlying system can have arbitrary eigenvalues. Generic conditions under which the real symmetric quadratic inverse mode problem is solvable are discussed. Applications
to important tasks such as updating models without spill-over and constructing models with positive semi-definite coefficient matrices are also presented.
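For orientation, the associated quadratic eigenvalue problem can be written as follows (a standard formulation, not quoted from the paper):

\[
Q(\lambda)\,x = \left(\lambda^{2} M + \lambda C + K\right) x = 0,
\]

where an eigenpair $(\lambda, x)$ corresponds to a natural frequency and mode of the free system $M\ddot{y} + C\dot{y} + Ky = 0$; the inverse mode problem seeks real symmetric $(M, C, K)$ that reproduce a prescribed subset of the eigenvectors $x$.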
All Science Journal Classification (ASJC) codes
• Theoretical Computer Science
• Signal Processing
• Mathematical Physics
• Computer Science Applications
• Applied Mathematics
| {"url":"https://researchoutput.ncku.edu.tw/en/publications/inverse-mode-problems-for-real-and-symmetric-quadratic-models","timestamp":"2024-11-12T05:23:10Z","content_type":"text/html","content_length":"56823","record_id":"<urn:uuid:ff26f3a1-1b20-48bb-8e75-c8c314590bb4>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00549.warc.gz"}
Capital Budgeting - Financial Edge
What is “Capital Budgeting”?
Capital budgeting is a process undertaken by a business to evaluate potential major projects or investments. It involves determining which proposed fixed asset investments it should accept or reject.
The process paints a comprehensive quantitative picture of each proposed project or investment, thereby providing a rational basis for making a judgment. Big, expensive projects such as the
construction of a new plant will usually have to go through the capital budgeting process in order to determine whether or not the company should proceed with the project. This process is sometimes
called an investment appraisal.
Key Learning Points
• Capital budgeting is the process by which companies determine the value of a potential investment or project
• The most common methods of capital budgeting used by businesses are payback period, internal rate of return, and net present value
• The payback period highlights the time it takes for the cash flows from a project to equal the initial investment, with a shorter period being preferable
• The internal rate of return is the discount rate that returns a net present value of 0 and is often compared to the cost of capital to inform decision making
• The net present value reveals the potential profitability of a project by comparing the sum of the present value of its future cash flows to its initial outlay
Understanding Capital Budgeting
Capital budgeting refers to the decision-making process businesses undertake to decide which capital-intensive projects should be approved or rejected: business ventures, investments, or projects
which enhance shareholder value present attractive opportunities for businesses. However, for a company to pursue such opportunities or ventures, it has to evaluate the viability of each potential
project because the capital available to any business is limited.
Capital budgeting creates a system of accountability and measurability for approaching projects and investments involving large cash outlays. Businesses can estimate the potential risks and returns
involved in a project before deciding to commence the project.
The goal of a business is to create value for its shareholders, which is achieved by assuring its sustained profitability. Responsible companies employ capital budgeting to guide decision-making on
investments and projects that have long-term economic and financial implications. By deciding to take on a particular project, a business is not only making a financial commitment but is also
investing in its long-term direction, which will undoubtedly affect its future.
As a result, companies employ different project evaluation methods such as internal rate of return, payback period, net present value, and discounted cash flow analysis to determine which projects
will yield the best return.
Capital Budgeting Methods
Payback Period
The payback period analysis determines the length of time required to generate sufficient cash flow from a project to pay for the initial investment in it. It calculates how long it takes to recoup
the original investment. It is the simplest form of capital budgeting analysis and the least accurate.
Calculating the payback period involves calculating the average annual cash inflows resulting from a project or investment and dividing the initial investment by that average. The resulting number
shows the length of time to recoup the initial investment. For example, if a 5-year project requires a $1 000 000 initial investment and has annual cash inflows of $300 000, the payback period will
be three years and four months. When comparing projects, the one with the shorter payback period is preferred to that with a longer payback period.
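A minimal sketch of that calculation in C++ (variable names are our own; it assumes roughly even annual inflows, as in the example above):

#include <iostream>
#include <vector>

// Payback period = initial investment / average annual cash inflow.
double paybackYears(double initialInvestment, const std::vector<double>& annualInflows) {
    double total = 0.0;
    for (double cf : annualInflows) total += cf;
    double average = total / annualInflows.size();
    return initialInvestment / average;
}

int main() {
    std::vector<double> inflows(5, 300000.0); // $300 000 per year for 5 years
    std::cout << paybackYears(1000000.0, inflows) << " years\n"; // ~3.33, i.e. 3 years and 4 months
    return 0;
}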
It is easy to calculate once the cash flow forecasts have been determined and is usually used when companies have a limited amount of liquidity for investing and need to figure out how quickly they
can recover the original investment and undertake subsequent projects.
One of the disadvantages of the payback period is that it ignores the time value of money, which is a vital principle of finance. This issue can be overcome by discounting the cash flows to arrive at
a discounted payback period. In addition, the payback period ignores any cash flows occurring towards the end of a project. Assume two projects, A and B, with payback periods of 3 and 4 years
respectively. The payback period analysis favors project A. However, if there is a substantially large inflow at the end of project B, making it more valuable than A, then decision-makers will be
wrongly informed using the payback period method.
Internal Rate of Return
The internal rate of return (IRR) or expected return on a project refers to the discount rate that would return a net present value of zero. The IRR is the rate that equates the initial investment to
the present value of the future cash flows from an investment. In other words, it is the expected compound annual rate of return that will be earned on an investment or project. In order to decide on
whether to approve or reject a project based on this metric, the IRR is compared to the actual rate used by a business to discount after-tax cash flows or its hurdle rate.
If the internal rate of return from a project is higher than the weighted average cost of capital (WACC), then the project is profitable and should be accepted. On the other hand, if the IRR is less
than the WACC, it should be rejected because it is not profitable.
The internal rate of return provides a yardstick for assessing all potential projects a company can undertake with respect to its capital structure. It allows companies to compare projects based on
returns on invested capital. This is because the hurdle rate which the IRR is compared to for decision making is the weighted average cost of capital or the cost of capital of a company. The WACC
represents the proportions of the costs of a company’s various sources of capital.
However, the internal rate of return does not allow for an appropriate comparison of mutually exclusive projects. It only provides a benchmark figure for what projects are worthwhile for a business
based on its cost of capital but does not capture a true sense of the value that a project will add to a firm. This implies that presented with a choice between two projects, the IRR can show that
both are beneficial to the company but does not show which is best among both options.
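As an illustration of how the IRR can be found numerically, here is a C++ sketch using bisection (the cash-flow figures are placeholders of our own; a production routine would also handle multiple sign changes and non-convergence):

#include <iostream>
#include <vector>
#include <cmath>

// NPV of a cash-flow stream at a given rate; cashFlows[0] is the (negative) initial outlay.
double npv(double rate, const std::vector<double>& cashFlows) {
    double value = 0.0;
    for (size_t t = 0; t < cashFlows.size(); ++t)
        value += cashFlows[t] / std::pow(1.0 + rate, static_cast<double>(t));
    return value;
}

// Bisection: assumes NPV is positive at lo and negative at hi,
// then narrows the bracket until it converges on the root (the IRR).
double irr(const std::vector<double>& cashFlows, double lo = 0.0, double hi = 1.0) {
    for (int i = 0; i < 100; ++i) {
        double mid = 0.5 * (lo + hi);
        if (npv(mid, cashFlows) > 0.0) lo = mid; else hi = mid;
    }
    return 0.5 * (lo + hi);
}

int main() {
    std::vector<double> cf = {-1000000.0, 300000.0, 300000.0, 300000.0, 300000.0, 300000.0};
    std::cout << irr(cf) * 100.0 << "%\n"; // roughly 15% for these example flows
    return 0;
}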
Net Present Value
The net present value (NPV) represents the present value of all the future cash flows, both positive and negative, over the life of an investment. The NPV approach is the most intuitive and accurate
valuation method of capital budgeting. It takes all the cash flows occurring during the life of a project except the initial cash outlay and discounts them back to the current date. Subtracting the
initial cash outlay from this discounted sum gives the NPV. The reason for discounting these cash flows is to highlight the time value of money, which implies an amount of money today is worth more than the same amount in the future.
The discount rate used here is the weighted average cost of capital.
The acceptance criteria for any investment when using the net present value approach states that all projects with a positive NPV should be approved while those with negative NPVs should be rejected.
When comparing projects of which only one can be selected due to limited liquidity, the project with the most positive NPV should be chosen.
Assume the weighted average cost of capital for ABC Incorporated is 10%. ABC Incorporated must compare two potential projects, A and B, both requiring a $1 000 000 initial investment, and having a
lifespan of 5 years but varying cash flows.
In this example, the net present values from projects A and B are $5 363 060 and $7 663 325 respectively. The NPVs show that both projects will positively affect the company’s value. However, if ABC Incorporated
had only $1 000 000 available for investment at the moment, project B is superior as it returns a greater NPV.
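A minimal C++ sketch of the NPV formula itself (the cash flows below are illustrative placeholders of our own, not the figures from ABC Incorporated's projects, whose full table is not reproduced here):

#include <iostream>
#include <vector>
#include <cmath>

// NPV = sum over t of CF_t / (1 + r)^t, with CF_0 the initial outlay (negative).
int main() {
    double wacc = 0.10; // 10% discount rate, as in the example
    std::vector<double> cashFlows = {-1000000.0, 400000.0, 450000.0, 500000.0, 550000.0, 600000.0};
    double npv = 0.0;
    for (size_t t = 0; t < cashFlows.size(); ++t)
        npv += cashFlows[t] / std::pow(1.0 + wacc, static_cast<double>(t));
    std::cout << "NPV = " << npv << "\n"; // positive NPV => accept the project
    return 0;
}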
The net present value provides a measure of added profitability to be realized from a given project or investment. It also makes room for the comparison of mutually exclusive projects simultaneously.
A sensitivity analysis can also be used in the NPV calculations to evaluate different scenarios with differing discount rates.
The criticism faced by using the net present value method is that it does not factor in the overall magnitude of a project. A quick fix to this problem is calculating the profitability index (PI).
The PI is calculated by dividing the present value of future cash flows by the initial investment. A PI greater than 1 shows that the NPV is positive, while a PI of less than 1 indicates a negative NPV.
Capital budgeting is vital when making investment decisions involving significant capital outlays, which may lead to bankruptcy if the investment fails. It is a mandatory activity for big capex
decisions which may have longer-term economic and financial implications on the future of a business. The best practice is to use all the methods of capital budgeting together to assess potential
investments with respect to the overall business strategy of the company. | {"url":"https://www.fe.training/free-resources/project-finance/capital-budgeting/","timestamp":"2024-11-04T21:57:59Z","content_type":"text/html","content_length":"239753","record_id":"<urn:uuid:a7421beb-3f3b-41b4-9d26-4dad741f9883>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00778.warc.gz"} |
Lesson Narrative
In this lesson, students explore properties of angle bisectors. To build intuition, students first observe that pouring salt on a triangle forms ridges that meet at a peak, and the ridges appear to
be angle bisectors. Students go on to prove that a point is on an angle bisector if and only if it is equidistant from the rays that form the angle. Then, they show that all 3 angle bisectors of a
triangle meet at a single point, the incenter of the triangle. This will lead to constructing a triangle’s inscribed circle in a subsequent lesson.
Students create viable arguments (MP3) when they use what they know about triangle congruence to prove facts about angle bisectors.
Learning Goals
Teacher Facing
• Prove (using words and other representations) that the angle bisectors of a triangle are concurrent.
Student Facing
• Let’s see what we can learn about a triangle by watching how salt piles up on it.
Required Preparation
If desired, prepare a plate, bottle, container of salt, and a triangle made out of cardboard for the salt demonstration in the warm-up. Alternatively, prepare a method to show the embedded video for
this activity.
The activity Point and Angle includes a digital and print version of the launch. For the digital version, be prepared to display an applet for all to see.
Student Facing
• I can explain why the angle bisectors of a triangle meet at a single point.
• I know any point on an angle bisector is equidistant from the rays that form the angle.
CCSS Standards
Building On
Building Towards
Glossary Entries
• incenter
The incenter of a triangle is the intersection of all three of the triangle’s angle bisectors. It is the center of the triangle’s inscribed circle.
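For reference, the two learning targets can be stated compactly as follows (our restatement, not the curriculum's wording):

\[
P \text{ lies on the bisector of } \angle ABC \iff \operatorname{dist}(P, \overrightarrow{BA}) = \operatorname{dist}(P, \overrightarrow{BC}).
\]

Applying this to all three angles of a triangle shows that the three bisectors pass through a single point, the incenter, which is therefore equidistant from all three sides.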
Additional Resources
Google Slides: For access, consult one of our IM Certified Partners.
PowerPoint Slides: For access, consult one of our IM Certified Partners.
Is XAU/USD quoted to two digits? Where is the 1 pip in XAU/USD?
For example, at a rate of 1268.65, is 1 pip 0.01?
2 digits.
This means 2000 points on gold is the equivalent of 200 on the Dow Jones, 200 on oil, 200 on the FTSE, etc.
Therefore your positions on gold must be 10 times smaller than on oil or the Dow, and you will have the same gain/loss on the trades.
In other words: if you usually trade $10/pip on oil or EUR/USD, then on gold you should trade $1/pip to have the same gain/loss. If you trade $10/pip on gold, it's like trading $100/pip on oil, the Dow,
or EUR/USD.
If we consider gold, its standard lot is 100 ounces and it is quoted to 2 decimal places, which means 1 pip = $1. Similarly, consider crude oil, where a standard lot is 1000 barrels and the quote also
has 2 decimal places, but in this case 1 pip = $10. That is why 2000 points on gold is equal to 200 on oil.
That’s correct. 1836.01 to 1836.10 means 9 pips movement.
If you buy one micro lot (0.01) from 1823.00 to 1824.00 you earned 1$
If you buy one mini lot (0.1) from 1823.00 to 1824.00 you earned 10$
If you buy one lot (1.0) from 1823.00 to 1824.00 you earned 100$
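For readers who want to check the arithmetic, here is a small C++ sketch (the 100-ounce contract size is the convention assumed in this thread; brokers can differ):

#include <iostream>

// P/L on gold (XAU/USD): one standard lot is 100 oz,
// so a $1.00 price move is worth $100 on 1.0 lots, $10 on 0.1, $1 on 0.01.
int main() {
    double lots = 0.01;          // micro lot
    double ouncesPerLot = 100.0; // assumed standard gold contract
    double entry = 1823.00, exitPrice = 1824.00;
    double pnl = (exitPrice - entry) * ouncesPerLot * lots;
    std::cout << "P/L = $" << pnl << "\n"; // prints $1 for the micro lot
    return 0;
}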
Yes, it is two digits. Here, you will get 2000 points on Gold that are equivalent to 200 on oil. This also means that your positions on Gold are 10 times less than on oil. | {"url":"https://forums.babypips.com/t/is-it-xau-usd-a-two-digits-where-the-1-pip-in-xau-usd/81546","timestamp":"2024-11-09T10:51:32Z","content_type":"text/html","content_length":"21434","record_id":"<urn:uuid:d1e1da25-74c8-49cc-8c86-48a4714746d4>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00234.warc.gz"} |
We complete the realisation by braided subfactors, announced by Ocneanu, of all SU(3)-modular invariant partition functions previously classified by Gannon. Comment: 47 pages, minor changes, to appear
in Reviews in Mathematical Physics
Deterministic thermostats are frequently employed in non-equilibrium molecular dynamics simulations in order to remove the heat produced irreversibly over the course of such simulations. The simplest
thermostat is the Gaussian thermostat, which satisfies Gauss's principle of least constraint and fixes the peculiar kinetic energy. There are of course infinitely many ways to thermostat systems,
e.g. by fixing $\sum\limits_i{|{p_i}|^{\mu + 1}}$. In the present paper we provide, for the first time, convincing arguments as to why the conventional Gaussian isokinetic thermostat ($\mu=1$) is
unique in this class. We show that this thermostat minimizes the phase space compression and is the only thermostat for which the conjugate pairing rule (CPR) holds. Moreover it is shown that for
finite sized systems in the absence of an applied dissipative field, all other thermostats ($\mu \neq 1$) perform work on the system in the same manner as a dissipative field while simultaneously removing
the dissipative heat so generated. All other thermostats ($\mu \neq 1$) are thus auto-dissipative. Among all $\mu$-thermostats, only the $\mu=1$ Gaussian thermostat permits an equilibrium state. Comment:
27 pages including 10 figures; submitted for publication in the Journal of Chemical Physics
In these lectures we explain the intimate relationship between modular invariants in conformal field theory and braided subfactors in operator algebras. A subfactor with a braiding determines a
matrix $Z$ which is obtained as a coupling matrix comparing two kinds of braided sector induction ("alpha-induction"). It has non-negative integer entries, is normalized and commutes with the S- and
T-matrices arising from the braiding. Thus it is a physical modular invariant in the usual sense of rational conformal field theory. The algebraic treatment of conformal field theory models, e.g. $SU
(n)_k$ models, produces subfactors which realize their known modular invariants. Several properties of modular invariants have so far been noticed empirically and considered mysterious such as their
intimate relationship to graphs, as for example the A-D-E classification for $SU(2)_k$. In the subfactor context these properties can be rigorously derived in a very general setting. Moreover the
fusion rule isomorphism for maximally extended chiral algebras due to Moore-Seiberg, Dijkgraaf-Verlinde finds a clear and very general proof and interpretation through intermediate subfactors, not
even referring to modularity of $S$ and $T$. Finally we give an overview on the current state of affairs concerning the relations between the classifications of braided subfactors and two-dimensional
conformal field theories. We demonstrate in particular how to realize twisted (type II) descendant modular invariants of conformal inclusions from subfactors and illustrate the method by new
examples. Comment: Typos corrected and a few minor changes, 37 pages, AMS LaTeX, epic, eepic, doc-class conm-p-l.cls
In this lecture we explain the intimate relationship between modular invariants in conformal field theory and braided subfactors in operator algebras. Our analysis is based on an approach to modular
invariants using braided sector induction ("$\alpha$-induction") arising from the treatment of conformal field theory in the Doplicher-Haag-Roberts framework. Many properties of modular invariants
which have so far been noticed empirically and considered mysterious can be rigorously derived in a very general setting in the subfactor context. For example, the connection between modular
invariants and graphs (cf. the A-D-E classification for $SU(2)_k$) finds a natural explanation and interpretation. We try to give an overview on the current state of affairs concerning the expected
equivalence between the classifications of braided subfactors and modular invariant two-dimensional conformal field theories. Comment: 25 pages, AMS LaTeX, epic, eepic, doc-class fic-1.cls
We perform numerical simulations to examine particle diffusion at steady shear in a model granular material in two dimensions at the jamming density and zero temperature. We confirm findings by
others that the diffusion constant depends on shear rate as $D\sim\dot\gamma^{q_D}$ with $q_D<1$, and set out to determine a relation between $q_D$ and other exponents that characterize the jamming
transition. We then examine the velocity auto-correlation function, note that it is governed by two processes with different time scales, and identify a new fundamental exponent, $\lambda$, that
characterizes an algebraic decay of correlations with time.
A model of soft frictionless disks in two dimensions at zero temperature is simulated with a shearing dynamics to study various kinds of asymmetries in sheared systems. We examine both single
particle properties, the spatial velocity correlation function, and a correlation function designed to separate clockwise and counter-clockwise rotational fields from one another. Among the rich and
interesting behaviors we find that the velocity correlations along the two different diagonals, corresponding to compression and dilation respectively, are almost identical and, furthermore, that a
feature in one of the correlation functions is directly related to irreversible plastic events.
We analyze the induction and restriction of sectors for nets of subfactors defined by Longo and Rehren. Picking a local subfactor we derive a formula which specifies the structure of the induced
sectors in terms of the original DHR sectors of the smaller net and canonical endomorphisms. We also obtain a reciprocity formula for induction and restriction of sectors, and we prove a certain
homomorphism property of the induction mapping. Developing further some ideas of F. Xu we will apply this theory in a forthcoming paper to nets of subfactors arising from conformal field theory, in
particular those coming from conformal embeddings or orbifold inclusions of SU(n) WZW models. This will provide a better understanding of the labeling of modular invariants by certain graphs, in
particular of the A-D-E classification of SU(2) modular invariants. Comment: 36 pages, latex, several corrections, a strong additivity assumption had to be added
A braided subfactor determines a coupling matrix Z which commutes with the S- and T-matrices arising from the braiding. Such a coupling matrix is not necessarily of "type I", i.e. in general it does
not have a block-diagonal structure which can be reinterpreted as the diagonal coupling matrix with respect to a suitable extension. We show that there are always two intermediate subfactors which
correspond to left and right maximal extensions and which determine "parent" coupling matrices Z^\pm of type I. Moreover it is shown that if the intermediate subfactors coincide, so that Z^+=Z^-,
then Z is related to Z^+ by an automorphism of the extended fusion rules. The intertwining relations of chiral branching coefficients between original and extended S- and T-matrices are also
clarified. None of our results depends on non-degeneracy of the braiding, i.e. the S- and T-matrices need not be modular. Examples from SO(n) current algebra models illustrate that the parents can be
different, Z^+\neq Z^-, and that Z need not be related to a type I invariant by such an automorphism. Comment: 25 pages, latex, a new Lemma 6.2 added to complete an argument in the proof of the
following lemma, minor changes otherwise
We address the relationship between spectral type and physical properties for A-type supergiants in the SMC. We first construct a self-consistent classification scheme for A supergiants, employing
the calcium K to H epsilon line ratio as a temperature-sequence discriminant. Following the precepts of the `MK process', the same morphological criteria are applied to Galactic and SMC spectra with
the understanding there may not be a correspondence in physical properties between spectral counterparts in different environments. We then discuss the temperature scale, concluding that A
supergiants in the SMC are systematically cooler than their Galactic counterparts at the same spectral type, by up to ~10%. Considering the relative line strengths of H gamma and the CH G-band we
extend our study to F and early G-type supergiants, for which similar effects are found. We note the implications for analyses of extragalactic luminous supergiants, for the flux-weighted
gravity-luminosity relationship and for population synthesis studies in unresolved stellar systems. Comment: 14 pages, 14 figures, accepted by MNRAS; minor section removed prior to final publication
History of Mathematics: Exploring Its Evolution and Impact
Introduction: The Beginnings of Mathematical Thought
The Dawn of Mathematics
The history of mathematics begins with the earliest human civilizations. Even in prehistoric times, humans used basic counting and measurements for trade, construction, and astronomy. Evidence of
this can be seen in ancient artifacts, such as tally sticks, which date back to around 35,000 BCE. These early tools show that humans have been using mathematics for tens of thousands of years.
Mathematics in Ancient Civilizations
As human societies evolved, so did their mathematical practices. Ancient civilizations such as Mesopotamia, Egypt, and China developed their own mathematical systems to solve practical problems
related to agriculture, trade, and astronomy. These early contributions laid the foundation for the rich history of mathematics that would unfold over the millennia.
Mathematics in Ancient Egypt
The Egyptian Number System
The Egyptians developed a sophisticated number system based on hieroglyphs. Their system was decimal, and they used symbols to represent powers of ten. For example, a single stroke represented one, a
heel bone represented ten, and a coiled rope symbolized one hundred. This system allowed them to perform arithmetic operations essential for their daily life and monumental architecture.
Geometry in Ancient Egypt
Geometry played a crucial role in Egyptian mathematics, primarily due to the need to measure land and construct massive structures like the pyramids. The Egyptians used basic geometric principles to
survey land and design architectural marvels. Their knowledge of geometry is evident in the precise measurements and alignments of the pyramids and other structures.
The Contributions of Ancient Mesopotamia
The Babylonian Number System
The Babylonians developed a unique base-60 (sexagesimal) number system, which is still used today for measuring time and angles. This system, based on cuneiform writing, allowed for the development
of advanced arithmetic and algebraic techniques. They could solve quadratic equations and perform calculations that required knowledge of square and cube roots.
Astronomy and Mathematics
Babylonian mathematicians were also skilled astronomers. They used their mathematical knowledge to track celestial bodies and predict astronomical events. Their observations and calculations laid the
groundwork for future advancements in both astronomy and mathematics.
Ancient Greek Mathematics
The Pioneers: Thales and Pythagoras
Ancient Greek mathematics marked a significant leap forward in mathematical thought. Thales of Miletus is often considered the first true mathematician, using deductive reasoning to derive geometric
principles. Pythagoras, another key figure, is famous for the Pythagorean theorem, which remains a fundamental concept in geometry.
Euclid and The Elements
Euclid, known as the “Father of Geometry,” made monumental contributions with his work “The Elements.” This comprehensive compilation of geometric knowledge systematically presented definitions,
axioms, and theorems. Euclid’s work laid the foundation for modern geometry and influenced mathematical thought for centuries.
Mathematics in Ancient India
The Vedic Period
The history of mathematics in India dates back to the Vedic period (1500–500 BCE). Ancient Indian mathematicians developed sophisticated methods for arithmetic, algebra, and geometry. The Vedic texts
contain references to mathematical concepts used in rituals and astronomy.
Contributions of Aryabhata and Brahmagupta
Aryabhata and Brahmagupta were two prominent mathematicians from ancient India. Aryabhata’s work included solutions to quadratic equations and accurate approximations of pi. Brahmagupta made
significant contributions to number theory and introduced the concept of zero, which revolutionized mathematics.
The Golden Age of Islamic Mathematics
The House of Wisdom
During the Islamic Golden Age (8th to 14th century), mathematics flourished in the Islamic world. The House of Wisdom in Baghdad became a center for learning and translation, where scholars
translated Greek, Indian, and Persian mathematical texts into Arabic. This period saw significant advancements in algebra, geometry, and trigonometry.
Al-Khwarizmi and Algebra
Muhammad ibn Musa al-Khwarizmi, often referred to as the “Father of Algebra,” wrote a seminal book on the subject, “Kitab al-Jabr wa-l-Muqabala.” His work introduced systematic methods for solving
linear and quadratic equations, and the term “algebra” is derived from the title of his book.
Mathematics in Medieval Europe
The Transmission of Knowledge
The knowledge of ancient and Islamic mathematicians gradually made its way to medieval Europe through translations and interactions with the Islamic world. The works of scholars like Fibonacci, who
introduced the Hindu-Arabic numeral system to Europe, played a crucial role in the development of mathematics during this period.
The Fibonacci Sequence
Fibonacci, also known as Leonardo of Pisa, is famous for the Fibonacci sequence, a series of numbers where each number is the sum of the two preceding ones. This sequence appears in various natural
phenomena and has applications in computer algorithms and financial markets.
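As a minimal illustration in code (our own snippet, not part of the historical record), the sequence can be generated like this in C++:

#include <iostream>

// Each Fibonacci term is the sum of the two preceding ones.
int main() {
    long long a = 0, b = 1;
    for (int i = 0; i < 10; ++i) {
        std::cout << a << " "; // prints 0 1 1 2 3 5 8 13 21 34
        long long next = a + b;
        a = b;
        b = next;
    }
    std::cout << "\n";
    return 0;
}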
The Renaissance and the Birth of Modern Mathematics
The Renaissance Revival
The Renaissance period (14th to 17th century) saw a revival of interest in the sciences and mathematics. The rediscovery of ancient texts and the invention of the printing press facilitated the
dissemination of mathematical knowledge. Scholars like Johannes Kepler and Galileo Galilei made significant contributions to mathematics and its applications in astronomy and physics.
Descartes and Analytical Geometry
René Descartes, a key figure of the Renaissance, developed analytical geometry, which uses algebraic equations to describe geometric shapes. His Cartesian coordinate system laid the foundation for
calculus and modern geometry. Descartes’ work bridged the gap between algebra and geometry, revolutionizing both fields.
The Development of Calculus
Newton and Leibniz
The development of calculus in the late 17th century marked a major milestone in the history of mathematics. Isaac Newton and Gottfried Wilhelm Leibniz independently developed the fundamental
principles of calculus. Their work provided powerful tools for solving problems in physics, engineering, and other sciences.
Applications of Calculus
Calculus has since become an essential tool in various fields. It allows mathematicians and scientists to model and analyze dynamic systems, calculate rates of change, and solve complex problems
involving motion and change. The development of calculus significantly advanced the mathematical sciences.
The 18th Century: The Age of Enlightenment
The Expansion of Mathematical Knowledge
The 18th century, known as the Age of Enlightenment, witnessed the expansion of mathematical knowledge. Mathematicians like Euler, Lagrange, and Laplace made groundbreaking contributions to calculus,
number theory, and mathematical physics. Their work laid the groundwork for many modern mathematical concepts.
Euler’s Contributions
Leonhard Euler, one of the most prolific mathematicians in history, made significant contributions to almost every branch of mathematics. His work on graph theory, complex numbers, and mathematical
notation continues to influence contemporary mathematics.
The 19th Century: The Formalization of Mathematics
The Rise of Abstract Mathematics
The 19th century saw the rise of abstract mathematics, with a focus on formalizing mathematical concepts and structures. Mathematicians like Gauss, Riemann, and Cantor explored new areas such as
non-Euclidean geometry, complex analysis, and set theory. These developments transformed the landscape of mathematics.
Gauss and Number Theory
Carl Friedrich Gauss, often referred to as the “Prince of Mathematicians,” made groundbreaking contributions to number theory. His work on prime numbers, modular arithmetic, and quadratic reciprocity
laid the foundation for modern number theory and influenced subsequent mathematical research.
The 20th Century: The Modern Era of Mathematics
The Birth of Mathematical Logic
The 20th century brought significant advancements in mathematical logic and foundations. Mathematicians like Bertrand Russell, Kurt Gödel, and Alan Turing explored the limits of mathematical
reasoning and computation. Their work had profound implications for the philosophy of mathematics and the development of computer science.
Gödel’s Incompleteness Theorems
Kurt Gödel’s incompleteness theorems demonstrated the inherent limitations of formal mathematical systems. His work showed that there are true mathematical statements that cannot be proven within a
given system, challenging the notion of mathematical completeness and consistency.
The Impact of Computers on Mathematics
The Advent of Computational Mathematics
The advent of computers revolutionized the field of mathematics. Computational mathematics, which involves the use of algorithms and numerical methods to solve mathematical problems, became a
prominent area of research. Computers enabled mathematicians to tackle complex problems that were previously intractable.
Mathematical Software and Applications
Mathematical software, such as MATLAB, Mathematica, and Maple, has become essential tools for researchers and practitioners. These programs allow for symbolic computation, numerical analysis, and
visualization of mathematical models, enhancing the ability to explore and understand mathematical phenomena.
The Role of Mathematics in Modern Science
Mathematics in Physics
Mathematics plays a crucial role in modern physics, providing the language and tools to describe the laws of nature. Theories such as relativity and quantum mechanics rely heavily on advanced
mathematical concepts. Mathematical models are used to predict and explain physical phenomena, from the behavior of subatomic particles to the structure of the universe.
Mathematics in Biology and Medicine
The application of mathematics in biology and medicine has led to significant advancements in understanding complex biological systems and developing medical technologies. Mathematical modeling is
used to study population dynamics, disease spread, and the functioning of biological networks. These models inform public health strategies and medical research.
The Influence of Mathematics on Economics
Mathematical Economics
Mathematical economics uses mathematical methods to represent economic theories and analyze economic problems. Concepts such as optimization, game theory, and econometrics are integral to economic
analysis. Mathematicians have developed models to understand market behavior, economic growth, and decision-making processes.
The Role of Statistics in Economics
Statistics plays a vital role in economics, providing tools for data analysis and interpretation. Economists use statistical methods to analyze economic data, test hypotheses, and make informed
predictions. The development of econometric models has enhanced the ability to understand and forecast economic trends.
The Evolution of Mathematical Education
Historical Perspectives on Mathematical Education
The history of mathematics education reflects the evolving understanding of mathematical concepts and teaching methods. Early education focused on arithmetic and basic geometry, while modern
curricula encompass a broad range of mathematical topics. Educational reforms have aimed to make mathematics more accessible and engaging for students.
Innovations in Teaching Mathematics
Innovations in teaching mathematics include the use of technology, interactive learning methods, and real-world applications. Digital tools, such as educational software and online resources, provide
new ways to explore mathematical concepts. Project-based learning and collaborative problem-solving encourage deeper understanding and engagement.
Women in Mathematics
Pioneering Women Mathematicians
Women have made significant contributions to the history of mathematics, despite facing historical barriers. Notable figures include Hypatia of Alexandria, an ancient Greek mathematician and
philosopher, and Ada Lovelace, who is considered the first computer programmer. Their achievements paved the way for future generations of women mathematicians.
Promoting Gender Equality in Mathematics
Efforts to promote gender equality in mathematics include initiatives to encourage girls and women to pursue mathematical studies and careers. Programs such as mentorship, scholarships, and outreach
activities aim to address the gender gap and support women in achieving their full potential in the field of mathematics.
The Interdisciplinary Nature of Mathematics
Mathematics and Engineering
Mathematics is fundamental to engineering, providing the tools to design and analyze systems, structures, and processes. Engineers use mathematical models to solve practical problems in areas such as
civil, mechanical, electrical, and aerospace engineering. The synergy between mathematics and engineering drives technological innovation.
Mathematics in the Arts and Humanities
Mathematics also intersects with the arts and humanities, offering new perspectives and insights. Concepts such as symmetry, fractals, and geometric patterns are explored in art and architecture.
Mathematical principles are used in music theory, literary analysis, and historical research, highlighting the universal applicability of mathematics.
The Future of Mathematics
Emerging Fields and Technologies
The future of mathematics is shaped by emerging fields and technologies. Areas such as artificial intelligence, quantum computing, and data science rely heavily on advanced mathematical concepts.
These fields offer exciting opportunities for mathematical research and applications, driving innovation and solving complex problems.
The Role of Collaboration in Mathematical Research
Collaboration is increasingly important in mathematical research, as interdisciplinary approaches and global networks enhance the ability to tackle complex challenges. Collaborative efforts, such as
international research projects and cross-disciplinary partnerships, foster innovation and the exchange of ideas.
Conclusion: Embracing the Rich Legacy of Mathematics
The Timeless Relevance of Mathematics
The history of mathematics is a testament to the enduring relevance and power of mathematical thought. From ancient civilizations to modern advancements, mathematics has been a cornerstone of human
knowledge and progress. Its principles and applications continue to shape our world, driving innovation and understanding.
The Journey Continues
The journey of mathematics is far from over. As we continue to explore new frontiers and solve pressing challenges, the role of mathematics remains vital. Embracing the rich legacy of mathematics
inspires future generations to push the boundaries of knowledge and contribute to the ever-evolving tapestry of human achievement.
FAQs about the History of Mathematics
1. What is the origin of mathematics?
The origin of mathematics dates back to prehistoric times, with evidence of basic counting and measurements in ancient artifacts. Early civilizations like Mesopotamia, Egypt, and China developed
their own mathematical systems for practical purposes.
2. Who is considered the father of mathematics?
Thales of Miletus is often considered the father of mathematics due to his use of deductive reasoning to derive geometric principles. Euclid, known as the “Father of Geometry,” also made significant contributions.
3. What are the major contributions of ancient Greek mathematicians?
Ancient Greek mathematicians made groundbreaking contributions in geometry, number theory, and algebra. Notable figures include Thales, Pythagoras, Euclid, and Archimedes.
4. How did mathematics develop in ancient India?
Ancient Indian mathematicians developed advanced methods for arithmetic, algebra, and geometry. Contributions from Aryabhata and Brahmagupta include solutions to quadratic equations, accurate
approximations of pi, and the concept of zero.
5. What impact did Islamic mathematicians have on the history of mathematics?
During the Islamic Golden Age, mathematicians like Al-Khwarizmi made significant advancements in algebra, geometry, and trigonometry. The House of Wisdom in Baghdad was a center for learning and translation.
6. How did the Renaissance influence the development of mathematics?
The Renaissance period saw a revival of interest in mathematics, with contributions from scholars like Descartes, Kepler, and Galileo. The rediscovery of ancient texts and the invention of the
printing press facilitated the dissemination of mathematical knowledge.
7. Who developed calculus, and what are its applications?
Isaac Newton and Gottfried Wilhelm Leibniz independently developed calculus in the late 17th century. Calculus is essential for solving problems in physics, engineering, and other sciences, involving
rates of change and dynamic systems.
8. What role did computers play in the advancement of mathematics?
Computers revolutionized mathematics by enabling computational mathematics and numerical methods. Mathematical software like MATLAB and Mathematica allows for symbolic computation, numerical
analysis, and visualization of mathematical models.
9. How does mathematics intersect with other fields?
Mathematics intersects with various fields, including physics, biology, economics, engineering, and the arts. Mathematical principles and models are used to solve practical problems, enhance
understanding, and drive innovation in these areas.
10. What is the future of mathematics?
The future of mathematics is shaped by emerging fields and technologies such as artificial intelligence, quantum computing, and data science. Collaboration and interdisciplinary approaches will
continue to drive mathematical research and applications.
Add a Comment | {"url":"https://mathematicalexplorations.co.in/the-fascinating-history-of-mathematics/","timestamp":"2024-11-08T15:48:42Z","content_type":"text/html","content_length":"265886","record_id":"<urn:uuid:e02e506c-7db0-4a98-a442-c8957e467fde>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00714.warc.gz"} |
Electronic structure calculations, in particular the computation of the ground state energy, lead to challenging problems in optimization. These problems are of enormous importance in quantum
chemistry for calculations of properties of solids and molecules. Minimization methods for computing the ground state energy can be developed by employing a variational approach, where the
second-order reduced … | {"url":"https://optimization-online.org/tag/rigorous-error-bounds/","timestamp":"2024-11-13T16:32:08Z","content_type":"text/html","content_length":"97233","record_id":"<urn:uuid:64e719d7-4721-471e-9d51-4b11fa1b641f>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00797.warc.gz"}
Power Function In O(Log(N)) Time C++ With Code Examples
In this article, we will look at how to get the solution for the problem, Power Function In O(Log(N)) Time C++ With Code Examples
How do you write a power function?
A power function is in the form of f(x) = kx^n, where k = all real numbers and n = all real numbers. You can change the way the graph of a power function looks by changing the values of k and n. So
in this graph, n is greater than zero.
#include <bits/stdc++.h>
using namespace std;

// Fast exponentiation: computes num^n in O(log n) time by
// squaring the result of the half-size subproblem.
long long power(long long num, long long n){
    if(n == 0) return 1;               // base case: num^0 = 1
    long long tmp = power(num, n / 2); // solve the problem for n/2
    tmp = tmp * tmp;                   // square the half-size result
    if(n % 2 == 0) return tmp;         // n even: done
    return tmp * num;                  // n odd: one extra factor of num
}

int main() {
    cout << power(3, 4); // outputs 81 since 3^4 is 81
    return 0;
}
What is the time complexity of POW function in C?
Time Complexity: O(N) for a naive recursive implementation, because pow(x, n) is called recursively for each number from 1 to n. The divide-and-conquer version shown above instead runs in O(log N).
How do you find the power of a number in O(log n) time?
Approach we are using to solve the above problem −
• Check if n is 1, then return x.
• Recursively call power pass x and n/2 and store its result in a variable sq.
• Check if dividing n by 2 leaves a remainder 0; if so then return the results obtained from cmul(sq, sq)
Is there a power function in C?
The pow() function (power function) in C is used to find the value x^y (x raised to the power y), where x is the base and y is the exponent. Both x and y are variables of the type double. The
value returned by pow() is of the type double.
How do I calculate power?
Power is equal to work divided by time. In this example, P = 9000 J / 60 s = 150 W . You can also use our power calculator to find work – simply insert the values of power and time.
How do you write a power function in C?
In the C Programming Language, the pow function returns x raised to the power of y.
• Syntax. The syntax for the pow function in the C Language is: double pow(double x, double y);
• Returns. The pow function returns x raised to the power of y.
• Required Header.
• Applies To.
• pow Example.
• Similar Functions.
How do you find the power of a number without using the POW function in C?
Find out Power without Using POW Function in C
• Let a ^ b be the input. The base is a, while the exponent is b.
• Start with a power of 1.
• Using a loop, execute the following instructions b times.
• power = power * a.
• The variable power then holds the final answer, a ^ b (see the sketch below).
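Following those steps literally gives a sketch like this in C++ (O(b) multiplications, in contrast to the O(log n) version earlier in this article):

#include <iostream>

// Compute a^b with a simple loop, no pow() needed.
int main() {
    long long a = 3, b = 4, power = 1;
    for (long long i = 0; i < b; ++i)
        power = power * a; // multiply by the base b times
    std::cout << power << "\n"; // prints 81
    return 0;
}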
How do you find 2 power n in C?
In C, the exponent value is calculated using the pow() function. pow() is a function to get the power of a number, but we have to use #include <math.h> in C/C++ to use that pow() function; then two
numbers are passed. Example – pow(4, 2); Then we will get the result as 4^2, which is 16.
What is the use of POW () function explain with example?
Definition and Usage The pow() function returns the value of x to the power of y (xy). If a third parameter is present, it returns x to the power of y, modulus z.
| {"url":"https://www.isnt.org.in/power-function-in-o-log-n-time-c-with-code-examples.html","timestamp":"2024-11-08T20:50:36Z","content_type":"text/html","content_length":"149177","record_id":"<urn:uuid:68f5a724-37ca-4ab3-8553-d7afffafd01f>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00031.warc.gz"}
Recommended Books
List of Recommended Books
Here are some of John's recommendations for books for learning university class subjects as well as standard "books that are good for you." These are NOT testprep course book recommendations. (Note:
We are an Amazon associate, so we earn if you buy through the link, no extra cost to you, but these are John's honest recommendations. We only list what he actually likes.)
Linear Algebra Done Right
The title says it all, "Linear Algebra Done Right." This is great for a second course in linear algebra but can be used as a first course for those interested in learning linear algebra "by proofs"
from the start.
Finite Dimensional Vector Spaces
This is an awesome theoretical introduction to vector spaces! Paul Halmos's classic is beyond repute.
A Book of Abstract Algebra
"A book of Abstract Algebra" is a great, concise introductory book on abstract algebra. It's great for self-learning (and even has some answers in the back). NB: To get the most out of this book and
hit the "standard" introductory topics, you should do the exercises since some significant "lecture" topics have been moved to the exercises.
Algebra
This is a masterpiece reference tome. Lang's "Algebra" is a classic reference for graduate level algebra.
Understanding Analysis
This is a nice, user-friendly introduction to real analysis. "Understanding Analysis" is a great Springer UTM (Undergraduate Texts in Mathematics) book.
Introduction to Analysis
A less expensive but great alternative is this one from Dover publications. The treatment of metric space topology is top notch for someone first learning the subject.
Classical Mechanics
This is a great introduction to classical mechanics! Taylor takes the time to explain things clearly. The treatment of the Lagrangian, in particular, is worth it. This book is aimed at physics majors
taking a mechanics course.
No-Nonsense Classical Mechanics
This is another great introduction to classical mechanics (from the author of the "Physics from Symmetry"). It focuses on key concepts and ideas and gives you a good overview of mechanics. Everyone
and their dog seems to love the treatment of Noether's Theorem.
Opial-type theorems and the common fixed point problem
The well-known Opial theorem says that an orbit of a nonexpansive and asymptotically regular operator T having a fixed point and defined on a Hilbert space converges weakly to a fixed point of T. In
this paper, we consider recurrences generated by a sequence of quasi-nonexpansive operators having a common fixed point or by a sequence of extrapolations of an operator satisfying Opial’s
demiclosedness principle and having a fixed point. We give sufficient conditions for the weak convergence of sequences defined by these recurrences to a fixed point of an operator which is closely
related to the sequence of operators. These results generalize in a natural way the classical Opial theorem. We give applications of these generalizations to the common fixed point problem.
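For readers outside fixed point theory, the classical result being generalized can be stated as follows (a standard formulation, not quoted from the paper): let $H$ be a Hilbert space and $T : H \to H$ a nonexpansive operator ($\|Tx - Ty\| \le \|x - y\|$ for all $x, y$) which is asymptotically regular ($\|T^{n+1}x - T^n x\| \to 0$) and has a fixed point; then for every $x \in H$ the orbit $(T^n x)_{n \ge 0}$ converges weakly to a fixed point of $T$.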
Publication series
Name: Springer Optimization and Its Applications
Volume: 49
ISSN (Print): 1931-6828
ISSN (Electronic): 1931-6836
Bibliographical note
Publisher Copyright:
© Springer Science+Business Media, LLC 2011.
Keywords
• Common fixed point
• Cutter operators
• Dos Santos method
• Opial theorem
• Quasi-nonexpansive operators
On replacing calculus with statistics
Russ Roberts had this to say about the proposal to replace the calculus requirement with statistics for students.
Statistics is in many ways much more useful for most students than calculus. The problem is, to teach it well is extraordinarily difficult. It’s very easy to teach a horrible statistics class
where you spit back the definitions of mean and median. But you become dangerous because you think you know something about data when in fact it’s kind of subtle.
A little knowledge is a dangerous thing, more so for statistics than calculus.
This reminds me of a quote by Stephen Senn:
Statistics: A subject which most statisticians find difficult but in which nearly all physicians are expert.
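To make the "subtle" point concrete, here is a small illustration of our own (not from the post): on skewed data, the mean and median tell very different stories, and knowing only the definitions does not prepare you for that.

import random
import statistics

random.seed(0)
# Incomes are often roughly lognormal: most values are modest, a few are
# huge, and the mean gets pulled far above the median.
incomes = [random.lognormvariate(10, 1) for _ in range(10_000)]
print(f"mean:   {statistics.mean(incomes):,.0f}")
print(f"median: {statistics.median(incomes):,.0f}")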
13 thoughts on “On replacing calculus with statistics”
1. I remember reading about a study in which students were pre-tested before taking a stats course, and then post-tested afterwards. Maybe you have a reference? Reliably, what happened was that
students were sure they learned something, but the post-test showed that they scored far worse after taking the course than before taking the course when they thought they knew nothing. Very
2. It sounds plausible. There are certain wrong ideas about statistics that require training to instill.
Maybe there should be an educational term analogous to the medical term “iatrogenic harm.”
3. I've never taken a statistics class, but I've read a lot of books on the subject. And while I learned how to do various tests — t, χ², etc. — I don't think I really understood what I was doing
until I was able to simulate random data (beginning with one of the early spreadsheet programs). So, I think it would be possible to get students to understand statistics intuitively if the class
involved a good deal of experimentation.
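In that spirit, here is a minimal sketch of our own (not the commenter's code) of learning a t-test by simulation: instead of looking up tables, simulate many samples under the null hypothesis and see how often the statistic is as extreme as the one observed.

import random
import statistics

random.seed(1)

def t_statistic(sample, mu0=0.0):
    # One-sample t statistic against the hypothesized mean mu0.
    n = len(sample)
    return (statistics.mean(sample) - mu0) / (statistics.stdev(sample) / n**0.5)

observed = [random.gauss(0.3, 1.0) for _ in range(30)]  # data with a real effect
t_obs = t_statistic(observed)

# Null distribution: samples with true mean 0, same size, no tables needed.
null_ts = [t_statistic([random.gauss(0.0, 1.0) for _ in range(30)])
           for _ in range(10_000)]
p_value = sum(abs(t) >= abs(t_obs) for t in null_ts) / len(null_ts)
print(f"t = {t_obs:.2f}, simulated two-sided p ≈ {p_value:.3f}")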
4. I agree that a large simulation component would help.
Another problem I have with "statistics instead of calculus" is that it's hard for me to imagine understanding statistics at any depth if you don't know calculus. And not just being able to do
calculus homework problems but having a good conceptual understanding of calculus.
5. I think it's a great idea. And I don't think you need calculus to understand what would be useful in a high school stats class. I think what would be really useful is for kids to have a better
intuitive understanding of probability and expectation. We also don’t start teaching variability early enough. My daughter’s middle school science fair was all about averaging trials with no
mention at all of quantifying variability. Let's just teach a class in understanding variability.
6. Why don’t we make both Stats and Calculus available? I would agree simulation would make Stats more intuitive. ProbabilityManagement.org is trying to do this through the use of Excel. Dr. Sam
Savage’s team has put together something for the middle and high school kids.
7. Peter Thiel gave a presentation at SXSW a few years ago where he describes people’s opinion of the future as either optimistic/pessimistic and either determined/indeterminate.
One of his examples of a shift from a determined to indeterminate future was rise in importance of statistics vs. calculus. So, the importance of probability vs. arriving at a knowable answer.
8. In high school, I found statistics (well, probability theory actually) boring and opaque. Examples in textbooks often involved coin flipping, heights and playing cards. I found them exceedingly
artificial and divorced from real life.
I came back to statistics later in life (after taking lots of graduate-level math courses) and it finally dawned on me why statistics was useful.
As an engineer, I’m used to seeing models of the form y = f(x) (where x, y are multivariate and f is some arbitrarily complicated, often nonlinear/non-smooth, mapping).
That’s fine and well for deterministic values of x, except x is often fuzzy in real life. Wouldn’t it then be useful to know how y fuzzes (behaves) given that x is fuzzy? To me, that is one of
the central questions that statistics is able to answer, because based on the variability in the inputs, I can robustify the system against variability in the output. But many instructors gloss
over this point.
In many stats courses, so much time is spent characterizing the random variable x and not f(x). I think if that connection was made early on and repeatedly, all the surrounding theory (e.g.
Jensen’s inequality, higher moments, etc.) would suddenly acquire a level of practicality not often seen in stats instruction.
I believe NN Taleb makes a similar point elsewhere too: focus on the f(x) (exposure/effect), not the x.
p.s. that said, for most non-trivial models, f(x) is not something that is easily worked out by hand. Simulations can definitely help in this regard.
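Here is a minimal sketch of the commenter's point, ours rather than theirs: push a fuzzy input x through a nonlinear f by Monte Carlo and look at how the output behaves. The function and distribution are arbitrary illustrations.

import random
import statistics

random.seed(2)

def f(x):
    # A convex nonlinear response, so Jensen's inequality applies:
    # E[f(x)] >= f(E[x]).
    return x**2 + 0.5 * x

xs = [random.gauss(1.0, 0.3) for _ in range(100_000)]  # fuzzy input x
ys = [f(x) for x in xs]

print(f"f(E[x])      = {f(statistics.mean(xs)):.3f}")
print(f"E[f(x)]      = {statistics.mean(ys):.3f}")
print(f"sd of output = {statistics.stdev(ys):.3f}")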
9. It’s interesting that you say you need to understand Calculus to understand statistics. A common position of the “Down with Algebra!” crowd is that we should teach more statistics and less
“Algebra” in high school. They apparently believe you don’t need to know much algebra to understand statistics, which seems absurd to me.
Lots of students take both courses at our high school, and we are fortunate to have good instructors for both. But at least for technically-minded students, I would never suggest stats as a
course to take instead of Calculus.
10. I would rather see high schools teach probability than statistics.
Probability is less subtle than statistics, so it’s easier to teach and easier to understand. And it’s a prerequisite for statistics.
11. I’d like to see probability and statistics taught in place of trigonometry and the very basics of trigonometry taught as part of first year physics. I’d also like every kid take one programming
class as a freshman or sophomore and then when functions and matrices are discussed in Algebra 2 kids would be writing programs to make these ideas more concrete. When prob and stats come up
the next year kids could be writing up simple simulations. There’s no way I’d replace calculus though, it’s too important and too much fun and there are much better choices of things to replace.
12. Stats first, hands down.
The “scientific process” made little practical sense to me in college (in terms of what I was learning and doing) until I took a physics lab that introduced me to the imprecision of the real
world; how to detect, measure and deal with its effects and sources.
This wasn’t just about taking data and applying canned analyses, but more importantly about using the results to improve the data acquisition process and to work with and extend the capabilities
of the test apparatus.
Our introduction to stats wasn’t to get the answer: It was to see if the answer was first relevant, and then useful. If an analysis indicated possible changes to the experiment, and the changes
failed to improve the results, we typically blamed our experiment design, technique and analysis, then tried to develop new results and insights before iterating again.
We started with a simple goal: Measure the local acceleration due to gravity with as little error as possible. We used approaches roughly in the order they appeared in history, starting with
inclined planes and using our pulse (or other human-based perceptions of time passing) as a timer. By the end of the class we were using more sophisticated lab equipment: Rubidium timers, photo
sensors, and precision triggers.
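As a rough sketch of the kind of analysis described here (our invented numbers, not the actual lab data): estimate g from repeated noisy timings of a drop from a known height, and attach a standard error to the estimate.

import random
import statistics

random.seed(3)
HEIGHT = 2.0   # meters, assumed known exactly for this sketch
TRUE_G = 9.81  # used only to simulate the "true" fall time

# Simulate 25 stopwatch measurements of the fall time with human timing noise.
true_t = (2 * HEIGHT / TRUE_G) ** 0.5
times = [random.gauss(true_t, 0.05) for _ in range(25)]

# Each measurement gives an estimate g = 2h / t^2; summarize with a mean
# and a standard error of the mean.
g_estimates = [2 * HEIGHT / t**2 for t in times]
g_mean = statistics.mean(g_estimates)
g_sem = statistics.stdev(g_estimates) / len(g_estimates) ** 0.5
print(f"g ≈ {g_mean:.2f} ± {g_sem:.2f} m/s²")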
This course was carefully interwoven with freshman and sophomore math and physics courses, but also had a class element of its own, the cornerstone of which was a basic engineering statistics
text (whose name I forget: It was a half-inch thick paperback with the cover showing a French train that had plowed through the wall of an elevated station).
While the course was enlightening (and a ton of fun), by the end I realized that 90% of the technique and knowledge that was taught was easily within the scope of high school physics and math.
What I was left with was a transformed vision of my relationship to the real world as an observer, experimentalist, and engineer. Even the most elementary statistics are of immense value when
assessing the correctness of a control system, diagnosing its faults, predicting its failure modes, and ensuring its reliability.
Quantum physics tells us there is nothing but statistics. Even the particles themselves are statistical fluctuations in their associated fields. While this becomes locally Newtonian at larger
scales, the interactions quickly become too complex to handle, with thermodynamics being the first predominantly statistical area of science a student will typically encounter (starting with the
definition of temperature itself).
But neither quantum stats nor detailed thermodynamic stats are approachable in the high school context. But experimental statistics are immensely approachable, especially when combined with even
a hand-waving introduction to Design of Experiments (DoE).
Stats (with a pinch of DoE) helps turn students into critical observers and creative experimentalists. It directly involves the observer in the process, immersing the student in the true core of
the scientific approach of learning about the universe we inhabit.
The most valuable insight, IMHO, is that stats aid critical self-evaluation. Early in the lab class I realized that others in my lab group routinely obtained better data than I did using the same
equipment (they were better experimentalists), but my results often yielded more useful insights (I was a better analyst). This spurred me to improve my experimental technique by observing
others, and to share my thought processes during my analyses. Within weeks, our 4-member lab group was routinely obtaining the best results in the 200-person class (by the end of the course, an
order of magnitude better). Yet not one in our group was exceptional in any way (especially from a GPA perspective).
Done right, the stats don’t lie. Done wrong, there are no worse lies than bad stats. It’s not about “finding the numbers”: It’s about getting the right numbers the right way.
In my experience, some of the worst published stats I’ve seen were in medical studies. Some physicians use cookie-cutter analyses without first validating their applicability. The most remarkable
thing is that even a one semester high school stats course can impart enough skill to detect (or at least raise questions about) the most glaring of such errors.
Being able to conduct such an analysis is tremendously empowering. A little bit of usefully applied stats knowledge can go a long way.
Physics is merely one context within which stats may be learned and effectively applied. Once the fundamentals are in place, the applications explode wherever real-world data is to be found.
Whenever I see an interesting or unusual claim in the popular press concerning conclusions made from data, I enjoy putting on my “stats goggles” and taking a closer look.
I especially take deep pleasure in confounding the strongly-held “evidence-based” political convictions of my liberal and conservative friends alike, using only simple reasoning.
Bottom line, when the stats are inconclusive, it prompts us to utter a difficult statement: “I don’t know.” As any guru will tell you, that statement is the beginning of all learning, knowledge
and wisdom.
13. I’m not familiar with any proposals to replace a calculus requirement with a statistics requirement, but I note immediately that this proposal seems to be about requirements, not about math.
I _am_ familiar with Art Benjamin’s TED talk from a few years back, wherein he suggests that a curriculum that is designed to prepare students to eventually master calculus is much less useful
FOR MOST STUDENTS than a curriculum designed to prepare them to eventually master statistics. A big part of that is that most of these students will never reach the end of that road — they will
not master either calculus or statistics. His proposal has nothing to do with what future mathematicians or engineers or physicians or economists should be taught, and everything to do with what
future journalists and elementary school teachers and plumbers and car salesmen and entrepreneurs (etc. etc.) should be taught.
Also, as John suggests above, the road to statistics starts with learning about probability. If that’s all you get out of it, you have learned things that will be much more useful in your future
life than if you start on the road to calculus and only get as far as high school algebra and trig.
Addition Subtraction Multiplication Division Fractions Worksheets
Math, and multiplication in particular, is the cornerstone of many academic disciplines and real-world applications. Yet for many students, mastering multiplication can be a challenge. To address this difficulty, educators and parents have embraced a powerful tool: Addition Subtraction Multiplication Division Fractions Worksheets.
Introduction to Addition Subtraction Multiplication Division Fractions Worksheets
Liveworksheets transforms traditional printable worksheets into self-correcting interactive exercises that students can do online and send to the teacher; one example is "Add, subtract, multiply and divide fractions" (ages 10-14, level 8).
Fraction circle manipulatives are mainly used for comparing fractions, but they can be used for a variety of other purposes, such as representing and identifying fractions, adding and subtracting fractions, and as probability spinners. There are a variety of options depending on your purpose.
Value of Multiplication Practice
Understanding multiplication is crucial: it lays a solid foundation for advanced mathematical concepts. Addition Subtraction Multiplication Division Fractions Worksheets provide structured and targeted practice, fostering a deeper understanding of this fundamental arithmetic operation.
Development of Addition Subtraction Multiplication Division Fractions Worksheets
5th grade multiplying and dividing fractions worksheets, including fractions multiplied by whole numbers, mixed numbers and other fractions, multiplication of improper fractions and mixed numbers, and division of fractions, whole numbers and mixed numbers. No login required.
5th Grade Fraction Worksheets: using these 5th grade math fractions worksheets will help your child to compare and order fractions, add and subtract fractions and mixed numbers, understand how to multiply fractions by a whole number, and understand how to multiply two fractions together, including mixed fractions.
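As a quick illustration of the fraction arithmetic these worksheets drill, Python's standard fractions module can serve as an answer key (our example, not from any of the worksheet sets above):

from fractions import Fraction

a, b = Fraction(2, 3), Fraction(5, 6)
print(a + b)  # 3/2  (common denominator 6: 4/6 + 5/6 = 9/6)
print(a - b)  # -1/6
print(a * b)  # 5/9
print(a / b)  # 4/5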
From traditional pen-and-paper exercises to interactive digital formats, Addition Subtraction Multiplication Division Fractions Worksheets have evolved to suit diverse learning styles and preferences.
Kinds of Addition Subtraction Multiplication Division Fractions Worksheets
Basic Multiplication Sheets
Simple exercises focusing on multiplication tables, helping learners build a strong arithmetic foundation.
Word Problem Worksheets
Real-life scenarios integrated into problems, strengthening critical thinking and application skills.
Timed Multiplication Drills
Tests designed to improve speed and accuracy, aiding quick mental math. (A minimal generator sketch for such drills follows this list.)
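Here is a minimal sketch of how such a timed drill sheet could be generated programmatically; the format, factor range, and problem count are our own choices, not anything prescribed by the article.

import random

def drill(n_problems=10, max_factor=12, seed=None):
    # Print a simple multiplication drill followed by an answer key.
    rng = random.Random(seed)
    problems = [(rng.randint(2, max_factor), rng.randint(2, max_factor))
                for _ in range(n_problems)]
    for i, (a, b) in enumerate(problems, 1):
        print(f"{i:2}. {a} x {b} = ____")
    print("Answer key:", ", ".join(str(a * b) for a, b in problems))

drill(seed=42)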
Benefits of Using Addition Subtraction Multiplication Division Fractions Worksheets
Worksheets range from easy improper fractions to harder ones, with math explained in easy language, plus puzzles, games, quizzes, videos and worksheets, for K-12 kids, teachers and parents.
Mixed fraction word problems, including addition, subtraction, multiplication and division, with like and unlike denominators. Mixing word problems encourages students to read and think about questions carefully. Part of a collection of free word problem worksheets from K5 Learning.
Improved Mathematical Abilities
Regular practice builds multiplication proficiency, improving overall math skills.
Improved Problem-Solving Abilities
Word problems in worksheets develop analytical reasoning and the ability to apply methods.
Self-Paced Learning Advantages
Worksheets accommodate individual learning speeds, promoting a comfortable and flexible learning environment.
How to Develop Engaging Addition Subtraction Multiplication Division Fractions Worksheets
Incorporating Visuals and Colors
Vivid visuals and colors capture attention, making worksheets visually appealing and engaging.
Including Real-Life Scenarios
Relating multiplication to everyday situations adds relevance and practicality to exercises.
Customizing Worksheets to Various Skill Levels
Tailoring worksheets to different proficiency levels ensures inclusive learning.
Interactive and Online Multiplication Resources
Digital Multiplication Tools and Games
Technology-based resources offer interactive learning experiences, making multiplication engaging and enjoyable.
Interactive Websites and Applications
Online platforms provide diverse and accessible multiplication practice, supplementing traditional worksheets.
Customizing Worksheets for Different Learning Styles
Visual Learners
Visual aids and diagrams aid comprehension for students inclined toward visual learning.
Auditory Learners
Spoken multiplication problems or mnemonics help students who grasp concepts through hearing.
Kinesthetic Learners
Hands-on activities and manipulatives support kinesthetic learners in understanding multiplication.
Tips for Effective Implementation in Learning
Consistency in Practice
Regular practice reinforces multiplication skills, promoting retention and fluency.
Balancing Repetition and Variety
A mix of repetitive exercises and varied problem formats maintains interest and understanding.
Providing Constructive Feedback
Feedback helps identify areas for improvement, encouraging continued growth.
Challenges in Multiplication Practice and Solutions
Motivation and Engagement Challenges
Tedious drills can lead to disinterest; innovative approaches can reignite motivation.
Overcoming Fear of Mathematics
Negative perceptions of math can hinder progress; creating a positive learning environment is essential.
Impact of Addition Subtraction Multiplication Division Fractions Worksheets on Academic Performance
Studies and Research Findings
Research shows a positive correlation between consistent worksheet use and improved math performance.
Addition Subtraction Multiplication Division Fractions Worksheets are versatile tools that foster mathematical proficiency in students while accommodating diverse learning styles. From basic drills to interactive online resources, these worksheets not only strengthen multiplication skills but also promote critical thinking and problem-solving abilities.
Add & Subtract Fractions Worksheets for Grade 5 (K5 Learning)
5th grade adding and subtracting fractions worksheets, including adding like fractions, adding mixed numbers, completing whole numbers, adding unlike fractions and mixed numbers, and subtracting like and unlike fractions and mixed numbers. No login required.
FAQs (Frequently Asked Questions).
Are Addition Subtraction Multiplication Division Fractions Worksheets appropriate for all age groups?
Yes, worksheets can be tailored to different ages and ability levels, making them adaptable for many learners.
How often should students practice using Addition Subtraction Multiplication Division Fractions Worksheets?
Consistent practice is crucial. Regular sessions, ideally a few times a week, can yield considerable improvement.
Can worksheets alone improve math skills?
Worksheets are a valuable tool but should be supplemented with varied learning approaches for comprehensive skill development.
Are there online platforms offering free Addition Subtraction Multiplication Division Fractions Worksheets?
Yes, many educational websites offer free access to a wide range of Addition Subtraction Multiplication Division Fractions Worksheets.
How can parents support their children's multiplication practice at home?
Encouraging consistent practice, offering guidance, and creating a positive learning environment are helpful steps.
The Eisenstein Ideal and Jacquet-Langlands Isogeny over Function Fields
Mihran Papikian, Fu-Tsun Wei
Received: May 4, 2014 Revised: March 13, 2015 Communicated by Peter Schneider
Abstract. Let $\mathfrak{p}$ and $\mathfrak{q}$ be two distinct prime ideals of $\mathbb{F}_q[T]$. We use the Eisenstein ideal of the Hecke algebra of the Drinfeld modular curve $X_0(\mathfrak{p}\mathfrak{q})$ to compare the rational torsion subgroup of the Jacobian $J_0(\mathfrak{p}\mathfrak{q})$ with its subgroup generated by the cuspidal divisors, and to produce explicit examples of Jacquet-Langlands isogenies. Our results are stronger than what is currently known about the analogues of these problems over $\mathbb{Q}$.

2010 Mathematics Subject Classification: 11G09, 11G18, 11F12
Keywords and Phrases: Drinfeld modular curves; cuspidal divisor group; Shimura subgroup; Eisenstein ideal; Jacquet-Langlands isogeny
Contents
1. Introduction
1.1. Motivation
1.2. Main results
1.3. Notation
2. Harmonic cochains and Hecke operators
2.1. Harmonic cochains
2.2. Hecke operators and Atkin-Lehner involutions
2.3. Fourier expansion
2.4. Atkin-Lehner method
3. Eisenstein harmonic cochains
3.1. Eisenstein series
3.2. Cuspidal Eisenstein harmonic cochains
3.3. Special case
4. Drinfeld modules and modular curves
5. Component groups
6. Cuspidal divisor group
7. Rational torsion subgroup
7.1. Main theorem
7.2. Special case
8. Kernel of the Eisenstein ideal
8.1. Shimura subgroup
8.2. Special case
9. Jacquet-Langlands isogeny
9.1. Modular curves of $D$-elliptic sheaves
9.2. Rigid-analytic uniformization
9.3. Explicit Jacquet-Langlands isogeny conjecture
9.4. Special case
10. Computing the action of Hecke operators
10.1. Action on $H$
10.2. Action on $H'$
10.3. Computation of Brandt matrices
Acknowledgements
References

^1 The first author was supported in part by the Simons Foundation. The second author was partially supported by the National Science Council and the Max Planck Institute for Mathematics.
1. Introduction
1.1. Motivation. Let $\mathbb{F}_q$ be a finite field with $q$ elements, where $q$ is a power of a prime number $p$. Let $A = \mathbb{F}_q[T]$ be the ring of polynomials in the indeterminate $T$ with coefficients in $\mathbb{F}_q$, and $F = \mathbb{F}_q(T)$ the field of fractions of $A$. The degree map $\deg : F \to \mathbb{Z} \cup \{-\infty\}$, which associates to a non-zero polynomial its degree in $T$ (with $\deg(0) = -\infty$), defines a norm on $F$ by $|a| := q^{\deg(a)}$. The corresponding place of $F$ is usually called the place at infinity, and is denoted by $\infty$. We also define a norm and degree on the ideals of $A$ by $|\mathfrak{n}| := \#(A/\mathfrak{n})$ and $\deg(\mathfrak{n}) := \log_q |\mathfrak{n}|$. Let $F_\infty$ denote the completion of $F$ at $\infty$, and $\mathbb{C}_\infty$ the completion of an algebraic closure of $F_\infty$. Let $\Omega := \mathbb{C}_\infty - F_\infty$ be the Drinfeld half-plane.
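As a quick illustration of these norms (our example, not from the paper): for $q = 3$ and $\mathfrak{n} = (T^2 + 1)$, the quotient $A/\mathfrak{n}$ has $3^2 = 9$ elements, so $|\mathfrak{n}| = 9$ and $\deg(\mathfrak{n}) = \log_3 9 = 2$, matching the degree of the generating polynomial.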
Let $\mathfrak{n} \lhd A$ be a non-zero ideal. The level-$\mathfrak{n}$ Hecke congruence subgroup of $\mathrm{GL}_2(A)$,
$$\Gamma_0(\mathfrak{n}) := \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \mathrm{GL}_2(A) \;\middle|\; c \equiv 0 \bmod \mathfrak{n} \right\},$$
plays a central role in this paper. This group acts on $\Omega$ via linear fractional transformations. Drinfeld proved in [6] that the quotient $\Gamma_0(\mathfrak{n}) \backslash \Omega$ is the space of $\mathbb{C}_\infty$-points of an affine curve $Y_0(\mathfrak{n})$ defined over $F$, which is a moduli space of rank-2 Drinfeld modules (we give a more formal discussion of Drinfeld modules and their moduli schemes in Section 4). The unique smooth projective curve over $F$ containing $Y_0(\mathfrak{n})$ as an open subvariety is denoted by $X_0(\mathfrak{n})$. The cusps of $X_0(\mathfrak{n})$ are the finitely many points of the complement of $Y_0(\mathfrak{n})$ in $X_0(\mathfrak{n})$; the cusps generate a finite subgroup $\mathcal{C}(\mathfrak{n})$ of the Jacobian variety $J_0(\mathfrak{n})$ of $X_0(\mathfrak{n})$, called the cuspidal divisor group. By the Lang-Néron theorem, the group of $F$-rational points of $J_0(\mathfrak{n})$ is finitely generated; in particular, its torsion subgroup $\mathcal{T}(\mathfrak{n}) := J_0(\mathfrak{n})(F)_{tor}$ is finite. It is known that when $\mathfrak{n}$ is square-free, $\mathcal{C}(\mathfrak{n}) \subseteq \mathcal{T}(\mathfrak{n})$.
For a square-free ideal $\mathfrak{n} \lhd A$ divisible by an even number of primes, let $D$ be the division quaternion algebra over $F$ with discriminant $\mathfrak{n}$. The group of units $\Gamma^{\mathfrak{n}}$ of a maximal $A$-order in $D$ acts on $\Omega$, and the quotient $\Gamma^{\mathfrak{n}} \backslash \Omega$ is the space of $\mathbb{C}_\infty$-points of a smooth projective curve $X^{\mathfrak{n}}$ defined over $F$; this curve is a moduli space of $D$-elliptic sheaves introduced in [28]. Let $J^{\mathfrak{n}}$ be the Jacobian variety of $X^{\mathfrak{n}}$.
The analogy between $X_0(\mathfrak{n})$ and the classical modular curves $X_0(N)$ over $\mathbb{Q}$ classifying elliptic curves with $\Gamma_0(N)$-structures is well-known and has been extensively studied over the last 35 years. Similarly, the modular curves $X^{\mathfrak{n}}$ are the function field analogues of Shimura curves $X^N$ parametrizing abelian surfaces equipped with an action of the indefinite quaternion algebra over $\mathbb{Q}$ with discriminant $N$.
Let $\mathbb{T}(\mathfrak{n})$ be the $\mathbb{Z}$-algebra generated by the Hecke operators $T_{\mathfrak{m}}$, $\mathfrak{m} \lhd A$, acting on the group $\mathcal{H}_0(\mathscr{T}, \mathbb{Z})^{\Gamma_0(\mathfrak{n})}$ of $\mathbb{Z}$-valued $\Gamma_0(\mathfrak{n})$-invariant cuspidal harmonic cochains on the Bruhat-Tits tree $\mathscr{T}$ of $\mathrm{PGL}_2(F_\infty)$. The Eisenstein ideal $\mathfrak{E}(\mathfrak{n})$ of $\mathbb{T}(\mathfrak{n})$ is the ideal generated by the elements $T_{\mathfrak{p}} - |\mathfrak{p}| - 1$, where $\mathfrak{p} \nmid \mathfrak{n}$ is prime. In this paper we study the Eisenstein ideal in the case when $\mathfrak{n} = \mathfrak{p}\mathfrak{q}$ is a product of two distinct primes, with the goal of applying this theory to two important arithmetic problems: 1) comparing $\mathcal{T}(\mathfrak{n})$ with $\mathcal{C}(\mathfrak{n})$, and 2) constructing explicit homomorphisms $J_0(\mathfrak{n}) \to J^{\mathfrak{n}}$. Our proofs use the rigid-analytic uniformizations of $J_0(\mathfrak{n})$ and $J^{\mathfrak{n}}$ over $F_\infty$. It seems that the existence of actual geometric fibres at $\infty$ allows one to prove stronger results than what is currently known about either of these problems in the classical setting; this is specific to function fields, since the analogue of $\infty$ for $\mathbb{Q}$ is the archimedean place.
Our initial motivation for studying $\mathfrak{E}(\mathfrak{p}\mathfrak{q})$ came from an attempt to prove a function field analogue of Ogg's conjecture [37] about the so-called Jacquet-Langlands isogenies. We briefly recall what this is about. A geometric consequence of the Jacquet-Langlands correspondence [25] is the existence of Hecke-equivariant $\mathbb{Q}$-rational isogenies between the new quotient $J_0(N)^{new}$ of $J_0(N)$ and the Jacobian $J^N$ of $X^N$; see [45]. (Here $N$ is a square-free integer with an even number of prime factors.) The proof of the existence of the aforementioned isogenies relies on Faltings' isogeny theorem, so it provides no information about them beyond the existence. It is a major open problem in this area to make the isogenies more canonical (cf. [24]). In [37], Ogg made several predictions about the kernel of an isogeny $J_0(N)^{new} \to J^N$ when $N = pp'$ is a product of two distinct primes and $p = 2, 3, 5, 7, 13$. As far as the authors are aware, Ogg's conjecture remains open except for the special cases when $J^N$ has dimension 1 ($N = 14, 15, 21, 33, 34$) or dimension 2 ($N = 26, 38, 58$). In these cases, $J^N$ and $J_0(N)^{new}$ are either elliptic curves or, up to isogeny, decompose into a product of two elliptic curves given by explicit Weierstrass equations. One can then find an isogeny $J_0(N)^{new} \to J^N$ by studying the isogenies between these elliptic curves; see the proof of Theorem 3.1 in [21]. This argument does not generalize to $J^N$ of dimension $\geq 3$ because they contain absolutely simple abelian varieties of dimension $\geq 2$, and one's hold on such abelian varieties is decidedly more fleeting.
Now returning to the setting of function fields, let $\mathfrak{n} \lhd A$ be a square-free ideal with an even number of prime factors. The global Jacquet-Langlands correspondence over $F$, combined with the main results in [6] and [28] and Zarhin's isogeny theorem, implies the existence of a Hecke-equivariant $F$-rational isogeny $J_0(\mathfrak{n})^{new} \to J^{\mathfrak{n}}$. In Section 9, by studying the groups of connected components of the Néron models of $J_0(\mathfrak{n})$ and $J^{\mathfrak{n}}$, we propose a function field analogue of Ogg's conjecture (see Conjecture 9.3). This conjecture predicts that, when $\mathfrak{n} = \mathfrak{p}\mathfrak{q}$ is a product of two distinct primes with $\deg(\mathfrak{p}) \leq 2$, there is a Jacquet-Langlands isogeny whose kernel comes from cuspidal divisors and is isomorphic to a specific abelian group. Our approach to proving this conjecture starts with the observation that $\mathcal{C}(\mathfrak{n})$ is annihilated by the Eisenstein ideal $\mathfrak{E}(\mathfrak{n})$ acting on $J_0(\mathfrak{n})$, so we first try to show that there is a Jacquet-Langlands isogeny whose kernel is annihilated by $\mathfrak{E}(\mathfrak{n})$, and then try to describe the kernel of the Eisenstein ideal $J[\mathfrak{E}(\mathfrak{n})]$ in $J_0(\mathfrak{n})$ explicitly enough to pin down the kernel of the isogeny. This naturally leads to the study of $J[\mathfrak{E}(\mathfrak{n})]$ for composite $\mathfrak{n}$. On the other hand, $J[\mathfrak{E}(\mathfrak{n})]$ also plays an important role in the analysis of $\mathcal{T}(\mathfrak{n})$, as was first demonstrated by Mazur in his seminal paper [33] in the case of the classical modular Jacobian $J_0(p)$ of prime level. These two applications of the theory of the Eisenstein ideal constitute the main theme of this paper.
1.2. Main results. The Shimura subgroup $\mathcal{S}(\mathfrak{n})$ of $J_0(\mathfrak{n})$ is the kernel of the homomorphism $J_0(\mathfrak{n}) \to J_1(\mathfrak{n})$ induced by the natural morphism $X_1(\mathfrak{n}) \to X_0(\mathfrak{n})$ of modular curves (see Section 8.1).
Assume $\mathfrak{p} \lhd A$ is prime. Define $N(\mathfrak{p}) = \frac{|\mathfrak{p}|-1}{q-1}$ if $\deg(\mathfrak{p})$ is odd, and define $N(\mathfrak{p}) = \frac{|\mathfrak{p}|-1}{q^2-1}$ otherwise. In [38], Pál developed a theory of the Eisenstein ideal $\mathfrak{E}(\mathfrak{p})$ in parallel with Mazur's paper [33]. In particular, he showed that $J[\mathfrak{E}(\mathfrak{p})]$ is everywhere unramified of order $N(\mathfrak{p})^2$, and is essentially generated by $\mathcal{C}(\mathfrak{p})$ and $\mathcal{S}(\mathfrak{p})$, both of which are cyclic of order $N(\mathfrak{p})$. Moreover, $\mathcal{C}(\mathfrak{p}) = \mathcal{T}(\mathfrak{p})$ and $\mathcal{S}(\mathfrak{p})$ is the largest $\mu$-type subgroup scheme of $J_0(\mathfrak{p})$. These results are the analogues of some of the deepest results from [33], whose proof first establishes that the completion of the Hecke algebra $\mathbb{T}(\mathfrak{p})$ at any maximal ideal in the support of $\mathfrak{E}(\mathfrak{p})$ is Gorenstein.
As we will see in Section 8, even in the simplest composite level case the kernel of the Eisenstein ideal $J[\mathfrak{E}(\mathfrak{n})]$ has properties quite different from its prime level counterpart. For example, $J[\mathfrak{E}(\mathfrak{n})]$ can be ramified, generally $\mathcal{S}(\mathfrak{n})$ has smaller order than $\mathcal{C}(\mathfrak{n})$, neither of these groups is cyclic, and $\mathcal{S}(\mathfrak{n})$ is not the largest $\mu$-type subgroup scheme of $J_0(\mathfrak{n})$.
First, we discuss our results about $\mathcal{C}(\mathfrak{n})$, $\mathcal{S}(\mathfrak{n})$, and $\mathcal{T}(\mathfrak{n})$:
Theorem 1.1.
(1) We give a complete description of $\mathcal{C}(\mathfrak{p}\mathfrak{q})$ as an abelian group; see Theorem 6.11.
(2) For an arbitrary square-free $\mathfrak{n}$ we show that the group scheme $\mathcal{S}(\mathfrak{n})$ is $\mu$-type, and therefore annihilated by $\mathfrak{E}(\mathfrak{n})$, and we give a complete description of $\mathcal{S}(\mathfrak{n})$ as an abelian group; see Proposition 8.5 and Theorem 8.6.
(3) If $\ell \neq p$ is a prime number which does not divide
$$(q-1) \cdot \gcd(|\mathfrak{p}|+1, |\mathfrak{q}|+1),$$
then the $\ell$-primary subgroups of $\mathcal{C}(\mathfrak{p}\mathfrak{q})$ and $\mathcal{T}(\mathfrak{p}\mathfrak{q})$ are equal; see Theorem 7.3.
Usually, many of the primes dividing the order of $\mathcal{C}(\mathfrak{p}\mathfrak{q})$ satisfy the condition in (3), so, aside from a relatively small explicit set of primes, we can determine the $\ell$-primary subgroup $\mathcal{T}(\mathfrak{p}\mathfrak{q})_\ell$ of $\mathcal{T}(\mathfrak{p}\mathfrak{q})$. For example, (1) and (3) imply that if $\ell$ does not divide $(|\mathfrak{p}|^2-1)(|\mathfrak{q}|^2-1)$, then $\mathcal{T}(\mathfrak{p}\mathfrak{q})_\ell = 0$. The most advantageous case for applying (3) is when $\deg(\mathfrak{q}) = \deg(\mathfrak{p}) + 1$, since then $\gcd(|\mathfrak{p}|+1, |\mathfrak{q}|+1)$ divides $q-1$. In particular, if $q = 2$ and $\deg(\mathfrak{q}) = \deg(\mathfrak{p}) + 1$, then we conclude that the odd part of $\mathcal{T}(\mathfrak{p}\mathfrak{q})$ coincides with $\mathcal{C}(\mathfrak{p}\mathfrak{q})$. These results are qualitatively stronger than what is currently known about the rational torsion subgroup $J_0(N)(\mathbb{Q})_{tor}$ of classical modular Jacobians of composite square-free levels.
Outline of the Proof of Theorem 1.1. Although it was known that $\mathcal{C}(\mathfrak{n})$ is finite for any $\mathfrak{n}$ (see Theorem 6.1), there were no general results about its structure, besides the prime level case $\mathfrak{n} = \mathfrak{p}$. The curve $X_0(\mathfrak{p})$ has two cusps, so $\mathcal{C}(\mathfrak{p})$ is cyclic; its order was computed by Gekeler in [10]. The first obvious difference between the prime level and the composite level $\mathfrak{n} = \mathfrak{p}\mathfrak{q}$ is that $X_0(\mathfrak{p}\mathfrak{q})$ has 4 cusps, so $\mathcal{C}(\mathfrak{p}\mathfrak{q})$ is usually not cyclic and is generated by 3 elements. To prove the result mentioned in (1), i.e., to compute the group structure of $\mathcal{C}(\mathfrak{p}\mathfrak{q})$, we follow the strategy in [10], but the calculations become much more complicated. The idea is to use the Drinfeld discriminant function to obtain upper bounds on the orders of cuspidal divisors, and then use canonical specializations of $\mathcal{C}(\mathfrak{p}\mathfrak{q})$ into the component groups of $J_0(\mathfrak{p}\mathfrak{q})$ at $\mathfrak{p}$ and $\mathfrak{q}$ to obtain lower bounds on these orders.
To deduce the group structure of $\mathcal{S}(\mathfrak{n})$ mentioned in (2), we use the rigid-analytic uniformizations of $J_0(\mathfrak{n})$ and $J_1(\mathfrak{n})$ over $F_\infty$, and the "changing levels" result from [18], to reduce the problem to a calculation with finite groups.
The proof of (3) is similar to the proof of Theorem 7.19 in [38], although there are some important differences, too. Suppose $\ell$ is a prime that does not divide $q(q-1)$. Since $J_0(\mathfrak{p}\mathfrak{q})$ has split toric reduction at $\infty$, the $\ell$-primary subgroup $\mathcal{T}(\mathfrak{p}\mathfrak{q})_\ell$ maps injectively into the component group $\Phi_\infty$ of $J_0(\mathfrak{p}\mathfrak{q})$ at $\infty$. Using the Eichler-Shimura relations, one shows that the image of $\mathcal{T}(\mathfrak{p}\mathfrak{q})_\ell$ in $\Phi_\infty$ can be identified with a subspace of $\mathcal{H}_0(\mathscr{T}, \mathbb{Z})^{\Gamma_0(\mathfrak{p}\mathfrak{q})} \otimes \mathbb{Z}/\ell^n\mathbb{Z}$ annihilated by the Eisenstein ideal $\mathfrak{E}(\mathfrak{p}\mathfrak{q})$ for any sufficiently large $n \in \mathbb{N}$. Denote by $\mathcal{E}_{00}(\mathfrak{p}\mathfrak{q}, \mathbb{Z}/\ell^n\mathbb{Z})$ the subspace of $\mathcal{H}_0(\mathscr{T}, \mathbb{Z})^{\Gamma_0(\mathfrak{p}\mathfrak{q})} \otimes \mathbb{Z}/\ell^n\mathbb{Z}$ annihilated by $\mathfrak{E}(\mathfrak{p}\mathfrak{q})$. Then we have the inclusions
$$\mathcal{C}(\mathfrak{p}\mathfrak{q})_\ell \hookrightarrow \mathcal{T}(\mathfrak{p}\mathfrak{q})_\ell \hookrightarrow \mathcal{E}_{00}(\mathfrak{p}\mathfrak{q}, \mathbb{Z}/\ell^n\mathbb{Z}).$$
The space $\mathcal{E}_{00}(\mathfrak{p}\mathfrak{q}, \mathbb{Z}/\ell^n\mathbb{Z})$ contains the reductions modulo $\ell^n$ of certain Eisenstein series. We prove that if $\ell$ does not divide $q(q-1)\gcd(|\mathfrak{p}|+1, |\mathfrak{q}|+1)$, then the whole $\mathcal{E}_{00}(\mathfrak{p}\mathfrak{q}, \mathbb{Z}/\ell^n\mathbb{Z})$ is generated by the reductions of these Eisenstein series (see Theorem 3.9 and Lemma 3.10). This allows us to compute $\mathcal{E}_{00}(\mathfrak{p}\mathfrak{q}, \mathbb{Z}/\ell^n\mathbb{Z})$. It turns out that $\mathcal{E}_{00}(\mathfrak{p}\mathfrak{q}, \mathbb{Z}/\ell^n\mathbb{Z}) \cong \mathcal{C}(\mathfrak{p}\mathfrak{q})_\ell$, and consequently $\mathcal{C}(\mathfrak{p}\mathfrak{q})_\ell = \mathcal{T}(\mathfrak{p}\mathfrak{q})_\ell$. To prove Theorem 3.9, we first prove a version of the key Theorem 1 of the famous paper by Atkin and Lehner [1] for $\mathbb{Z}/\ell^n\mathbb{Z}$-valued harmonic cochains (see Theorem 2.26). The fact that we need to work with $\mathbb{Z}/\ell^n\mathbb{Z}$ rather than $\mathbb{C}$ leads to technical difficulties, which results in the restriction $\ell \nmid q(q-1)\gcd(|\mathfrak{p}|+1, |\mathfrak{q}|+1)$. Note that in our definition the Hecke algebra $\mathbb{T}(\mathfrak{p}\mathfrak{q})$ includes the operators $U_{\mathfrak{p}}$ and $U_{\mathfrak{q}}$. This is important since we need to deal systematically with "old" forms of level $\mathfrak{p}$ and $\mathfrak{q}$. The smaller algebra $\mathbb{T}(\mathfrak{p}\mathfrak{q})^0$ generated by the Hecke operators $T_{\mathfrak{m}}$ with $\mathfrak{m}$ coprime to $\mathfrak{p}\mathfrak{q}$, used by Pál in [38] and [39], is not sufficient for getting a handle on $\mathcal{E}_{00}(\mathfrak{p}\mathfrak{q}, \mathbb{Z}/\ell^n\mathbb{Z})$.
Now we concentrate on the case where we investigate the Jacquet-Langlands isogenies. We fix two primes $x$ and $y$ of $A$ of degree 1 and 2, respectively. This differs from our usual Fraktur notation for ideals of $A$; this is done primarily to make it easy for the reader to distinguish the theorems which assume that the level is $xy$. Several sections in the paper are titled "Special case" and deal exclusively with the case $\mathfrak{p}\mathfrak{q} = xy$. Note that $X_0(\mathfrak{p}\mathfrak{q})$ has genus 0 if $\mathfrak{p}$ and $\mathfrak{q}$ are distinct primes with $\deg(\mathfrak{p}\mathfrak{q}) \leq 2$. The genus of $X_0(xy)$ is $q$, so this curve is the simplest example of a Drinfeld modular curve of composite level and positive genus. Also, by a theorem of Schweizer [49], $X_0(\mathfrak{p}\mathfrak{q})$ is hyperelliptic if and only if $\mathfrak{p} = x$ and $\mathfrak{q} = y$, so one can think of this case as the hyperelliptic case.
The cusps of $X_0(xy)$ can be naturally labelled $[x], [y], [1], [\infty]$; see Lemma 2.14. Let $c_x$ and $c_y$ denote the classes of the divisors $[x] - [\infty]$ and $[y] - [\infty]$ in $J_0(xy)$. First, we show that (see Theorem 7.13)
$$\mathcal{T}(xy) = \mathcal{C}(xy) = \langle c_x \rangle \oplus \langle c_y \rangle \cong \mathbb{Z}/(q+1)\mathbb{Z} \oplus \mathbb{Z}/(q^2+1)\mathbb{Z}.$$
The reason we can prove this stronger result compared to (3) of Theorem 1.1 is that we can compute $\mathcal{E}_{00}(xy, \mathbb{Z}/\ell^n\mathbb{Z})$ without any restrictions on $\ell$, and we can deal with the 2-primary torsion $\mathcal{T}(xy)_2$ using the fact that $X_0(xy)$ is hyperelliptic.
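As a concrete instance of the displayed formula (our arithmetic, not a statement quoted from the paper): for $q = 2$ one gets $\mathcal{T}(xy) = \mathcal{C}(xy) \cong \mathbb{Z}/3\mathbb{Z} \oplus \mathbb{Z}/5\mathbb{Z}$, a group of order 15, on the genus-2 curve $X_0(xy)$.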
To simplify the notation, for the rest of this section denote $\mathbb{T} = \mathbb{T}(xy)$, $\mathfrak{E} = \mathfrak{E}(xy)$, $H := \mathcal{H}_0(\mathscr{T}, \mathbb{Z})^{\Gamma_0(xy)}$, and $H' := \mathcal{H}(\mathscr{T}, \mathbb{Z})^{\Gamma^{xy}}$, where this last group is the group of $\mathbb{Z}$-valued $\Gamma^{xy}$-invariant harmonic cochains on $\mathscr{T}$. We show that (see Corollary 3.18)
$$\mathbb{T}/\mathfrak{E} \cong \mathbb{Z}/(q^2+1)(q+1)\mathbb{Z},$$
so the residue characteristic of any maximal ideal of $\mathbb{T}$ containing $\mathfrak{E}$ divides $(q^2+1)(q+1)$. The Jacquet-Langlands correspondence over $F$ implies that there is an isomorphism $H \otimes \mathbb{Q} \cong H' \otimes \mathbb{Q}$ which is compatible with the action of $\mathbb{T}$.
Theorem 1.2 (See Theorems 9.5 and 9.6).
(1) If $H \cong H'$ as $\mathbb{T}$-modules, then there is an isogeny $J_0(xy) \to J^{xy}$ defined over $F$ whose kernel is cyclic of order $q^2+1$ and is annihilated by $\mathfrak{E}$.
(2) If $H \cong H'$ as $\mathbb{T}$-modules and for every prime $\ell \mid (q^2+1)$ the completion of $\mathbb{T} \otimes \mathbb{Z}_\ell$ at $\mathcal{M} = (\mathfrak{E}, \ell)$ is Gorenstein, then there is an isogeny $J_0(xy) \to J^{xy}$ whose kernel is $\langle c_y \rangle \cong \mathbb{Z}/(q^2+1)\mathbb{Z}$.
Remark 1.3. An isogeny $J_0(xy) \to J^{xy}$ with kernel $\langle c_y \rangle$ does not respect the canonical principal polarizations on the Jacobians, since $\langle c_y \rangle$ is not a maximal isotropic subgroup of $J_0(xy)$ with respect to the Weil pairing.
Outline of the Proof of Theorem 1.2. Both $J_0(xy)$ and $J^{xy}$ have rigid-analytic uniformizations over $F_\infty$. The assumption that $H$ and $H'$ are isomorphic $\mathbb{T}$-modules allows us to identify the uniformizing tori of both Jacobians with $\mathbb{T} \otimes \mathbb{C}_\infty^\times$. Next, we show that the groups of connected components of the Néron models of $J_0(xy)$ and $J^{xy}$ at $\infty$ are annihilated by $\mathfrak{E}$. This allows us to identify the uniformizing lattices of the Jacobians with ideals in $\mathbb{T}$. These two observations, combined with a theorem of Gerritzen, imply (1). If in addition we assume that $\mathbb{T}_\mathcal{M}$ is Gorenstein, then we get an explicit description of the kernel of the Eisenstein ideal, from which (2) follows.
Proving that the assumptions in Theorem 1.2 hold seems difficult. First, even though $H \otimes \mathbb{Q}$ and $H' \otimes \mathbb{Q}$ are isomorphic $\mathbb{T}$-modules, the integral isomorphism is much more subtle. It is related to a classical problem about the conjugacy classes of matrices in $\mathrm{Mat}_n(\mathbb{Z})$; cf. [27]. Second, when $\ell \mid (q^2+1)$ the kernel of $\mathcal{M}$ in $J_0(xy)$ is ramified, and Mazur's Eisenstein descent arguments for proving that $\mathbb{T}_\mathcal{M}$ is Gorenstein do not work in this ramified situation. (Both versions of Mazur's descent discussed in [38, §§10, 11] rely on subtle arithmetic properties of $J_0(\mathfrak{p})$ which are valid only for prime level.)
Nevertheless, both assumptions in Theorem 1.2 can be verified computationally; Section 10 is devoted to these calculations. We were able to check the assumptions in several cases for each prime $q \leq 7$. In particular, we were able to go beyond dimension 2, which is currently the only dimension where Ogg's conjecture is known to be true over $\mathbb{Q}$. Section 10 is also of independent interest since it provides an algorithm for computing the action of Hecke operators on $H'$; this should be useful in other arithmetic problems dealing with $X^{xy}$. (An algorithm for computing the Hecke action on $H$ was already known from the work of Gekeler; see Remark 10.2.)
1.3. Notation. Aside from $\infty$, the places of $F$ are in bijection with the non-zero prime ideals of $A$. Given a place $v$ of $F$, we denote by $F_v$ the completion of $F$ at $v$, by $\mathcal{O}_v$ the ring of integers of $F_v$, and by $\mathbb{F}_v$ the residue field of $\mathcal{O}_v$. The valuation $\mathrm{ord}_v : F_v \to \mathbb{Z}$ is assumed to be normalized by $\mathrm{ord}_v(\pi_v) = 1$, where $\pi_v$ is a uniformizer of $\mathcal{O}_v$. The normalized absolute value on $F_\infty$ is denoted by $|\cdot|$.
Given a field $K$, we denote by $\bar{K}$ an algebraic closure of $K$ and by $K^{sep}$ a separable closure in $\bar{K}$. The absolute Galois group $\mathrm{Gal}(K^{sep}/K)$ is denoted by $G_K$. Moreover, $F_v^{nr}$ and $\mathcal{O}_v^{nr}$ will denote the maximal unramified extension of $F_v$ and its ring of integers, respectively.
Let $R$ be a commutative ring with identity. We denote by $R^\times$ the group of multiplicative units of $R$. Let $\mathrm{Mat}_n(R)$ be the ring of $n \times n$ matrices over $R$, $\mathrm{GL}_n(R)$ the group of matrices whose determinant is in $R^\times$, and $Z(R) \cong R^\times$ the subgroup of $\mathrm{GL}_n(R)$ consisting of scalar matrices.
If $X$ is a scheme over a base $S$ and $S' \to S$ is any base change, $X_{S'}$ denotes the pullback of $X$ to $S'$. If $S' = \mathrm{Spec}(R)$ is affine, we may also denote this scheme by $X_R$. By $X(S')$ we mean the $S'$-rational points of the $S$-scheme $X$, and again, if $S' = \mathrm{Spec}(R)$, we may also denote this set by $X(R)$.
Given a commutative finite flat group scheme $G$ over a base $S$ (or just an abelian group $G$, or a ring $G$) and an integer $n$, $G[n]$ is the kernel of multiplication by $n$ in $G$, and $G_\ell$ is the maximal $\ell$-primary subgroup of $G$. The Cartier dual of $G$ is denoted by $G^*$.
Given an ideal $\mathfrak{n} \lhd A$, by abuse of notation we denote by the same symbol the unique monic polynomial in $A$ generating $\mathfrak{n}$. It will always be clear from the context in which capacity $\mathfrak{n}$ is used; for example, if $\mathfrak{n}$ appears in a matrix, column vector, or a polynomial equation, then the monic polynomial is implied.
The prime ideals $\mathfrak{p} \lhd A$ are always assumed to be non-zero.
2. Harmonic cochains and Hecke operators
2.1. Harmonic cochains. Let $G$ be an oriented connected graph in the sense of Definition 1 of §2.1 in [50]. We denote by $V(G)$ and $E(G)$ its set of vertices and edges, respectively. For an edge $e \in E(G)$, let $o(e), t(e) \in V(G)$ and $\bar{e} \in E(G)$ be its origin, terminus, and inversely oriented edge, respectively. In particular, $t(\bar{e}) = o(e)$ and $o(\bar{e}) = t(e)$. We will assume that for any $v \in V(G)$ the number of edges with $t(e) = v$ is finite, and $t(e) \neq o(e)$ for any $e \in E(G)$ (i.e., $G$ has no loops). A path in $G$ is a sequence of edges $\{e_i\}_{i \in I}$ indexed by a set $I$, where $I = \mathbb{Z}$, $I = \mathbb{N}$, or $I = \{1, \ldots, m\}$ for some $m \in \mathbb{N}$, such that $t(e_i) = o(e_{i+1})$ for every $i, i+1 \in I$. We say that the path is without backtracking if $e_i \neq \bar{e}_{i+1}$ for every $i, i+1 \in I$. We say that a path without backtracking $\{e_i\}_{i \in \mathbb{N}}$ is a half-line if for every vertex $v$ of $G$ there is at most one index $n \in \mathbb{N}$ such that $v = o(e_n)$.
Let $\Gamma$ be a group acting on a graph $G$, i.e., acting via automorphisms. We say that $\Gamma$ acts with inversion if there is $\gamma \in \Gamma$ and $e \in E(G)$ such that $\gamma e = \bar{e}$. If $\Gamma$ acts without inversion, then we have a natural quotient graph $\Gamma \backslash G$ such that $V(\Gamma \backslash G) = \Gamma \backslash V(G)$ and $E(\Gamma \backslash G) = \Gamma \backslash E(G)$; cf. [50, p. 25].
Definition 2.1. Fix a commutative ring $R$ with identity. An $R$-valued harmonic cochain on $G$ is a function $f : E(G) \to R$ that satisfies
(i) $f(e) + f(\bar{e}) = 0$ for all $e \in E(G)$;
(ii) $\displaystyle\sum_{\substack{e \in E(G) \\ t(e) = v}} f(e) = 0$ for all $v \in V(G)$.
Denote by $\mathcal{H}(G, R)$ the group of $R$-valued harmonic cochains on $G$.
The most important graphs in this paper are the Bruhat-Tits tree $\mathscr{T}$ of $\mathrm{PGL}_2(F_\infty)$ and the quotients of $\mathscr{T}$. We recall the definition and introduce some notation for later use. Fix a uniformizer $\pi_\infty$ of $F_\infty$. The sets of vertices $V(\mathscr{T})$ and edges $E(\mathscr{T})$ are the cosets $\mathrm{GL}_2(F_\infty)/Z(F_\infty)\mathrm{GL}_2(\mathcal{O}_\infty)$ and $\mathrm{GL}_2(F_\infty)/Z(F_\infty)\mathcal{I}_\infty$, respectively, where $\mathcal{I}_\infty$ is the Iwahori group:
$$\mathcal{I}_\infty = \left\{ \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in \mathrm{GL}_2(\mathcal{O}_\infty) \;\middle|\; c \equiv 0 \bmod \pi_\infty \right\}.$$
The matrix $\begin{pmatrix} 0 & 1 \\ \pi_\infty & 0 \end{pmatrix}$ normalizes $\mathcal{I}_\infty$, so multiplication from the right by this matrix on $\mathrm{GL}_2(F_\infty)$ induces an involution on $E(\mathscr{T})$; this involution is $e \mapsto \bar{e}$. The matrices
$$E(\mathscr{T})^+ = \left\{ \begin{pmatrix} \pi_\infty^k & u \\ 0 & 1 \end{pmatrix} \;\middle|\; k \in \mathbb{Z},\ u \in F_\infty,\ u \text{ taken modulo } \pi_\infty^k \mathcal{O}_\infty \right\} \tag{2.1}$$
are in distinct left cosets of $\mathcal{I}_\infty Z(F_\infty)$, and there is a disjoint decomposition (cf. [12, (1.6)])
$$E(\mathscr{T}) = E(\mathscr{T})^+ \sqcup E(\mathscr{T})^+ \begin{pmatrix} 0 & 1 \\ \pi_\infty & 0 \end{pmatrix}.$$
We call the edges in $E(\mathscr{T})^+$ positively oriented.
The group $\mathrm{GL}_2(F_\infty)$ naturally acts on $E(\mathscr{T})$ by left multiplication. This induces an action on the group of $R$-valued functions on $E(\mathscr{T})$: for a function $f$ on $E(\mathscr{T})$ and $\gamma \in \mathrm{GL}_2(F_\infty)$ we define the function $f|\gamma$ on $E(\mathscr{T})$ by $(f|\gamma)(e) = f(\gamma e)$. It is clear from the definition that $f|\gamma$ is harmonic if $f$ is harmonic, and for any $\gamma, \sigma \in \mathrm{GL}_2(F_\infty)$ we have $(f|\gamma)|\sigma = f|(\gamma\sigma)$.
Let $\Gamma$ be a subgroup of $\mathrm{GL}_2(F_\infty)$ which acts on $\mathscr{T}$ without inversions. Denote by $\mathcal{H}(\mathscr{T}, R)^\Gamma$ the subgroup of $\Gamma$-invariant harmonic cochains, i.e., those with $f|\gamma = f$ for all $\gamma \in \Gamma$. It is clear that $f \in \mathcal{H}(\mathscr{T}, R)^\Gamma$ defines a function $f'$ on the quotient graph $\Gamma \backslash \mathscr{T}$, and $f$ itself can be uniquely recovered from this function: if $e \in E(\mathscr{T})$ maps to $\tilde{e} \in E(\Gamma \backslash \mathscr{T})$ under the quotient map, then $f(e) = f'(\tilde{e})$. The conditions of harmonicity (i) and (ii) can be formulated in terms of $f'$ as follows. Since $\Gamma$ acts without inversion, (i) is equivalent to
(i′) $f'(\tilde{e}) + f'(\bar{\tilde{e}}) = 0$ for all $\tilde{e} \in E(\Gamma \backslash \mathscr{T})$.
Let $v \in V(\mathscr{T})$ and let $\tilde{v} \in V(\Gamma \backslash \mathscr{T})$ be its image. The stabilizer group $\Gamma_v = \{\gamma \in \Gamma \mid \gamma v = v\}$ acts on the set $\{e \in E(\mathscr{T}) \mid t(e) = v\}$, and the orbits correspond to $\{\tilde{e} \in E(\Gamma \backslash \mathscr{T}) \mid t(\tilde{e}) = \tilde{v}\}$. Let $\Gamma_e := \{\gamma \in \Gamma \mid \gamma e = e\}$; clearly $\Gamma_e$ is a subgroup of $\Gamma_{t(e)}$. The weight of $e$,
$$w(e) := [\Gamma_{t(e)} : \Gamma_e],$$
is the length of the orbit corresponding to $e$. Since $w(e)$ depends only on its image $\tilde{e}$ in $\Gamma \backslash \mathscr{T}$, we can define $w(\tilde{e}) := w(e)$. Note that $\sum_{t(\tilde{e}) = \tilde{v}} w(\tilde{e}) = q + 1$. We stress that, in general, $w(e)$ depends on the orientation, i.e., $w(e) \neq w(\bar{e})$.
With this notation, condition (ii) is equivalent to
(ii′) $\displaystyle\sum_{\substack{\tilde{e} \in E(\Gamma \backslash \mathscr{T}) \\ t(\tilde{e}) = \tilde{v}}} w(\tilde{e}) f'(\tilde{e}) = 0$ for all $\tilde{v} \in V(\Gamma \backslash \mathscr{T});$
cf. [18, (3.1)].
Definition 2.2. The group of $R$-valued cuspidal harmonic cochains for $\Gamma$, denoted $\mathcal{H}_0(\mathscr{T}, R)^\Gamma$, is the subgroup of $\mathcal{H}(\mathscr{T}, R)^\Gamma$ consisting of functions which have compact support as functions on $\Gamma \backslash \mathscr{T}$, i.e., functions which have value 0 on all but finitely many edges of $\Gamma \backslash \mathscr{T}$. Let $\mathcal{H}_{00}(\mathscr{T}, R)^\Gamma$ denote the image of $\mathcal{H}_0(\mathscr{T}, \mathbb{Z})^\Gamma \otimes R$ in $\mathcal{H}_0(\mathscr{T}, R)^\Gamma$.
Definition 2.3. It is known that the quotient graph $\Gamma_0(\mathfrak{n}) \backslash \mathscr{T}$ is the edge-disjoint union
$$\Gamma_0(\mathfrak{n}) \backslash \mathscr{T} = (\Gamma_0(\mathfrak{n}) \backslash \mathscr{T})^0 \cup \bigcup_s h_s$$
of a finite graph $(\Gamma_0(\mathfrak{n}) \backslash \mathscr{T})^0$ with a finite number of half-lines $h_s$, called cusps; cf. Theorem 2 on page 106 of [50]. The cusps are in bijection with the orbits of the natural action of $\Gamma_0(\mathfrak{n})$ on $\mathbb{P}^1(F)$; cf. Remark 2 on page 110 of [50].
To simplify the notation, we put
$$\mathcal{H}(\mathfrak{n}, R) := \mathcal{H}(\mathscr{T}, R)^{\Gamma_0(\mathfrak{n})}, \qquad \mathcal{H}_0(\mathfrak{n}, R) := \mathcal{H}_0(\mathscr{T}, R)^{\Gamma_0(\mathfrak{n})},$$
and let $\mathcal{H}_{00}(\mathfrak{n}, R)$ be the image of $\mathcal{H}_0(\mathfrak{n}, \mathbb{Z}) \otimes R$ in $\mathcal{H}_0(\mathfrak{n}, R)$.
One can show that $\mathcal{H}_0(\mathfrak{n}, \mathbb{Z})$ and $\mathcal{H}(\mathfrak{n}, \mathbb{Z})$ are finitely generated free $\mathbb{Z}$-modules of rank $g(\mathfrak{n})$ and $g(\mathfrak{n}) + c(\mathfrak{n}) - 1$, respectively, where $g(\mathfrak{n})$ is the genus of $X_0(\mathfrak{n})$ and $c(\mathfrak{n})$ is the number of cusps.
From the above description it is clear that $f$ is in $\mathcal{H}_0(\mathfrak{n}, R)$ if and only if it eventually vanishes on each $h_s$. It is also clear that if $R$ is flat over $\mathbb{Z}$, then $\mathcal{H}_0(\mathfrak{n}, R) = \mathcal{H}_{00}(\mathfrak{n}, R)$. On the other hand, it is easy to construct examples where this equality does not hold.
Example 2.4. The quotient graph $\mathrm{GL}_2(A) \backslash \mathscr{T}$ is a half-line; see Figure 1. Denote the edge with origin $v_i$ and terminus $v_{i+1}$ by $e_i$. The stabilizers of the vertices and edges of $\mathrm{GL}_2(A) \backslash \mathscr{T}$ are well-known, cf. [17, p. 691]. From this one computes $w(e_i) = q$ for all $i$, $w(\bar{e}_0) = q + 1$, and $w(\bar{e}_i) = 1$ for $i \geq 1$. Therefore, if $\varphi \in \mathcal{H}(1, R)$, then $\varphi(e_i) = q^i \alpha$ ($i \geq 0$) for some fixed $\alpha \in R[q+1]$. Now it is clear that $\mathcal{H}(1, R) = R[q+1]$ and $\mathcal{H}_0(1, R) = \mathcal{H}_{00}(1, R) = 0$.
Figure 1. $\mathrm{GL}_2(A) \backslash \mathscr{T}$ (a half-line with vertices $v_0, v_1, v_2, v_3, \ldots$).
Figure 2. $\Gamma_0(x) \backslash \mathscr{T}$ (a two-sided line with vertices $v_i$, $i \in \mathbb{Z}$).
Figure 3. $\Gamma_0(y) \backslash \mathscr{T}$ (vertices $v_i$, $i \in \mathbb{Z}$, together with an additional vertex $u$).
Example 2.5. The graph $\Gamma_0(x) \backslash \mathscr{T}$ is given in Figure 2, where the vertex $v_i$ ($i \in \mathbb{Z}$) is the image of $\begin{pmatrix} T^i & 0 \\ 0 & 1 \end{pmatrix} \in V(\mathscr{T})$; the positive orientation is induced from $E(\mathscr{T})^+$. Denote by $e_i$ the edge with origin $v_{i-1}$ and terminus $v_i$. Since $\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} v_{-i} = v_i$ and the stabilizers of $v_i$ ($i \geq 0$) in $\mathrm{GL}_2(A)$ are well-known (cf. [17, p. 691]), one easily computes
$$w(e_i) = \begin{cases} q & \text{if } i \geq 0 \\ 1 & \text{if } i \leq -1 \end{cases} \qquad w(\bar{e}_i) = \begin{cases} 1 & \text{if } i \geq -1 \\ q & \text{if } i \leq -2. \end{cases}$$
Suppose $\varphi \in \mathcal{H}(x, R)$ and denote $\alpha = \varphi(e_{-1})$. Since $w(e_i)\varphi(e_i) = w(\bar{e}_{i+1})\varphi(e_{i+1})$, we get
$$\varphi(e_i) = \begin{cases} \alpha q^{i+1} & \text{if } i \geq -1 \\ \alpha & \text{if } i = -2 \\ \alpha q^{-i-3} & \text{if } i \leq -3. \end{cases}$$
We conclude that $\mathcal{H}(x, R) = R$, $\mathcal{H}_0(x, R) = R_p$, and $\mathcal{H}_{00}(x, R) = 0$. (Recall that $R_p$ denotes the $p$-primary subgroup of $R$.)
Example 2.6. The graph $\Gamma_0(y) \backslash \mathscr{T}$ is given in Figure 3, where $v_i$ is the image of $\begin{pmatrix} T^i & 0 \\ 0 & 1 \end{pmatrix} \in V(\mathscr{T})$ and $u$ is the image of $\begin{pmatrix} T^{-2} & T^{-1} \\ 0 & 1 \end{pmatrix}$. We denote the edge with origin $v_{i-1}$ and terminus $v_i$ by $e_i$, and the edge with terminus $u$ by $e_u$. One computes
$$w(e_i) = \begin{cases} q & \text{if } i \geq 0 \\ 1 & \text{if } i \leq -1 \end{cases} \qquad w(\bar{e}_i) = \begin{cases} 1 & \text{if } i \geq 0 \\ q & \text{if } i \leq -1 \end{cases} \qquad w(e_u) = q + 1, \quad w(\bar{e}_u) = q - 1.$$
Let $\varphi \in \mathcal{H}(y, R)$. Denote $\varphi(e_0) = \alpha$ and $\varphi(e_u) = \beta$. Then $(q+1)\beta = 0$ and
$$\varphi(e_i) = \begin{cases} \alpha q^i & \text{if } i \geq 0 \\ q^{-i-1}(\alpha + (q-1)\beta) & \text{if } i \leq -1. \end{cases}$$
This implies that $\mathcal{H}(y, R) \cong R \oplus R[q+1]$. For $\varphi$ to be cuspidal we must have $q^n \alpha = 0$ and $q^n(q-1)\beta = 0$ for some $n \geq 1$. Thus, $\alpha \in R_p$ and $\beta \in R[2]$ (resp. $\beta = 0$) if $p$ is odd (resp. 2). We get an isomorphism $\mathcal{H}_0(y, R) \cong R_p \oplus R[2]$ if $p$ is odd and $\mathcal{H}_0(y, R) \cong R_2$ if $p = 2$. Note that $\mathcal{H}_{00}(y, R) = 0$.
Lemma 2.7. The following holds:
(1) If $\mathfrak{n} \lhd A$ has a prime divisor of odd degree, assume $q(q-1) \in R^\times$; otherwise, assume $q(q^2-1) \in R^\times$. Then $\mathcal{H}_0(\mathfrak{n}, R) = \mathcal{H}_{00}(\mathfrak{n}, R)$.
(2) If $\mathfrak{n} = \mathfrak{p}$ is prime and $q(q-1) \in R^\times$, then $\mathcal{H}_0(\mathfrak{n}, R) = \mathcal{H}_{00}(\mathfrak{n}, R)$.
Proof. Our proof relies on the results in [17], and is partly motivated by the proof of Theorem 3.3 in [17]. Let $\Gamma := \Gamma_0(\mathfrak{n})$. By 1.11 and 2.10 in [17], the stabilizer $\Gamma_v$ for any $v \in V(\mathscr{T})$ is finite, contains the scalar matrices $Z(\mathbb{F}_q)$, and $n(v) := \#\Gamma_v/\mathbb{F}_q^\times$ either divides $(q-1)q^m$ for some $m \geq 0$, or is equal to $q+1$. Moreover, $n(v) = q+1$ is possible only if all prime divisors of $\mathfrak{n}$ have even degrees. Overall, we see that our assumptions in (1) imply that $n(v)$ is invertible in $R$ for any $v \in V(\mathscr{T})$. Since the stabilizer $\Gamma_e$ of any $e \in E(\mathscr{T})$ is a subgroup of $\Gamma_{t(e)}$ containing $Z(\mathbb{F}_q)$, we also have $n(e) := \#\Gamma_e/\mathbb{F}_q^\times \in R^\times$. Note that $n(e)$ does not depend on the orientation of $e$ and depends only on its image $\tilde{e}$ in $\Gamma \backslash \mathscr{T}$, so we can define $n(\tilde{e}) = n(e)$.
Let $\mathcal{H}_0(\Gamma \backslash \mathscr{T}, R)$ be the subgroup of $\mathcal{H}(\Gamma \backslash \mathscr{T}, R)$ consisting of compactly supported harmonic cochains on $\Gamma \backslash \mathscr{T}$. There is an injective homomorphism
$$\mathcal{H}_0(\Gamma \backslash \mathscr{T}, R) \to \mathcal{H}_0(\mathfrak{n}, R) \tag{2.2}$$
defined by $\varphi^\dagger(\tilde{e}) = n(\tilde{e})\varphi(\tilde{e})$. Indeed, since $n(\tilde{e})$ does not depend on the orientation of $e$, $\varphi^\dagger$ clearly satisfies (i′). As for (ii′), we have
$$\sum_{\substack{\tilde{e} \in E(\Gamma \backslash \mathscr{T}) \\ t(\tilde{e}) = \tilde{v}}} w(\tilde{e}) \varphi^\dagger(\tilde{e}) = \sum_{\substack{\tilde{e} \in E(\Gamma \backslash \mathscr{T}) \\ t(\tilde{e}) = \tilde{v}}} w(\tilde{e})\, n(\tilde{e})\, \varphi(\tilde{e}) = n(\tilde{v}) \sum_{\substack{\tilde{e} \in E(\Gamma \backslash \mathscr{T}) \\ t(\tilde{e}) = \tilde{v}}} \varphi(\tilde{e}) = 0,$$
using $w(\tilde{e})\, n(\tilde{e}) = n(\tilde{v})$.
The map (2.2) is also defined over $\mathbb{Z}$, and by [17, Thm. 3.3] gives an isomorphism $\mathcal{H}_0(\Gamma \backslash \mathscr{T}, \mathbb{Z}) \xrightarrow{\sim} \mathcal{H}_0(\mathfrak{n}, \mathbb{Z})$. Next, there is an isomorphism
$$\mathcal{H}_0(\Gamma \backslash \mathscr{T}, R) \cong \mathcal{H}_0(\Gamma \backslash \mathscr{T}, \mathbb{Z}) \otimes_{\mathbb{Z}} R,$$
which follows, for example, by observing that $H_1(\Gamma \backslash \mathscr{T}, R) \cong \mathcal{H}_0(\Gamma \backslash \mathscr{T}, R)$ and applying the universal coefficient theorem for simplicial homology. Hence
$$\mathcal{H}_0(\Gamma \backslash \mathscr{T}, R) \cong \mathcal{H}_0(\Gamma \backslash \mathscr{T}, \mathbb{Z}) \otimes_{\mathbb{Z}} R \cong \mathcal{H}_0(\mathfrak{n}, \mathbb{Z}) \otimes_{\mathbb{Z}} R.$$
Let $g = \mathrm{rank}_{\mathbb{Z}} \mathcal{H}_0(\Gamma \backslash \mathscr{T}, \mathbb{Z})$. Thinking of the elements of $\mathcal{H}_0(\Gamma \backslash \mathscr{T}, \mathbb{Z})$ as 1-cycles, it is easy to show by induction on $g$ that one can choose $e_1, \ldots, e_g \in E(\Gamma \backslash \mathscr{T})$ and a $\mathbb{Z}$-basis $\varphi_1, \ldots, \varphi_g$ of $\mathcal{H}_0(\Gamma \backslash \mathscr{T}, \mathbb{Z})$ such that $\Gamma \backslash \mathscr{T} - \{e_1, \ldots, e_g\}$ is a tree, and $\varphi_i(e_j) = \delta_{ij}$ (Kronecker's delta), $1 \leq i, j \leq g$. By slight abuse of notation, denote the image of $\varphi_i^\dagger$ in $\mathcal{H}_{00}(\mathfrak{n}, R)$ by the same symbol. Let $\psi \in \mathcal{H}_0(\mathfrak{n}, R)$. Then
$$\psi' := \psi - \sum_{i=1}^{g} \frac{\psi(e_i)}{n(e_i)} \varphi_i^\dagger$$
is supported on a finite subtree $S$ of $\Gamma \backslash \mathscr{T}$. Let $v \in V(S)$ be a vertex such that there is a unique $e \in E(S)$ with $t(e) = v$. Note that $w(e) \in R^\times$. Condition (ii′) gives $w(e)\psi'(e) = 0$, so $\psi'(e) = 0$. This process can be iterated to show that $\psi' = 0$. This implies that the natural map $\mathcal{H}_0(\mathfrak{n}, \mathbb{Z}) \otimes_{\mathbb{Z}} R \to \mathcal{H}_0(\mathfrak{n}, R)$ is surjective, which is part (1).
To prove part (2), we can assume that $\deg(\mathfrak{p})$ is even. A consequence of 2.7 and 2.8 in [17] is that there is a unique $v_0 \in V(\Gamma \backslash \mathscr{T})$ with $n(v_0) = q+1$ and a unique $e_0 \in E(\Gamma \backslash \mathscr{T})$ with $o(e_0) = v_0$. For any other $v \in V(\Gamma \backslash \mathscr{T})$, $n(v)$ divides $(q-1)q^m$. Since the stabilizer of any edge $e \in E(\Gamma \backslash \mathscr{T})$ is a subgroup of the stabilizers of both $t(e)$ and $o(e)$, we have $n(e) \in R^\times$. After this observation, we can repeat the argument used to prove (1) to reduce the problem to showing that any $\psi \in \mathcal{H}_0(\mathfrak{p}, R)$ supported on a finite tree $S$ is identically 0. We can always choose $v \in V(S)$ to be a vertex different from $v_0$ but such that there is a unique $e \in E(S)$ with $t(e) = v$. Since $w(e)$ is a unit in $R$, we can also finish as in part (1).
The conclusion in Example 2.6 that $\mathcal{H}_0(y, R) \neq \mathcal{H}_{00}(y, R)$ if $R[2] \neq 0$ is a special case of a general fact:
Lemma 2.8. Assume $p$ is odd and invertible in $R$. Let $\mathfrak{p} \lhd A$ be prime of even degree. If $R[2] \neq 0$, then $\mathcal{H}_0(\mathfrak{p}, R) \neq \mathcal{H}_{00}(\mathfrak{p}, R)$.
Proof. Let $\Gamma := \Gamma_0(\mathfrak{p})$. As in Lemma 2.7, let $v_0$ be the unique vertex of $\Gamma \backslash \mathscr{T}$ with $n(v_0) = q+1$, and let $e_0 \in E(\Gamma \backslash \mathscr{T})$ be the unique edge with $o(e_0) = v_0$. Note that $w(\bar{e}_0) = q+1$. As we already mentioned in the proof of Lemma 2.7, for any other vertex $v$ in $\Gamma \backslash \mathscr{T}$, $n(v)$ divides $(q-1)q^m$. Moreover, it is easy to see, for example by case (a) of Lemma 2.7 in [17], that there is at least one vertex $v$ such that $n(v)$ is divisible by $q-1$. Consider all the paths without backtracking connecting $v_0$ to such a vertex, and fix a path $\{e_0, e_1, \ldots, e_m\}$ of shortest length. Then $w(\bar{e}_i)$ ($1 \leq i \leq m$) is invertible in $R$, but $w(e_m)$ is divisible by $q-1$. For a fixed non-zero $\alpha \in R[2]$, define $f$ on $E(\Gamma \backslash \mathscr{T})$ by $f(e_0) = \alpha$,
$$f(e_i) = \frac{w(e_{i-1})}{w(\bar{e}_i)} f(e_{i-1}) \quad (1 \leq i \leq m),$$
$f(\bar{e}_j) = f(e_j)$ ($0 \leq j \leq m$), and $f(e) = 0$ for all other edges. It is easy to see that $f \in \mathcal{H}_0(\mathfrak{p}, R)$. On the other hand, any function $\varphi \in \mathcal{H}_0(\mathfrak{p}, \mathbb{Z})$ must be zero on $e_0$, since condition (ii′) for $v_0$ gives $(q+1)\varphi(\bar{e}_0) = 0$. Therefore, $f \notin \mathcal{H}_{00}(\mathfrak{p}, R)$.
Remark 2.9. The fact stated in Lemma 2.8 is deduced in [38] by different (algebro-geometric) methods. Our combinatorial proof seems to answer the question in Remark 11.9 in [38].
2.2. Hecke operators and Atkin-Lehner involutions. Assume $\mathfrak{n} \lhd A$ is fixed. Given a non-zero ideal $\mathfrak{m} \lhd A$, define an $R$-linear transformation of the space of $R$-valued functions on $E(\mathscr{T})$ by
$$f | T_{\mathfrak{m}} = \sum f \,\Big|\, \begin{pmatrix} a & b \\ 0 & d \end{pmatrix},$$
where $f|\gamma$ for $\gamma \in \mathrm{GL}_2(F_\infty)$ is defined in Section 2.1, and the above sum is over $a, b, d \in A$ such that $a, d$ are monic, $(ad) = \mathfrak{m}$, $(a) + \mathfrak{n} = A$, and $\deg(b) < \deg(d)$. This transformation is the $\mathfrak{m}$-th Hecke operator. Following a common convention, for a prime divisor $\mathfrak{p}$ of $\mathfrak{n}$ we often write $U_{\mathfrak{p}}$ instead of $T_{\mathfrak{p}}$.
Proposition 2.10. The Hecke operators preserve the spaces $\mathcal{H}(\mathfrak{n}, R)$ and $\mathcal{H}_0(\mathfrak{n}, R)$, and satisfy the recursive formulas:
$$T_{\mathfrak{m}\mathfrak{m}'} = T_{\mathfrak{m}} T_{\mathfrak{m}'} \quad \text{if } \mathfrak{m} + \mathfrak{m}' = A,$$
$$T_{\mathfrak{p}^i} = T_{\mathfrak{p}^{i-1}} T_{\mathfrak{p}} - |\mathfrak{p}|\, T_{\mathfrak{p}^{i-2}} \quad \text{if } \mathfrak{p} \nmid \mathfrak{n},$$
$$T_{\mathfrak{p}^i} = T_{\mathfrak{p}}^i \quad \text{if } \mathfrak{p} \mid \mathfrak{n}.$$
Proof. The group-theoretic proofs of the analogous statement for the Hecke operators acting on classical modular forms work also in this setting; cf. [34].
Definition2.11. LetT(n) be the commutative subalgebra of EndZ(H^0(n,Z)) with the same unity element generated by all Hecke operators. LetT(n)^0to be the subalgebra of T(n) generated by the Hecke
operatorsTm with m coprime to n.
For every ideal $\mathfrak{m}$ dividing $\mathfrak{n}$ with $\gcd(\mathfrak{m},\mathfrak{n}/\mathfrak{m})=1$, let $W_{\mathfrak{m}}$ be any matrix in $\operatorname{Mat}_2(A)$ of the form
$$W_{\mathfrak{m}}=\begin{pmatrix}am & b\\ cn & dm\end{pmatrix}$$
such that $a,b,c,d\in A$ and the ideal generated by $\det(W_{\mathfrak{m}})$ in $A$ is $\mathfrak{m}$. It is not hard to check that for $f\in\mathcal{H}(\mathfrak{n},R)$, $f|W_{\mathfrak{m}}$ does not depend on the choice of the matrix for $W_{\mathfrak{m}}$, and $f|W_{\mathfrak{m}}\in\mathcal{H}(\mathfrak{n},R)$. Moreover, as $R$-linear endomorphisms of $\mathcal{H}(\mathfrak{n},R)$, the $W_{\mathfrak{m}}$'s satisfy
$$(2.4)\qquad W_{\mathfrak{m}_1}W_{\mathfrak{m}_2}=W_{\mathfrak{m}_3},\quad\text{where }\mathfrak{m}_3=\frac{\mathfrak{m}_1\mathfrak{m}_2}{\gcd(\mathfrak{m}_1,\mathfrak{m}_2)^{2}}.$$
Therefore, the matrices $W_{\mathfrak{m}}$ acting on the $R$-module $\mathcal{H}(\mathfrak{n},R)$ generate an abelian group $W\cong(\mathbb{Z}/2\mathbb{Z})^{s}$, called the group of Atkin–Lehner involutions, where $s$ is the number of prime divisors of $\mathfrak{n}$. The following proposition, whose proof we omit, follows from calculations similar to those in [1, §2].

Proposition 2.12. Let
$$B_{\mathfrak{m}}=\begin{pmatrix}m & 0\\ 0 & 1\end{pmatrix}.$$
(1) If $\mathfrak{n}$ is coprime to $\mathfrak{m}$ and $f\in\mathcal{H}(\mathfrak{n},R)$, then
$$(f|B_{\mathfrak{m}})|W_{\mathfrak{m}}=f,$$
where $W_{\mathfrak{m}}$ is the Atkin–Lehner involution acting on $\mathcal{H}(\mathfrak{n}\mathfrak{m},R)$. (Note that by Lemma 2.25, $f|B_{\mathfrak{m}}\in\mathcal{H}(\mathfrak{n}\mathfrak{m},R)$.)
(2) Let $\mathfrak{m}\mid\mathfrak{n}$ with $\gcd(\mathfrak{m},\mathfrak{n}/\mathfrak{m})=1$, and let $\mathfrak{b}$ be coprime to $\mathfrak{m}$. If $f\in\mathcal{H}(\mathfrak{n},R)$, then
$$(f|B_{\mathfrak{b}})|W_{\mathfrak{m}}=(f|W_{\mathfrak{m}})|B_{\mathfrak{b}},$$
where on the left-hand side $W_{\mathfrak{m}}$ denotes the Atkin–Lehner involution acting on $\mathcal{H}(\mathfrak{n}\mathfrak{b},R)$ and on the right-hand side $W_{\mathfrak{m}}$ denotes the involution acting on $\mathcal{H}(\mathfrak{n},R)$.
(3) Let $f\in\mathcal{H}(\mathfrak{n},R)$. If $\mathfrak{q}$ is a prime ideal which divides $\mathfrak{n}$ but does not divide $\mathfrak{n}/\mathfrak{q}$, then $f|(U_{\mathfrak{q}}+W_{\mathfrak{q}})\in\mathcal{H}(\mathfrak{n}/\mathfrak{q},R)$.
The vector space $\mathcal{H}_0(\mathfrak{n},\mathbb{Q})$ is equipped with a natural (Petersson) inner product
$$\langle f,g\rangle=\sum_{e\in E(\Gamma_0(\mathfrak{n})\backslash\mathcal{T})}\frac{f(e)\,g(e)}{n(e)},$$
where $n(e)$ is defined in the proof of Lemma 2.7. The Hecke operator $T_{\mathfrak{m}}$ is self-adjoint with respect to this inner product if $\mathfrak{m}$ is coprime to $\mathfrak{n}$; one can prove this by an argument similar to the proof of Lemma 13 in [1].

Definition 2.13. Let $\mathfrak{m}$ be a divisor of $\mathfrak{n}$ and $\mathfrak{d}$ be a divisor of $\mathfrak{n}/\mathfrak{m}$. By Lemma 2.25, the map $\varphi\mapsto\varphi|B_{\mathfrak{d}}$ gives an injective homomorphism
$$i_{\mathfrak{d},\mathfrak{m}}:\mathcal{H}_0(\mathfrak{m},\mathbb{Q})\to\mathcal{H}_0(\mathfrak{n},\mathbb{Q}).$$
We denote the subspace generated by the images of all $i_{\mathfrak{d},\mathfrak{m}}$ ($\mathfrak{m}\neq\mathfrak{n}$) by $\mathcal{H}_0(\mathfrak{n},\mathbb{Q})^{\mathrm{old}}$. The orthogonal complement of $\mathcal{H}_0(\mathfrak{n},\mathbb{Q})^{\mathrm{old}}$ with respect to the Petersson product is the new subspace of $\mathcal{H}_0(\mathfrak{n},\mathbb{Q})$, and will be denoted by $\mathcal{H}_0(\mathfrak{n},\mathbb{Q})^{\mathrm{new}}$. The new subspace of $\mathcal{H}_0(\mathfrak{n},\mathbb{Q})$ is invariant under the action of $\mathbb{T}(\mathfrak{n})$ (this again can be proven as in [1]). We denote by $\mathbb{T}(\mathfrak{n})^{\mathrm{new}}$ the quotient of $\mathbb{T}(\mathfrak{n})$ through which $\mathbb{T}(\mathfrak{n})$ acts on $\mathcal{H}_0(\mathfrak{n},\mathbb{Q})^{\mathrm{new}}$.
As we mentioned, the cusps of $\Gamma_0(\mathfrak{n})$ are in bijection with the orbits of the action of $\Gamma_0(\mathfrak{n})$ on
$$\mathbb{P}^1(F)=\mathbb{P}^1(A)=\left\{\begin{pmatrix}a\\ b\end{pmatrix}\;\middle|\;a,b\in A,\ \gcd(a,b)=1,\ a\text{ is monic}\right\},$$
where $\Gamma_0(\mathfrak{n})$ acts on $\mathbb{P}^1(F)$ from the left as on column vectors. We leave the proof of the following lemma to the reader.

Lemma 2.14. Assume $\mathfrak{n}$ is square-free.
(1) For $\mathfrak{m}\mid\mathfrak{n}$ let $[\mathfrak{m}]$ be the orbit of $\begin{pmatrix}1\\ m\end{pmatrix}$ under the action of $\Gamma_0(\mathfrak{n})$. Then $[\mathfrak{m}]\neq[\mathfrak{m}']$ if $\mathfrak{m}\neq\mathfrak{m}'$, and the set $\{[\mathfrak{m}]\mid\mathfrak{m}\mid\mathfrak{n}\}$ is the set of cusps of $\Gamma_0(\mathfrak{n})$. In particular, there are $2^{s}$ cusps, where $s$ is the number of prime divisors of $\mathfrak{n}$.
(2) Since $W_{\mathfrak{m}}$ normalizes $\Gamma_0(\mathfrak{n})$, it acts on the set of cusps of $\Gamma_0(\mathfrak{n})$. There is the formula
$$W_{\mathfrak{m}}[\mathfrak{n}]=[\mathfrak{n}/\mathfrak{m}].$$
The cusp $[\mathfrak{n}]$ is usually called the cusp at infinity. We will denote it by $[\infty]$.
2.3. Fourier expansion. An important observation in [38] is that the theory of Fourier expansions of automorphic forms over function fields developed in [57] works over more general rings than $\mathbb{C}$. Here we follow Gekeler's reinterpretation [12] of Weil's adelic approach as analysis on the Bruhat–Tits tree, but we will extend [12] to the setting of these more general rings.

Definition 2.15. Following [38] we say that $R$ is a coefficient ring if $p\in R^{\times}$ and $R$ is a quotient of a discrete valuation ring $\widetilde{R}$ which contains $p$-th roots of unity. Note that the image of the $p$-th roots of unity of $\widetilde{R}$ in $R$ is exactly the set of $p$-th roots of unity of $R$. For example, any algebraically closed field of characteristic different from $p$ is a coefficient ring.
Fix a character
$$\eta:F_\infty\to R^{\times},\qquad \sum_i a_i\pi_\infty^{i}\mapsto\eta_0(a_1),$$
where $\eta_0:\mathbb{F}_p\to R^{\times}$ is a non-trivial additive character fixed once and for all.
Let $f$ be an $R$-valued function on $E(\mathcal{T})$ which is invariant under the action of
$$\Gamma_\infty:=\left\{\begin{pmatrix}a & b\\ 0 & d\end{pmatrix}\in GL_2(A)\right\}$$
and is alternating (i.e., satisfies $f(e)=-f(\bar{e})$ for all $e\in E(\mathcal{T})$). The constant Fourier coefficient of $f$ is the $R$-valued function $f^{0}$ on $\pi_\infty^{\mathbb{Z}}$ defined by
$$f^{0}(\pi_\infty^{k})=\begin{cases}\displaystyle q^{1-k}\sum_{u\in(\pi_\infty)/(\pi_\infty^{k})}f\begin{pmatrix}\pi_\infty^{k} & u\\ 0 & 1\end{pmatrix} & \text{if }k\geq 1,\\[2mm] f\begin{pmatrix}\pi_\infty^{k} & 0\\ 0 & 1\end{pmatrix} & \text{otherwise.}\end{cases}$$
For a divisor $\mathfrak{m}$ on $F$, the $\mathfrak{m}$-th Fourier coefficient $f^{*}(\mathfrak{m})$ of $f$ is
$$f^{*}(\mathfrak{m})=q^{-1-\deg(\mathfrak{m})}\sum_{u\in(\pi_\infty)/(\pi_\infty^{2+\deg(\mathfrak{m})})}f\begin{pmatrix}\pi_\infty^{2+\deg(\mathfrak{m})} & u\\ 0 & 1\end{pmatrix}\eta(-mu)$$
if $\mathfrak{m}$ is non-negative, and $f^{*}(\mathfrak{m})=0$ otherwise; here $m\in A$ is the monic polynomial such that $\mathfrak{m}=\operatorname{div}(m)\cdot\infty^{\deg(m)}$.

Theorem 2.16. Let $f$ be an $R$-valued function on $E(\mathcal{T})$ which is $\Gamma_\infty$-invariant and alternating. Then
$$f\begin{pmatrix}\pi_\infty^{k} & y\\ 0 & 1\end{pmatrix}=f^{0}(\pi_\infty^{k})+\sum_{\substack{0\neq m\in A\\ \deg(m)\leq k-2}}f^{*}(\operatorname{div}(m)\cdot\infty^{k-2})\,\eta(my).$$
In particular, $f$ is uniquely determined by the functions $f^{0}$ and $f^{*}$.

Proof. This follows from [38, §2] and [12, §2].
Lemma 2.17. Assume $f$ is alternating and $\Gamma_\infty$-invariant. Then $f$ is a harmonic cochain if and only if
(i) $f^{0}(\pi_\infty^{k})=f^{0}(1)\,q^{-k}$ for any $k\in\mathbb{Z}$;
(ii) $f^{*}(\mathfrak{m}\infty^{k})=f^{*}(\mathfrak{m})\,q^{-k}$ for any non-negative divisor $\mathfrak{m}$ and $k\in\mathbb{Z}_{\geq 0}$.

Proof. See Lemma 2.13 in [12].

Lemma 2.18. For an ideal $\mathfrak{m}\lhd A$ and $f\in\mathcal{H}(\mathfrak{n},\mathbb{Z})$ we have
$$(f|T_{\mathfrak{m}})^{*}(\mathfrak{r})=\sum_{\substack{a\text{ monic}\\ a\mid\gcd(m,r)}}|a|\,f^{*}\!\Big(\frac{\mathfrak{r}\mathfrak{m}}{a^{2}}\Big).$$
In particular,
$$(f|T_{\mathfrak{m}})^{*}(1)=|\mathfrak{m}|\,f^{*}(\mathfrak{m}).$$

Proof. See Lemma 3.2 in [38].
Lemma 2.19. Assume $\mathfrak{n}$ is square-free. A harmonic cochain $f\in\mathcal{H}(\mathfrak{n},R)$ is cuspidal if and only if $(f|W)^{0}(1)=0$ for all $W\in W$.

Proof. By definition, $f$ is cuspidal if and only if it vanishes on all but finitely many edges of each cusp $[\mathfrak{m}]$. The positively oriented edges of the cusp $[\infty]$ are given by the matrices $\begin{pmatrix}\pi_\infty^{k} & 0\\ 0 & 1\end{pmatrix}$, $k\leq 1$. By the definition of $f^{0}$ and Lemma 2.17,
$$f\begin{pmatrix}\pi_\infty^{k} & 0\\ 0 & 1\end{pmatrix}=f^{0}(\pi_\infty^{k})=q^{-k}f^{0}(1).$$
Since $q$ is invertible in $R$, we see that $f$ eventually vanishes on $[\infty]$ if and only if $f^{0}(1)=0$. Next, by Lemma 2.14, $f$ vanishes on $[\mathfrak{n}/\mathfrak{m}]$ if and only if $f|W_{\mathfrak{m}}$ vanishes on $[\infty]$, which is equivalent to $(f|W_{\mathfrak{m}})^{0}(1)=0$.
Theorem 2.20. If $R$ is a coefficient ring, then the bilinear $\mathbb{T}(\mathfrak{n})\otimes R$-equivariant pairing
$$(\mathbb{T}(\mathfrak{n})\otimes R)\times\mathcal{H}_{00}(\mathfrak{n},R)\to R,\qquad (T,f)\mapsto(f|T)^{*}(1)$$
is perfect.

[Figure 4: $\Gamma_0(\mathfrak{x}\mathfrak{y})\backslash\mathcal{T}$]

Proof. Theorem 3.17 in [11] says that the pairing
$$(2.5)\qquad \mathbb{T}(\mathfrak{n})\times\mathcal{H}_0(\mathfrak{n},\mathbb{Z})\to\mathbb{Z},\qquad (T,f)\mapsto(f|T)^{*}(1)$$
is non-degenerate and becomes a perfect pairing after tensoring with $\mathbb{Z}[p^{-1}]$. Since $p$ is invertible in $R$ by assumption, the claim follows.

It is not known if in general the pairing (2.5) is perfect. This is in contrast to the situation over $\mathbb{Q}$, where the analogous pairing between the Hecke algebra and the space of weight-2 cusp forms on $\Gamma_0(N)$ with integral Fourier expansions is perfect (cf. [46, Thm. 2.2]). This dichotomy comes from the formula $(f|T_{\mathfrak{m}})^{*}(1)=|\mathfrak{m}|f^{*}(\mathfrak{m})$; in the classical situation the first Fourier coefficient of $f|T_m$ is just the $m$-th Fourier coefficient of $f$.
Proposition 2.21. In the special case $\mathfrak{n}=\mathfrak{x}\mathfrak{y}$, the pairing (2.5)
$$\mathbb{T}(\mathfrak{x}\mathfrak{y})\times\mathcal{H}_0(\mathfrak{x}\mathfrak{y},\mathbb{Z})\to\mathbb{Z}$$
is perfect. Moreover, as $\mathbb{Z}$-modules,
$$\mathbb{T}(\mathfrak{x}\mathfrak{y})^{0}=\mathbb{T}(\mathfrak{x}\mathfrak{y})\cong\mathbb{Z}\oplus\bigoplus_{\substack{\deg(\mathfrak{p})=1\\ \mathfrak{p}\neq\mathfrak{x}}}\mathbb{Z}\,T_{\mathfrak{p}}.$$

Proof. Take $\alpha_x,\beta_x\in\mathbb{F}_q$ such that $y=x^{2}+\alpha_x x+\beta_x$. Let $\varpi_x:=x^{-1}$, which is also a uniformizer at $\infty$. The quotient graph $\Gamma_0(\mathfrak{x}\mathfrak{y})\backslash\mathcal{T}$ is depicted in Figure 4, with positively oriented edges
$$c_1=\begin{pmatrix}\varpi_x & 0\\ 0 & 1\end{pmatrix},\quad c_2=\begin{pmatrix}\varpi_x^{3} & 0\\ 0 & 1\end{pmatrix},\quad c_3=\begin{pmatrix}\varpi_x^{4} & \varpi_x\\ 0 & 1\end{pmatrix},\quad c_4=\begin{pmatrix}\varpi_x^{5} & y^{-1}\\ 0 & 1\end{pmatrix};$$
$$a_1=\begin{pmatrix}\varpi_x^{2} & \varpi_x\\ 0 & 1\end{pmatrix},\quad a_2=\begin{pmatrix}\varpi_x^{3} & \varpi_x\\ 0 & 1\end{pmatrix},\quad a_3=\begin{pmatrix}\varpi_x^{4} & y^{-1}\\ 0 & 1\end{pmatrix},\quad a_4=\begin{pmatrix}\varpi_x^{3} & \varpi_x^{2}\\ 0 & 1\end{pmatrix};$$
$$a_5=\begin{pmatrix}\varpi_x^{2} & 0\\ 0 & 1\end{pmatrix},\quad a_6=\begin{pmatrix}\varpi_x^{4} & \varpi_x-\beta_x\varpi_x^{3}\\ 0 & 1\end{pmatrix};\quad b_u=\begin{pmatrix}\varpi_x^{3} & \varpi_x+u\varpi_x^{2}\\ 0 & 1\end{pmatrix},\ u\in\mathbb{F}_q.$$
Note that in this notation $a_2=b_0$. A small calculation shows that $w(a_1)=w(\bar{a}_2)=w(\bar{a}_3)=w(a_4)=q-1$, and the weights of all other edges in $(\Gamma_0(\mathfrak{x}\mathfrak{y})\backslash\mathcal{T})^{0}$ are 1.

It is easy to see that the map
$$\mathcal{H}_0(\mathfrak{x}\mathfrak{y},\mathbb{Z})\to\bigoplus_{u\in\mathbb{F}_q}\mathbb{Z},\qquad f\mapsto(f(b_u))_{u\in\mathbb{F}_q},$$
is an isomorphism, so the harmonic cochains $f_v\in\mathcal{H}_0(\mathfrak{x}\mathfrak{y},\mathbb{Z})$, $v\in\mathbb{F}_q$, defined by $f_v(b_u)=\delta_{v,u}$ (Kronecker's delta) form a $\mathbb{Z}$-basis. Let $f\in\mathcal{H}_0(\mathfrak{x}\mathfrak{y},\mathbb{Z})$ and $\kappa\in\mathbb{F}_q$. By Lemma 2.18,
$$q\,(f|T_{x-\kappa})^{*}(1)=q^{2}f^{*}(x-\kappa)=\sum_{w\in(\varpi_x)/(\varpi_x^{3})}f\begin{pmatrix}\varpi_x^{3} & w\\ 0 & 1\end{pmatrix}\eta\big(-(\varpi_x^{-1}-\kappa)w\big)$$
$$=f\begin{pmatrix}\varpi_x^{3} & 0\\ 0 & 1\end{pmatrix}+\sum_{\beta\in\mathbb{F}_q^{\times}}f\begin{pmatrix}\varpi_x^{3} & \beta\varpi_x^{2}\\ 0 & 1\end{pmatrix}\eta\big(-(\varpi_x^{-1}-\kappa)\beta\varpi_x^{2}\big)+\sum_{\beta\in\mathbb{F}_q^{\times}}\sum_{u\in\mathbb{F}_q}f\begin{pmatrix}\varpi_x^{3} & \beta(\varpi_x+u\varpi_x^{2})\\ 0 & 1\end{pmatrix}\eta\big(-(\varpi_x^{-1}-\kappa)\beta(\varpi_x+u\varpi_x^{2})\big).$$
Since the double class of $\begin{pmatrix}\varpi_x^{3} & w\\ 0 & 1\end{pmatrix}$ does not change if $w$ is replaced by $\beta w$ ($\beta\in\mathbb{F}_q^{\times}$), $f\begin{pmatrix}\varpi_x^{3} & 0\\ 0 & 1\end{pmatrix}=f(c_2)=0$, and $\sum_{\beta\in\mathbb{F}_q^{\times}}\eta(\beta\varpi_x)=-1$, the above sum reduces to
$$-f(a_4)+q\,f(b_\kappa)-\sum_{u\in\mathbb{F}_q}f(b_u).$$
Using (ii$'$),
$$(q-1)f(a_1)+f(a_5)=0,\qquad (q-1)f(a_4)+f(\bar{a}_5)=0,\qquad f(a_1)=\sum_{u\in\mathbb{F}_q}f(b_u).$$
Therefore, $f(a_4)=-\sum_{u\in\mathbb{F}_q}f(b_u)$ and we get $(f|T_{x-\kappa})^{*}(1)=f(b_\kappa)$.

In particular, $(f_v|T_{x-\kappa})^{*}(1)=\delta_{\kappa,v}$. This implies that the homomorphism
$$(2.6)\qquad \mathbb{T}(\mathfrak{x}\mathfrak{y})\to\operatorname{Hom}(\mathcal{H}_0(\mathfrak{x}\mathfrak{y},\mathbb{Z}),\mathbb{Z})$$
induced by the pairing (2.5) is surjective. Comparing the ranks of both sides, we conclude that this map is in fact an isomorphism, which is equivalent to the pairing being perfect. Let $M$ be the $\mathbb{Z}$-submodule of $\mathbb{T}(\mathfrak{x}\mathfrak{y})$ generated by $\{T_{x-\kappa}\mid\kappa\in\mathbb{F}_q\}$. The composition of $M\hookrightarrow\mathbb{T}(\mathfrak{x}\mathfrak{y})$ with (2.6) gives a surjection $M\to\operatorname{Hom}(\mathcal{H}_0(\mathfrak{x}\mathfrak{y},\mathbb{Z}),\mathbb{Z})$. This implies that $M=\mathbb{T}(\mathfrak{x}\mathfrak{y})$ and $M\cong\bigoplus_{\kappa\in\mathbb{F}_q}\mathbb{Z}\,T_{x-\kappa}$.

An easy consequence of the definitions is that $f^{*}(1)=-f(a_1)$; cf. [11, (3.16)]. If we denote $S=\sum_{\kappa\in\mathbb{F}_q}T_{x-\kappa}$, then
$$(2.7)\qquad (f|S)^{*}(1)=\sum_{\kappa\in\mathbb{F}_q}f(b_\kappa)=f(a_1)=-f^{*}(1).$$
The non-degeneracy of the pairing implies that $S=-1$. Therefore
$$\mathbb{T}(\mathfrak{x}\mathfrak{y})=\mathbb{Z}\oplus\bigoplus_{\substack{\deg(\mathfrak{p})=1\\ \mathfrak{p}\neq\mathfrak{x}}}\mathbb{Z}\,T_{\mathfrak{p}},$$
which implies $\mathbb{T}(\mathfrak{x}\mathfrak{y})=\mathbb{T}(\mathfrak{x}\mathfrak{y})^{0}$.
Remark 2.22. In [44], we have extended the statement of Proposition 2.21 to arbitrary $\mathfrak{n}\lhd A$ of degree 3. More precisely, we proved that the pairing (2.5) is perfect if $\deg(\mathfrak{n})=3$. Moreover, if $\mathfrak{n}$ has degree 3 but is not a product of three distinct primes of degree 1, then $\mathbb{T}(\mathfrak{n})=\mathbb{T}(\mathfrak{n})^{0}$. Finally, if $\mathfrak{n}$ is a product of three distinct primes of degree 1, then $\mathbb{T}(\mathfrak{n})/\mathbb{T}(\mathfrak{n})^{0}$ is finite but non-zero.
2.4. Atkin–Lehner method. For $b\in A$, let $S_b=\begin{pmatrix}1 & b\\ 0 & 1\end{pmatrix}$. Define a linear operator $U_{\mathfrak{p}}$ on the space of $R$-valued functions on $E(\mathcal{T})$ by
$$f|U_{\mathfrak{p}}=\sum_{\substack{b\in A\\ \deg(b)<\deg(\mathfrak{p})}}f|B_{\mathfrak{p}}^{-1}S_b.$$
Note that the action of $B_{\mathfrak{m}}^{-1}$ on functions on $E(\mathcal{T})$ is the same as the action of the matrix $\begin{pmatrix}1 & 0\\ 0 & m\end{pmatrix}$ (since the diagonal matrices act trivially), so this operator agrees with the Hecke operator $U_{\mathfrak{p}}$ when restricted to $\mathcal{H}(\mathfrak{n},R)$ for any $\mathfrak{n}$ divisible by $\mathfrak{p}$.

Lemma 2.23. Let $\mathfrak{p}$ and $\mathfrak{q}$ be two distinct prime ideals of $A$. If $f\in\mathcal{H}(\mathcal{T},R)^{\Gamma_\infty}$, then
$$(f|B_{\mathfrak{p}})|U_{\mathfrak{p}}=|\mathfrak{p}|\cdot f,\qquad (f|B_{\mathfrak{p}})|U_{\mathfrak{q}}=(f|U_{\mathfrak{q}})|B_{\mathfrak{p}}.$$

Proof. We have
$$(f|B_{\mathfrak{p}})|U_{\mathfrak{p}}=\sum_{\substack{b\in A\\ \deg(b)<\deg(\mathfrak{p})}}(f|B_{\mathfrak{p}})|B_{\mathfrak{p}}^{-1}S_b=\sum_{\substack{b\in A\\ \deg(b)<\deg(\mathfrak{p})}}f|S_b.$$
Since $S_b\in\Gamma_\infty$, we have $f|S_b=f$ for all $b$, so the last sum is equal to $|\mathfrak{p}|f$. Next, for $b\in A$ representing a residue modulo $\mathfrak{q}$ we have
$$B_{\mathfrak{p}}B_{\mathfrak{q}}^{-1}S_b=\begin{pmatrix}p & bp\\ 0 & q\end{pmatrix}.$$
By the division algorithm there are $a\in A$ and $b'\in A$ with $\deg(b')<\deg(q)$ such that $bp=aq+b'$. Now
$$\begin{pmatrix}1 & -a\\ 0 & 1\end{pmatrix}\begin{pmatrix}p & bp\\ 0 & q\end{pmatrix}=\begin{pmatrix}p & b'\\ 0 & q\end{pmatrix}=B_{\mathfrak{q}}^{-1}S_{b'}B_{\mathfrak{p}}.$$
As $b$ runs over the residues modulo $\mathfrak{q}$, $b'$ runs over the same set since $\mathfrak{p}\neq\mathfrak{q}$. Thus, using the $\Gamma_\infty$-invariance of $f$, we get $(f|B_{\mathfrak{p}})|U_{\mathfrak{q}}=(f|U_{\mathfrak{q}})|B_{\mathfrak{p}}$.

Lemma 2.24. For any non-zero ideal $\mathfrak{m}\lhd A$ and $f\in\mathcal{H}(\mathcal{T},R)^{\Gamma_\infty}$,
$$(f|B_{\mathfrak{m}})^{0}(\pi_\infty^{k})=f^{0}(\pi_\infty^{k-\deg(\mathfrak{m})}),\qquad (f|B_{\mathfrak{m}})^{*}(\mathfrak{n})=f^{*}(\mathfrak{n}/\mathfrak{m}).$$

Proof. See Proposition 2.10 in [12].
Given ideals $\mathfrak{n},\mathfrak{m}\lhd A$, denote
$$\Gamma_0(\mathfrak{n},\mathfrak{m})=\left\{\begin{pmatrix}a & b\\ c & d\end{pmatrix}\in GL_2(A)\;\middle|\;c\in\mathfrak{n},\ b\in\mathfrak{m}\right\}.$$

Lemma 2.25. If $f\in\mathcal{H}(\mathfrak{n},R)$, then $f|B_{\mathfrak{m}}$ is $\Gamma_0(\mathfrak{n}\mathfrak{m})$-invariant and $f|B_{\mathfrak{m}}^{-1}$ is $\Gamma_0(\mathfrak{n}/\gcd(\mathfrak{n},\mathfrak{m}),\mathfrak{m})$-invariant.

Proof. This follows from a straightforward manipulation with matrices.
Theorem 2.26. Let $\mathfrak{p}$ and $\mathfrak{q}$ be two distinct primes such that $\mathfrak{p}\mathfrak{q}$ divides $\mathfrak{n}$, and $\mathfrak{p}\mathfrak{q}$ is coprime to $\mathfrak{n}/\mathfrak{p}\mathfrak{q}$. Let $\varphi\in\mathcal{H}(\mathfrak{n},R)$. Assume $\varphi^{*}(\mathfrak{m})=0$ unless $\mathfrak{p}$ or $\mathfrak{q}$ divides $\mathfrak{m}$. Then there exist $\psi_1\in\mathcal{H}(\mathfrak{n}/\mathfrak{p},R)$ and $\psi_2\in\mathcal{H}(\mathfrak{n}/\mathfrak{q},R)$ such that
$$s_{\mathfrak{p},\mathfrak{q}}\cdot\varphi=\psi_1|B_{\mathfrak{p}}+\psi_2|B_{\mathfrak{q}},\qquad\text{where }s_{\mathfrak{p},\mathfrak{q}}=\gcd(|\mathfrak{p}|+1,|\mathfrak{q}|+1).$$

Proof. Take $\phi_2:=|\mathfrak{q}|^{-1}\cdot\varphi|U_{\mathfrak{q}}\in\mathcal{H}(\mathfrak{n},R)$. We have
$$\phi_2^{0}(\pi_\infty^{k})=\varphi^{0}(\pi_\infty^{k+\deg(\mathfrak{q})}),\qquad \phi_2^{*}(\mathfrak{m})=\varphi^{*}(\mathfrak{m}\mathfrak{q}).$$
Let $\varphi_1:=\varphi-\phi_2|B_{\mathfrak{q}}\in\mathcal{H}(\mathfrak{n}\mathfrak{q},R)$. Then by Lemma 2.24,
$$\varphi_1^{0}(\pi_\infty^{k})=0,\qquad \varphi_1^{*}(\mathfrak{m})=\varphi^{*}(\mathfrak{m})\ \text{if }\mathfrak{q}\nmid\mathfrak{m},\qquad \varphi_1^{*}(\mathfrak{m})=0\ \text{if }\mathfrak{q}\mid\mathfrak{m}.$$
Let $\phi_1:=\varphi_1|B_{\mathfrak{p}}^{-1}$, which is $\Gamma_0(\mathfrak{n}\mathfrak{q}/\mathfrak{p},\mathfrak{p})$-invariant by Lemma 2.25. In particular, $\varphi_1^{*}(\mathfrak{m})=0$ unless $\mathfrak{p}\mid\mathfrak{m}$, which implies that $\phi_1$ is $\Gamma_\infty$-invariant. Since $\Gamma_\infty$ and $\Gamma_0(\mathfrak{n}\mathfrak{q}/\mathfrak{p},\mathfrak{p})$ generate $\Gamma_0(\mathfrak{n}\mathfrak{q}/\mathfrak{p})$, we get $\phi_1\in\mathcal{H}(\mathfrak{n}\mathfrak{q}/\mathfrak{p},R)$ with
$$\phi_1^{0}(\pi_\infty^{k})=0,\qquad \phi_1^{*}(\mathfrak{m})=\varphi^{*}(\mathfrak{m}\mathfrak{p})\ \text{if }\mathfrak{q}\nmid\mathfrak{m},\qquad \phi_1^{*}(\mathfrak{m})=0\ \text{if }\mathfrak{q}\mid\mathfrak{m}.$$
By Proposition 2.12, $\psi_1:=\varphi|(U_{\mathfrak{p}}+W_{\mathfrak{p}})\in\mathcal{H}(\mathfrak{n}/\mathfrak{p},R)$. Using Proposition 2.12 and Lemma 2.23,
$$(\phi_1|B_{\mathfrak{p}})|(U_{\mathfrak{p}}+W_{\mathfrak{p}})=\phi_1|B_{\mathfrak{p}}|U_{\mathfrak{p}}+\phi_1|B_{\mathfrak{p}}|W_{\mathfrak{p}}=|\mathfrak{p}|\phi_1+\phi_1=(|\mathfrak{p}|+1)\phi_1.$$
On the other hand, using the fact that $\phi_2\in\mathcal{H}(\mathfrak{n},R)$, we have
$$(\phi_2|B_{\mathfrak{q}})|(U_{\mathfrak{p}}+W_{\mathfrak{p}})=\phi_2|(U_{\mathfrak{p}}+W_{\mathfrak{p}})|B_{\mathfrak{q}}.$$
If we denote $\psi:=\phi_2|(U_{\mathfrak{p}}+W_{\mathfrak{p}})$, then we have proved that
$$\psi_1=(|\mathfrak{p}|+1)\phi_1+\psi|B_{\mathfrak{q}}\in\mathcal{H}(\mathfrak{n}/\mathfrak{p},R).$$
Now
$$(|\mathfrak{p}|+1)\varphi=(|\mathfrak{p}|+1)\phi_1|B_{\mathfrak{p}}+(|\mathfrak{p}|+1)\phi_2|B_{\mathfrak{q}}=\big((|\mathfrak{p}|+1)\phi_1+\psi|B_{\mathfrak{q}}\big)|B_{\mathfrak{p}}+\big((|\mathfrak{p}|+1)\phi_2-\psi|B_{\mathfrak{p}}\big)|B_{\mathfrak{q}}=\psi_1|B_{\mathfrak{p}}+\psi_2|B_{\mathfrak{q}},$$
where $\psi_2:=(|\mathfrak{p}|+1)\phi_2-\psi|B_{\mathfrak{p}}$. We already proved that $\psi_1\in\mathcal{H}(\mathfrak{n}/\mathfrak{p},R)$. Obviously $\psi_2|B_{\mathfrak{q}}\in\mathcal{H}(\mathfrak{n},R)$. By Lemma 2.25, $\psi_2$ is $\Gamma_0(\mathfrak{n}/\mathfrak{q},\mathfrak{q})$-invariant. Since it is also $\Gamma_\infty$-invariant, we conclude $\psi_2\in\mathcal{H}(\mathfrak{n}/\mathfrak{q},R)$.

Finally, interchanging the roles of $\mathfrak{p}$ and $\mathfrak{q}$ we obtain
$$(|\mathfrak{q}|+1)\varphi=\psi_1'|B_{\mathfrak{p}}+\psi_2'|B_{\mathfrak{q}}$$
with $\psi_1'\in\mathcal{H}(\mathfrak{n}/\mathfrak{p},R)$ and $\psi_2'\in\mathcal{H}(\mathfrak{n}/\mathfrak{q},R)$. This implies the claim of the theorem.
3. Eisenstein harmonic cochains

3.1. Eisenstein series. In this section $R$ always denotes a coefficient ring; in particular, $p$ is invertible in $R$. We say that a harmonic cochain $\varphi\in\mathcal{H}(\mathfrak{n},R)$ is Eisenstein if $\varphi|T_{\mathfrak{p}}=(|\mathfrak{p}|+1)\varphi$ for every prime ideal $\mathfrak{p}\lhd A$ not dividing $\mathfrak{n}$. It is clear that the Eisenstein harmonic cochains form an $R$-submodule of $\mathcal{H}(\mathfrak{n},R)$, which we denote by $\mathcal{E}(\mathfrak{n},R)$.

The Drinfeld half-plane
$$\Omega=\mathbb{P}^{1}(\mathbb{C}_\infty)-\mathbb{P}^{1}(F_\infty)=\mathbb{C}_\infty-F_\infty$$
has a natural structure of a smooth connected rigid-analytic space over $F_\infty$; see [18, §1]. The group $\Gamma_0(\mathfrak{n})$ acts on $\Omega$ via linear fractional transformations:
$$\begin{pmatrix}a & b\\ c & d\end{pmatrix}z=\frac{az+b}{cz+d}.$$
This action is discrete, so the quotient
$$(3.1)\qquad Y_0(\mathfrak{n})(\mathbb{C}_\infty)=\Gamma_0(\mathfrak{n})\backslash\Omega$$
has a natural structure of a rigid-analytic curve over $F_\infty$, which is in fact an affine algebraic curve; cf. [6, Prop. 6.6]. If we denote $\overline{\Omega}=\Omega\cup\mathbb{P}^{1}(F)$, then
$$X_0(\mathfrak{n})(\mathbb{C}_\infty)=\Gamma_0(\mathfrak{n})\backslash\overline{\Omega}$$
Embedding to non-Euclidean spaces
By default UMAP embeds data into Euclidean space. For 2D visualization that means that data is embedded into a 2D plane suitable for a scatterplot. In practice, however, there aren’t really any major
constraints that prevent the algorithm from working with other more interesting embedding spaces. In this tutorial we’ll look at how to get UMAP to embed into other spaces, how to embed into your own
custom space, and why this sort of approach might be useful.
To start we’ll load the usual selection of libraries. In this case we will not be using the umap.plot functionality, but working with matplotlib directly since we’ll be generating some custom
visualizations for some of the more unique embedding spaces.
import numpy as np
import numba
import sklearn.datasets
import matplotlib.pyplot as plt
import seaborn as sns
from mpl_toolkits.mplot3d import Axes3D
import umap
%matplotlib inline
sns.set(style='white', rc={'figure.figsize':(10,10)})
As a test dataset we’ll use the PenDigits dataset from sklearn – embedding into exotic spaces can be considerably more computationally taxing, so a simple relatively small dataset is going to be helpful.
digits = sklearn.datasets.load_digits()
Plane embeddings
Plain old plane embeddings are simple enough – it is the default for UMAP. Here we’ll run through the example again, just to ensure you are familiar with how this works, and what the result of a UMAP
embedding of the PenDigits dataset looks like in the simple case of embedding in the plane.
plane_mapper = umap.UMAP(random_state=42).fit(digits.data)
plt.scatter(plane_mapper.embedding_.T[0], plane_mapper.embedding_.T[1], c=digits.target, cmap='Spectral')
Spherical embeddings
What if we wanted to embed data onto a sphere rather than a plane? This might make sense, for example, if we have reason to expect some sort of periodic behaviour or other reasons to expect that no
point can be infinitely far from any other. To make UMAP embed onto a sphere we need to make use of the output_metric parameter, which specifies what metric to use for the output space. By default
UMAP uses a Euclidean output_metric (and even has a special faster code-path for this case), but you can pass in other metrics. Among the metrics UMAP supports is the Haversine metric, used for
measuring distances on a sphere, given in latitude and longitude (in radians). If we set the output_metric to "haversine" then UMAP will use that to measure distance in the embedding space.
sphere_mapper = umap.UMAP(output_metric='haversine', random_state=42).fit(digits.data)
The result is the pendigits data embedded with respect to haversine distance on a sphere. The catch is that if we visualize this naively then we will get nonsense.
plt.scatter(sphere_mapper.embedding_.T[0], sphere_mapper.embedding_.T[1], c=digits.target, cmap='Spectral')
What has gone astray is that under the embedding distance metric a point at \((0, \pi)\) is distance zero from a point at \((0, 3\pi)\) since that will wrap all the way around the equator. You’ll
note that the scales on the x and y axes of the above plot go well outside the ranges \((-\pi, \pi)\) and \((0, 2\pi)\), so this isn’t the right representation of the data. We can, however, use
straightforward formulas to map this data onto a sphere embedded in 3d-space.
x = np.sin(sphere_mapper.embedding_[:, 0]) * np.cos(sphere_mapper.embedding_[:, 1])
y = np.sin(sphere_mapper.embedding_[:, 0]) * np.sin(sphere_mapper.embedding_[:, 1])
z = np.cos(sphere_mapper.embedding_[:, 0])
Now x, y, and z give 3d coordinates for each embedding point that lies on the surface of a sphere. We can visualize this using matplotlib’s 3d plotting capabilities, and see that we have in fact
induced a quite reasonable embedding of the data onto the surface of a sphere.
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z, c=digits.target, cmap='Spectral')
If you prefer a 2d plot we can convert these into lat/long coordinates in the appropriate ranges and get the equivalent of a map projection of the sphere data.
x = np.arctan2(x, y)
y = -np.arccos(z)
plt.scatter(x, y, c=digits.target.astype(np.int32), cmap='Spectral')
Embedding on a Custom Metric Space
What if you have some other custom notion of a metric space that you would like to embed data into? In the same way that UMAP can support custom written distance metrics for the input data (as long
as they can be compiled with numba), the output_metric parameter can accept custom distance functions. One catch is that, to support gradient descent optimization, the distance function needs to
return both the distance and a vector for the gradient of the distance. This latter point may require a little bit of calculus on the user's part. A second catch is that it is highly beneficial to
parameterize the embedding space in a way that has no coordinate constraints – otherwise the gradient descent may step a point outside the embedding space, resulting in bad things happening. This is
why, for example, the sphere example simply has points wrap around rather than constraining coordinates to be in the appropriate ranges.
Let’s work through an example where we construct a distance metric and gradient for a different sort of space: a torus. A torus is essentially just the outer surface of a donut. We can parameterize
the torus in terms of x, y coordinates with the caveat that we can “wrap around” (similar to the sphere). In such a model distances are mostly just euclidean distances; we just have to check for
which is the shorter direction – across or wrapping around – and ensure we account for the equivalence of wrapping around several times. We can write a simple function to calculate that.
@numba.njit(fastmath=True)
def torus_euclidean_grad(x, y, torus_dimensions=(2*np.pi,2*np.pi)):
    """Standard euclidean distance, wrapped around the torus dimensions.

    D(x, y) = \sqrt{\sum_i (x_i - y_i)^2}
    """
    distance_sqr = 0.0
    g = np.zeros_like(x)
    for i in range(x.shape[0]):
        a = abs(x[i] - y[i])
        if 2*a < torus_dimensions[i]:
            # going directly is shorter
            distance_sqr += a ** 2
            g[i] = (x[i] - y[i])
        else:
            # wrapping around is shorter
            distance_sqr += (torus_dimensions[i]-a) ** 2
            g[i] = (x[i] - y[i]) * (a - torus_dimensions[i]) / a
    distance = np.sqrt(distance_sqr)
    return distance, g / (1e-6 + distance)
Note that the gradient just derives from the standard euclidean gradient, we just have to check the direction according to the way we’ve wrapped around to compute the distance. We can now plug that
function directly in to the output_metric parameter and end up embedding data on a torus.
torus_mapper = umap.UMAP(output_metric=torus_euclidean_grad, random_state=42).fit(digits.data)
As with the sphere case, a naive visualisation will look strange, due to the wrapping around and the equivalence of looping several times. But, just as in the sphere case, we can construct a suitable visualization by computing the 3d coordinates for the points using a little bit of straightforward geometry (yes, I still had to look it up to check).
R = 3 # Size of the doughnut circle
r = 1 # Size of the doughnut cross-section
x = (R + r * np.cos(torus_mapper.embedding_[:, 0])) * np.cos(torus_mapper.embedding_[:, 1])
y = (R + r * np.cos(torus_mapper.embedding_[:, 0])) * np.sin(torus_mapper.embedding_[:, 1])
z = r * np.sin(torus_mapper.embedding_[:, 0])
Now we can visualize the result using matplotlib and see that, indeed, the data has been suitably embedded onto a torus.
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z, c=digits.target, cmap='Spectral')
ax.set_zlim3d(-3, 3)
ax.view_init(35, 70)
And, as with the sphere, we can do a little geometry and unwrap the torus into a flat plane with the appropriate bounds.
u = np.arctan2(x,y)
v = np.arctan2(np.sqrt(x**2 + y**2) - R, z)
plt.scatter(u, v, c=digits.target, cmap='Spectral')
A Practical Example
While the examples given so far may have some use (because some data does have suitable periodic or looping structures that we expect will be better represented in a sphere or a torus), most data
doesn’t really fall in the realm of something that a user can, a priori, expect to lie on an exotic manifold. Are there more practical uses for the ability to embed in other spaces? It turns out that
there are. One interesting example to consider is the space formed by 2d-Gaussian distributions. We can measure the distance between two Gaussians (parameterized by a 2d vector for the mean, and 2x2
matrix giving the covariance) by the negative log of the inner product between the PDFs (since this has a nice closed form solution, and is reasonably computable). That gives us a metric space to
embed into where samples are represented not as points in 2d, but as Gaussian distributions in 2d, encoding some uncertainty in how each sample in the high dimensional space is to be embedded.
Of course we still have the issues of parameterizations that are suitable for SGD – requiring that the covariance matrix be symmetric and positive definite is challenging. Instead we can parameterize
the covariance in terms of a width, height and angle, and recover the covariance matrix from these if required. That gives us a total of 5 components to embed into (two for the mean, 3 for parameters
describing the covariance). We can simply do this since the appropriate metric is defined already. Note that we have to specifically pass n_components=5 since we need to explicitly embed into a 5
dimensional space to support all the covariance parameters associated to 2d Gaussians.
gaussian_mapper = umap.UMAP(output_metric='gaussian_energy', n_components=5, random_state=42).fit(digits.data)
Since we have embedded the data into a 5 dimensional space visualization is not as trivial as it was earlier. We can get a start on visualizing the results by looking at just the means, which are the
2d locations of the modes of the Gaussians. A traditional scatter plot will suffice for this.
plt.scatter(gaussian_mapper.embedding_.T[0], gaussian_mapper.embedding_.T[1], c=digits.target, cmap='Spectral')
We see that we have gotten a result similar to a standard embedding into euclidean space, but with less clear clustering, and more points between clusters. To get a clearer idea of what is going on
it will be necessary to devise a means to display some of the extra information contained in the extra 3 dimensions providing covariance data. To do this it will be helpful to be able to draw
ellipses corresponding to super-level sets of the PDF of the 2d Gaussian. We can start on this by writing a simple function to draw ellipses on a plot according to a position, a width, a height, and an angle (since this is the format in which the embedding computed the data).
from matplotlib.patches import Ellipse
def draw_simple_ellipse(position, width, height, angle,
                        ax=None, from_size=0.1, to_size=0.5, n_ellipses=3,
                        alpha=0.1, color=None,
                        **kwargs):
    ax = ax or plt.gca()
    angle = (angle / np.pi) * 180
    width, height = np.sqrt(width), np.sqrt(height)
    # Draw the Ellipse
    for nsig in np.linspace(from_size, to_size, n_ellipses):
        ax.add_patch(Ellipse(position, nsig * width, nsig * height,
                             angle, alpha=alpha, lw=0, color=color, **kwargs))
Now we can plot the data by providing a scatterplot of the centers (as before), overlaid on super-level-set ellipses of the associated Gaussians. The obvious catch is that this will induce a lot of over-plotting, but it will at least provide a way to start understanding the embedding we have produced.
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
colors = plt.get_cmap('Spectral')(np.linspace(0, 1, 10))
for i in range(gaussian_mapper.embedding_.shape[0]):
pos = gaussian_mapper.embedding_[i, :2]
draw_simple_ellipse(pos, gaussian_mapper.embedding_[i, 2],
gaussian_mapper.embedding_[i, 3],
gaussian_mapper.embedding_[i, 4],
ax, color=colors[digits.target[i]],
from_size=0.2, to_size=1.0, alpha=0.05)
ax.scatter(gaussian_mapper.embedding_.T[0], gaussian_mapper.embedding_.T[1],
           c=digits.target, cmap='Spectral', s=3)
Now we can see that the covariance structure for the points can vary greatly, both in absolute size, and in shape. We note that many of the points falling between clusters have much larger variances,
in a sense representing the greater uncertainty of the location of the embedding. It is also worth noting that the shape of the ellipses can vary significantly – there are several very stretched
ellipses, quite distinct from many of the very round ellipses; in a sense this represents cases where the uncertainty falls mostly along a single direction, for example.
While this plot highlights some of the covariance structure in the outlying points, in practice the overplotting here obscures a lot of the more interesting structure in the clusters themselves. We
can try to see this structure better by plotting only a single ellipse per point and using a lower alpha channel value for the ellipses, making them more translucent.
fig = plt.figure(figsize=(10,10))
ax = fig.add_subplot(111)
for i in range(gaussian_mapper.embedding_.shape[0]):
pos = gaussian_mapper.embedding_[i, :2]
draw_simple_ellipse(pos, gaussian_mapper.embedding_[i, 2],
gaussian_mapper.embedding_[i, 3],
gaussian_mapper.embedding_[i, 4],
ax, n_ellipses=1,
from_size=1.0, to_size=1.0, alpha=0.01)
ax.scatter(gaussian_mapper.embedding_.T[0], gaussian_mapper.embedding_.T[1],
           c=digits.target, cmap='Spectral', s=3)
This lets us see the variation of density of clusters with respect to the covariance structure – some clusters have consistently very tight covariance, while others are more spread out (and hence have, in a sense, greater associated uncertainty). Of course we still have a degree of overplotting even here, and it will become increasingly difficult to tune alpha channels to make things visible. Instead what we would want is an actual density plot, showing the density of the sum over all of these Gaussians.
To do this we’ll need to define some functions, whose execution will be accelerated using numba: the evaluation of the density of a 2d Gaussian at a given point; an evaluation of the density at a given point summing over a set of several Gaussians; and a function to generate the density for each point in some grid (summing only over nearby Gaussians to make this naive approach more computationally tractable).
from sklearn.neighbors import KDTree
@numba.njit()
def eval_gaussian(x, pos=np.array([0, 0]), cov=np.eye(2, dtype=np.float32)):
    det = cov[0,0] * cov[1,1] - cov[0,1] * cov[1,0]
    if det > 1e-16:
        cov_inv = np.array([[cov[1,1], -cov[0,1]], [-cov[1,0], cov[0,0]]]) * 1.0 / det
        diff = x - pos
        m_dist = cov_inv[0,0] * diff[0]**2 - \
            (cov_inv[0,1] + cov_inv[1,0]) * diff[0] * diff[1] + \
            cov_inv[1,1] * diff[1]**2
        return (np.exp(-0.5 * m_dist)) / (2 * np.pi * np.sqrt(np.abs(det)))
    return 0.0
@numba.njit()
def eval_density_at_point(x, embedding):
    result = 0.0
    for i in range(embedding.shape[0]):
        pos = embedding[i, :2]
        t = embedding[i, 4]
        U = np.array([[np.cos(t), np.sin(t)], [np.sin(t), -np.cos(t)]])
        cov = U @ np.diag(embedding[i, 2:4]) @ U
        result += eval_gaussian(x, pos=pos, cov=cov)
    return result
def create_density_plot(X, Y, embedding):
    Z = np.zeros_like(X)
    tree = KDTree(embedding[:, :2])
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            nearby_points = embedding[tree.query_radius([[X[i,j],Y[i,j]]], r=2)[0]]
            Z[i, j] = eval_density_at_point(np.array([X[i,j],Y[i,j]]), nearby_points)
    return Z / Z.sum()
Now we simply need an appropriate grid of points. We can use the plot bounds seen above, and a grid size selected for the sake of computability. The numpy meshgrid function can supply the actual grid:
X, Y = np.meshgrid(np.linspace(-7, 9, 300), np.linspace(-8, 8, 300))
Now we can use the function defined above to compute the density at each point in the grid, given the Gaussians produced by the embedding.
Z = create_density_plot(X, Y, gaussian_mapper.embedding_)
Now we can view the result as a density plot using imshow.
plt.imshow(Z, origin='lower', cmap='Reds', extent=(-7, 9, -8, 8), vmax=0.0005)
Here we see the finer structure within the various clusters, including some of the interesting linear structures, demonstrating that this Gaussian uncertainty based embedding has captured quite
detailed and useful information about the inter-relationships among the PenDigits dataset.
Bonus: Embedding in Hyperbolic space
As a bonus example let’s look at embedding data into hyperbolic space. The most popular model for this for visualization is Poincare’s disk model. An example of a regular tiling of hyperbolic space
in Poincare’s disk model is shown below; you may note it is similar to famous images by M.C. Escher.
Ideally we would be able to embed directly into this Poincare disk model, but in practice this proves to be very difficult. The issue is that the disk has a “line at infinity” in a circle of radius
one bounding the disk. Outside of that circle things are not well defined. As you may recall from the discussion of embedding onto spheres and toruses it is best if we can have a parameterisation of
the embedding space that it is hard to move out of. The Poincare disk model is almost the opposite of this – as soon as we move outside the unit circle we have moved off the manifold and further
updates will be badly defined. We therefore instead need a different parameterisation of hyperbolic space that is less constrained. One option is the Poincare half-plane model, but this, again, has a
boundary that it is easy to move beyond. The simplest option is the hyperboloid model. Under this model we can simply move in x and y coordinates, and solve for the corresponding z coordinate when we
need to compute distances. This model has been implemented under the distance metric "hyperboloid" so we can simply use it out-of-the-box.
hyperbolic_mapper = umap.UMAP(output_metric='hyperboloid', random_state=42).fit(digits.data)
A straightforward visualization option is to simply view the x and y coordinates we have arrived at:
plt.scatter(hyperbolic_mapper.embedding_.T[0], hyperbolic_mapper.embedding_.T[1],
            c=digits.target, cmap='Spectral')
We can also solve for the z coordinate and view the data lying on a hyperboloid in 3d space.
x = hyperbolic_mapper.embedding_[:, 0]
y = hyperbolic_mapper.embedding_[:, 1]
z = np.sqrt(1 + np.sum(hyperbolic_mapper.embedding_**2, axis=1))
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z, c=digits.target, cmap='Spectral')
ax.view_init(35, 80)
But we can do more – since we have embedded the data successfully in hyperbolic space we can map the data into the Poincare disk model. This is, in fact, a straightforward computation.
disk_x = x / (1 + z)
disk_y = y / (1 + z)
Now we can visualize the data in a Poincare disk model embedding as we first wanted. For this we simply generate a scatterplot of the data, and then draw in the bounding circle of the line at infinity.
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(disk_x, disk_y, c=digits.target, cmap='Spectral')
boundary = plt.Circle((0,0), 1, fc='none', ec='k')
ax.add_artist(boundary)
Hopefully this has provided a useful example of how to go about embedding into non-euclidean spaces. This last example ideally highlights the limitations of this approach (we really need a suitable
parameterisation), and some potential approaches to get around this: we can use an alternative parameterisation for the embedding, and then transform the data into the desired representation.
Solution Methods In Computational Fluid Dynamics
by T. H. Pulliam
Publisher: NASA 2005
Number of pages: 90
Implicit finite difference schemes for solving two dimensional and three dimensional Euler and Navier-Stokes equations will be addressed. The methods are demonstrated in fully vectorized codes for a
CRAY type architecture. We shall concentrate on the Beam and Warming implicit approximate factorization algorithm in generalized coordinates.
Download or read it online for free here:
Download link
(1.1MB, PDF)
Similar books
Introductory Fluid Mechanics
Simon J.A. Malham
Heriot-Watt University. Contents: Introduction; Fluid flow; Trajectories and streamlines; Conservation of mass; Balance of momentum; Transport theorem; Simple example flows; Kelvin's circulation
theorem; Bernoulli's Theorem; Irrotational/potential flow; etc.
Introduction to Statistical Theory of Fluid Turbulence
Mahendra K. Verma
arXiv. Fluid and plasma flows exhibit complex random behaviour, called turbulence. This text is a brief introduction to the statistical theory of fluid turbulence, with emphasis on field-theoretic
treatment of renormalized viscosity and energy fluxes.
Computational Turbulent Incompressible Flow
Johan Hoffman, Claes Johnson
Springer. In this book we address mathematical modeling of turbulent fluid flow, and its many mysteries that have haunted scientists over the centuries. We approach these mysteries using a synthesis of computational and analytical mathematics.
computational and analytical mathematics.
Turbulence for (and by) amateurs
Denis Bernard
arXiv. Series of lectures on statistical turbulence written for amateurs but not experts. Elementary aspects and problems of turbulence in two and three dimensional Navier-Stokes equation are
introduced. A few properties of scalar turbulence are described.
Multinomial Logistic Models and Customer Choice Analytics
The multinomial regression model is usually used to describe data and to explain the association between a nominal dependent variable and one or more continuous (interval- or ratio-scale) independent variables. It is an extension of logistic regression, which analyzes a dichotomous (binary) dependent variable.
Here is a short description of the multinomial regression modeling process. Suppose a dependent variable has $M$ categories, one of which is chosen as the reference category. In multinomial regression the modeled quantities are the log-odds of membership in each of the other categories relative to the reference. If the first category is the reference, then, for $m=2,\dots,M$,

$$\log\frac{P(y_i=m)}{P(y_i=1)}=\sum_k\beta_{mk}X_{ik},$$

where $X_{ik}$ are the observed attributes of individual $i$.

Since the dependent variable has two or more values, the probability of membership in the $m$-th category can be written as

$$P(y_i=m)=\frac{\exp\big(\sum_k\beta_{mk}X_{ik}\big)}{1+\sum_{m'=2}^{M}\exp\big(\sum_k\beta_{m'k}X_{ik}\big)},$$

and for the reference category,

$$P(y_i=1)=\frac{1}{1+\sum_{m'=2}^{M}\exp\big(\sum_k\beta_{m'k}X_{ik}\big)}.$$

Here we can see that, when $M=2$, multinomial logistic regression reduces to ordinary logistic regression.
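For concreteness, here is a minimal R sketch of fitting such a model with the multinom function from the nnet package; the data frame df and the columns choice, price and promo are hypothetical placeholders, not data from this article.

 #R code
library(nnet)
df$choice <- relevel(factor(df$choice), ref = "brand_A") # choose the reference category
fit <- multinom(choice ~ price + promo, data = df) # one set of log-odds coefficients per non-reference category
summary(fit) # coefficients are log-odds relative to the reference category
head(fitted(fit)) # per-category membership probabilities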
The logit and multinomial logit models are the most widely used choice models in marketing. Logit models were introduced for binary choice, and McFadden's generalization to more than two options made the multinomial logit model immensely popular. Many marketers are fascinated by how pricing, promotions, and other variables in the marketing mix affect their market share and sales revenue. An early appeal of the MNL model was that it provides a tractable choice-probability structure while still admitting decision variables such as price and promotions. For instance, marketing professionals usually want to know what a person thinks about products, and those views are usually encoded with scales, such as 1 for "strongly disagree", 2 for "agree", 3 for "neutral", etc. The assumption in this situation is that the variable is ordered, which means that 3 is better than 2, and 2 is better than 1. Using scanner-panel data, a supermarket can build a multinomial logit model to measure the effect of different marketing variables on consumer choice among product alternatives.
This article focuses on a simple case to show how one can build a multinomial logistic regression using R. Here, the question is why couples were not sleeping together, or how regularly they did (see Figure 1); the data are taken from the FiveThirtyEight website. The analytical goal is to understand the causes of the different levels of sleeping separately in these couples.
Figure 1 Distribution of dependent variable
  Before we start, we need to understand that the exploratory model-free analysis is essential in obtaining an understanding between the probable factors and the dependent variable. For instance,
we can use cross-table analysis and the Chi-square test to figure out which predictor has a major effect on the dependent variable.
 #R code
tab1= table(sleep.alone$Sep_Bed_Level,sleep.alone$Gender)
Figure 2 Results of Cross-tab Analysis
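The Chi-square test mentioned above can then be run directly on the same table — a one-line sketch, assuming tab1 from the snippet above:

 #R code
chisq.test(tab1) # tests independence of the separate-sleeping level and gender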
 Also, we can imagine a relationship amid the forecasters and the dependent variable.
 #R code
plot(sleep.alone$Sep_Bed_Level, sleep.alone$Snores, xlab = “Sleep Alone”, ylab = “Snores”)
plot(sleep.alone$Sep_Bed_Level, sleep.alone$Bathroom, xlab = “Sleep Alone”, ylab = “Bathroom”)
plot(sleep.alone$Sep_Bed_Level, sleep.alone$Sick, xlab = “Sleep Alone”, ylab = “Sick”)
plot(sleep.alone$Sep_Bed_Level, sleep.alone$Non_Intimate, xlab = “Sleep Alone”, ylab = “Non_Intimate”)
plot(sleep.alone$Sep_Bed_Level, sleep.alone$Rm_Temp, xlab = “Sleep Alone”, ylab = “Different Temperature”)
plot(sleep.alone$Sep_Bed_Level, sleep.alone$Argument, xlab = “Sleep Alone”, ylab = “Argument/Fight”)
plot(sleep.alone$Sep_Bed_Level, sleep.alone$Non_Space, xlab = “Sleep Alone”, ylab = “No Space”)
plot(sleep.alone$Sep_Bed_Level, sleep.alone$Sleep_Child, xlab = “Sleep Alone”, ylab = “Sleep with Child”)
plot(sleep.alone$Sep_Bed_Level, sleep.alone$Night_Work, xlab = “Sleep Alone”, ylab = “Night Work”)
Figure 3 Visualization of Predictors and Dependent Variable
Lastly, we can fit the ordered multinomial regression to model the data. To find the best model, we employ the AIC statistic to filter suitable variables and a likelihood-ratio test to compare the performance of two models.
  R code
sleep.alone.2 = sleep.alone[c(1:13,18:21)]
sleep.alone.2 <- na.omit(sleep.alone.2)
Sleep.ord.m3<-polr(Sep_Bed_Level~., data=sleep.alone.2, Hess = TRUE)
Sleep.ord.m4<-stepAIC(Sleep.ord.m3,direction = c(“both”))
ci <- confint(Sleep.ord.m4)
Moreover, we use visualization to report the analytical results and help non-technical users understand them more efficiently. In Figure 4, we compare coefficients to learn the effects of the different predictors.

Figure 4 Coefficients of Different Forecasters
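As a hedged sketch (not the article's own code), a coefficient plot like Figure 4 could be produced from the fitted polr model and the confidence intervals computed above:

 #R code
coefs <- coef(Sleep.ord.m4) # assumes the fitted model from the snippet above
plot_df <- data.frame(term = names(coefs), estimate = coefs,
                      lower = ci[, 1], upper = ci[, 2])
library(ggplot2)
ggplot(plot_df, aes(x = reorder(term, estimate), y = estimate)) +
  geom_point() +
  geom_errorbar(aes(ymin = lower, ymax = upper), width = 0.2) +
  geom_hline(yintercept = 0, linetype = "dashed") +
  coord_flip() +
  labs(x = NULL, y = "Coefficient (log-odds)")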
New approaches to discovering symplectic non-convexity
In this talk, we will provide new examples of star-shaped (toric) domains in C^2 that are dynamically convex but not symplectically convex. Our examples are based on two approaches: one is from
Chaidez-Edtmair’s criterion via Ruelle invariant and systolic ratio; the other is from the ECH capacities and an analog non-linear version of Banach-Mazur distance in symplectic geometry. In
particular, from the second approach, we derive the first family of examples that can be numerically verified (instead of taking a certain limit from the first approach). We will also illustrate that
the information given by these two approaches is in general independent of each other. This talk is based on joint work with Dardennes, Gutt, and Ramos.
Length of an Archimedes spiral
An Archimedean spiral has the polar equation

r = b θ^(1/n)
This post will look at the case n = 1. I may look at more general values of n in a future post. The case n = 1 is the simplest case, and it’s the case I needed for the client project that motivated
this post.
In this case the spacing between points where the spiral crosses an axis is constant. Call this constant h. Then
h = 2πb.
For example, when rolling up a carpet, h corresponds to the thickness of the carpet.
Suppose θ runs from 0 to 2πm, wrapping around the origin m times. We could approximate the spiral by m concentric circles of radius h, 2h, 3h, …, mh. To visualize this, we’re approximating the
length of the red spiral on the left with that of the blue circles on the right.
We could approximate this further by saying we have m circles whose average radius is πmb. This suggests the length of the spiral should be approximately

2π²bm².

How good is this approximation? What happens to the relative error as θ increases? Intuitively, each wrap around the origin is more like a circle as θ increases, so we'd expect the approximation to improve for large θ.
to improve for large θ.
According to Mathworld, the exact length of the spiral is
πbm √(1 + (2πm)²) + (b/2) arcsinh(2πm)
When m is so large that we can ignore the 1 in √(1 + (2πm)²) then the first term is the same as the circle approximation, and all that’s left is the arcsinh term, which is on the order of log m
arcsinh(x) = log(x + (1 + x²)^1/2).
So for large m, the arc length is on the order of m² while the error is on the order of log m. This means the relative error is O( log(m) / m² ). [1]
We’ve assumed m was an integer because that makes it easier to visual approximating the spiral by circles, but that assumption is not necessary. We could restate the problem in terms of the final
value of θ. Say θ runs from 0 to T. Then we could solve
T = 2πm
for m and say that the approximate arc length is
½ bT²
and the exact length is
½ bT(1 + T²)^1/2 + ½ b arcsinh(T).
The relative approximation error is O( log(T) / T² ).
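To sanity-check the error estimate, here is a small Python sketch (mine, not part of the original post) comparing the exact and approximate lengths; b = 1 is an arbitrary choice since both formulas scale linearly in b:

import numpy as np

def exact_length(b, T):
    # (b/2) * (T*sqrt(1+T^2) + arcsinh(T)) for r = b*theta, 0 <= theta <= T
    return 0.5 * b * (T * np.sqrt(1 + T**2) + np.arcsinh(T))

def approx_length(b, T):
    # the concentric-circles approximation: (1/2) b T^2
    return 0.5 * b * T**2

b = 1.0
for T in [2*np.pi, 20*np.pi, 200*np.pi]:
    rel_err = (exact_length(b, T) - approx_length(b, T)) / exact_length(b, T)
    print(f"T = {T:9.2f}   relative error = {rel_err:.3e}")

The printed relative errors shrink roughly like log(T)/T², matching the estimate above.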
Related posts
[1] The error in approximating √(1 + (2πm)²) with 2πm is on the order of 1/(4πm) and so is smaller than the logarithmic term.
A regulation tennis court for a doubles match is laid out so that its length is 6 ft more than two times its width. The area of a doubles court is 2808 square feet. What are the length and width of the singles court?
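No worked answer survives on the page, but the algebra the question sets up runs as follows (the singles-court width at the end is the standard rulebook figure, not something the equation itself produces):

w(2w + 6) = 2808
2w² + 6w − 2808 = 0
w² + 3w − 1404 = 0
w = (−3 + √(9 + 4·1404)) / 2 = (−3 + 75) / 2 = 36

So the doubles court is 36 ft wide and 2·36 + 6 = 78 ft long. A singles court keeps the 78 ft length but is 27 ft wide.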
Commutation relations of quantum orbital angular momentum operators
The three orbital angular momentum component operators do not commute with one another.
To show that $[\hat{L}_x,\hat{L}_y]\neq 0$, we substitute eq72 and eq73 in $[\hat{L}_x,\hat{L}_y]$, giving $[\hat{L}_x,\hat{L}_y]=\hbar^{2}\left(x\frac{\partial}{\partial y}-y\frac{\partial}{\partial x}\right)$, which when substituted with eq74 returns

$$[\hat{L}_x,\hat{L}_y]=i\hbar\hat{L}_z \tag{99}$$

Repeating the above procedure, we get

$$[\hat{L}_y,\hat{L}_z]=i\hbar\hat{L}_x \tag{100}$$

$$[\hat{L}_z,\hat{L}_x]=i\hbar\hat{L}_y \tag{101}$$

Hence, each of the three orbital angular momentum component operators does not commute with the other two. Next, to show that $\hat{L}^{2}$ commutes with all three orbital angular momentum component operators, we begin with

$$[\hat{L}^{2},\hat{L}_x]=[\hat{L}_x^{\,2}+\hat{L}_y^{\,2}+\hat{L}_z^{\,2},\hat{L}_x]=[\hat{L}_y^{\,2},\hat{L}_x]+[\hat{L}_z^{\,2},\hat{L}_x].$$

Using the identity $[\hat{L}_a^{\,2},\hat{L}_b]=\hat{L}_a[\hat{L}_a,\hat{L}_b]+[\hat{L}_a,\hat{L}_b]\hat{L}_a$,

$$[\hat{L}^{2},\hat{L}_x]=\hat{L}_y[\hat{L}_y,\hat{L}_x]+[\hat{L}_y,\hat{L}_x]\hat{L}_y+\hat{L}_z[\hat{L}_z,\hat{L}_x]+[\hat{L}_z,\hat{L}_x]\hat{L}_z.$$

Substituting eq99 and eq101 in the above equation, noting that $[\hat{L}_a,\hat{L}_b]=-[\hat{L}_b,\hat{L}_a]$, we have $[\hat{L}^{2},\hat{L}_x]=0$. Repeating the steps for $[\hat{L}^{2},\hat{L}_y]$ and $[\hat{L}^{2},\hat{L}_z]$ gives

$$[\hat{L}^{2},\hat{L}_x]=[\hat{L}^{2},\hat{L}_y]=[\hat{L}^{2},\hat{L}_z]=0$$

As mentioned in an earlier article, a common complete set of eigenfunctions can be selected for two operators only if they commute. Therefore, $\hat{L}^{2}$ shares a common set of eigenfunctions with each of $\hat{L}_x$, $\hat{L}_y$ and $\hat{L}_z$, but we cannot select a common set of eigenfunctions for any pair of angular momentum component operators.
Show that each of the three orbital angular momentum component operators commutes with $p^{2}$, $\hat{p}^{2}$, $r$, $r^{2}$ and $\frac{1}{r}$, where $p^{2}=p_x^{\,2}+p_y^{\,2}+p_z^{\,2}$ and $r^{2}=x^{2}+y^{2}+z^{2}$.

Substituting eq74 in $[\hat{L}_z,x]$, $[\hat{L}_z,y]$, $[\hat{L}_z,z]$, $[\hat{L}_z,p_x]$, $[\hat{L}_z,p_y]$ and $[\hat{L}_z,p_z]$ (noting that $p_i=m\frac{di}{dt}$, where $i=x,y,z$) and carrying out the derivatives, we have

$$[\hat{L}_z,x]=i\hbar y,\qquad[\hat{L}_z,y]=-i\hbar x,\qquad[\hat{L}_z,z]=0$$

$$[\hat{L}_z,p_x]=i\hbar p_y,\qquad[\hat{L}_z,p_y]=-i\hbar p_x,\qquad[\hat{L}_z,p_z]=0 \tag{103}$$

Using the identities $[\hat{A},\hat{B}+\hat{C}+\hat{D}]=[\hat{A},\hat{B}]+[\hat{A},\hat{C}]+[\hat{A},\hat{D}]$ and $[\hat{A},\hat{B}\hat{C}]=[\hat{A},\hat{B}]\hat{C}+\hat{B}[\hat{A},\hat{C}]$, it follows that $[\hat{L}_z,p^{2}]=0$ and $[\hat{L}_z,r^{2}]=0$, and hence also $[\hat{L}_z,r]=0$ and $[\hat{L}_z,\frac{1}{r}]=0$. Likewise, $[\hat{L}_z,\hat{p}^{2}]=0$ can be inferred from eq103. Repeating the same logic for $\hat{L}_x$ and $\hat{L}_y$ gives the corresponding relations for the other two components.

The commutation relations in the above Q&A are applicable to hydrogenic systems. For a system of two electrons, there are cross terms, e.g. commutators of $\hat{L}_{1z}$ with the coordinates and momenta of the second electron (which vanish), which are useful in determining the commutation relations between $\hat{L}^{2}$ and the multi-electron Hamiltonian, for example $[\hat{L}_{1z}+\hat{L}_{2z},\hat{H}]$.

Show that $[\hat{L}^{2},\hat{p}^{2}]=0$ and $[\hat{L}^{2},\frac{1}{r}]=0$.

Using eq75 and the identities $[\hat{A},\hat{B}+\hat{C}+\hat{D}]=[\hat{A},\hat{B}]+[\hat{A},\hat{C}]+[\hat{A},\hat{D}]$ and $[\hat{A},\hat{B}\hat{C}]=[\hat{A},\hat{B}]\hat{C}+\hat{B}[\hat{A},\hat{C}]$, together with the component commutation relations established above, both commutators follow.
The bootcamp: Slowmotion UNIVERSE
Slowmotion UNIVERSE is a card game mimicking reality as probability. Click on picture to get access to 100 training sets.
Important note to be read and then read again when you have familiarized yourself with slowmotion UNIVERSE:
To apply slowmotion Universe as a bootcamp for understanding, entering and living in reality as probability, try always to use the equation (Y – X = Z) when looking for tricks.
The Z-values (even numbers from 2 – 38) of your tricks represent your shifting positions on the infinite scale of the equation of change experience (map of the world as change). As such the Z-values
visualize in ultra-extreme slowmotion how your reality (universe) is formed as a probability path.
The probability path as experienced by unconscious instinct slowed down (sequenced) to be analyzed and controlled consciously.
Change experience DNA sequencer.
Playing slowmotion UNIVERSE qualifies a discussion about global overflow and reality as probability with the intention of empowering players to handle consciously and with intent local indeterminism
as a part of the basic human condition in the 21st century.
The scale produced by slowmotion UNIVERSE is a representative interpretation designed for learning purposes. In reality, the intervals between the positions on each side of the most probable outcomes
are respectively exponentially rising and falling with a reversely proportionate density/”weight”-ratio equilibration (if there where only one position, P1/P2, in each extreme, the exponentialized
system would be balanced). Since humans are bodies first depending on a stable tangible context, the most probable outcome will be dense and “heavy” (mirroring the bodies interpretational capacity)
and close to the event horizon of standstill/everything. When transferring data from Slowmotion Universe to map of the world as change, this should be taken into account. When inputting data directly
into map of the world as change, this should be implicit and not affect the plotting. The number-graphics on the illustration does not reflect exponentiality 1:1.
Gauge to be used to understand the scale and to identify positions and tricks.
The full slowmotion UNIVERSE toolbox: Change experience DNA Sequencer, fluidity solidity collider, map of the world as change, scale and trick gauge and the card game itself in a box.
And now: Let’s play!
A game package contains 250 cards:
1 card with general information
9 game rule- and probability field cards
240 playing cards.
To purchase games, get support or book workshops, please write to: go@byebyespacetime.com
Game rules:
Players must have minimum skills in subtraction (and/or addition of positive integers). The highest number is 39 and the highest possible result is 38. The lowest number is 1 and the lowest possible
result is 2.
Number of players: 2+.
Playtime: 10+ minutes.
Shuffle cards thoroughly before each game and secure random horizontal and vertical positions so that all players, independent of position in relation to the cards, experience equal readability of the cards. When shuffling, take into account that high-value Z-cards will have a statistically determined tendency to end up at the bottom of the card stacks in a game.
Choose a dealer. The dealer places 16 cards (default amount – modify as you please) face up in a spacey square on a level surface. When playing in small groups, the dealer should participate. When
playing in larger groups a designated dealer is an option.
Players identify and cash in tricks by saying ”trick” first. Tricks are placed face down. A trick is a combination of three cards meeting the following equations:
(Y – X = Z),
(Y – Z = X) and
(X + Z = Y).
Three tricks illustrated – but maybe there are …1…2…3… tricks still hiding. Or more? And which trick is the most valuable?
Whenever a trick is cashed in, the dealer fills up the empty spots. If no tricks are available or visible to any of the players, 16 new cards are placed face up on top of the cards already on the level surface. The dealer can always decide to take this action. If the dealer runs out of cards, the open card-stacks on the level surface are collected, shuffled and re-used.

If a player falsely states ”trick”, the player's latest trick must be re-entered into the dealer's card stack.
The winner is the player with the most points when a player cashes in 10 tricks in total (default mode) as the first. Playing time can be reduced by minimizing the number of tricks required to end
the game.
(Be aware that 9 and 6 are not marked and must be identified by card type: 6 is an even number on a Z-card and 9 is an odd number on an X-card.)
Points are given according to the following rules:
Z2…Z6, Z34…Z38 = 5 points,
Z8…Z12, Z28…Z32 = 3 points and
Z14…Z26 = 2 points.
Points given are multiplied according to the color-code mixes of the numbers on the face of the cards in a trick:
2 + 1 color x 1
3 colors x 2 and
1 color x 3.
The Z-result of a trick decides the point-value of the trick according to the probability of the Z-value occurring in the trick. The points of a trick are multiplied according to the probability of the 3 different color-code mixes available: 2+1 color, 3 different colors, or 1 color.
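For readers who want to experiment, below is a small Python sketch (mine, not part of the official rules) that finds tricks in a layout and scores them according to the rules above. Modeling a card as a (value, kind, color) triple is an assumed encoding:

from itertools import permutations

def find_tricks(layout):
    # Yield card triples (X, Y, Z) satisfying Y - X = Z.
    for a, b, c in permutations(layout, 3):
        if (a[1], b[1], c[1]) == ('X', 'Y', 'Z') and b[0] - a[0] == c[0]:
            yield (a, b, c)

def score(trick):
    # Base points from the Z-value band, times the color-mix multiplier.
    z = trick[2][0]
    points = 5 if z <= 6 or z >= 34 else 3 if z <= 12 or z >= 28 else 2
    n_colors = len({card[2] for card in trick})
    multiplier = {2: 1, 3: 2, 1: 3}[n_colors]
    return points * multiplier

layout = [(7, 'X', 'red'), (29, 'Y', 'blue'), (22, 'Z', 'red'), (5, 'X', 'green')]
for trick in find_tricks(layout):
    print(trick, '->', score(trick), 'points')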
Slowmotion UNIVERSE can be played on five different levels:
Level 1 (easy): Tricks. The winner is the first player to get 10 (or any other number of) tricks.
Level 2 (medium): Tricks and points: The winner is the first player to get 10 (or any other number of) tricks and the most points according to the point-value of Z-cards only.
Level 3 (advanced): Default play mode. As in the game rules starting on the top of the page. Note that the potential of point-value awareness increases with the number of cards on the table. Try
experimenting with a 5 x 5 or 6 x 6 cards lay-out to make the point-system work its magic.
Solo Meditation Level: Play alone. Relax and enjoy identifying potentiality and controlling the speed of the temporary outcomes of the probability-field while simultaneously expanding your consciousness according to the fundamental principle of your ability to experience as it unfolds in a tempo defined by you.
Probability level: Play to get as close as possible to predefined paths of probability mirroring specific or non-specific real-life events. Play against each other or together.
Slowmotion UNIVERSE is the gamified expression of the functionality of the probability-driven universe reduced to mixes of its two basic elements, change (X≠X) and standstill (X=X), slowed down to a
near halt to meet the capacity of the human everyday mind:
The source of slowmotion UNIVERSE.
Slowmotion UNIVERSE was created by Danish philosopher (BA), educational philosopher (MA) and philosophical toolmaker Thomas Heide as an attempt at a commonly accessible response to the question: How come we experience that which changes as if it is there? As such, the game summarizes and makes available to the layman the conclusions of the philosopher’s creative metaphysical investigations and designs as presented on byebyespacetime.com.
Slowmotion UNIVERSE is the tip of a rather large iceberg comprised of the philosopher’s creative metaphysical investigations and designs.
The early game-designs that eventually led to slowmotion UNIVERSE:
“School of Philosophy” is a board game concept based on the equation of change experience. The game can be printed and played. Click here or on the pictures to download manual including game pieces.
“Quantum Mechanic” is a board game concept based on the equation of change experience. The object was produced as a unique piece in plywood, but is unfortunately temporarily lost. Click on the picture to download the design draft used to create the vector graphics for the laser cutter. Note that the web address in the design draft is not active.
“Ice – Water – Fire”. Stick game based on the equation of change experience. Rough outline of gameplay: The three stick-dice are thrown to determine which direction your life-piece (LIV 1-5) must take one step. A 2-1 majority decides whether the direction is toward fire or ice. If the dice are all either ice or water, you can turn the event-horizons of fire and water respectively, or move two steps in a direction of your own choice. When you collide with an event-horizon, you lose. The last LIV in water wins. Rules can at all times be adjusted and developed by the players.
A game with no rules… | {"url":"http://byebyespacetime.com/games/","timestamp":"2024-11-11T16:21:16Z","content_type":"text/html","content_length":"117211","record_id":"<urn:uuid:52e79201-1b32-499c-aa76-c4932d87d0f1>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00840.warc.gz"} |
Alto, CA, United States","name":"Annual IEEE Symposium on Foundations of Computer
Science","end_date":"1998-11-11","start_date":"1998-11-08"},"date_published":"1998-09-01T00:00:00Z","publication":"Proceedings of the 39th Annual Symposium on Foundations of Computer
Science","author":[{"last_name":"Agarwal","full_name":"Agarwal, P. K.","first_name":"P. K."},{"full_name":"EppsteinL. J. Guibas, D.","first_name":"D.","last_name":"EppsteinL. J. Guibas"},
{"id":"540c9bbd-f2de-11ec-812d-d04a5be85630","last_name":"Henzinger","orcid":"0000-0002-5008-6530","first_name":"Monika H","full_name":"Henzinger, Monika H"}],"title":"Parametric and kinetic minimum
spanning trees","language":[{"iso":"eng"}],"publication_identifier":{"issn":["0272-5428"],"isbn":["0-8186-9172-7"]},"status":"public","abstract":[{"text":"We consider the parametric minimum spanning
tree problem, in which we are given a graph with edge weights that are linear functions of a parameter /spl lambda/ and wish to compute the sequence of minimum spanning trees generated as /spl lambda
/ varies. We also consider the kinetic minimum spanning tree problem, in which /spl lambda/ represents time and the graph is subject in addition to changes such as edge insertions, deletions, and
modifications of the weight functions as time progresses. We solve both problems in time O(n/sup 2/3/log/sup 4/3/) per combinatorial change in the tree (or randomized O(n/sup 2/3/log/sup 4/3/ n) per
change). Our time bounds reduce to O(n/sup 1/2/log/sup 3/2/ n) per change (O(n/sup 1/2/log n) randomized) for planar graphs or other minor-closed families of graphs, and O(n/sup 1/4/log/sup 3/2/ n)
per change (O(n/sup 1/4/ log n) randomized) for planar graphs with weight changes but no insertions or
deletions.","lang":"eng"}],"type":"conference","extern":"1","publication_status":"published","citation":{"ama":"Agarwal PK, EppsteinL. J. Guibas D, Henzinger M. Parametric and kinetic minimum
spanning trees. In: Proceedings of the 39th Annual Symposium on Foundations of Computer Science. ; 1998:596-605. doi:10.1109/SFCS.1998.743510","short":"P.K. Agarwal, D. EppsteinL. J. Guibas, M.
Henzinger, in:, Proceedings of the 39th Annual Symposium on Foundations of Computer Science, 1998, pp. 596–605.","ieee":"P. K. Agarwal, D. EppsteinL. J. Guibas, and M. Henzinger, “Parametric and
kinetic minimum spanning trees,” in Proceedings of the 39th Annual Symposium on Foundations of Computer Science, Palo Alto, CA, United States, 1998, pp. 596–605.","chicago":"Agarwal, P. K., D.
EppsteinL. J. Guibas, and Monika Henzinger. “Parametric and Kinetic Minimum Spanning Trees.” In Proceedings of the 39th Annual Symposium on Foundations of Computer Science, 596–605, 1998. https://
doi.org/10.1109/SFCS.1998.743510.","apa":"Agarwal, P. K., EppsteinL. J. Guibas, D., & Henzinger, M. (1998). Parametric and kinetic minimum spanning trees. In Proceedings of the 39th Annual Symposium
on Foundations of Computer Science (pp. 596–605). Palo Alto, CA, United States. https://doi.org/10.1109/SFCS.1998.743510","mla":"Agarwal, P. K., et al. “Parametric and Kinetic Minimum Spanning
Trees.” Proceedings of the 39th Annual Symposium on Foundations of Computer Science, 1998, pp. 596–605, doi:10.1109/SFCS.1998.743510.","ista":"Agarwal PK, EppsteinL. J. Guibas D, Henzinger M. 1998.
Parametric and kinetic minimum spanning trees. Proceedings of the 39th Annual Symposium on Foundations of Computer Science. Annual IEEE Symposium on Foundations of Computer Science, | {"url":"https://research-explorer.ista.ac.at/record/11682.json","timestamp":"2024-11-13T21:03:13Z","content_type":"application/json","content_length":"4491","record_id":"<urn:uuid:b601c3de-c830-4218-bb2b-7e9acac5ee0b>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00778.warc.gz"} |
OpenStax College Physics, Chapter 19, Problem 16 (Problems & Exercises)
How far apart are two conducting plates that have an electric field strength of $4.50\times 10^{3}\textrm{ V/m}$ between them, if their potential difference is 15.0 kV?
This question is licensed under CC BY 4.0.
Solution video
OpenStax College Physics, Chapter 19, Problem 16 (Problems & Exercises)
Video Transcript
This is College Physics Answers with Shaun Dychko. Two parallel conducting plates have an electric field between them of 4.50 times 10 to the 3 volts per meter and a potential difference of 15.0
kilovolts, which is 15.0 times 10 to the 3 volts. Now electric field between two conducting parallel plates is the potential difference divided by the distance by which they are separated and we can
solve for this separation d by multiplying both sides by d over E and so the separation is voltage divided by electric field and this works out to 3.33 meters. | {"url":"https://collegephysicsanswers.com/openstax-solutions/how-far-apart-are-two-conducting-plates-have-electric-field-strength-450times","timestamp":"2024-11-04T01:14:39Z","content_type":"text/html","content_length":"162742","record_id":"<urn:uuid:0f1408a1-9447-4fb3-a57b-d916e3ef7795>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00425.warc.gz"} |
Guild Serial Number Database
Guild’s official documentation of serial numbers is not comprehensive, as some lists have been lost in the past.
Table of Contents
Where to Find Guild Serial Number?
You can find the serial number of a guitar stamped on the back of the headstock. In the case of (semi) acoustic guitars, it may also be located on the bottom inside of the body or stamped on the neck
Guild Serial Number Location
The database below provides the most accurate information available to Guild for determining the date of manufacture from a serial number. But as already mentioned, the documentation of the serial numbers is not complete: some lists have been lost in the past, leaving gaps in the dating.
Guild Serial Number 1952-1959
Year Estimated Last Serial Number
1953 1000-1500
1954 1500-2200
1955 2200-3000
1956 3000-4000
1957 4000-5700
1958 5700-8300
Guild Serial Number 1960s
The table below shows the first and last serial numbers of guitars produced from 1960 to 1969.
Year First Number Last Number
1965 – 46606
1966 – 46608
1967 – 46637
1968 – 46656
1969 – 46695
Guild Serial Number Database 1965-1967
From 1965 to 1969, Guild used a separate serial numbering system for each model. The table below lists the year, model, start, and final serial numbers for each guitar.
MODEL 1965 1966 1967
A50 AB101 – 136 AB137 – 162 AB163 – 203
A150 N/A N/A N/A
A350 N/A N/A N/A
A500 N/A AF101 – 102 N/A
ARTIST AWARD AA101 AA102 – 113 AA114 – 139
BB195-241 N/A N/A N/A
GEORGE BARNES a/e N/A N/A N/A
CA100 N/A N/A N/A
CE100 EF101 – 211 EF212 – 396 EF397 – 549
D25 N/A N/A N/A
D35 N/A N/A N/A
D40 AJ101 – 333 AJ334 – 1136 AJ1137 – 2244
D44 AC101 – 166 AC167 – 318 AC319 – 435
D50 AL101 – 192 AL193 – 301 AL302 – 513
D55 N/A N/A N/A
DD238-395 N/A N/A N/A
DE400 EH101 – 126 EH127 – 233 EH234 – 276
DE500 EI101 – 107 EI108 – 116 EI117 – 136
F20 AG101 – 316 AG317 – 1534 AG1535 – 2499
F30 AI101 – 341 AI352 – 1142 AI1143 – 1855
F47 AK101 – 128 AK129 – 218 AK219 – 418
F50 AD101 – 119 AD120 – 190 AD191 – 291
F112 N/A N/A N/A
F212 AN101 – 228 AN229 – 810 AN811 – 1558
F312 AS101 – 136 AS142 – 230 AS231 – 335
F412 N/A N/A N/A
F512 N/A N/A N/A
JET STAR BASS SD101 – 108 SD109 – 327 SD328 – 343
MARK I CA101 – 110 CA317 – 996 CA997 – 1973
MARK II CB101 – 247 CB248 – 967 CB968 – 1773
MARK III CC101 – 252 CC253 – 666 CC667 – 992
MARK IV CD101 – 128 CD129 – 292 CD293 – 491
MARK V CE101 – 120 CE121 – 137 CE138 – 195
MARK VI N/A N/A CF101 – 128
M20 ¾ Some AH #’s N/A N/A
M65 ¾ EC101 – 182 EC183 – 267 EC268 – 322
M65 ED101 – 160 ED161 – 194 ED195 – 270
M75 N/A N/A N/A
M85 N/A N/A N/A
S50 SA101 – 201 SA202 – 490 SA491 – 584
S100 SB101 – 169 SB170 – 220 SB221 – 251
S200 SC101 SC102 – 153 SC154 – 166
ST100 N/A N/A N/A
STARFIRE II EK101 – 387 EK388 – 2098 EK2099 – 2819
STARFIRE III EK101 – 387 EK388 – 2098 EK2099 – 2819
STARFIRE IV EL101 – 276 EL277 – 1167 EL1168 – 1840
STARFIRE V EN101 – 194 EN195 – 927 EN928 – 1807
STARFIRE VI DB101 – 174 DB175 – 274 DB275 – 329
STARFIRE XII N/A DC101 – 586 DC587 – 896
STARFIRE BASS BA101 – 177 BA178 – 654 BA655 – 1696
T50 EB101 – 196 EB197 – 391 EB392 – 558
T100 EE101 – 601 EE602 – 1939 EE1940 – 2794
X50 EA101 – 202 EA203 – 326 EA327 – 491
X175 EG101 – 107 EG108 – 160 EG161 – 239
X500 DA101 – 106 DA107 – 138 DA139 – 180
Guild Serial Number Database 1968-1969
MODEL 1968 1969
A50 AB204 – 240 N/A
A150 AI101 – 108 AI109 – 113
A350 OD101 – 109 OD110 – 112
A500 AF103 – 115 N/A
ARTIST AWARD AA140 – 157 AA158 – 167
BB195-241 N/A N/A
GEORGE BARNES a/e N/A OF101 – 104
CA100 OH101 – 113 OH114
CE100 EF550 – 719 EF720 – 760
D25 OG101 – 192 OG203 – 233
D35 OJ101 – 1003 OJ1004 – 1592
D40 AJ2245 – 2825 AJ2826 – 3218
D44 AC436 – 488 AC489 – 570
D50 AL514 – 584 AL585 – 698
D55 OI101 – 105 OI106 – 113
DD238-395 N/A N/A
DE400 EH277 – 301 N/A
DE500 N/A EI137 – 141
F20 AG2500 – 2793 AG2794 – 2822
F30 AI1856 – 2270 AI2271 – 2554
F47 AK419 – 488 AK489 – 583
F50 AD292 – 355 AD356 – 418
F112 OA101 – 511 OA512 – 695
F212 AN1559 – 2009 AN2010 – 2271
F312 AS336 – 376 AS377 – 497
F412 OB101 – 110 OB111 – 114
F512 OC201 – 206 OC207 – 223
JET STAR BASS N/A N/A
MARK I CA1974 – 2156 N/A
MARK II CB1774 – 2018 N/A
MARK III CC993 – 1203 N/A
MARK IV CD492 – 541 N/A
MARK V N/A N/A
MARK VI CF129 – 175 CF176 – 197
M20 ¾ N/A OE101 – 102
M65 ¾ EC323 – 334 N/A
M65 ED271 – 335 ED336 – 414
M75 DD101 – 138 DD139 – 237
M85 BB101 – 109 BB110 – 194
S50 N/A N/A
S100 SB252 – 269 N/A
S200 SC167 – 191 N/A
ST100 ES101 – 275 ES276 – 318
STARFIRE II EK2820 – 3028 EK3029 – 3098
STARFIRE III EK2820 – 3028 EK3029 – 3098
STARFIRE IV EL1841 – 2223 EL2224 – 2272
STARFIRE V EN1808 – 2141 EN2142 – 2272
STARFIRE VI DB330 – 339 N/A
STARFIRE XII DC897 DC899 – 910
STARFIRE BASS BA1697 – 1946 BA1947 – 2043
T50 EB559 – 607 EB608 – 652
T100 EE2795 – 3003 EE3004 – 3109
X50 EA492 – 502 EA503 – 506
X175 EG240 – 322 EG323 – 346
X500 DA181 – 235 DA236 – 244
Guild Serial Number 1970-1979
The table below shows the first and last serial numbers of guitars, produced from 1970 to 1979. Corresponding model names or numbers are not available.
Year First Number Last Number
1979 195068 211877 (through 30 Sep. 1979)
Guild Serial Number 1979-1983
From 1979 through 1983, Guild continued to use separate serial number prefixes for every guitar model. The table below shows the year, model, and last serial number.
MODEL 1979 1980 1981 1982 1983
ARTIST AWARD JA100012 JA100043 JA100067 JA100097 JA100107
B301-B302 BB100400 BB100846 N/A N/A N/A
B301A-B302A BC100061 BC100196 BB100235 N/A N/A
B401-B402 N/A BD100212 BD100335 N/A N/A
B50 BA100002 BA100012 BA100168 BA100212 BA100249
BLUES BIRD N/A N/A N/A N/A N/A
BRIAN MAY N/A N/A N/A N/A N/A
CE100 KA100015 KA100077 KA100136 KA100159 KA100169
D15 N/A N/A N/A N/A AH100617
D15/12 N/A N/A N/A N/A EL100064
D17 N/A N/A N/A N/A N/A
D17/12 N/A N/A N/A N/A N/A
D25 DA100914 DA105752 DA109433 DA111910 DA112936
D25C N/A N/A N/A N/A EH100098
D35 DB100503 DB102097 DB103268 DB103743 DB104078
D40 DC100247 DC101105 DC101638 DC101782 DC101889
D40C N/A DG100542 DG100818 DG100959 DG101002
D46 N/A DL100131 DL100622 N/A DL100784
D47CE N/A N/A N/A N/A GE100026
DS48CE N/A N/A N/A N/A GF100024
D50 DD100212 DD100944 DD101382 DD101588 DD101737
D52 N/A N/A N/A N/A EK100034
D55 DE100236 DE100661 DE101058 DE101186 DE101247
D62 N/A N/A N/A N/A N/A
D64 N/A N/A N/A N/A N/A
D66 N/A N/A N/A N/A N/A
D70 N/A N/A EF100151 EF100208 EF100250
D80 N/A N/A N/A N/A EG100014
D212 N/A N/A AA101085 AA101529 AA101895
D312 N/A N/A N/A N/A N/A
DE500 N/A N/A N/A N/A AG100019
F20 FA100051 FA100244 FA100394 FA100490 FA100524
F30 FB100073 FB100235 FB100382 FB100440 FB100472
F30R N/A GG100123 GG100174 GG100203 GG100207
F40 FC100015 FC100197 FC100381 FC100393 N/A
F42 N/A N/A N/A N/A N/A
F44 N/A N/A N/A N/A N/A
F45CE N/A N/A N/A GG100006 GG100409
F45/12 N/A N/A N/A N/A N/A
F46 N/A N/A N/A N/A N/A
FS46BASS N/A N/A N/A N/A BF100008
FS46CE N/A N/A N/A N/A EJ100205
FS46CE/12 N/A N/A N/A N/A GH100005
F50 FD100018 FD100286 FD100424 FD100484 FD100535
F50R FE100025 FE100261 FE100340 FE100426 FE100431
F112 FF100199 FF100277 FF100286 FF100294 N/A
F212 FG100014 FG100194 FG100308 FG100324 FG100342
F212C FH100001 FH100056 FH100424 N/A N/A
F212XL N/A FJ100233 FJ100401 FJ100486 FJ100525
F412 FK100045 FK100221 FK100342 FK100385 FK100405
F512 FL100040 FL100225 FL100362 FL100440 FL100477
G37 DF100052 DF100814 DF101339 DF101579 DF101774
G45 N/A N/A N/A GD100002 GD100050
G212 DH100035 DH100248 DH100398 DH100419 DH100436
G312 DJ100014 DJ100164 DJ100235 DJ100263 DJ100281
M80 N/A HA100003 HA100229 HA100339 HA100350
MARK II CA100005 CA100246 CA100424 CA100510 CA100570
MARK III CB100014 CB100163 CB100283 CB100305 CB100367
MARK IV CC100017 CC100101 CC100199 CC100218 CC100232
MARK V CD100021 CD100064 CD100089 CD100124 CD100137
MKS10CE N/A N/A N/A N/A N/A
NIGHT BIRD N/A N/A N/A N/A N/A
PROTOTYPES N/A LL100031 LL100108 LL100147 LL100198
S25 N/A N/A AC100159 AC100293 AC100339
S60 – S65 ED100050 ED100349 ED100499 ED100500 N/A
S60D EC100019 EC100169 EC100207 N/A N/A
S70 EE100019 EE100154 EE100246 N/A N/A
S250 N/A N/A AB100154 AB100236 AB100250
S260 N/A N/A N/A N/A AK100030
S275 N/A N/A N/A AF100139 AF100110
S280 – S281 N/A N/A N/A N/A HC100050
S282 N/A N/A N/A N/A N/A
S284 N/A N/A N/A N/A N/A
S285 N/A N/A N/A N/A N/A
S300 EA100023 EA100112 EA100468 N/A EA100470
S300A EB100023 EB100039 EB100229 N/A EB100230
SB201-SB202 N/A N/A N/A AE100426 AE100452
SB600 SB602 SB603 N/A N/A N/A N/A BE100050
SB604 N/A N/A N/A N/A BH100266
SB605 N/A N/A N/A N/A N/A
SB608 N/A N/A N/A N/A N/A
SF4 GA100051 GA100439 GA100686 GA100713 GA100713
STUDIO 24 N/A N/A N/A N/A N/A
T50 N/A N/A N/A HD100019 N/A
T250 N/A N/A N/A N/A N/A
X79 N/A N/A AD100304 AD100342 AD100509
X80 N/A N/A N/A N/A HD100023
X82 N/A N/A JF100107 JF100311 JF100430
X88 N/A N/A N/A N/A N/A
X92 N/A N/A N/A N/A N/A
X100 N/A N/A N/A N/A N/A
X170 N/A N/A N/A N/A N/A
X175 JC100014 JC100082 JC100177 JC100184 JC100205
X500 JB100036 JB100082 JB100136 JB100144 JB100148
X701-X702 N/A N/A N/A JD100174 JD100234
Guild Serial Number 1984-1986
From 1984 through 1989, Guild continued to add serial number prefixes to each model. The tables below show the year, model, and last serial number.
MODEL 1984 1985 1986
ARTIST AWARD JA100122 JA100127 JA100133
ASHBORY BASS N/A N/A AJ23
B30 N/A N/A N/A
B50 BA100269 BA100306 BA100326
B301-B302 N/A N/A N/A
B301A-B302A N/A N/A N/A
B401-B402 N/A N/A N/A
BLUES BIRD N/A BJ100060 BJ100215
BRIAN MAY BHM150 BHM286 BHM316
CE100 KA100175 N/A N/A
D15 AH100924 AH101371 AH101815
D16 AH100924 AH101371 AH101815
D15/12 EL100144 EL100211 N/A
D17 AL100092 AL100402 AL100575
D17/12 GC100026 N/A N/A
D25 DA113675 DA114523 DA115528
D25/12 N/A N/A N/A
D25C EH100101 N/A N/A
D30 N/A N/A N/A
D35 DB104288 DB104477 DB104697
D40 DC101972 DC102066 DC102190
D40C DG101032 DG101068 DC102190
D46 DL100839 DL100870 N/A
D47CE GE100047 GE100049 N/A
DS48CE GF100026 N/A N/A
D50 DD101789 DD101878 DD101928
D50/12 D5012014 N/A N/A
D52 EK100054 N/A N/A
D55 DE101298 DE101374 DE101406
D60 N/A N/A N/A
D62 KB100046 KB100060 N/A
D64 KC100031 KC100051 KC100193
D65 N/A N/A N/A
D66 KD100110 KD100162 KD100208
D70 EF100263 EF100280 N/A
D80 EG100017 EG100019 N/A
D100 N/A N/A N/A
D212 AA102114 AA102395 AA102792
D225 N/A N/A N/A
D312 N/A N/A DJ100299
DE500 AG100148 AG100153 AG100169
DETONATOR JH000341 JH100580 N/A
F20 FA100545 FA100595 FA100663
F30 FB100509 FB100527 N/A
F30R N/A N/A N/A
F40 N/A N/A N/A
F42 KG100036 KG100065 N/A
F44 KH100078 KH100186 KH100249
F45 N/A N/A N/A
F45CE GB100533 GB100683 GB100839
F45/12 GJ100030 N/A GJ100157
F46 KJ100029 KJ100089 KJ100121
FS46BASS N/A N/A N/A
FS46CE EJ100306 EJ100370 EJ100401
FS46CE/12 N/A N/A N/A
F50 FD100600 FD100650 FD100693
F50R FE100462 FE100479 FE100505
F112 N/A N/A N/A
F212 FG100358 FG100382 N/A
F212C N/A N/A N/A
F212XL FJ100545 FJ100567 N/A
F412 FK100445 FK100476 FK100506
F512 FL100519 FL100543 FL100569
G37 DF101890 DF101990 DF102135
G45 GD100053 GD100080 N/A
G212 N/A N/A N/A
G312 DJ100287 N/A N/A
GF25 N/A N/A N/A
GF30 N/A N/A N/A
GF40 N/A N/A N/A
GF50 N/A N/A N/A
GF50R N/A N/A N/A
GF50/12 N/A N/A N/A
GF55 N/A N/A GF100002
GF60M N/A N/A GF600048
GF60R N/A N/A GF60R0087
GX SERIES N/A N/A N/A
JF30 N/A N/A JF300234
JF30/12 N/A N/A JF30120198
JF50 N/A N/A JF500050
JF50/12 N/A N/A JF501250007
JF55 N/A N/A FE100562
JF65M N/A N/A JF650031
JF65R N/A N/A JF65R0019
JF65/12 N/A N/A JF65120064
JF65R/12 N/A N/A JF65R0018
JF212XL N/A N/A FJ100612
LIBERATOR JK000085 JK000206 N/A
M80 N/A N/A N/A
MARK II CA100657 CA100689 CA100733
MARK III CB100406 CB100425 CB100461
MARK IV CC100253 CC100270 N/A
MARK V CD100156 CD100176 CD100184
MKS10CE CE100046 CE100056 N/A
NIGHT BIRD N/A BL100104 BL100324
NIGHT BIRD I N/A N/A BE100083
NIGHT BIRD II N/A N/A BL100426
NIGHTINGALE JL000040 JL000082 N/A
PRESTIGE ST N/A N/A N/A
PRESTIGE CL N/A N/A N/A
PRESTIGE EX N/A N/A N/A
PROTOTYPES LL100228 LL100234 LL100237
S25 N/A N/A N/A
S60 – S65 N/A N/A N/A
S60D N/A N/A N/A
S70 N/A N/A N/A
S250 N/A N/A N/A
S260 N/A N/A N/A
S275 N/A N/A N/A
S280 – S281 HC100481 HC101039 HC101493
S282 HE100047 N/A N/A
S284 HG100101 HG100435 HG100637
S285 N/A N/A GK100011
S300 N/A N/A N/A
S300A N/A N/A N/A
SB201-SB202 N/A N/A N/A
SB600 BE100446 BE101135 BE101726
SB602 BE100446 BE101135 BE101726
SB603 BE100446 BE101135 BE101726
SB604 BH100616 N/A N/A
SB605 N/A N/A BK100177
SB608 BG100068 BG100116 N/A
SB902 N/A N/A N/A
SB905 N/A N/A N/A
SF4 GA100898 GA100911 GA100982
STUDIO 24 N/A N/A GL100030
T50 N/A N/A N/A
VORTEX BL100085 BL100229 BL100404
VORTEX 12 BL100044 BL100193 BL100372
VORTEX I BL100035 N/A BL100410
VORTEX II BL100099 BL100229 BL100402
VORTEX DELUXE BL100230 BL100243 BL100407
VORTEX CUSTOM BL100404 N/A N/A
Guild Serial Number 1987-1989
MODEL 1987 1988 1989
ARTIST AWARD JA100147 JA100203 JA100227
ASHBORY BASS AJ1109 AJ1235 N/A
B30 B300100 B300230 B300310
B50 BA100212 BA100249 BA300310
B301-B302 N/A N/A N/A
B301A-B302A N/A N/A N/A
B401-B402 N/A N/A N/A
BLUES BIRD BJ100216 N/A N/A
BRIAN MAY N/A N/A N/A
CE100 N/A N/A N/A
D15 D1500642 (?) D151898 D151898
D16 N/A N/A N/A
D15/12 N/A N/A N/A
D17 AL100886 AL100888 N/A
D17/12 N/A N/A N/A
D25 DA001232 ? DA252265 D252986
D25/12 D2512467 N/A D251005
D25C N/A N/A N/A
D30 D3000367 ? D300719 D300921
D35 DB104754 DB104754 N/A
D40 D0400233 DC400482 D400640
D40C D0400233 DC400482 D400640
D46 N/A N/A N/A
D47CE N/A N/A N/A
DS48CE N/A N/A N/A
D50 DD101210 D500447 D500478
D50/12 N/A N/A N/A
D52 N/A N/A N/A
D55 DE101407 N/A DE101376
D60 D0600113 D600234 D600306
D62 N/A N/A N/A
D64 N/A N/A N/A
D65 D0650012 N/A N/A
D66 N/A N/A N/A
D70 N/A N/A N/A
D80 EG100024 N/A N/A
D100 N/A N/A N/A
D212 N/A N/A N/A
D225 N/A D225768 N/A
D312 DJ100300 N/A N/A
DE500 N/A N/A N/A
DETONATOR N/A N/A N/A
F20 N/A N/A N/A
F30 N/A N/A N/A
F30R N/A N/A N/A
F40 N/A N/A N/A
F42 N/A N/A N/A
F44 F0440017 N/A N/A
F45 F0450311 F450535 F450664
F45CE N/A N/A N/A
F45/12 N/A N/A N/A
F46 N/A N/A N/A
FS46BASS N/A N/A N/A
FS46CE N/A N/A N/A
FS46CE/12 N/A N/A N/A
F50 N/A N/A N/A
F50R FE100515 N/A N/A
F112 N/A N/A N/A
F212 N/A N/A N/A
F212C N/A N/A N/A
F212XL N/A N/A N/A
F412 N/A N/A N/A
F512 N/A N/A N/A
G37 N/A N/A N/A
G45 GD100089 N/A N/A
G212 N/A N/A N/A
G312 N/A N/A N/A
GF25 GF250845 N/A N/A
GF30 GF300604 N/A N/A
GF40 N/A N/A N/A
GF50 GF500322 N/A N/A
GF50R N/A N/A N/A
GF50/12 GF50120043 N/A N/A
GF55 GF100002 N/A N/A
GF60M GF600082 N/A N/A
GF60R GF600275 N/A N/A
GX SERIES N/A N/A N/A
JF30 JF300663 N/A N/A
JF30/12 J230561 N/A N/A
JF50 JF500112 N/A N/A
JF50/12 N/A N/A N/A
JF55 FE100562 N/A N/A
JF65M N/A N/A N/A
JF65R FE100516 N/A N/A
JF65/12 JF265227 N/A N/A
JF65R/12 N/A N/A N/A
JF212XL FJ100612 N/A N/A
LIBERATOR N/A N/A N/A
M80 N/A N/A N/A
MARK II N/A N/A N/A
MARK III N/A N/A N/A
MARK IV N/A N/A N/A
MARK V CD100185 N/A N/A
MKS10CE N/A N/A N/A
NIGHT BIRD N/A N/A N/A
NIGHT BIRD I BE100083 N/A N/A
NIGHT BIRD II BL100530 N/A N/A
NIGHTINGALE N/A N/A N/A
PRESTIGE ST N/A N/A N/A
PRESTIGE CL N/A N/A N/A
PRESTIGE EX N/A N/A N/A
PROTOTYPES LL100238 FF000006
S25 N/A N/A N/A
S60 – S65 ED100050 ED100349 ED100499
S60D EC100019 EC100169 EC100207
S70 EE100019 EE100154 EE100246
S250 N/A N/A AB100154
S260 N/A N/A N/A
S275 N/A N/A N/A
S280 – S281 N/A N/A N/A
S282 N/A N/A N/A
S284 N/A N/A N/A
S285 N/A N/A N/A
S300 EA100023 EA100112 EA100468
S300A EB100023 EB100039 EB100229
SB201-SB202 N/A N/A N/A
SB600 SB602 SB603 N/A N/A N/A
SB604 N/A N/A N/A
SB605 N/A N/A N/A
SB608 N/A N/A N/A
SF4 GA100051 GA100439 GA100686
STUDIO 24 N/A N/A N/A
T50 N/A N/A N/A
T250 N/A N/A N/A
X79 N/A N/A AD100304
X80 N/A N/A N/A
X82 N/A N/A JF100107
X88 N/A N/A N/A
X92 N/A N/A N/A
X100 N/A N/A N/A
X170 N/A N/A N/A
X175 JC100014 JC100082 JC100177
X500 JB100036 JB100082 JB100136
X701-X702 N/A N/A N/A
Guild Serial Number 1990-1993
From 1990 to 1993, every model also had a serial number prefix. The table below shows the year, model, and last serial number.
MODEL 1990 1991 1992 1993
ARTIST AWARD JA100274 JA100280 JA100328 JA100379
B4E N/A N/A N/A LD000271
B30 B300373 B300425 B300590 B300727
B500CE N/A N/A KF000018 KF000061
BRIAN MAY N/A N/A N/A BM20022
CE100 N/A N/A N/A LB000069
D4 N/A CF000323 CF003381 CF006541
D4-12 N/A N/A CH000631 CH001045
D6/D7 N/A N/A N/A KL001210
D15 D152418 D152700 D153183 D153484
D25 D253561 D253790 D254482 D254666
D25-12 D251271 D2251372 D2251570 D2251630
D30 D301140 D301259 D301458 D301709
D40 D400824 D400857 D400884 N/A
D40C D400824 D400857 N/A N/A
D50 D500703 D500782 D500964 D501104
D55 DE101476 DE101527 DE101688 DE101833
D60 D600308 N/A N/A N/A
D100/D100C EG100073 EG100085 EG100105 EG100148
DCE1 N/A N/A N/A LH000003
DETONATOR JH000605 N/A N/A N/A
DV52/DV62 N/A N/A N/A EK100611
F4CE N/A N/A CK000003 N/A
F5 N/A N/A N/A LE000122
F15 F150054 F150167 F150254 N/A
F20 FA100818 FA100874 FA100890 N/A
F25E/F4 N/A N/A CJ000597 CJ002255
F30CE N/A N/A F3000130 F3000242
F30R N/A N/A N/A GG100118
F35 F350029 F350038 F350049 N/A
F45 F450816 F450860 F450918 N/A
F48 N/A N/A F480006 N/A
F65CE N/A N/A F650062 F650120
F512 FL100603 N/A N/A N/A
FF5/FF5CE N/A N/A N/A LF000046
GF25 GF250935 GF250946 GF250948 GF250999
GF30 GF300699 GF300766 GF300836 GF300845
GF50 GF500337 N/A N/A N/A
GF55 GF100069 GF100103 GF100126 GF100166
GF60R GF60277 N/A N/A N/A
GX SERIES HJ000771 N/A N/A HJ000772
JF4 N/A N/A CG000746 CG001572
JF4-12 N/A N/A N/A LA000166
JF30 JF301099 JF301224 JF301528 JF301880
JF30-12 JF230820 JF230970 JF231262 JF231502
JF55 FE100658 FE100707 FE100869 FE101037
JF55-12 N/A FL100621 FL100660 FL100708
JF65 N/A N/A N/A N/A
JF65-12 JF265352 JF265427 JF265544 JF265635
JF100 N/A JF000002 JF100008 JF100021
JF100-12 N/A N/A N/A JF200003
JF212-XL N/A N/A N/A N/A
NIGHTBIRD CU BL100597 BL100626 BL100634 N/A
NIGHTBIRD DX LC000015 LC000020 N/A N/A
NIGHTBIRD ST LB000015 LB000020 N/A N/A
PRESTIGE CL JK000025 JK000038 JK000044 N/A
PRESTIGE EX JL000011 JL000016 JL000037 N/A
PRESTIGE ST JH000089 JH000120 JH000172 N/A
PROTOTYPES FF000015 FF000019 FF000036 FF000057
S4CE N/A N/A KE000042 N/A
PILOT/SB602 BE103981 BE104111 BE104420 BE104434
PILOT/SB605 BK100569 BK100594 BK100748 BK100783
PILOT/SB902 JE000195 N/A N/A N/A
PILOT/SB905 N/A N/A N/A N/A
STARFIRE/SF4 GA101002 GA101072 GA101115 GA101147
SONGBIRD KK001267 KK001352 KK001847 KK002243
SPECIAL ORDER FL100571 N/A N/A N/A
X160/161 HJ000154 HJ000176 HJ000261 HJ000302
X170 HL100636 HL100660 HL100717 HL100765
X2000/4000 N/A N/A CL000053 CL000054
X500 JB100373 JB100389 JB100421 JB100173
Guild Serial Number 1994-1996
For 1994, only the latest serial number of each model is available. For 1995 and 1996, Guild continued to use serial number prefixes for each model; the table lists the first and last serial numbers of each year.
MODEL 1994 1995 1996
ARTIST AWARD AA000039 AA000040 – AA000089 AA000089 – AA000105
A25 N/A N/A AF250272 – AF250893
A50 N/A N/A AF500079 – AF500194
B4E AB040487 AB040488 – AB040714 AB040714 – AB041044
B30/B30E AB300132 AB300133 – AB300207 AB300208 – AB300286
BLUESBIRD N/A N/A CL000060 – CL000152
BRIAN MAY BM00415 N/A N/A
BRIAN MAY SIGNATURE ME00316 N/A N/A
BRIAN MAY SPECIAL BHM30114 N/A N/A
BRIAN MAY STANDARD BHD00039 N/A N/A
BM01/BM01(ST) N/A BHP00004 – BHP00057 N/A
CE100/CE100(HG) AC100023 N/A N/A
CROSSROADS(CR01) FA000097 N/A N/A
D4 AD042116 AD042117 – AD044518 AD044518 – AD048099
D4E N/A AD042117 – AD044518 N/A
D4-12 AD420368 AD420369 – AD420802 N/A
D6 AD060701 AD060702 – AD061343 N/A
D6E, D6HG, DV6 N/A AD060702 – AD061343 N/A
D15 D153520 N/A N/A
D25 N/A AD042117 – AD044518 AD044518 – AD048099
D25-12 N/A AD420369 – AD420802 N/A
D26 N/A AD260001 – AD260458 N/A
D30 AD300184 AD300185 – AD300301 N/A
D30BLD N/A N/A AD300302 – AD300348
D50 D501106 D501107 N/A
D55 AD550114 AD550115 – AD550220 AD550221 – AD550412
D60 AD600028 AD600029 – AD600043 N/A
D65 GC000020 N/A N/A
D100 AD100014 AD100015 – AD100022 N/A
D100C N/A AD100015 – AD100022 AD100023 – AD100039
DC1 AD110841 AD110842 – AD111713 N/A
DCE1 N/A AD110842 – AD111713 AD111714 – AD113572
DC5 FC050298 FC050299 – FC050578 FC050579 – FC050964
DC130 FD000019 FD000020 – FD000022 N/A
DV6HR N/A N/A AD061343 – AD062106
DV52 AD520445 AD520446 – AD520945 AD520946 – AD521850
DV52HG N/A AD520446 – AD520945 AD520946 – AD521850
DV62 AD620145 AD620146 N/A
DV72 AD720197 N/A N/A
DV73 AD730014 AD730015 – AD730052 N/A
DV74 N/A N/A AD740008 – AD740022
DV76 AD760010 N/A N/A
DV82 FK000001 N/A N/A
F4CE AF040801 AF040802 – AF041355 AF041356 – AF042131
F5CE AF050200 AF050201 – AF050353 AF050354 – AF050614
F30 AF300151 AF300153 N/A
F65CE AF650118 AF650119 – AF650237 AF650238 – AF650398
F20 AF200049 AF200050 – AF200131 N/A
F50 AF500012 AF500013 – AF500015 N/A
FF5CE AF150119 AF150120 – AF150143 N/A
G045/HANK WILLIAMS FG450005 N/A N/A
G45/HANK WILLIAMS FG000067 N/A N/A
GV52 FE000225 N/A N/A
GV70 FJ000059 N/A N/A
JF100-12C N/A N/A AJ120005 – AJ120012
JF4 AJ040387 AJ040388 – AJ040424 N/A
JF4E N/A AJ040388 – AJ040424 N/A
JF4-12 AJ420064 N/A N/A
JF30 AJ300329 AJ300330 – AJ300779 AJ300780 – AJ301720
JF30-12 AJ320240 AJ320241 – AJ320592 AJ320593 – AJ321101
JF55 AJ550129 AJ550130 – AJ550293 AJ550294 – AJ550483
JF55-12 AJ520062 AJ520063 – AJ520175 AJ520176 – AJ520253
JF65-12 AJ620090 AJ620091 – AJ620188 AJ620188 – AJ620304
JF100 AJ110121 AJ110122 – AJ100027 N/A
JF100C N/A N/A AJ100028 – AJ100041
JF200 AJ120005 AJ120006 – AJ100007 N/A
JV52 FJ520028 N/A N/A
JV72 FJ720018 FJ720019 – FJ720023 N/A
PRO4 AL040139 N/A N/A
PRO5 AL050078 AL050079 – AL050096 N/A
PROTOTYPES FB000004 JJ000001 – JJ000009 N/A
S100 FB000132 FB000133 – FB000194 FB000195 – FB000378
S4CE AE040103 AE040104 – AE040343 N/A
SONGBIRD FB000132 AE040104 – AE040343 N/A
STARFIRE 4 AG000014 AG000015 – AG000023 AG000024 – AG000219
X160 AK160031 N/A N/A
X170 AK170059 AK170060 – AK170138 AK170139 – AK170401
X500 AK500008 AK500009 – AK000010 N/A
X700 AK700028 AK700029 – AK700069 AK700070 – AK700118
X2000/NIGHTBIRD N/A N/A N/A
X3000/NIGHTINGALE N/A N/A N/A
Guild Serial Number 1997
The 1997 table shows each model and its first and last serial numbers.
MODEL 1997
ARTIST AWARD AA000106 – AA000158
A25 AF250894 – AF250896
A50 AF500195 – AF500209
B4E AB041045 – AB041257
B4EHG AB041045 – AB041257
B30E AB300287 – AB301349
BB BLUESBIRD CL000153 – CL001013
D4 AD048100 – AD401129
D4-12 AD421508 – AD421738
D25 AD048100 – AD401129
D25-12 AD421508 – AD421738
D30BLD AD300349 – AD300784
D55 AD550413 – AD550691
D100C AD100040 – AD100049
DCE1 AD113573 – AD114800
DCE1HG AD113573 – AD114800
DCE5 FC050965 – FC051258
DV6HR AD062107 – AD062419
DV6HG AD062107 – AD062419
DV52 AD521850 – AD522272
DV52HG AD521850 – AD522272
F4CE AF042131 – AF042598
F5CE AF050615 – AF050660
F65CE AF650398 – AF650556
JF30 AJ301720 – AJ302129
JF30-12 AJ321102 – AJ321291
JF55 AJ550484 – AJ550613
JF55-12 AJ520254 – AJ520300
JF65-12 AJ620305 – AJ620369
JF100C AJ100042 – AJ100046
JF100C-12 AJ100042 – AJ100046
S100 FB000379 – FB000770
S4CE AE040541 – AE040672
SONGBIRD HG AE040541 – AE040672
STARFIRE II AG300001 – AG301083
STARFIRE III AG300001 – AG301083
STARFIRE IV AG000220 – AG000937
X150 AK150001 – AK150004
X170 AK170401 – AK170799
X700 AK700119 – AK700213
Guild GAD Serials
The number on the round label inside the guitar, marked GAD (number), is consecutive but does not refer to the production date of the guitar.
The serial number that correlates with the production date can be found on the heel block.
Guild GAD Serial Number
Guild Serial Number Tacoma
Fender Musical Instruments Corporation (FMIC) began building Guild guitars in Tacoma, Washington in 2005.
These have a serial number consisting of 2 letters and 6 digits.
Example: TK 135012
The first letter “T” indicates that the guitar was made in the production facility in Tacoma.
The second letter indicates the year of manufacture and is based on a dating system introduced by the Tacoma guitar factory in 1998, which associated the letter “B” with that year.
The following years are linked in sequence to the alphabet (I = 2005, J = 2006, K = 2007, L = 2008, M = 2009, N = 2010). In this system, the letter “K” stands for the year 2007.
The first 3 digits are derived from the “Julian” calendar, which associates each day of the year with a respective number in numerical order from 1 to 365. These three digits indicate the production
month and day.
In this example, the number 135 indicates that the guitar was made on the 135th day of the year (2007), which is May 15 according to the Julian calendar.
The last 3 digits of the serial number refer to the unit number built on that day.
In this example, 012 indicates that this is the 12th guitar built on May 15th.
So, serial number TK 135012 is the 12th (012) guitar made in Tacoma (T) in 2007 (K) on 15 May (135).
Guild Serial Number New Hartford Since December 2008
The same dating system described above also applies to Guild guitars made in New Hartford, Connecticut.
The first letter again indicates the production location, but it is now “N”, meaning New Hartford.
Example: NL 348 004
Serial number NL 348 004 is the 4th (004) guitar built that day in New Hartford, on 14 December (348) of 2008 (L).
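To make the scheme concrete, here is a minimal decoding sketch (the function name is mine; it assumes serials of the form plant letter + year letter + three-digit day-of-year + three-digit unit number, with year letters counted from B = 1998 as described above):

from datetime import date, timedelta

PLANTS = {"T": "Tacoma, WA", "N": "New Hartford, CT"}

def decode_guild_serial(serial):
    s = serial.replace(" ", "")
    plant, year_letter, digits = s[0], s[1], s[2:]
    if plant not in PLANTS or len(digits) != 6 or not digits.isdigit():
        raise ValueError("unrecognized serial format: " + serial)
    year = 1998 + (ord(year_letter) - ord("B"))   # B = 1998, K = 2007, L = 2008, ...
    day_of_year, unit = int(digits[:3]), int(digits[3:])
    build_date = date(year, 1, 1) + timedelta(days=day_of_year - 1)   # day-of-year to date
    return PLANTS[plant], build_date, unit

print(decode_guild_serial("TK 135012"))
# ('Tacoma, WA', datetime.date(2007, 5, 15), 12)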
Other Guild Serial Number Formats
1) Serial Number PYYxxxx (7 digits) is P = Production location (3 = Made by Cordoba, Valencia, Spain), YY = Year, xxxx = Production number.
2) Serial Number PYYxxxxx (8 digits) is P = Production location (1 = China, 8 = Oxnard, California), YY = Year, xxxxx = Production number.
3) Serial Number PPYYMMxx (2 letters 6 digits) is PP = Production location (GY = China), YY = Year (20YY), MM = Month, xx = Production number.
4) Serial Number PPYxxxxxx (2 letters 7 digits) is PP = Production location (KC = Korea Cort, IC = Indonesia Cort), Y = Year (199Y), xxxxxx = Production number.
5) Serial Number PPYYxxxxxx (2 letters 8 digits) is PP = Production location (KC = Korea Cort), YY = Year (20YY), xxxxxx = Production number.
6) Serial Number PPPYYxxxxx (3 letters 7 digits) is PPP = Production location (KWM = Korea World Musical, KSG = Korea SPG), YY = Year (20YY), xxxxx = Production number.
7) Serial Number PMMxxxxxx (3 letters 6 digits) P = Production location (C = Corona, California), MM = Model (example: UV = F50, PM = D25 ), xxxxxx = Production number.
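As a rough illustration only, the production-location codes from the formats above can be collected into a lookup table; in practice the serial's length and letter/digit pattern decide which rule applies:

LOCATION_CODES = {
    "3": "Cordoba, Valencia, Spain", "1": "China", "8": "Oxnard, California",
    "GY": "China", "KC": "Korea Cort", "IC": "Indonesia Cort",
    "KWM": "Korea World Musical", "KSG": "Korea SPG", "C": "Corona, California",
}

def location_of(prefix):
    # Returns the production location for a known serial prefix.
    return LOCATION_CODES.get(prefix, "unknown")

print(location_of("KC"))   # Korea Cort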
Sometimes in life, popular things pass you by. That was the case with Guild guitars for me – I only knew them in theory. Where I lived and worked, none of my acquaintances had one, and the stores
didn’t carry the brand. I wonder why? Still, I always had an interest in Guild, and when I finally encountered one, it was a thrilling experience – like touching a piece of guitar history. I can’t
recall the model, but the depth of its sound left such an impression that I even considered buying one. | {"url":"https://serialnumberdecoder.com/guild-serial-number/","timestamp":"2024-11-06T07:04:14Z","content_type":"text/html","content_length":"235541","record_id":"<urn:uuid:128172c0-7653-473d-8d43-93d570e82ae6>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00612.warc.gz"} |
Conquering Catalan’s Conjecture
Innocent-looking problems involving whole numbers can stymie even the most astute mathematicians. As in the case of Fermat’s last theorem, centuries of effort may go into proving such tantalizing, deceptively simple conjectures in number theory.
Now, Preda Mihailescu of the University of Paderborn in Germany finally may have the key to a venerable problem known as Catalan’s conjecture, which concerns the powers of whole numbers.
Consider the sequence of all squares and cubes of whole numbers greater than 1, a sequence that begins with the integers 4, 8, 9, 16, 25, 27, and 36. In this sequence, 8 (the cube of 2) and 9 (the
square of 3) are not only powers of integers but also consecutive whole numbers.
In 1844, Belgian mathematician Eugène Charles Catalan (1814–1894) asserted that, among all powers of whole numbers, the only pair of consecutive integers is 8 and 9. Solving Catalan’s problem amounts
to a search for whole-number solutions to the equation x^p – y^q = 1, where x, y, p, and q are all greater than 1. The conjecture proposes that there is only one such solution: 3^2 – 2^3 = 1.
Interestingly, more than 500 years before Catalan formulated his conjecture, Levi ben Gerson (1288–1344) had already shown that the only powers of 2 and 3 that differ by 1 are 3^2 and 2^3.
A breakthrough in solving the problem occurred in 1976 when Robert Tijdeman of the University of Leiden in the Netherlands showed that, should the conjecture not hold, there can be only a finite
rather than an infinite number of solutions to the equation. In effect, each of the exponents p and q must be less than a certain value, initially shown to be astronomically huge but later reduced to
more manageable levels.
In 2000, Mihailescu proved that if additional solutions to the equation exist, the pair of exponents must be of a rare type known as double Wieferich primes. These prime numbers obey the following
relationship: p^(q – 1) must leave a remainder of 1 when divided by q^2, and q^(p – 1) must leave a remainder of 1 when divided by p^2.
Double Wieferich primes are extremely rare. Only six examples have been identified so far: 2 and 1,093; 3 and 1,006,003; 5 and 1,645,333,507; 83 and 4,871; 911 and 318,917; and 2,903 and 18,787. None
of these pairs are relevant to the question of proving Catalan’s conjecture.
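The double Wieferich condition is easy to test with modular exponentiation. Here is a minimal sketch (the helper name is mine; Python's three-argument pow computes modular powers efficiently):

def is_double_wieferich(p, q):
    # p^(q-1) must leave remainder 1 when divided by q^2, and vice versa.
    return pow(p, q - 1, q * q) == 1 and pow(q, p - 1, p * p) == 1

print(is_double_wieferich(2, 1093))      # True
print(is_double_wieferich(3, 1006003))   # True
print(is_double_wieferich(2, 7))         # False: an ordinary pair of primes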
Mihailescu continued to work on the problem, and he apparently cracked it earlier this year. His proof of Catalan’s conjecture takes advantage of his earlier result on double Wieferich primes,
Mihailescu says.
It isn’t absolutely certain yet that Mihailescu’s proof will hold up, but there are very encouraging signs. Yuri F. Bilu of the University of Bordeaux I in Talence, France, has analyzed Mihailescu’s work and written a favorable commentary outlining the proof’s main steps. “I am sure that Mihailescu’s proof is correct,” Bilu declares.
Mihailescu presented the proof publicly for the first time on May 24 at a Canadian Number Theory Association meeting in Montreal. His presentation was well received and, encouraged by the positive
response from several prominent number theorists, Mihailescu is now preparing a manuscript of his proof for publication.
It looks like Catalan’s conjecture is about to join the mathematical pantheon of illustrious theorems. | {"url":"https://www.sciencenews.org/article/conquering-catalans-conjecture","timestamp":"2024-11-09T05:49:25Z","content_type":"text/html","content_length":"291881","record_id":"<urn:uuid:363c6bc6-fdb7-4e41-bbaf-bef01fee2d1c>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00770.warc.gz"} |
On Continuous time
Hello! This week I want to write about Stochastic Calculus. Why? Because it is interesting and useful for economic models. What I want to do is a brief introduction for those who know nothing about
it, but are eager to learn, as well as point you towards relevant literature which might help.
To begin with, let me give a little bit of intuition on Binomial Asset Pricing, as well as on random walks, both of which are basic elements of continuous-time models.
Suppose an asset bought today can go up or down in value tomorrow, each with a given probability. The value of the portfolio tomorrow would then be $X_1$:
$$ X_1 = n S_1 + (1+r)(X_0 - n S_0),$$
where $n$ is the number of stocks bought, $r$ is the risk free rate (money market), $X_0$ represents the initial wealth, and $S_i$ represents the price of the asset in time $0$ and $1$ respectively
($i \in \{0,1\}$). It turns out that the initial wealth also happens to be the no-arbitrage price of the option at time zero. What option? What is called a European call option, which gives the owner the right to buy one share of the stock at time one for the strike price of $K$, so it is a bet on the stock going up. It is similar to what you do when you buy a house: you make a
downpayment and guarantee that the price you will pay for the house remains static, even if the price of the house per se goes up.
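To make the one-period model concrete, here is a short sketch of my own (the numbers are illustrative and not from this post): it replicates the call payoff with $n$ shares plus money-market borrowing, which pins down the initial wealth $X_0$ as the no-arbitrage price.

def call_price_one_period(S0, u, d, r, K):
    Su, Sd = u * S0, d * S0                      # stock price after an up or down move
    Cu, Cd = max(Su - K, 0.0), max(Sd - K, 0.0)  # call payoffs at time one
    n = (Cu - Cd) / (Su - Sd)                    # shares needed to replicate the payoff
    p = ((1 + r) - d) / (u - d)                  # risk-neutral up-probability
    X0 = (p * Cu + (1 - p) * Cd) / (1 + r)       # initial wealth = no-arbitrage price
    return X0, n

X0, n = call_price_one_period(S0=4.0, u=2.0, d=0.5, r=0.25, K=5.0)
print(X0, n)   # 1.2 0.5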
There is a lot of theory there, but now I move on to random walks. Imagine a frog standing at $F_0 = 0$. The frog can go up or down with one leap, so that $F_1 \in \{ 1, -1 \}$. One can then compute the expected value after leap $N$, $E[F_N]$. Hopefully it is easy to see that this value is zero. Why? Because each jump has an expected value of zero. What is interesting, though, is the variance. Because each jump has a variance of $1$, and each leap is independent, the variance after $N$ jumps is equal to $N$.
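A quick simulation (a sketch of my own, not from the post) confirms both claims, $E[F_N] = 0$ and variance equal to $N$:

import random

def walk(n_steps):
    # One random walk: the sum of n_steps independent +1/-1 leaps.
    return sum(random.choice((-1, 1)) for _ in range(n_steps))

N, trials = 25, 100_000
samples = [walk(N) for _ in range(trials)]
mean = sum(samples) / trials
var = sum((x - mean) ** 2 for x in samples) / trials
print(mean, var)   # close to 0 and 25, respectively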
With these simple concepts, we have all the basic ingredients for Brownian motions, which I will write about on the next blog. For now, I send you a picture of the place I will hike this Saturday! | {"url":"https://moralesmendozar.com/dir/2020/12/03/on-continuous-time/","timestamp":"2024-11-13T15:51:38Z","content_type":"text/html","content_length":"33415","record_id":"<urn:uuid:3f614050-bf5d-471c-9acd-1e3a2df90798>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00172.warc.gz"} |
Matematisk ordbok för högskolan: engelsk-svensk, svensk-engelsk
English Language Learners definition of mutually exclusive: related in such a way that each thing makes the other thing impossible; not able to be true at the same time or to exist together.
Mutually exclusive is a statistical term describing two or more events that cannot happen simultaneously. It is commonly used to describe a situation where the occurrence of one outcome supersedes the other.
If two events are considered disjoint events, then the probability of both events occurring at the same time will be zero.
If one scenario occurs it is not possible for the others to take place. What Does Mutually Exclusive Mean? In statistics, mutually exclusive scenarios are identified as events that can’t happen at
the same time.
Mutually exclusive events: two events are said to be mutually exclusive when they cannot occur simultaneously in a single trial (e.g., you can't get heads and tails on the same coin toss). The term is used in many fields: the options of a command may be mutually exclusive in the context of the current EXEC CICS command; TiMEx is a generative probabilistic model for detecting patterns of various degrees of mutual exclusivity across genetic alterations; and a Mutually Exclusive edit file can include edits where two procedures could not be performed at the same patient encounter. In probability, A and B are mutually exclusive events if A and B cannot both occur at the same time.
Synonyms for mutually exclusive include incompatible, conflicting, incongruous, inconsistent, clashing, discordant, discrepant, disagreeing, and inconsonant. In probability theory, two events are said to be mutually exclusive if they cannot occur at the same time; such events are also called disjoint events, and the probability of both occurring together is zero. The expression "mutually exclusive" is used in statistics to refer to events that cannot occur at the same time. For example, with $10 in my pocket, I go into a store intending to buy a battery and a jump drive, but each item costs $10: I can buy one or the other, but not both.
As we have already discussed, mutually exclusive events never occur together, so the probability of both occurring is zero. You will know the events are mutually exclusive if the following is true: [latex]P(A \text{ AND } B) = 0[/latex]. Mutually Exclusive Events: [latex]A[/latex] and [latex]B[/latex] are mutually exclusive events if they cannot occur at the same time. This means that [latex]A[/latex] and [latex]B[/latex] do not share any outcomes and [latex]P(A \text{ AND } B) = 0[/latex].
Mutually exclusive and independent are almost opposites of each other. If two events are mutually exclusive, then if one happens the other cannot happen, quite the opposite of being independent. Independence essentially means that if one event happens, it has no effect on whether the other event happens. The idea also shows up in user interfaces, for example when one wants to make two different checkboxes mutually exclusive.
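A quick simulation (an illustrative sketch of my own, not from any of the quoted sources) makes the point: for a single die roll, let A be "the roll is even" and B be "the roll is 3". These are mutually exclusive, so P(A and B) = 0 while P(A or B) = P(A) + P(B):

import random

trials = 100_000
count_a = count_b = count_both = count_either = 0
for _ in range(trials):
    roll = random.randint(1, 6)
    in_a, in_b = roll % 2 == 0, roll == 3
    count_a += in_a
    count_b += in_b
    count_both += in_a and in_b
    count_either += in_a or in_b

print(count_a / trials, count_b / trials, count_both / trials, count_either / trials)
# roughly 0.5, 0.167, 0.0, 0.667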
Example sentences: Love and friendship are mutually exclusive. Marriage should be mutually and infinitely educational. What is important to keep in mind is that the tree at each level should be what is called MECE, mutually exclusive and collectively exhaustive. The term aquaculture is mutually exclusive with "Inland Fishery" and "Marine Fishery", given that aquaculture always takes place in a controlled environment. Legislation and voluntary action are not mutually exclusive.
Definition of mutually exclusive: being related such that each excludes or precludes the other (mutually exclusive events); also: incompatible ("their outlooks were not mutually exclusive"). | {"url":"https://valutaryxui.netlify.app/21669/78283.html","timestamp":"2024-11-11T04:47:25Z","content_type":"text/html","content_length":"17603","record_id":"<urn:uuid:949e0323-5783-4e15-9444-9be800c09071>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00395.warc.gz"}
8 Free Missing Addend Anchor Chart Activities
8 Free Missing Addend Anchor Chart Examples
These missing addend anchor chart examples will help students visualize and understand the relation between two addends in the addition process and in number systems. 1st-grade students will learn basic addition and subtraction methods and can improve their basic math skills with our free printable missing addend anchor chart examples.
8 Fun Examples to Learn Missing Addend Anchor Chart
Please download the following anchor chart examples and understand each concept given on the pages.
What Is an Anchor Chart?
In the previous section, you learned the basics of a missing addend. This time, you will become familiar with a new term: anchor chart. What does a missing addend anchor chart mean? A poster used to highlight key concepts in a lesson is known as an anchor chart.
The poster or chart acts as an anchor, keeping the teacher's and students' gathered concepts, methods, and ideas for solving a problem in one place. To solve a given problem, both the students and the teacher can share and save their ideas in these charts.
Missing Addend Anchor Chart Using Number Line
Finding a missing addend is a fun way to strengthen your little champ's mathematical skills.
You can see the problem in the image below. Your first job is to find the required information and the value to be found. To find the missing addend, you can use a number line.
First, let us talk about the number line. Put the two values that you previously found in the question on the number line. Now, what to do? Here comes your analysis skill. You see there is some gap between the two numbers.
Start to hop from the first number toward the second one, counting your hops as you go. When you have landed exactly on the second number, stop hopping. The number of hops you made is your missing addend.
Missing Addend Anchor Chart Using Part Part Whole
The second way to find a missing addend is to use a part-part-whole box. This type of box is very useful for missing addend problems. You have to put the whole, or total, number in the lower part of the box.
Then, you will notice two parts on the top side of the box. In the first part, place the given number, the first addend from your problem. Now the other part is empty. What to do? Since both upper parts come together to make the lower value, just subtract the given upper part from the whole to find your missing addend.
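The part-part-whole box boils down to a single subtraction. Here is a tiny sketch (the function name is mine) for anyone who wants to check answers quickly:

def missing_addend(whole, known_part):
    # The missing part is the whole minus the part we already know.
    return whole - known_part

print(missing_addend(9, 4))   # 5, because 4 + 5 = 9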
Missing Addend Anchor Chart Using Counters
This fun activity is also useful for finding the missing addend. Let us explain.
Like the previous two activities, you can see a problem in the following image. Now, identify the total and the given addend from the problem. Then, draw your favorite shape as many times as the whole or total. After that, put a cross sign on a number of shapes equal to the given addend.
Finally, count the number of counters without any cross marks and tell me what you know! This is your missing addend. Now, surprise your teacher by telling him the answer.
Missing Addend Anchor Chart Using Number Bond
To make a number mountain, take the bigger number from your given problem and place it on the mountaintop. It is clear that the top is built by adding the bottom values. So, the next task is to take the smaller of the two numbers from the question and put it on the bottom-right side of the mountain.
It's time to count. Start from the lower-left side of the mountain. See, there is no value here, and you have to find it. Remember the number from the bottom right? Start counting up from that number, putting a dot on the page for each count, until you reach the number at the top of the mountain.
Now count the number of dots you had to make to reach the top. Hurray!!! Show the answer and earn a good performer badge from your teacher.
Missing Addend Anchor Chart Using Ten Frames
Another strategy for finding your missing addend is the use of ten frames. A ten frame is a rectangular frame with ten spaces. You can use this tool to practice number problems between 1 and 10.
So what's your role in using this tool? Let me explain. First of all, draw a ten frame like the following image. Under the image, write the total or whole value. Then, counting from the left, mark the space that matches the whole. This is your destination.
Take a colored pen and start to color the spaces in the frame. How many? As many as the given addend in your problem. After all of this, you will see some white spaces left between the colored spaces and the mark that you made in the first place. Count the number of white spaces and taste victory.
Missing Addend Anchor Chart Using Block Activity
Take your paper and some colored pencils. Like the previous instructions, identify the given numbers. On your practice sheet, make a pillar of blocks that matches the whole or total of your problem. Color the blocks after drawing them. Then, take the smaller number and build that many blocks on the right side of the previous pillar.
Use another color to differentiate it from the other column. Count how many blocks are missing to make the whole column look like the previous one. The gap is our required missing addend.
Missing Addend Anchor Chart Using Balancing Addends
Imbalance means imperfection. Without balance, all your hard work will go to waste. Our following problem is also related to balance. Your job is to make the values on both sides equal in order to lift the dumbbell. For the solution, take the help of the previous activity.
Missing Addend Anchor Chart Using Hand Signs
The last missing addend anchor chart activity will help you in this regard, since counting with your fingers is your life saver. First, find the whole and the given addend from the problem. Second, start to count on your fingers from the left hand to the right.
Count a number of fingers equal to the given addend. Afterward, find how many more fingers are required to fill the gap between the given addend and the whole. That's the number we want to find.
Download Free Printable PDF
Download the following combined PDF and enjoy your practice session.
So today, we have shown some interactive missing addend anchor chart examples using the concepts of part-part whole, counters, ten frames, and other hands-on activities. Going through these examples, students will improve their skills in finding missing addends. Download our free worksheets; after working through them, students will surely improve their mathematical skills and gain a better understanding of finding missing addends.
Hello, I am Md. Araf Bin Jayed. I have completed my B.Sc in Industrial and Production Engineering from Ahsanullah University of Science and Technology. Currently I am working as a Content Developer
for You Have Got This Math at Softeko. With proper guidelines and aid from the parent organization Softeko, I want to represent typical math problems with easy solutions. With my acquired knowledge
and hard work, I want to contribute to the overall growth of this organization. | {"url":"https://youvegotthismath.com/missing-addend-anchor-chart/","timestamp":"2024-11-05T04:14:45Z","content_type":"text/html","content_length":"359714","record_id":"<urn:uuid:66ae2b4e-a12d-4ec5-86b7-7ce774e37715>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00893.warc.gz"} |
Eighteen guests have to be seated half on each side of a long t... | Filo
Eighteen guests have to be seated half on each side of a long table. Four particular guests desire to sit on one particular side and three others on the other side. Determine the number of ways in which the sitting arrangements can be made.
Let the two sides be A and B. Assume that the four particular guests wish to sit on side A and the other three on side B. The four guests who wish to sit on side A can be accommodated on its nine chairs in ⁹P₄ ways, and the three guests who wish to sit on side B can be accommodated in ⁹P₃ ways. Now, the remaining 11 guests are left, who can sit on the 11 remaining chairs on both sides of the table in 11! ways. Hence, the total number of ways in which the 18 persons can be seated is ⁹P₄ × ⁹P₃ × 11! = 3024 × 504 × 39916800 ≈ 6.08 × 10¹³.
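As a quick check, the same count can be computed in R; the second expression recounts it another way (choose 5 of the remaining 11 guests for side A, then arrange 9 guests on each side), and the two agree:

perm <- function(n, r) factorial(n) / factorial(n - r)   # nPr helper
perm(9, 4) * perm(9, 3) * factorial(11)                  # about 6.0837e+13
choose(11, 5) * factorial(9)^2                           # the same total, counted differently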
What is the domain and range of y=1/2x^2+4? | Socratic
What is the domain and range of $y = \frac{1}{2} {x}^{2} + 4$?
1 Answer
Consider the function $y = f \left(x\right)$
The domain of this function is all the values of x for which the function is defined. The range is the set of all values of y that the function can output.
Now, coming to your question.
$y = {x}^{2} / 2 + 4$
This function is valid for any real value of x. Thus the domain of this function is the set of all real numbers, i.e. , $R$.
Now, separate out x.
$y = {x}^{2} / 2 + 4$
=> $y - 4 = {x}^{2} / 2$
=> $2 \left(y - 4\right) = {x}^{2}$
=> ${\left\{2 \left(y - 4\right)\right\}}^{\frac{1}{2}} = x$
Since ${x}^{2} \ge 0$, x is real only when $2 \left(y - 4\right) \ge 0$, i.e. when $y \ge 4$. Thus, the function takes all real values greater than or equal to 4. Therefore the range of this function is [4, $\infty$).
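A quick numerical sanity check in R (grid limits chosen arbitrarily):

x <- seq(-10, 10, by = 0.01)
y <- x^2 / 2 + 4
min(y)   # 4, attained at x = 0: consistent with the range [4, Inf)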
Mathematics of Classical and Quantum Physics (Dover Books on Physics)
This textbook is designed to complement graduate-level physics texts in classical mechanics, electricity, magnetism, and quantum mechanics. Organized around the central concept of a vector space,
the book includes numerous physical applications in the body of the text as well as many problems of a physical nature. It is also one of the purposes of this book to introduce the physicist to the
language and style of mathematics as well as the content of those particular subjects with contemporary relevance in physics.
Chapters 1 and 2 are devoted to the mathematics of classical physics. Chapters 3, 4 and 5 — the backbone of the book — cover the theory of vector spaces. Chapter 6 covers analytic function theory. In
chapters 7, 8, and 9 the authors take up several important techniques of theoretical physics — the Green’s function method of solving differential and partial differential equations, and the theory
of integral equations. Chapter 10 introduces the theory of groups. The authors have included a large selection of problems at the end of each chapter, some illustrating or extending mathematical
points, others stressing physical application of techniques developed in the text.
Essentially self-contained, the book assumes only the standard undergraduate preparation in physics and mathematics, i.e. intermediate mechanics, electricity and magnetism, introductory quantum
mechanics, advanced calculus and differential equations. The text may be easily adapted for a one-semester course at the graduate or advanced undergraduate level.
Print Variables a, b, c in 5 Digit Decimal Format
Given three variables, a, b, c, of type double that have already been declared and initialized, write a statement that prints each of them on the same line, separated by one space, in such away that
scientific (or e-notation or exponential notation) is avoided. Each number should be printed with 5 digits to the right of the decimal point. For example, if their values were 4.014268319, 14309,
0.00937608, the output would be:|4.01427x14309.00000x0.00938 NOTE: The vertical bar, |, on the left above represents the left edge of the print area; it is not to be printed out. Also, we show x in
the output above to represent spaces– your output should not actually have x’s!
LANGUAGE: C++
// fixed + setprecision(5) prints 5 digits after the decimal point, avoiding e-notation
// (requires <iostream> and <iomanip>, with using namespace std)
cout << fixed << setprecision(5);
cout << a << " " << b << " " << c;
slinberg.net: MS797 Glossary
Page numbers in parentheses after terms, from ISLR 2nd edition. Non-page numbers indicate other sources; “biostats” references material from Biostatistics 690Z (Health Data Science: Statistical
Modeling), fall 2021.
Chapter 2
input variable (15)
also predictor, independent variable, feature; usually written \(X_1, X_2\), etc. The parameter or parameters we are testing to see if they are related to or affect the output.
output variable (15)
also response, dependent variable; usually written \(Y\). The outcome being measured.
error term (16)
\(\epsilon\) in the equation
\[Y = f(X) + \epsilon\] a random quantity of inaccuracy, independent of X and with mean 0.
systematic (16)
\(f\) in the equation
\[Y = f(X) + \epsilon\] the function that describes the (systematic) information \(X\) provides about \(Y\). This plus the error term equals \(Y\).
reducible error (18)
The amount of the error \(\epsilon\) that could be eliminated by improving our estimator \(\hat{f}\); the difference between \(\hat{f}\) and \(f\). This book and course is mostly about ways to
minimize the reducible error.
irreducible error (18)
The amount of \(\epsilon\) that could not be reduced even if \(f\) was a perfect estimator of \(Y\). Always greater than 0. Could be due to hidden variables in \(\epsilon\), or random
fluctuations in Y, like a measure of “[a] patient’s general feeling of well-being on that day”.
expected value (19)
the long-run average value of a random quantity, i.e. its mean over many repeated observations.
training data (21)
data used to develop the model for estimating \(f\).
parametric methods (21)
A model based on one or more input parameters, that yields a value for Y, as in: \[f(X) = \beta_0 + \beta_1X_1 + \beta_2X_2 + \dots + \beta_pX_p\] \[Y \approx \beta_0 + \beta_1X_1 + \beta_2X_2 +
\dots + \beta_pX_p\] \[\text{income} \approx \beta_0 + \beta_1 \times \text{education} + \beta_2 \times \text{seniority}\] This creates a predictive, inflexible model which usually does not match
the true \(f\), but which has advantages of simplicity and interpretability. It can be used to predict values for \(Y\) based on its parameters, or inputs. Linear and logistic regression are parametric methods.
non-parametric methods (23)
methods that make no explicit assumption about the functional form of \(f\). More flexible, with the potential to match observations very closely, but at the risk of overfitting the data and increasing the variance for subsequent observations. They require much more data than parametric models and may be difficult to interpret. K-Nearest Neighbors and Support Vector Machines are non-parametric.
prediction (26)
seeking to guess the value of a response variable \(y_i\) given a set of observations and an estimate \(\hat{f}\) of the true \(f\).
inference (26)
a goal focused on understanding the way the response is affected by the predictors, rather than on predicting its value for new observations.
supervised learning (26)
a category of model that allows us to guess a \(y_i\) response to a set of predictor measurements \(x_i, i = 1, \dots, n\).
unsupervised learning (26)
a category of model in which there are observations/measurements \(x_i, i = 1, \dots, n\), but no associated response \(y_i\). Linear regression cannot be used because there is no response
variable to predict.
cluster analysis (27)
in unsupervised learning, a statistical method for determining whether a set of observations can be divided into “relatively distinct groups,” looking for similarities within the groups. (Topic
modeling may be an example of this.)
quantitative variables (28)
numeric values; age, height, weight, quantity. Usually the response variable type for regression problems.
qualitative variables (28)
also categorical: values from a discrete set. Eye color, name, yes/no. Usually the response variable type for classification problems.
regression problems (28)
problems with quantitative response variables. Given predictors foo, bar, and baz, how big is the frob?
classification problems (28)
problems with qualitative response variables. Given predictors foo, bar, and baz, is the outcome likely to be a frob, a frib or a freeb?
mean squared error (MSE) (29)
the average squared error for a set of observations: \[MSE = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{f}(x_i))^2\] MSE is small if the predicted responses are close to the true responses, and larger
as it becomes less accurate; computed from training data, and Gareth et al. suggest it should be called training MSE.
variance (34)
“the amount by which \(\hat{f}\) would change if we estimated it using a different training data set.” More practically: the average of squared differences from the mean, often expressed as \(\sigma^2\), where \(\sigma\) (the square root of the variance) is the standard deviation. Per StatQuest: “the difference in fits between data sets” (like training and test)
bias (35)
“the error that is introduced by approximating a real-life problem, which may be extremely complicated, by a much simpler model”, as in the error from the (presumed) linearity of a regression
against non-linear data whose complexity it does not capture. More flexible models increase variance and decrease bias. per StatQuest: “The inability for a machine learning method (like linear
regression) to capture the true relationship is called bias” - a straight line trying to model a curved separation in classes will never get it right and always be biased
bias (65)
in an estimator, something that systematically misses the true parameter; for an unbiased estimator, \(\hat{\mu} = \mu\) when averaged over (huge) numbers of observations
bias-variance trade-off (36)
The tension in seeking the best model for the data between missing the true \(f\) with an overly simple (biased) model, vs. an overfitted model with too much variance from mapping too closely to
test data.
error rate (37)
In classification, the proportion of classifications that are mistakes. \[\frac{1}{n}\sum_{i=1}^{n}I(y_i \neq \hat{y}_i)\] \(I\) is 1 if \(y_i \neq \hat{y}_i\) - if the guess for any given \(y\)
is wrong. The error rate is the percentage of incorrect classifications. Also called the training error rate.
indicator variable (37)
\(I\) in the error rate definition above; a logical variable indicating the presence or absence of a characteristic or trait (such as an accurate classification).
test error rate (37)
like the training error rate but applied to the test data. Uses \(\text{Ave}\) instead of sum notation: \[\text{Ave}(I(y_0 \neq \hat{y}_0))\] \(\hat{y}_0\) is the predicted class label from the classifier for a test observation with predictor \(x_0\).
conditional probability (37)
The chance that \(Y = j\) given an observed \(x_0\), as in the Bayes classifier: \[\text{Pr}(Y = j|X = x_0)\] In a two-class, yes/no classifier, we decide based on whether \(\text{Pr}(Y = j|X =
x_0)\) is \(> 0.5\), or not. Note that \(Y\) is the class, as in “ham”/“spam”, not a \(y\)-axis coordinate.
Bayes decision boundary (38)
a visual depiction of the boundary along which the class probabilities are equal (50% each in the two-class case), dividing the classes in a two-dimensional predictor space
Bayes error rate (38)
the expected (average) probability of classification error over all values of X in a data set. \[1 - E\left(\max_j \Pr(Y = j|X)\right)\] The \(\max_j\) picks whichever of the \(j\) classes has the highest probability for any given value of \(X\). Again, \(Y\) is not a y-axis coordinate of a two-dimensional space, it’s the class of the classification: “yes”/“no”, “ham”/“spam”, “infected”/“not infected”. Also: “The Bayes error rate is analogous to the irreducible error, discussed earlier.”
K-nearest-neighbors (KNN) (39)
a classifier that assigns a class Y to an observation based on the population proportions of its nearest neighbors; a circular “neighborhood” on a two-dimensional plot. It looks at actual data
points that have been classified, and asks what any given non-classified point would be classified as based on its nearest neighbors.
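The labs accompanying ISLR use the class package for this; a minimal sketch on invented 2D data:

library(class)
set.seed(1)
train <- matrix(rnorm(40), ncol = 2)              # 20 labeled points in 2D
cl    <- factor(rep(c("ham", "spam"), each = 10)) # their known classes
test  <- matrix(rnorm(10), ncol = 2)              # 5 unlabeled points
knn(train, test, cl, k = 3)                       # class assigned by majority vote of the 3 nearest neighbors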
Chapter 3
Synergy effect / interaction effect (60)
when two or more predictors affect each other as well as the outcome; e.g. when spending 50k each on TV and radio ads gives a different result than spending 100k on either one alone
Simple linear regression
the simplest model, predicting \(Y\) from a single predictor \(X\). \[Y \approx \beta_0 + \beta_1X\] \(\approx\) = “is approximately modeled as”
least squares (61)
the most common measure of closeness of a regression line to its data points: the sum of squared vertical distances between each point and the line (measured directly above or below)
residual (61)
the difference between \(y_i\) and \(\hat{y}_i\), also \(e_i\); the difference between the \(i\)th response variable and the \(i\)th response variable predicted by the model
residual sum of squares (RSS) (62)
the sum of the squared residuals for each point on the regression line \[\text{RSS} = e_1^2 + e_2^2 + \dots + e_n^2\] Formulas for \(\hat{\beta_0}\) and \(\hat{\beta_1}\) are on p. 62
intercept (\(\beta_0\)) (63)
the expected value of \(Y\) when \(X = 0\)
slope (\(\beta_1\)) (63)
the average increase in \(Y\) associated with a one-unit increase in \(X\)
error term (\(\epsilon\)) (63)
whatever we missed with the model, due to the true model not being linear (it almost never is), measurement error, or other variables that cause variation in \(Y\)
population regression line (63)
“the best linear approximation to the true relationship between \(X\) and \(Y\)” \[Y = \beta_0 + \beta_1X + \epsilon\] least squares line (63)
the regression line made of the least-squares estimates for \(\beta_0\) and \(\beta_1\)
standard error (SE) (65)
the average amount that an estimate \(\hat{\mu}\) (sample mean) differs from the actual value of \(\mu\) (population mean) \[\text{Var}(\hat{\mu}) = \text{SE}(\hat{\mu})^2 = \frac{\sigma^2}{n}, \text{ so } \text{SE}(\hat{\mu}) = \frac{\sigma}{\sqrt{n}}\] \(\sigma\) is the standard deviation of each of the realizations \(y_i\) of \(Y\). Since \(\sigma^2\) is divided by \(n\), the standard error shrinks as observations increase. It represents the amount we would expect means of additional samples to “jump around” simply due to random chance and the limitations of the model’s accuracy.
residual standard error (RSE) (66)
the estimate of \(\sigma\) \[\text{RSE} = \sqrt{RSS / (n-2)}\]
confidence interval (66)
a range of values within which we have a measured probability (often 95%) of containing the true value of the parameter; a 95% confidence interval in linear regression takes the form \[\hat{\
beta_1} \pm 2 \cdot \text{SE}(\hat{\beta_1})\]
t-statistic (67)
the number of standard deviations that \(\hat{\beta_1}\) is away from \(0\). \[t = \frac{{\hat{\beta_1}} - 0}{\text{SE}(\hat{\beta_1})}\] For there to be a relationship between \(X\) and \(Y\), \
(\hat{\beta_1}\) has to be nonzero (i.e. have a slope). The standard error (SE) of \(\hat{\beta_1}\) (in the denominator above) measures its accuracy; if it is small, then \(t\) will be larger,
and if it is large, then \(t\) will be smaller. \(t\) is around 2 for a p-value of 0.05 (actually about 1.96, as 2 standard deviations is 95.45% of a normal distribution), and around 2.75 for a
p-value of 0.01.
p-value (67)
the probability of observing a value greater than \(|t|\) by chance.
model sum of squares (MSS) (biostats)
Also sometimes ESS, “explained sum of squares”: the total variance in the response \(Y\) that can be accounted for by the model \[\text{MSS} = \sum(\hat{y_i} - \bar{y})^2\]
residual sum of squares (RSS) (biostats)
the total variance in the response \(Y\) that cannot be accounted for by the model \[\text{RSS} = \sum(y_i - \hat{y_i})^2\] also \[\text{RSS} = e_1^2 + e_2^2 + \dots + e_n^2\] or \[\text{RSS} = (y_1 - \hat{\beta_0} - \hat{\beta_1}x_1)^2 + (y_2 - \hat{\beta_0} - \hat{\beta_1}x_2)^2 + \dots + (y_n - \hat{\beta_0} - \hat{\beta_1}x_n)^2\]
total sum of squares (TSS) (70)
the total variance in the response \(Y\); the total variability of the response about its mean \[\text{TSS} = \sum(y_i - \bar{y})^2\] compare with RSS, the amount of variability left unexplained
after the regression. TSS - RSS is the amount of variability (or error) explained by the regression (MSS).
NOTE: there is a nice visual here on stackexchange; if anybody knows how to tell Zotero to use a custom bibtex citation entry over the ones it generates, please let me know so I can integrate it
better here :frown:
\(R^2\) statistic (70)
the proportion of variance in \(Y\) explained by \(X\), a range from 0 to 1 \[R^2 = \frac{\text{TSS - RSS}}{\text{TSS}} = 1 - \frac{\text{RSS}}{\text{TSS}}\] \(R^2\) values close to 1 indicate a
regression that explains a lot of the variability in the response, and a stronger model. A value close to 0 indicates that the regression doesn’t explain much of the variability.
correlation (70)
a measure of the linearity of the relationship between \(X\) and \(Y\); values close to 0 indicate weak-to-no relationship, values near 1 or -1 indicate strong positive or negative correlation \
[\text{Cor(X, Y)} = {\frac {\sum _{i=1}^{n}(x_{i}-{\bar {x}})(y_{i}-{\bar {y}})}{{\sqrt {\sum _{i=1}^{n}(x_{i}-{\bar {x}})^{2}}}{\sqrt {\sum _{i=1}^{n}(y_{i}-{\bar {y}})^{2}}}}}\]
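The quantities above fit together in a few lines of R; the data are simulated, so the numbers are illustrative only:

set.seed(42)
x <- rnorm(50)
y <- 2 + 3 * x + rnorm(50)    # a linear relationship plus noise
fit <- lm(y ~ x)              # least squares fit
rss <- sum(residuals(fit)^2)  # residual sum of squares
tss <- sum((y - mean(y))^2)   # total sum of squares
1 - rss / tss                 # R^2, also reported by summary(fit)$r.squared
cor(x, y)^2                   # equals R^2 in simple linear regression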
standard linear regression model (72)
The model used for standard linear regression, used to interpret the effect on \(Y\) of a one-unit increase in any predictor \(X_j\) (given by its coefficient \(\beta_j\)) while holding all other predictors constant \[Y = \beta_0 + \beta_1X_1 + \beta_2X_2 + \dots + \beta_pX_p + \epsilon\]
variable selection (78)
the task of refining a model to include only the variables associated with the response
null model (79)
a model that contains an intercept, but no predictors; used as a first stage in forward selection
forward selection (79)
a variable selection method that starts with a null model, then runs simple linear regressions on all predictors \(p\) and adds the one that results in the lowest RSS; repeated until some threshold is reached
backwards selection (79)
a variable selection method that starts with a model containing all predictors, and removes the one with the largest \(p\)-value (the least statistically significant) until all remaining predictors are significant, whether by \(p\)-value or some other criterion
mixed selection (79)
a hybrid approach starting with a null model, adding predictors one at a time that produce the best fit, and removing any that acquire a larger \(p\)-value in the process until all predictors are
added or eliminated
interaction (81)
when predictors affect each other, in addition to providing their own effect on the model
confidence interval (82)
a range constructed so that, with a stated probability, it contains the true value of an estimated parameter; a 95% confidence interval is a range in which we can be 95% certain the true value of \(f(X)\) will be found
prediction interval (82)
similar to confidence interval, but a prediction range within which we are \(X\)% certain that any singular future observation will fall, rather than a statistic like an overall mean; a 95%
prediction interval is a range in which we are confident that 95% of future observations will fall. Prediction intervals are substantially wider than confidence intervals.
qualitative predictor / factor (83)
a categorical predictor with a fixed number of factors, like “yes” / “no” or “red” / “yellow” / “green”
dummy variable (83)
a numeric representation of a factor to use in a model, as in representing “yes” / “no” factor variables as 1 / 0 in a regression
baseline (86)
the factor level encoded by setting all dummy variables to 0; a factor with 3 levels will use 2 dummy variables, with both equal to 0 signifying the 3rd (baseline) level
additivity assumption (87)
the assumption that the association between a predictor \(X\) and the response \(Y\) does not depend on the value of other predictors; used by the standard linear regression model
linearity assumption (87)
the assumption, also used by the standard linear regression model, that unit changes in \(X_j\) result in the same change to Y regardless of its value
interaction term (88)
the product of two predictors in a multiple regression model, quantifying their effect on each other
main effect (89)
isolated effects; the effect of a single predictor on the outcome
hierarchical principle (89)
the principle that main effects should be left in a model even if they are statistically insignificant, if they are also part of an interaction that is significant
polynomial regression (91)
an extension of linear regression to accommodate non-linear relationships
residual plot (93)
a plot of the residuals or errors (\(e_i = y_i - \hat{y}_i\)), used to check for non-linearity (a potential problem that would likely indicate something was missed in the model)
time series (94)
data consisting of observations made at discrete points in time
tracking (95)
when adjacent residuals tend to have similar values; a sign of correlated error terms, common in time series data
heteroscedasticity (96)
non-constant variances in errors; “unequal scatter”
homoscedasticity (extra)
constant variances in errors; follows the assumption of equal variance required by most methods
weighted least squares (97)
an extension to ordinary least squares used in circumstances of heteroscedasticity, to weight data points proportionally with the inverse variances
outlier (97)
an observation whose value is very far from its predicted value
studentized residual (98)
a residual divided by its estimated standard error; observations with studentized residuals greater than 3 in absolute value (roughly 3 standard deviations) are likely outliers
high leverage (98)
observations with an unusual \(x_i\) value, far from other / expected \(x\) values
leverage statistic (99)
a quantification of a point’s leverage \[h_i = \frac{1}{n} + \frac{(x_i - \bar{x})^2}{\sum_{i'=1}^{n}(x_{i'} - \bar{x})^2}\]
collinearity (99)
when two or more predictor variables are closely related to each other
power (101)
the probability of a test correctly detecting a nonzero coefficient (and correctly rejecting \(H_0 : \beta_j = 0\))
multicollinearity (102)
when collinearity exists between three or more predictors even when no pair of predictors is collinear (or correlated)
variance inflation factor (102)
“the ratio of the variance of \(\hat{\beta_j}\) when fitting the full model divided by the variance of \(\hat{\beta_j}\) on its own”; smallest possible value of 1 indicates the absence of
collinearity, 5-10 indicates a “problematic amount”. \[\text{VIF}(\hat{\beta_j}) = \frac{1}{1 - R^2_{X_j|X_{-j}}}\] “\(R^2_{X_j|X_{-j}}\) is the \(R^2\) from a regression of \(X_j\) onto all of the other predictors”
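VIF can be computed by hand exactly as the definition reads: regress one predictor on the others and transform the resulting R^2. A sketch on simulated predictors:

set.seed(7)
x1 <- rnorm(100)
x2 <- x1 + rnorm(100, sd = 0.3)             # deliberately collinear with x1
x3 <- rnorm(100)
r2 <- summary(lm(x1 ~ x2 + x3))$r.squared   # R^2 of x1 regressed on the other predictors
1 / (1 - r2)                                # VIF for x1; well above the 5-10 trouble zone here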
K-nearest neighbors regression (105)
a non-parametric regression method that predicts the response at a point by averaging the responses of its \(K\) nearest training observations
curse of dimensionality (107)
when an observation has no nearby neighbors due to a high number of dimensions exponentially increasing the available space for other observations to be spread out in
Chapter 4
qualitative / categorical (129)
interchangeable term for a variable with a non-quantitative value, such as color
classification (129)
predicting a qualitative response for an observation
classifier (129)
a classification technique, such as: logistic regression, linear discriminant analysis, quadratic discriminant analysis, naive Bayes, and K-nearest neighbors
logistic function (134)
a specific function returning an S-shaped curve with values between 0 and 1, used in logistic regression \[p(X) = \frac{e^{\beta_0 + \beta_1X}}{1 + e^{\beta_0 + \beta_1X}}\]
odds (134)
the likelihood of a particular outcome: the ratio of the number of results that produce the outcome versus the number that do not, between 0 and \(\infty\) (very low or very high probabilities) \
[odds = \frac{p(X)}{1 - p(X)} = e^{\beta_0 + \beta_1X}\] Note: odds is not the same as probability! Odds of 1/4 means a 20% probability, not 25%.
To convert from odds to probability: divide odds by (1 + odds), as in:
\[\frac{1}{4} \div \left[ 1 + \frac{1}{4} \right] = \frac{1}{4} \div \frac{5}{4} = 1/5 = 0.2\]
To convert from probability to odds, divide probability by (1 - probability), as in: \[\frac{1}{5} \div \left[ 1 - \frac{1}{5} \right] = \frac{1}{5} \div \frac{4}{5} = \frac{1}{5} \times \frac{5}{4} = 1/4 = 0.25\]
log odds / logit (135)
the log of the odds \[\text{log} \left( \frac{p(X)}{1 - p(X)} \right)\]
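The conversions above are one-liners in R; the 1/4-odds example from the text serves as the check:

odds_to_prob <- function(odds) odds / (1 + odds)
prob_to_odds <- function(p) p / (1 - p)
odds_to_prob(1/4)        # 0.2
prob_to_odds(0.2)        # 0.25
log(prob_to_odds(0.2))   # the log odds (logit), log(1/4)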
likelihood function (135)
a function giving the probability of the observed data as a function of the model coefficients; the coefficients are chosen to maximize it, producing the closest possible match to the observed classes, for use in predicting the classifications of other points \[\ell(\beta_0, \beta_1) = \prod_{i:y_i=1} p(x_i) \prod_{i':y_{i'}=0} \left(1 - p(x_{i'})\right)\]
confounding (139)
when a predictor is correlated with both another predictor and the response, distorting the apparent associations; usually something to work to avoid or adjust for
multinomial logistic regression (140)
logistic regression into more than 2 categories
softmax coding (141)
an alternative coding for multiple logistic regression that treats all classes symmetrically, rather than establishing one as a baseline
(probability) density function (142)
a non-negative function describing the relative likelihood of each value of a variable; probabilities are obtained by integrating it over a range of values rather than from an infinitely-thin single “slice”, so the density itself (unlike a probability) is not bounded by 1
prior (142)
the probability that a random observation comes from class \(k\)
posterior (142)
the probability that an observation belongs to a class \(k\) given its predictor value
normal / Gaussian (143)
characterized by a bell-shaped curve, not uniformly distributed but tending towards a likeliest/central value
overfitting (148)
mapping too closely to idiosyncrasies in training data, increasing variance in a model
null classifier (148)
a classifier that always predicts a zero or null status
confusion matrix (148)
a matrix showing how many predictions were made, and how accurate the classifications were (predicted x actual, hits on TL-BR diagonal and misses on BL-TR diagonal)
sensitivity (149)
the percentage of a class (like defaulters) that is correctly identified
specificity (149)
the percentage of a different class (like non-defaulters) that is correctly identified
Note: sensitivity and specificity can and should be separately considered when evaluating ML methods. Each column (class) of a confusion matrix has a sensitivity and a specificity. In a 2x2, this is
simple; in larger than 2x2, it involves summation.
\[\text{Sensitivity} = \frac{\text{True Positives}}{\text{True Positives + False Negatives}}\] True positives is the cell in the class column containing the number of correct predictions in a
category; the false negatives are the rest of the column.
\[\text{Specificity} = \frac{\text{True Negatives}}{\text{True Negatives + False Positives}}\] True negatives is the sum of cells, in the other columns, that correctly do not predict the class (even
if they wrongly predict some other class as well); the false positives are the rest of the row for the class that incorrectly predicted that class.
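For the 2x2 case, the formulas reduce to simple cell arithmetic; the confusion-matrix counts below are invented:

tp <- 85; fn <- 15   # hypothetical positives, correctly vs. incorrectly classified
tn <- 90; fp <- 10   # hypothetical negatives, correctly vs. incorrectly classified
tp / (tp + fn)       # sensitivity = 0.85
tn / (tn + fp)       # specificity = 0.90
tp / (tp + fp)       # precision, about 0.89; used below for precision-recall curves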
ROC charts show the true positive rate (sensitivity) on the Y axis, and the false positive rate (1 - specificity) on the X axis.
ROC curve (150)
Receiver Operating Characteristics; name of a curve showing the overall performance of a classifier, resembling the top left corner of a rounded rectangle; the area under the curve (AUC) summarizes the classifier's accuracy across all possible thresholds; the closer the curve hugs the top-left corner, the bigger the AUC and the better the classifier. The Y axis is true positive rate (sensitivity); the X axis is false positive rate (1 - specificity).
StatQuest: AUC (area under the curve) of the ROC shows the overall effectiveness / quality of a model; good models will fill the space more. And the points of the individual curve help indicate the
best thresholds to pick for the classifier; points with X = 0 have no false positives, and points with Y = 1 get all of the true positives.
Also, precision is another option in place of the false positive rate, if it's more important to know what proportion of the positive predictions were actually correct, as opposed to the false positive rate.
\[\text{Precision} = \frac{\text{True Positives}}{\text{True Positives + False Positives}}\]
marginal distribution (155)
the distribution of an individual predictor
joint distribution (155)
the association between different predictors
kernel density estimator (156)
“essentially a smoothed version of a histogram”
Chapter 5
model assessment (197)
the process of evaluating a model’s performance
model selection (197)
the process of selecting the proper level of flexibility for a model
validation set approach (198)
randomly dividing a set of observations into a training set and a validation or hold-out set, to assess the test error rate
leave one out cross-validation (LOOCV) (200)
using a single observation as the validation set and the rest as the training set; repeated \(n\) times so each observation is held out once, with the \(n\) test errors averaged
k-fold cross-validation (CV) (203)
dividing a set of observations into \(k\) groups (folds) of approximately equal size, using one as a validation set and fitting the method on the remaining \(k-1\) folds; repeated \(k\) times so each fold serves once as the validation set, with the error estimates averaged
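A bare-bones k-fold loop in R on simulated data, to make the mechanics concrete (helpers such as boot::cv.glm do the same bookkeeping for you):

set.seed(3)
n <- 100; k <- 5
x <- rnorm(n); y <- 1 + 2 * x + rnorm(n)
fold <- sample(rep(1:k, length.out = n))      # random fold assignment
cv_mse <- sapply(1:k, function(i) {
  fit  <- lm(y ~ x, subset = fold != i)       # fit on the other k-1 folds
  pred <- predict(fit, data.frame(x = x[fold == i]))
  mean((y[fold == i] - pred)^2)               # validation MSE for fold i
})
mean(cv_mse)                                  # the k-fold CV estimate of test MSE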
bootstrap (209)
a tool to quantify the uncertainty associated with a given estimator or statistical learning method
sampling with replacement (211)
picking from a set without removing the picked elements
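A minimal bootstrap of the standard error of a sample median, on invented data; each resample is drawn with replacement, as defined above:

set.seed(9)
x <- rexp(50)                                 # a hypothetical sample
boot_medians <- replicate(1000, median(sample(x, replace = TRUE)))
sd(boot_medians)                              # bootstrap estimate of SE(median)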
qreference {DAAG} R Documentation
Simulate QQ reference plots
Description

This function computes the QQ plot for given data and a specified distribution, then repeats the comparison for data simulated from that distribution. The plots for simulated data give an indication of the range of variation that is to be expected, and thus calibrate the eye.
Usage

qreference(test = NULL, m = 30, nrep = 6, pch=c(16,2), distribution = function(x) qnorm(x,
mean = ifelse(is.null(test), 0, mean(test)), sd = ifelse(is.null(test),
1, sd(test))), seed = NULL, nrows = NULL, cex.strip = 0.75,
xlab = NULL, ylab = NULL)
Arguments

test: a vector containing a sample to be tested; if not supplied, all qq-plots are for data simulated from the reference distribution
m: the sample size for the reference samples; default is the test sample size if a test sample is supplied
nrep: the total number of samples, including reference samples and the test sample if any
pch: plot character(s)
distribution: reference distribution; default is standard normal
seed: the random number generator seed
nrows: number of rows in the plot layout
cex.strip: character expansion factor for labels
xlab: label for x-axis
ylab: label for y-axis
Value

QQ plots of the sample (if test is non-null) and all reference samples
Author(s)

J.H. Maindonald
Examples

# qreference(rt(30,4))
# qreference(rt(30,4), distribution=function(x) qt(x, df=4))
# qreference(rexp(30), nrep = 4)
# toycars.lm <- lm(distance ~ angle + factor(car), data = toycars)
# qreference(residuals(toycars.lm), nrep = 9)
version 1.25.6
Kinetics and Mechanism
Basic Ideas, Activation Energies, Steady State Approximation, then more complex reactions such as Enzyme Kinetics
Kinetics & Mechanism Notes
Basic Ideas
aA + bB → xX + yY
Molecularity, a wrt A and b wrt B (a+b+…)
Rate Equation:
Rate Constant:
Rate = k[A]^α [B]^β
Units – conc time^-1 / (conc^α conc^β) → conc^(1-(α+β)) time^-1
i.e. 1st order → time^-1, 2nd order → conc^-1 time^-1 = dm^3 mol^-1 s^-1
How to Determine Order
3 methods:
Differential – if initial rates given.
Fractional Lives – order only.
Integral – more than order to determine.
Need to isolate, e.g.
2NO + O2 → 2NO2
v = k[NO]^2 [O2]
Excess O2 → v = k’[NO]^2, where k’ = k[O2], i.e. becomes pseudo-2nd order in [NO].
Differential Methods –
One species / pseudo one species.
log (rate) = log k’ + α log [A]
(when rate = k’[A]α)
Plot a graph of log(rate) against log [A] – intercept k’, gradient α.
Fractional Lives –
1st order: τ1/2 = ln 2 / k (independent of concentration)
i.e. zeroth order → Ao/2k, 2nd order → 1/kAo, etc.
Must remember that although the proportionality is valid for any fractional order, the constant will change between them.
Also worth remembering that if there’s not enough data, lower fractions e.g. ¼ lives can be used instead with the same result.
Integral Methods –
1st Order: ln[A] = ln[A]o - kt, so a plot of ln[A] against t is a straight line of gradient -k.
Activation Energies & Collision Theory
Increased temperature → k(T) increases (normally).
k = A e^(-EA/RT)
(Arrhenius Equation)
→ ln k = ln A – EA/RT
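In practice EA and A are extracted from a straight-line fit of ln k against 1/T. A minimal R sketch with invented rate constants, purely for illustration:

T <- c(300, 320, 340, 360, 380)                 # temperatures / K (hypothetical)
k <- c(1.2e-4, 8.1e-4, 4.3e-3, 1.9e-2, 7.0e-2)  # rate constants (hypothetical)
fit <- lm(log(k) ~ I(1/T))
-coef(fit)[[2]] * 8.314                         # slope = -EA/R, so this is EA in J mol^-1
exp(coef(fit)[[1]])                             # intercept = ln A, so this is A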
Can have negative Activation Energies though, e.g.
NO + NO ⇌ (NO)2
(NO)2 + O2 ⇌ 2NO2
Increase T → [dimer] falls, i.e. ↑T → ↓ rate. Basically, K1 drops much faster than k2 can increase.
Collision Theory of Reaction Rates
Collision Theory – simplest, and a theoretical rationalisation for the Arrhenius Equation. It seeks to predict the rate of a bimolecular elementary step:
A + B → products v = k[A][B]
The rate of reaction is assumed to be equal to the collision frequency, ZAB, multiplied by the fraction of collisions having an energy greater than a critical energy Ec (the minimum energy required
to react). The collision rate is given by the gas-kinetics result for hard-spheres:
ZAB = σAB (8kBT/πμ)^1/2 nA nB
where σAB = πdAB^2 is the collision cross-section, μ is the reduced mass of A and B, and nA, nB are number densities.
The fraction of collisions that lead to reaction is assumed to have the form e^(-Ec/kBT). The justification that is often given is that the Maxwell-Boltzmann distribution has the form:
f(E) = 2π (πkBT)^(-3/2) E^(1/2) e^(-E/kBT)
where f(E)dE is the fraction of molecules with energies between E and E+dE. Integrating this expression from Ec to infinity gives the required expression.
Note: the apparent simplicity of this derivation is somewhat disingenuous for several reasons.
1. Conservation of momentum requires that only kinetic energy associated with the relative motion of A and B can be used for reaction. The one-dimensional Maxwell-Boltzmann distribution for relative
velocity is not of the form above.
2. Molecules with a higher relative velocity collide more often. Thus the Maxwell-Boltzmann distribution needs to be weighted by the relative velocity.
3. Angular momentum needs to be conserved; unless the collision is head-on, only some of the kinetic energy of relative motion is available for reaction.
The collision theory expression is thus:
k(T) = NA σAB (8kBT/πμ)^1/2 e^(-Ec/kBT)   (in molar units)
From the definition of the activation energy, Ea = RT^2 (d ln k / dT), applying this to the expression above gives
Ea = Ec + ½ RT (in molar terms)
Generally Ea >> RT, so the term ½ RT can be neglected.
Some Limitations of Collision Theory
• Calculated values of A ~ 10^11-10^12 dm^3 mol^-1 s^-1 independent of the reaction, since A varies rather slowly with mass and temperature (A scales as √(T/m)).
Steric Factor, P, introduced to account for the difference between observed and calculated pre-exponential factors.
Limitation: except for a few special cases, there is no simple way of calculating P.
• For a reaction at equilibrium, K = kf/kr, so
Collision Theory predicts the following relationship for the equilibrium constant K:
K = (Af/Ar) e^(-(Ec,f - Ec,r)/RT)
But from thermodynamics, K = e^(-ΔG°/RT) = e^(ΔS°/R) e^(-ΔH°/RT), i.e. K has entropy and enthalpy components. The entropy component has been lost in collision theory.
• For complicated molecules, measured values of k1 greatly exceed values calculated from collision theory because the energy stored in internal degrees of freedom (vibrations, internal rotations) has been ignored. Transition State Theory fixes this (see Reaction Dynamics Notes).
Measuring Rates of Reaction
1. Mixing – FAST. In situ generation.
2. Measuring – FLASH PHOTOLYSIS.
Flow Methods – (1ms to 1s) – no photochemical precursor.
Complex Reactions
The simplest complex reaction consists of two consecutive, irreversible elementary steps, e.g.
k1 k2
A → B → C
If k2 >> k1, [B] is consumed as fast as it is produced and the equation for [C] reduces to:
[C] = [A]o (1 - e^(-k1t))    (k2 >> k1)
The rate of production of [C] no longer depends on k2. The initial step is described as the rate determining step.
Conversely, if k1 >> k2 then all of [A] is rapidly converted to [B], which only slowly forms [C]. The integrated rate law for [C] is now independent of k1, except at the very beginning of the reaction:
[C] = [A]o (1 - e^(-k2t))    (k1 >> k2)
The second step is now rate determining.
For mechanisms that are only slightly more complicated, e.g.
k1 k2
A + B ⇌ C → D
it is generally impossible to find a solution to the differential equations in a closed form. One either has to integrate the differential equations numerically or resort to approximate methods.
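To make “integrate the differential equations numerically” concrete, here is a minimal sketch using the deSolve package in R for the simpler consecutive scheme A → B → C; the rate constants are arbitrary, chosen so that k2 >> k1:

library(deSolve)
consecutive <- function(t, y, parms) {
  with(as.list(c(y, parms)), {
    dA <- -k1 * A
    dB <-  k1 * A - k2 * B
    dC <-  k2 * B
    list(c(dA, dB, dC))
  })
}
out <- ode(y = c(A = 1, B = 0, C = 0), times = seq(0, 10, by = 0.1),
           func = consecutive, parms = c(k1 = 1, k2 = 5))
head(out)   # [B] stays small throughout, as the steady-state approximation assumes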
Simple approximate solutions exist when (i) A, B and C are in equilibrium, and (ii) when C is a reactive intermediate.
The reaction scheme above occurs widely in chemical problems. If k-1 >> k2, A, B and C can be considered to be in an equilibrium that is barely perturbed by the slow leakage of [C] into product.
Keq = [C]/[A][B]
At equilibrium,
k-1[C] = k1[A][B]
For the reaction, the rate of product formation is
d[D]/dt = k2[C] = k2 Keq [A][B] = (k1k2/k-1)[A][B]
The reaction follows a second-order rate law with a composite rate constant as shown.
The pre-equilibrium assumption will not hold in the very early stages of the reaction while the equilibrium between A, B and C is being established.
Steady-State Approximation
The steady-state approximation (SSA) can be applied to reactive intermediates. More specifically, if the rate of change of the concentration of some intermediate, d[B]/dt is small compared to the
rate of change of the concentrations of the reactants and products then we can set d[B]/dt equal to zero. As a general rule, if [B] << [reactants] throughout the reaction, the SSA will hold except at
the very beginning of the reaction.
e.g. apply SSA to the simple consecutive scheme:
k1 k2
A → B → C
Setting d[B]/dt = k1[A] - k2[B] = 0 gives [B]ss = (k1/k2)[A], so d[C]/dt = k2[B]ss = k1[A]. The result of the SSA agrees with the full expression in the limit k2 >> k1.
Limiting cases for the reaction mechanism:
k1 k2
A + B ⇌ C → D
(i) k-1 << k2: consecutive irreversible reactions.
(ii) k-1 >> k2: pre-equilibrium.
(iii) k2, k-1 >> k1[A], k1[B] → d[C]/dt ~ 0: SSA can be applied to [C].
Chain Reactions
e.g. H2+Cl2
Cl2 → Cl + Cl [ initiation ]
Cl + H2 → HCl + H [ propagation ]
H + Cl2 → HCl + Cl [ propagation ]
Cl + Cl + M → Cl2 + M [ termination ]
e.g. H2 + Br2
Br2 → Br + Br
Br + H2 → HBr + H
H + Br2 → HBr + Br [ propagation ]
H + HBr → H2 + Br [ inhibition – slows the reaction, as HBr competes with Br2 for H atoms ]
Br + Br + M → Br2 + M
e.g. H2 + I2
I2 → I + I
I + H2 + I → 2HI [ termolecular – not a chain reaction ]
I + H2 has a large EA → slow.
Branched-Chain Reactions
φ is the net branching factor. For a chain-carrier concentration n, dn/dt = I + φn (I = initiation rate); if φ > 0 the carrier concentration, and hence the rate, grows exponentially and the system can explode.
Hydrogen-Oxygen Reaction
H2 + O2 → OH + OH [ initiation ]
OH + H2 → H2O + H [ straight-chain ]
H + O2 → OH + O [ branching ]
O + H2 → OH + H [ branching ]
H,OH,O + wall → loss [ surface loss ]
H + O2 + M → HO2 + M Rate Determining Step
HO2 + wall → loss [ gas phase loss ]
Divide φ into branching term f and breaking term g.
φ = f – gwall - ggas
Unimolecular Reactions
The term “unimolecular reactions” is often used rather loosely to refer to gas-phase reactions that exhibit first-order kinetics and apparently involve only one chemical species. Examples include the
isomerisation of cyclopropane to propene:
cyclo-C3H6 → CH3CH=CH2    v = kuni[cyclo-C3H6]
And the decomposition of azomethane:
CH3N2CH3 → C2H6 + N2 v = kuni[CH3N2CH3]
The key question that arose in the early years of the century is how the molecules acquired sufficient energy to react, since the first-order kinetics appeared to preclude a bimolecular activation
step. In 1922, Lindemann proposed the following mechanism:
A + M ⇌ A* + M
A* → P
Where A* is an energised molecule and M is a collision partner that provides A with sufficient energy to react. Note that the Lindemann mechanism for decomposition reactions is the reverse of the
association reaction of radicals.
To determine the rate law for this mechanism, put A* in steady-state:
d[A*]/dt = k1[A][M] - k-1[A*][M] - k2[A*] = 0
so [A*] = k1[A][M] / (k-1[M] + k2), and rate = k2[A*] gives
kuni = k1k2[M] / (k-1[M] + k2)
Comparing this equation with the experimental laws, one predicts that the rate “constant” kuni should not be a constant at all! At high pressures, however, k-1[M] >> k2 and the predicted rate law
reduces to the experimental rate law with a rate constant k∞ = k1k2/k-1. At low pressures where k-1[M] << k2, the predicted rate law becomes second-order, reflecting the bimolecular nature of the
activation step which has now become rate-determining. The change over the second-order kinetics at low pressures was first observed by Ramsperger in 1927.
The Lindemann mechanism predicts that a double reciprocal plot of 1/kuni vs. 1/[A] should be linear.
While the essential activation process embodied in the Lindemann mechanism is generally accepted, this model does have a number of serious failings. See Reaction Dynamics Notes for more sophisticated treatments.
Enzyme Kinetics
The rate of enzymatic reactions is often found empirically to follow the Michaelis-Menten Equation:
v = vmax[S] / (KM + [S])
where S is the substrate and KM is called the Michaelis constant. The maximum rate, vmax, is found to be linearly proportional to the total concentration of enzyme:
vmax = kcat[E]o
kcat is the turnover number (maximum number of molecules of substrate that each molecule of enzyme can “turn over” per second). Simplest mechanism consistent with this rate law:
E + S ⇌ ES,  ES → P + E
E = enzyme, S = substrate, ES = enzyme-substrate complex, P = product.
Apply SSA to ES: d[ES]/dt = k1[E][S] - (k-1 + k2)[ES] = 0
Rate of Reaction: v = k2[ES]
Expressing the (unknown) concentration of free enzyme, [E], in terms of the total concentration of enzyme [E]o = [E] + [ES] yields the rate law:
v = k2[E]o[S] / (KM + [S])
This rate law has the same form as the Michaelis-Menten Equation if we identify kcat with k2 and KM with (k2+k-1)/k1.
Several standard plots exist for obtaining vmax and KM from kinetic data on enzymatic reactions, the simplest being a double reciprocal or Lineweaver-Burk plot. Usually the initial rate method is employed
since it avoids any complications resulting from reactions of the products.
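As a sketch in R with invented initial-rate data, the Lineweaver-Burk analysis is a straight-line fit of 1/v against 1/[S], with intercept 1/vmax and slope KM/vmax:

S <- c(0.5, 1, 2, 5, 10)          # substrate concentrations (hypothetical)
v <- c(0.9, 1.5, 2.2, 3.1, 3.6)   # measured initial rates (hypothetical)
fit  <- lm(I(1/v) ~ I(1/S))
vmax <- 1 / coef(fit)[[1]]        # intercept = 1/vmax
KM   <- coef(fit)[[2]] * vmax     # slope = KM/vmax
vmax; KM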
These notes are copyright Alex Moss, © 2003-present.
I am happy for them to be reproduced, but please include credit to this website if you're putting them somewhere public!
Returns the last element in the sequence.
template <typename Sequence>
typename result_of::back<Sequence>::type
back(Sequence& seq);
template <typename Sequence>
typename result_of::back<Sequence const>::type
back(Sequence const& seq);
Return type: Returns a reference to the last element in the sequence seq if seq is mutable and e = o, where e is the last element in the sequence, is a valid expression. Else, returns a type
convertible to the last element in the sequence.
Precondition: empty(seq) == false
Semantics: Returns the last element in the sequence.
#include <boost/fusion/sequence/intrinsic/back.hpp>
#include <boost/fusion/include/back.hpp>
vector<int, int, int> v(1, 2, 3);
assert(back(v) == 3);
Factor Analysis: a means for theory and instrument development in support of construct validity
Mohsen Tavakol^1 and Angela Wetzel^2
^1School of Medicine, Medical Education Centre, the University of Nottingham, UK
^2School of Education, Virginia Commonwealth University, USA
Submitted: 13/10/2020; Accepted: 25/10/2020; Published: 06/11/2020
Int J Med Educ. 2020; 11:245-247; doi: 10.5116/ijme.5f96.0f4a
© 2020 Mohsen Tavakol & Angela Wetzel. This is an Open Access article distributed under the terms of the Creative Commons Attribution License which permits unrestricted use of work provided the
original work is properly cited. http://creativecommons.org/licenses/by/3.0
Factor analysis (FA) allows us to simplify a set of complex variables or items using statistical procedures to explore the underlying dimensions that explain the relationships between the multiple
variables/items. For example, to explore inter-item relationships for a 20-item instrument, a basic analysis would produce 400 correlations; it is not an easy task to keep these matrices in our
heads. FA simplifies a matrix of correlations so a researcher can more easily understand the relationship between items in a scale and the underlying factors that the items may have in common. FA is
a commonly applied and widely promoted procedure for developing and refining clinical assessment instruments to produce evidence for the construct validity of the measure.
In the literature, the strong association between construct validity and FA is well documented, as the method provides evidence based on test content and evidence based on internal structure, key
components of construct validity.^1 From FA, evidence based on internal structure and evidence based on test content can be examined to tell us what the instrument really measures - the intended
abstract concept (i.e., a factor/dimension/construct) or something else. Establishing construct validity for the interpretations from a measure is critical to high quality assessment and subsequent
research using outcomes data from the measure. Therefore, FA should be a researcher’s best friend during the development and validation of a new measure or when adapting a measure to a new
population. FA is also a useful companion when critiquing existing measures for application in research or assessment practice. However, despite the popularity of FA, when applied in medical
education instrument development, factor analytic procedures do not always match best practice.^2 This editorial article is designed to help medical educators use FA appropriately.
The Applications of FA
The applications of FA depend on the purpose of the research. Generally speaking, there are two main types of FA: Exploratory Factor Analysis (EFA) and Confirmatory Factor Analysis (CFA).
Exploratory Factor Analysis
Exploratory Factor Analysis (EFA) is widely used in medical education research in the early phases of instrument development, specifically for measures of latent variables that cannot be assessed
directly. Typically, in EFA, the researcher, through a review of the literature and engagement with content experts, selects as many instrument items as necessary to fully represent the latent
construct (e.g., professionalism). Then, using EFA, the researcher explores the results of factor loadings, along with other criteria (e.g., previous theory, Minimum average partial,^3 Parallel
analysis,^4 conceptual meaningfulness, etc.) to refine the measure. Suppose an instrument consisting of 30 questions yields two factors - Factor 1 and Factor 2. A good definition of a factor as a
theoretical construct is to look at its factor loadings.^5 The factor loading is the correlation between the item and the factor; a factor loading of more than 0.30 usually indicates a moderate
correlation between the item and the factor. Most statistical software, such as SAS, SPSS and R, provide factor loadings. Upon review of the items loading on each factor, the researcher identifies
two distinct constructs, with items loading on Factor 1 all related to professionalism, and items loading on Factor 2 related, instead, to leadership. Here, EFA helps the researcher build evidence
based on internal structure by retaining only those items with appropriately high loadings on Factor 1 for professionalism, the construct of interest.
It is important to note that, often, Principal Component Analysis (PCA) is applied and described, in error, as exploratory factor analysis.^2^,^6 PCA is appropriate if the study primarily aims to
reduce the number of original items in the intended instrument to a smaller set.^7 However, if the instrument is being designed to measure a latent construct, EFA, using Maximum Likelihood (ML) or
Principal Axis Factoring (PAF), is the appropriate method.^7 These exploratory procedures statistically analyze the interrelationships between the instrument items and domains to uncover the unknown
underlying factorial structure (dimensions) of the construct of interest. PCA, by design, seeks to explain total variance (i.e., specific and error variance) in the correlation matrix. The sum of the
squared loadings on a factor matrix for a particular item indicates the proportion of variance for that given item that is explained by the factors. This is called the communality. The higher the
communality value, the more the extracted factors explain the variance of the item. Further, the mean score for the sum of the squared factor loadings specifies the proportion of variance explained
by each factor. For example, assume four items of an instrument have produced Factor 1, factor loadings of Factor 1 are 0.86, 0.75, 0.66 and 0.58, respectively. If you square the factor loading of
items, you will get the percentage of the variance of that item which is explained by Factor 1. In this example, the first principal component (PC) for item1, item2, item3 and item4 is 74%, 56%, 43%
and 33%, respectively. If you sum the squared factor loadings of Factor 1, you will get the eigenvalue, which is 2.1 and dividing the eigenvalue by four (2.1/4= 0.52) we will get the proportion of
variance accounted for Factor 1, which is 52 %. Since PCA does not separate specific variance and error variance, it often inflates factor loadings and limits the potential for the factor structure
to be generalized and applied with other samples in subsequent study. On the other hand, Maximum likelihood and Principal Axis Factoring extraction methods separate common and unique variance
(specific and error variance), which overcomes the issue attached to PCA. Thus, the proportion of variance explained by an extracted factor more precisely reflects the extent to which the latent
construct is measured by the instrument items. This focus on shared variance among items explained by the underlying factor, particularly during instrument development, helps the researcher
understand the extent to which a measure captures the intended construct. It is useful to mention that in PAF, the initial communalities are not set at 1s, but they are chosen based on the squared
multiple correlation coefficient. Indeed, if you run a multiple regression to predict say item1 (dependent variable) from other items (independent variables) and then look at the R-squared (R2),
you will see that R2 is equal to the initial communality of item1 used in PAF.
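The loading arithmetic above, and the PCA-versus-EFA contrast, can be reproduced in a few lines of R; factanal() is base R's maximum likelihood factor analysis, and the item data here are simulated, so the output is illustrative only:

loadings <- c(0.86, 0.75, 0.66, 0.58)
loadings^2            # proportion of each item's variance explained by Factor 1
sum(loadings^2)       # the eigenvalue, about 2.1
sum(loadings^2) / 4   # proportion of total variance for Factor 1, about 52%

set.seed(1)
f <- rnorm(200)                                        # a latent factor
items <- sapply(1:4, function(i) 0.7 * f + rnorm(200, sd = 0.5))
prcomp(items, scale. = TRUE)     # PCA: components account for total variance
factanal(items, factors = 1)     # ML EFA: models common variance only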
Confirmatory Factor Analysis
When prior EFA studies are available for your intended instrument, Confirmatory Factor Analysis extends on those findings, allowing you to confirm or disconfirm the underlying factor structures, or
dimensions, extracted in prior research. CFA is a theory or model-driven approach that tests how well the data “fit” to the proposed model or theory. CFA thus departs from EFA in that researchers
must first identify a factor model before analysing the data. More fundamentally, CFA is a means for statistically testing the internal structure of instruments and relies on the maximum likelihood
estimation (MLE) and a different set of standards for assessing the suitability of the construct of interest.^7^,^8
Factor analysts usually use the path diagram to show the theoretical and hypothesized relationships between items and the factors to create a hypothetical model to test using the ML method. In the
path diagram, circles or ovals represent factors. A rectangle represents the instrument items. Lines (→ or ↔) represent relationships between items. No line, no relationship. A single-headed arrow
shows the causal relationship (the variable that the arrowhead refers to is the dependent variable), and a double-headed shows a covariance between variables or factors.
If CFA indicates the primary factors, or first-order factors, produced by the prior PAF are correlated, then the second-order factors need to be modelled and estimated to get a greater understanding
of the data. It should be noted if the prior EFA applied an orthogonal rotation to the factor solution, the factors produced would be uncorrelated. Hence, the analysis of the second-order factors is
not possible. Generally, in social science research, most constructs assume inter-related factors, and therefore should apply an oblique rotation. The justification for analyzing the second-order
factors is that when the correlations between the primary factors exist, CFA can then statistically model a broad picture of factors not captured by the primary factors (i.e., the first-order
factors).^9 The analysis of the first-order factors is like surveying mountains with zoom-lens binoculars, while the analysis of the second-order factors uses a wide-angle lens.^10 Goodness-of-fit tests need to be conducted when evaluating the hypothetical model tested by CFA. The question is: do the new data fit the hypothetical model? However, the statistical models behind goodness-of-fit tests are complex and extend beyond the scope of this editorial paper; thus, we strongly encourage readers to consult with factor analysts for resources and advice.
Factor analysis methods can be incredibly useful tools for researchers attempting to establish high quality measures of those constructs not directly observed and captured by observation.
Specifically, the factor solution derived from an Exploratory Factor Analysis provides a snapshot of the statistical relationships of the key behaviors, attitudes, and dispositions of the construct
of interest. This snapshot provides critical evidence for the validity of the measure based on the fit of the test content to the theoretical framework that underlies the construct. Further, the
relationships between factors, which can be explored with EFA and confirmed with CFA, help researchers interpret the theoretical connections between underlying dimensions of a construct, even extending to relationships across constructs in a broader theoretical model. However, studies that do not apply recommended extraction, rotation, and interpretation practices in FA risk drawing faulty conclusions about the validity of a measure. As measures are picked up by other researchers and applied in experimental designs, or by practitioners as assessments in practice, the application of measures with subpar evidence for validity produces a ripple effect across the field. It is incumbent on researchers to ensure best practices are applied, or to engage with methodologists for support and consultation where there are gaps in knowledge of methods. Further, it remains important to critically evaluate measures selected for research and practice, focusing on those that demonstrate alignment with best practice for FA and instrument development.^7^, ^11
Conflicts of Interest
The authors declare that they have no conflicts of interest.
1. Nunnally J, Bernstein I. Psychometric theory. New York: McGraw-Hill; 1994.
2. Wetzel AP. Factor analysis methods and validity evidence: a review of instrument development across the medical education continuum. Acad Med. 2012; 87: 1060-1069.
3. Bandalos DL, Boehm-Kaufman MR. Four common misconceptions in exploratory factor analysis. In: Lance CE, Vandenberg RJ, editors. Statistical and methodological myths and urban legends: doctrine, verity and fable in the organizational and social sciences. New York: Routledge Taylor & Francis Group; 2009.
4. Horn JL. A rationale and test for the number of factors in factor analysis. Psychometrika. 1965; 30: 179-185.
5. Royce JR. Factors as theoretical constructs. In: Jackson DN, Messick S, editors. Problems in human assessment. New York: McGraw-Hill; 1963.
6. Cattell R. The scientific use of factor analysis in behavioral and life sciences. New York: Plenum Press; 1978.
7. Tabachnick BG, Fidell LS. Using multivariate statistics. Boston: Pearson; 2013.
8. Floyd FJ and Widaman KF. Factor analysis in the development and refinement of clinical assessment instruments. Psychological Assessment. 1995; 7: 286-299.
9. Gorsuch R. Factor analysis. Hillsdale, NJ: Erlbaum; 1983.
10. McClain AJ. Hierarchical analytic methods that yield different perspectives on dynamics: aids to interpretation. In: Thompson B, editor. Advances in social science methodology. Greenwich, CT: JAI Press; 1996.
11. American Educational Research Association, American Psychological Association & National Council on Measurement in Education. Standards for educational and psychological testing. Washington, DC: American Educational Research Association; 2014.
Softmax parameterisation and optimisation
The softmax function provides a convenient parameterisation of the probability distributions over a fixed number of outcomes. Using the softmax, such probability distributions can be learned
parametrically using gradient methods to minimise the cross-entropy (or equivalently, the Kullback-Leibler divergence) to observed distributions. This is equivalent to maximum likelihood learning
when the distributions to be learned are one-hot (i.e. we are learning for a classification task). In the notes below, the softmax parameterisation and the gradient updates with respect to the cross
entropy are derived explicitly.
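Before turning to the derivation, a minimal numpy sketch of the result may help: the gradient of the cross-entropy with respect to the softmax's logits is simply the parameterised distribution minus the observed one.

import numpy as np

def softmax(z):
    z = z - z.max()            # shift logits for numerical stability
    e = np.exp(z)
    return e / e.sum()

z = np.array([1.0, 2.0, 0.5])  # logits, i.e. the softmax parameters
q = np.array([0.0, 1.0, 0.0])  # observed one-hot distribution

p = softmax(z)
grad = p - q                   # gradient of H(q, softmax(z)) w.r.t. z
z -= 0.1 * grad                # one gradient-descent step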
This material spells out section 4 of the paper of Bridle referenced below, where the softmax was first proposed as an activation function for a neural network. Moreover, it was in this paper that the softmax was named; the name contrasts the outputs of the function with those of the "winner-takes-all" function, whose outputs are one-hot distributions.
Bridle, J.S. (1990a). Probabilistic Interpretation of Feedforward Classification Network Outputs, with Relationships to Statistical Pattern Recognition. In: F.Fogleman Soulie and J.Herault (eds.),
Neurocomputing: Algorithms, Architectures and Applications, Berlin: Springer-Verlag, pp. 227-236.
Explore mode#
The “explore” parts mode allows the creation of adaptive questions.
Rather than showing the student a fixed list of parts that they must answer in sequence, explore mode presents the student with a single part at a time. The student is shown a list of options for
“next parts” to navigate to. The available parts can vary depending on the student’s interaction with the current part - you could offer a hint before the student submits their answer, or only offer
a certain path if the student answers the part correctly.
When the student moves to another part, you can update the question's variables using data from the student's answer to the current part. With this, you can create powerful, adaptive explorations.
Use cases#
Here are some of the use cases that explore mode was designed for:
• Offer a selection of hints at varying degrees of helpfulness.
• Walk the student through an interactive algorithm, giving feedback on each step.
• Allow the student to choose the method they want to use to solve a problem.
• Take free input from the student, such as measurements or an example of an object, then ask them questions about it.
• Ask the student to define the criteria for a test, then assess their decisions based on those criteria.
Parts in explore mode#
In the editor, you define one or more question parts. The first part in the list is the one that students are shown when they start the question.
These are definitions of parts; when the student moves to a particular part, an instance of it is created, using the current values of the question variables. There can be more than one instance of a part: when the student takes a "next part" option that they haven't taken before, an entirely new instance of that part is created, and any existing instances are unaffected.
In explore mode, the question statement is always visible above the current part. The statement is not updated to reflect variable replacements when you move to another part.
The student’s scores for each part they visit are collected into pre-defined Objectives. The student’s total score for the question is the sum of their objectives minus any penalties accrued for
visiting parts, or the question’s Maximum mark, whichever is lower.
Click on Explore mode options at the top of the parts list to set up the question’s objectives and penalties.
Maximum mark#
The maximum mark the student can be awarded for this question. If the total obtained by adding up the scores for the objectives and taking away penalties exceeds this amount, this amount is
awarded instead.
Show objectives#
If Always is chosen, all objectives are shown in the score breakdown table.
If When active is chosen, only objectives corresponding to parts that the student has visited are shown.
Show penalties#
If Always is chosen, all penalties are shown in the score breakdown table.
If When active is chosen, only penalties which have been applied are shown.
Objectives#
Each objective has a Name, which is shown to the student, and a Limit. Students can accumulate marks toward an objective up to the limit.
Use the limit to restrict how many marks the student can earn for performing a certain task.
Penalties#
Each penalty has a Name, which is shown to the student, and a Limit. Each time the student chooses a next part option which applies a penalty, the defined number of marks is added to the corresponding penalty, up to the limit.
The penalty is not re-applied each time the student revisits an instance of a part.
Use the limit to avoid over-penalising the student for taking a particular option repeatedly.
Next parts#
Each part has a Next parts tab, where you define which parts the student can visit next.
To add an option, click the Add a next part option button, and select a part.
For each "next part" option, you can define a condition for its availability, a list of variable replacements to make when chosen, and an optional penalty to apply when the student chooses this option.
The student can navigate back to previous parts at any time, using the navigation tree at the top of the question. If the student changes their answer to a previous part, this could invalidate any
next parts they have chosen, so all instances of next parts which use the student’s answer in variable replacements are removed when the student changes their answer.
Suggest going back to the previous part?#
This option applies to the current part. A button labelled Go back to the previous part will be shown at the end of the part, at the top of the list of next part options. Use this if the current
part is a dead end, such as a standalone hint, and the student should proceed by going back to the previous part and choosing another option.
Label#
The label on the button shown to the student. If you leave this blank, the next part's name is used. You might want to change the label so you don't reveal the destination, or to differentiate
two options which lead to the same part.
Lock this part?#
If ticked, the current part will be locked when the student chooses this next part option. The student will not be able to change or resubmit their answer to this part.
If not ticked, the student can come back to this part and change their answer.
Use this if a subsequent part would reveal information which the student could use to improve their answer to this part, and you don’t want them to do that.
Availability#
Define when the option is available to the student.
□ Always - always available.
□ When answer submitted - available once the student has submitted a valid answer to this part, whether it’s correct or not
□ When unanswered or incorrect - available if the student hasn’t submitted an answer, or if they’ve submitted an incorrect answer. Unavailable once they submit a correct answer.
□ When incorrect - available after the student submits an incorrect answer.
□ When correct - available once the student submits a correct answer.
□ Depending on expression - available if the Available if expression evaluates to true.
Available if#
This field is only shown when Availability is set to Depending on expression.
Write a JME expression which evaluates to true when the option should be available to the student, and false otherwise.
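For example, a hypothetical condition such as the following (a sketch; it assumes that values named answered and credit, reflecting the part's submission status and awarded credit, are among the variables defined for this expression) would offer a hint only while the student has not yet earned full credit:

answered and credit < 1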
The following variables are defined during the evaluation of this expression:
Penalty to apply when visited#
If you want to apply a penalty when the student chooses this option, select the name of a penalty here.
Amount of penalty#
The number of marks to add to the chosen penalty.
Only shown if Penalty to apply when visited is not “None”.
Show penalty hint?#
If ticked, the label of this option will have a hint of the form "(lose N marks)" added on the end, describing the number of marks that will be added to the chosen penalty when this option is chosen.
Variable replacements#
When the student selects a next part option, you can replace the values of question variables before the part instance is created. These changes only affect the next part, not the current one.
Here are some examples of what you can do with variable replacements:
• Track the number of times a student has visited a certain part. For example: replace n with n+1.
• Replace a question variable with the student’s answer. For example: ask them to give a number which you’ll later ask them to factorise; ask them to enter measurements from an experiment.
• Update the state of a simulation. For example: when factorising the number n, the student enters a factor and you replace n with n/interpreted_answer.
Click Add a variable replacement to define a new variable replacement.
For each replacement, you must select the name of the variable you want to replace, and then define what it’s replaced with, from the following options:
Student’s answer#
The student’s answer to this part, drawn from the interpreted_answer marking note.
Credit awarded#
The amount of credit awarded to the student for this part, a number between 0 and 1.
JME expression#
The variable’s value is replaced with the result of the given JME expression.
The following variables are defined during the evaluation of the expression:
□ all question variables;
□ the values of any marking notes produced by this part’s marking algorithm.
Identifying the current part in JME#
Every question part has a unique path, which can be used to identify it.
In explore mode, a part instance’s path can’t be known until it’s created, so each part instance defines a variable part_path, which can be used while substituting values into content areas or in
marking algorithms.
Using filters with Fixed LODs causes problems for many users.
Here we will see:
• how to write a LOD
• when and how to apply Context or Dimension filters
• the effect filters have in 2 common use cases
Probably the best place to start is with an understanding of what Fixed does and then move on to how they are affected by filters.
Think of your data as a pyramid. The base of the pyramid is the data you uploaded. The pointed top is the total of all the individual records. Fixed LODs create layers in your data set that are
in-between the lowest level and the top of the pyramid.
You decide where to place the layers using a combination of dimensions and an aggregation (while Sum() is the usual aggregation, you can use avg(), max(), stdev(), or any of the available variety).
The calculation returns a value that is an aggregation of lower-level data, but the field itself is not an aggregate. Similar to a subtotal, it can be used in any other type of calculation.
Let's look at the syntax and then some filtering examples. (I will be using Superstore data in the examples, with 3 dimensions from that data: Region (4 values), Segment (3 values), and Category (3 values) – there are 10,000 records in the Superstore dataset.)
{Fixed Region, Segment,Category: Sum(Sales)}
The LOD is always enclosed in curly brackets {} and uses the keyword Fixed. That is followed by a string of dimension names and a colon. The combination of dimensions determines the level in the pyramid you need – in this case there are 3 segments, 4 regions, and 3 categories, so 3×4×3 will result in 36 total points.
The value to be stored in those points is determined by the expression that follows the colon – in this case, the sum(Sales).
A column has been added to the dataset for the LOD. For each of the 10,000 records in the Superstore dataset, the value in that column will be one of the 36 values determined by the combination of Region, Segment, and Category in the record. (The 36 values below)
OK – Now let's see how Fixed works and how it is affected by filters
The sequence for the application of filters is controlled by the Order of Operations.
Fixed LODs are calculated in Step 4, adding a new column of data to the data set.
Context filters are applied in Step 3 and filter at the detail record level (the 10,000 records in the Superstore data set for example) BEFORE the LOD is calculated. Since the data is filtered out
before aggregation, it is not in the LOD (Note Context filters appear as Gray pills on the filter shelf)
Dimension filters are applied in Step 5 AFTER the LOD has been calculated and eliminate the entire row of data from the data table including the aggregated LOD value. (note Dimension filters appear
as Blue pills on the filter shelf)
While the rules around Context and Dimension filter application seem simple enough, their application can be confusing.
There are a limitless number of examples that can be made using LOD expressions. Here we are only looking at 2 – the percent to total and the max date. We will look at the expression at different levels and see what happens when filters are applied.
1 Overall (Table) LOD
We'll start with an overall Fixed statement that is the equivalent of the total for all records in the data set – the expression is useful to find the percent to total or to find the max value in the
data set or the latest date
It can be written in two ways (they are equivalent and will return the same result:
{Fixed : Sum(Sales)}
Or you can use what is known as a Table LOD that looks at the entire data table
The expression returns a single value.
If we add other dimensions to the rows or columns, the value of the expression is unchanged:
Let's see what happens when we add a filter on Region (for a complete discussion on filters see 6 Types of filters and how they affect the data table)
First, as a Dimension filter (Blue pill Not in Context) – Dimension filters are applied after the Fixed LOD is calculated so "East" has been filtered out of the view but the LOD total remains
Filters on Dimensions in Context (Gray pill) are applied before the LOD is calculated – filter East removes the region from the view and the LOD value now excludes the East region
The same can be seen when filtering dimensions not in the view. Here Category is not in the view –
Category is applied as a Dimension filter (Blue pill) and the value of the LOD is the original total in all regions and segments
Now placing Category in Context and filtering out Office Supplies before the LOD is calculated changes the value to:
That is all you need to remember about filters and their effect on LODs – if you want the filter applied before the LOD is calculated then place the Dimension into Context. If you don't want the LOD
to reflect the application of the filter do NOT place the filter in Context –
2 Overall LOD use case examples
Let's see how the overall LOD can be used to determine the percent of total or the latest sales value
To calculate the percent of total we need the total sales value for the denominator
{ FIXED :sum([Sales])}
and the numerator is just the sales value – but notice LODs are not aggregates, so we need to use sum() in both the numerator and the denominator
sum([Sales])/sum([1 Fixed sales])
as expected, applying a Context filter on East will apply the filter before the LOD is calculated and the percent of total is based on the 3 remaining regions
2 Latest Date
The latest date in the data table can be found using a Table LOD
{max([Order Date])}
and then applying a conditional statement based on the last date
if [Order Date]=[1 latest date in the data set] then [Sales] end
3 Add a Dimension to the LOD- 1 level
When you add Dimensions to the LOD, you are creating a virtual layer – like a subtotal on a spreadsheet – which you can then use in any other calculation –
{ FIXED [Region]: sum([Sales])}
The LOD will create totals at the 4 regions and store those totals so you can use them in other calculations
The starting point is the same – with no filters applied to Region, Segment, or Category, the values and Grand Total are the same as the original
Let's see how filtering affects each LOD.
First applying a Dimension filter (i.e. NOT in context) on Region will filter the East region out of the viz but will not change the overall LOD value
Adding the Region dimension to Context and filtering out East happens before the LODs are calculated, so it will affect the overall LOD value but not the Region-based LOD – those values are already at the region level
Applying filters to Dimensions not in the view, as before, will affect all LODs when the Dimension is in context
If the Dimension is NOT in Context (a Dimension filter) then neither the region nor overall LOD value is changed – the filter was applied after the LODs were calculated
OK, let's revisit the percent of total and latest date LOD calculation but use the Region LOD to see how they are affected
4 Percent of total – Region LOD
To see the effect I have added the Category Dimension to the viz on Columns –
The percent to the overall total is:
sum([Sales])/sum({ FIXED :sum([Sales])})
And to calculate the percent to total at the region level Region has been added to the LOD in the denominator
sum([Sales])/sum({ FIXED [Region]: sum([Sales])})
the percentages overall and in the region with no filtering applied are:
Applying a Dimension Filter (Blue pill – not in Context) to filter out East leaves all the individual percent to totals unchanged – East has been filtered out of the view and the total of the 3
remaining regions is 70% of the overall unfiltered total sales – remember East was filtered out after the LOD was calculated so the East region LOD values are not in the total
If the filter on Region is placed in Context then East is filtered out before the LOD is calculated so the Overall percent to totals will change but the Regional percentages will not (East is
filtered out of the view but there is no effect to the remaining regions
Let's see what happens when we filter on Category, not in context
When the Category filter is placed in Context then Office Supplies is filtered out before the LODs are calculated and all values change
6 Latest Date
Similarly, LODs can be used to find the latest date by region – use this:
{ FIXED [Region]: max([Order Date])}
and filters can be applied in or out of Context just like in the numeric calculations above
With no filters applied, the LOD just returns the last date for any record in the Region regardless of the Category or Segment (i.e. the "Fixed" latest record in the Region)
As expected if the Dimension filters are not in context then filters are applied after the LOD is calculated and the dates are unchanged but the view is changed
When the dimensions are placed in Context the latest dates now reflect the last record for the Segment, Region combination
If we add a filter directly on date but the filter is not in Context then it is applied after the LOD is calculated and the last dates in the data set are returned
but if all filters are placed in Context then the LOD will filter out 2022 and return the last date by region and segment resulting:
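Putting the pieces together, here is a sketch (using the same Superstore fields, as an illustration rather than a prescription) of a calc that keeps only the sales recorded on each region's latest order date:

if [Order Date] = { FIXED [Region]: max([Order Date])} then [Sales] end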
7 Filtering with multiple dimensions in the LOD
I would like to do one more example. This one with 2 dimensions in the LOD:
{ FIXED [Region],[Segment]:sum([Sales])}
There are 4 regions and 3 segments so 12 values are returned by the expression:
if we drop in Category the 12 values are repeated for each value of the Category dimension – as expected:
We can start adding filters, first on Region and Segment NOT in Context –
The filters are applied after the LOD is calculated – only 6 values are visible in the views but the individual values remain unchanged – (remember when you create a LOD the aggregation is made at
the level of the combination of dimensions and stored for later use)
The Grand Totals have changed because they are the sum of the 6 LOD values in the view.
Now place the Segment and Region filters in Context – What happens?
There are the same 6 values and the Grand Totals are the same – Why would that be?
The LOD is calculated at the Region / Segment level and the values are independent of each other. Filtering out the base records before the LOD is calculated or filtering out the aggregates after the LOD is evaluated will return the same result.
Now see what happens if we add a filter on a dimension that is not in the LOD dimension list – here Category
The starting point is the same – without any filters applied to any dimension the 12 values are unchanged:
When we apply a Dimension Filter (Blue pill) not in Context to dimension:
The filter on Category is applied after the LOD is calculated so there is no change in the output
But if the filter is placed in Context (Gray pill) and the individual records for Office Supplies are filtered out before the LOD is evaluated then all the LOD values are affected
Please experiment with other combinations of filters in and out of Context – they all follow the same rule – when the Dimension is in Context (Gray pills) the individual records are filtered out before the LOD is calculated – if applied as Dimension filters (Blue pills) they are applied to the results of the LOD after it is evaluated
We could carry this on by adding more dimensions to the LODs and creating layers deeper in the dataset, but the effect is the same.
Hope this helps clear up how to use filters with Fixed LODs. It takes some practice so get busy –
Also – see the VizConnect Data Dr recording at Filtering Fixed LODs
The workbook containing all the examples used here can be downloaded from my Tableau Public site at Download filtering workbook
4 Responses
1. Hi Jim, thank you for this post!
I have an LOD filtering question but I think is a bit different from what you got here.
I am trying to SUM a measure and generate a T/F from it if it is >=0. My calculation:
{ INCLUDE [Service SKU Nbr], [Contract Nbr]:
IF SUM([Contract Line Amt])<=0
THEN FALSE
ELSE TRUE
END }
The calculation seems correct but when I add it to my filter's card, then I get a measure instead of T/F and I have to select Min/Max, etc. which doesn't really filter the view properly.
Any suggestions to get T/F when dragging the calc to filters? Thank you so much for your help!
1. Do not use include – use Fixed – if that doesn't solve your issue post the question and your workbook on the Tableau Community Forums
2. What if you have two fixed calculations, one that requires filters, and one that doesn't? How do you make it work without using context filters since the context filter will apply the filtering
to both? Also both need to be fixed calculations because of fields in the calculation that need to be fixed. Is there a way to make that work?
1. you use a fixed lod on one and an include on the other and do NOT place the filter in Context – see https://jimdehner.com/2022/06/03/just-what-does-include-or-exclude-do/ the 3rd example
Certainty-based Preference Completion
As it is from time to time impractical to ask agents to provide linear orders over all alternatives, it is necessary to conduct preference completion for the resulting partial rankings. Specifically, the personalized preference of each agent over all the alternatives can be estimated with partial rankings from neighboring agents over subsets of alternatives. However, since the agents' rankings are nondeterministic, where they may provide rankings with noise, it is necessary and important to conduct certainty-based preference completion. Hence, in this paper, firstly, for alternative pairs with the obtained ranking set, a bijection is built from the ranking space to the preference space, and the certainty and conflict of alternative pairs are evaluated with a well-built statistical measurement, the Probability-Certainty Density Function, on subjective probability. Then, a certainty-based voting algorithm based on certainty and conflict is applied to conduct the certainty-based preference completion. Moreover, the properties of the proposed certainty and conflict are studied empirically, and the proposed approach to certainty-based preference completion for partial rankings is experimentally validated against state-of-the-art approaches on several datasets.
1. INTRODUCTION
In a preference completion problem, with a set of agents (users) and a set of alternatives (items), each agent (user) has his/her partial ranking over a subset of alternatives (items), and the goal of the problem is to infer each agent (user)'s personalized ranking or preference over all the alternatives (items), including those the agent (user) has not yet handled. Obviously, it is often impractical to ask agents to provide linear orders over all alternatives, especially in big data environments [1]. For example, perhaps the agent does not know the status of
some alternatives because there are too many alternatives, which makes it hard for the agent to rank all of them. Or perhaps some alternatives are incomparable for a certain agent. All these
situations mentioned above result in partial rankings, and it is necessary to introduce preference completion.
The preference completion problem has been applied in many areas, such as social choice and recommender systems [2], and can be very useful in community detection [3, 4] or graph anomaly detection [5]. For example, in social choice, each voter (agent) can cast a ballot as a ranking over all candidates (alternatives), or as a partial ranking over some candidates (alternatives). For these partial rankings, it is necessary to form a ranking over all candidates by a certain voting rule. In a recommendation system, each user can rate some items; the task of the recommendation system is then to predict the ratings of the items that have not been rated by him/her. To satisfy this requirement, two common approaches, the matrix factorization approach and the neighborhood-based approach, are introduced to handle the preference completion. The traditional algorithms in these two approaches are usually rating-oriented, while a recent line of work focuses on ranking-oriented algorithms [6, 7] due to the drawbacks of the rating-oriented ones. In this paper, we focus on the ranking-oriented neighborhood-based approach.
Traditionally, in neighborhood-based preference completion, one first finds the near neighbors of each agent and then aggregates these neighbors' rankings to produce the predicted preference by a certain voting rule [6]. However, this task has some inevitable issues. For example, an agent may exhibit irrational behaviors or provide rankings in a noisy setting. To address this issue, many rating-oriented trust-based approaches have been proposed with additional contextual information. Meanwhile, the ranking-oriented approach has left much room for better research. Liu et al. [8] proposed an anchor-based algorithm leveraging many other agents' ranking information to reduce the effect of randomness.
Here, in this paper, a certainty-based preference completion algorithm is proposed on the basis of Liu's [8] work. More precisely, after finding the k-nearest neighbors by the anchor-kNN algorithm Liu proposed, we use the certainty-based voting algorithm introduced in this paper to complete the preference (ranking) instead of using the traditional majority voting rule. The traditional majority voting rule tends to cause wrong judgments, especially when both sides have close votes; in this case, even slight randomness can flip the outcome. For this reason, this paper introduces a certainty-based voting algorithm to deal with this problem. Importantly, when we take a vote on two alternatives, the certainty, which measures the degree to which the two alternatives can be preferred or compared, should be introduced. Only when the certainty value satisfies a defined threshold do we go further to make a three-way preference decision, instead of simply assigning 0 or 1 to the two alternatives. Hence, the certainty-based voting algorithm avoids the wrong judgments made when both sides have close scores or when rankings are made in a noisy setting. In this paper, before formulating the certainty and presenting the certainty-based preference completion algorithm, we first consider the certainty and the preference space to introduce the three-way preference between two alternatives.
Technically, in a ranking pool gathered from agents, the rankings including alternative pair A and B can be aggregated to form the preference between A and B. Mathematically, a bijection can be built
from the ranking space to the preference space for alternative pair A and B. Here, the ranking space consists of all the partial rankings on A and B from agents, while the preference space consists
of three-way preference between A and B, which includes
• preference (prefer A to B, denoted as $P_{AB}^{+}$),
• dispreference (prefer B to A, denoted as $P_{AB}^{-}$), and
• uncertainty (no preference between A and B, denoted as $C_{AB}^{-}$),
according to the trisecting and acting models of human cognitive behaviors [1, 9]. Thus, the following three situations are distinguished:
• The agents prefer alternative A to alternative B, which can be confirmed by high preference $P_{AB}^{+}$, low dispreference $P_{AB}^{-}$, and low uncertainty $C_{AB}^{-}$.
• The agents prefer alternative B to alternative A, which can be confirmed by low $P_{AB}^{+}$, high $P_{AB}^{-}$, and low $C_{AB}^{-}$.
• The agents are uncertain about the preference between alternative pair A and B, i.e., A and B are unpreferred, which can be confirmed by low $P_{AB}^{+}$, low $P_{AB}^{-}$, and high $C_{AB}^{-}$.
It is obvious that when $C_{AB}^{-}$ is low, the preference between A and B can be determined, i.e., A and B are preferable. Hence, the certainty of preference, denoted $C_{AB}^{+}$, can be introduced to describe the trustworthiness of the preference, and it can be calculated as $C_{AB}^{+} = 1 - C_{AB}^{-}$. The certainty of preference can be taken as the subjective probability of the preference, following the proposition that certainty is the degree of belief that an individual has in the preference [10]. Hence, in this paper, the certainty can be evaluated based on a well-built statistical measurement, which defines a bijection from the ranking space to the preference space, enabling the estimation of pairwise preference with neighbors' partial rankings by mapping them to
(preference $P_{AB}^{+}$, dispreference $P_{AB}^{-}$, uncertainty $C_{AB}^{-}$).
Our definition of certainty should capture the following key properties:
• Property 1: Certainty $C_{AB}^{+}$ increases as the number of rankings between alternative pair A and B increases, for a fixed ratio of rankings from A to B and rankings from B to A.
• Property 2: Certainty $C_{AB}^{+}$ decreases as the extent of conflict increases in the partial rankings between alternative pair A and B.
Our main contributions in this paper can be summarized as follows:
• As pointed out in [11], it is necessary and important to introduce the certainty and conflict of the preference between alternative pairs, and the certainty and conflict of the preference are often more important than the preference itself. In this paper, a probability-based certainty and conflict are introduced under Properties 1 & 2 to describe the trustworthiness of the preference.
• A certainty-based voting algorithm using the certainty and conflict is proposed for conducting certainty-based preference completion in nondeterministic settings.
• We empirically study the properties of the proposed approach, and experimentally validate the proposed approach against state-of-the-art approaches on several datasets.
This paper is organized as follows. Section 2 reviews existing work on the Plackett-Luce model, the Kendall-Tau distance, and the anchor-kNN algorithm. In Section 3, a bijection is built from the ranking space to the preference space, and the certainty and conflict of alternative pairs are evaluated based on a well-built statistical measurement. In Section 4, a certainty-based voting algorithm is adopted to conduct the preference completion with the certainty and conflict. Section 5 empirically studies the properties of the proposed approach regarding certainty and conflict. Section 6 experimentally validates the proposed approach against the state-of-the-art approaches on several datasets. Finally, Section 7 summarizes this paper and presents future work.
2. BACKGROUND
2.1 Plackett-Luce Model
Given a set of m alternatives and a set of n agents, let $y = (y_1, y_2, \ldots, y_m)$ denote the latent features of the alternatives and $x = (x_1, x_2, \ldots, x_n)$ denote the latent features of the agents. Agent i's ranking $R_i$ is determined by a statistical model for ranking data. Hence, as a widely-used statistical model, the Plackett-Luce model [12, 13] is adopted to generate the rankings of agents. In this paper, each alternative is assigned a positive value named utility. The greater this utility is, the more likely its corresponding alternative is to be ranked at a higher position [14]. In [14], the realized utility of every alternative j for agent i is determined by
$u_{ij}(x_i, y_j) = \theta(x_i, y_j) + \epsilon_{i,j},$
where $\theta(x_i, y_j)$ is agent i's expected utility on alternative j and can be determined by the closeness of the latent features $x_i$ and $y_j$, measured by $\theta(x_i, y_j) = \exp(-\|x_i - y_j\|^2)$, and $\epsilon_{i,j}$ is a zero-mean independent random variable that follows a Gumbel distribution. Once the realized utilities $u_i = (u_{i1}, u_{i2}, \ldots, u_{im})$ of agent i are obtained, agent i ranks the alternatives in decreasing order of realized utility. Repeating this for all n agents generates a synthetic dataset for the experiments. For more details, please refer to Algorithm 1 below.
Sampling from Plackett-Luce Model.
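A compact sketch of Algorithm 1 under the assumptions above (expected utilities from latent features via $\exp(-\|x_i - y_j\|^2)$, additive Gumbel noise, ranking by decreasing realized utility) could look like:

import numpy as np

rng = np.random.default_rng(0)
n_agents, n_alts, d = 5, 20, 3
X = rng.normal(size=(n_agents, d))   # latent agent features x_i
Y = rng.normal(size=(n_alts, d))     # latent alternative features y_j

rankings = []
for i in range(n_agents):
    theta = np.exp(-((X[i] - Y) ** 2).sum(axis=1))  # expected utilities
    u = theta + rng.gumbel(size=n_alts)             # realized utilities
    rankings.append(np.argsort(-u))                 # decreasing utility order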
2.2 Kendall-Tau Distance
Given two agents' rankings $R_1$ and $R_2$ over the same alternatives, the Kendall-Tau distance can be introduced to measure the similarity of $R_1$ and $R_2$; it is the total number of disagreements in pairwise comparisons between alternatives in the linear rankings. For alternative j in $R_i$, $R_i(j)$ represents the position of j in $R_i$. For example, if j is the top-ranked alternative in $R_i$, then $R_i(j) = 1$. The normalized Kendall-Tau distance between $R_1$ and $R_2$ is
$NK(R_1, R_2) = \frac{\sum_{j_1 \neq j_2 \in R_1} I\left(\prod_{k=1,2}\left(R_k(j_1) - R_k(j_2)\right) < 0\right)}{\binom{|R_1|}{2}}$
where $I(v)$ is an indicator that is set to 1 if the argument v is true; otherwise, it is set to 0.
Moreover, if the rankings do not share exactly the same alternatives, the intersection of the two alternative sets can be taken for computing the normalized Kendall-Tau distance.
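The following sketch implements the normalized distance above, with each ranking given as a map from alternative to position and the intersection handling just mentioned:

from itertools import combinations

def normalized_kendall_tau(r1: dict, r2: dict) -> float:
    shared = sorted(set(r1) & set(r2))    # intersect the alternative sets
    disagree = sum(
        1 for a, b in combinations(shared, 2)
        if (r1[a] - r1[b]) * (r2[a] - r2[b]) < 0
    )
    n = len(shared)
    return disagree / (n * (n - 1) / 2)   # divide by |R| choose 2

# e.g. normalized_kendall_tau({'A': 1, 'B': 2, 'C': 3}, {'A': 2, 'B': 1, 'C': 3})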
2.3 Anchor-kNN Algorithm
Before introducing the anchor-kNN algorithm proposed in [8], we first present the idea of KT-kNN, which simply uses the Kendall-Tau distance to find an agent's neighbors. If the Kendall-Tau distance between two rankings $R_i$ and $R_j$ is small, the latent features of the agents, $x_i$ and $x_j$, should be close, i.e., the two agents have similar opinions on the alternatives.
As the KT-kNN algorithm does not consider that agents' preferences may be nondeterministic or that agents' rankings may be made in a noisy setting, anchor-kNN, unlike KT-kNN, uses other agents' (named anchors) ranking data to determine the closeness of two agents rather than considering the two agents' rankings only. Anchor-kNN develops a feature $F_{i,j}$ for agents i and j to represent the Kendall-Tau distance between $R_i$ and $R_j$, i.e., $F_{i,j} = NK(R_i, R_j)$. Then, to measure the closeness of two agents, denoted $D_{i,j}$, we use the sum of the differences between $F_{i,t}$ and $F_{j,t}$ to find the k-nearest neighbors, where t ranges over all the other agents except agents i and j.
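As a sketch (with the sum read as a sum of absolute differences, an assumption on our part), the anchor-based closeness can be written as:

def anchor_distance(F, i, j):
    # F[i][t] is the Kendall-Tau distance NK(R_i, R_t); two agents are close
    # when their distance profiles against all other anchor agents t agree
    return sum(abs(F[i][t] - F[j][t])
               for t in range(len(F)) if t not in (i, j))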
3. CERTAINTY AND CONFLICT OF ALTERNATIVE PAIRS
In this section, let us present some preliminary definitions first. For an arbitrary alternative pair A and B, the certainty can be adopted to describe the trustworthiness of the preference between A and B. Technically, following [15], a Probability-Certainty Density Function (PCDF) can be introduced to capture the subjective probability of the ranking. However, unlike [15], and following [16] and [17], in this paper certainty is defined based on the PCDF so as to satisfy Properties 1 & 2.
3.1 Ranking Space
The ranking space consists of all the weighted partial rankings on the alternative pair A and B from agents, including
• the rankings $\{O_{AB}^{(i)}\}$ where A is ranked ahead of B, with weight $w_{AB}^{(i)}$ for the ranking $O_{AB}^{(i)}$; $n_{AB}$ denotes the accumulated weight of the rankings $\{O_{AB}^{(i)}\}$, given by $n_{AB} = \sum_i w_{AB}^{(i)}$;
• the rankings $\{O_{BA}^{(j)}\}$ where B is ranked ahead of A, with weight $w_{BA}^{(j)}$ for the ranking $O_{BA}^{(j)}$; $n_{BA}$ denotes the accumulated weight of the rankings $\{O_{BA}^{(j)}\}$, given by $n_{BA} = \sum_j w_{BA}^{(j)}$; and
• the unordered ones $\{O_{\overline{AB}}^{(k)}\}$ where A and B are not comparable, with weight $w_{\overline{AB}}^{(k)}$ for the ranking $O_{\overline{AB}}^{(k)}$; $n_{\overline{AB}}$ denotes the accumulated weight of the rankings $\{O_{\overline{AB}}^{(k)}\}$, given by $n_{\overline{AB}} = \sum_k w_{\overline{AB}}^{(k)}$. Obviously, we have $w_{\overline{AB}}^{(k)} = w_{\overline{BA}}^{(k)}$ and $O_{\overline{AB}}^{(k)} = O_{\overline{BA}}^{(k)}$.
Moreover, the weight $w_{AB}^{(i)}$ for $O_{AB}^{(i)}$ reflects the quality of the ranking $O_{AB}^{(i)}$. Without additional knowledge, we assign $w_{AB}^{(i)}$ to be 1.
Definition 1. Ranking space
$\mathcal{O} = \{\langle n_{AB}, n_{BA}, n_{\overline{AB}} \rangle \mid \min\{n_{AB}, n_{BA}, n_{\overline{AB}}\} \geq 0\}.$
3.2 Preference Space
Traditionally, the uncertainty is usually ignored, and sometimes the dispreference is not taken into account either, which leads to some disturbing results, as shown in the empirical study section. According to the trisecting and acting models of human cognitive behaviors [9, 18], the preference space consists of the three-way preference between alternatives, which includes
• preference $P_{AB}^{+}$ (prefer A to B),
• dispreference $P_{AB}^{-}$ (prefer B to A), and
• uncertainty $C_{AB}^{-}$ (no preference between A and B).
Definition 2. Preference space
$\mathcal{P} = \{\langle P_{AB}^{+}, P_{AB}^{-}, C_{AB}^{-} \rangle \mid P_{AB}^{+} + P_{AB}^{-} + C_{AB}^{-} = 1,\ \min\{P_{AB}^{+}, P_{AB}^{-}, C_{AB}^{-}\} \geq 0\}.$
3.3 Certainty of Rankings in Alternative Pairs
Bayesian inference [19, 20] is adopted here to update the probability with the available contextual information about the rankings of alternative pairs, i.e., to update the prior distribution to the posterior distribution [21, 22]. In this paper, offline Bayesian inference is utilized; Bayesian inference can also be applied in online/streaming scenarios [23, 24].
Let $x_{AB}$, $x_{BA}$ and $x_{\overline{AB}}$ be the probabilities of the rankings $\{O_{AB}^{(i)}\}$, $\{O_{BA}^{(j)}\}$ and $\{O_{\overline{AB}}^{(k)}\}$, respectively, where $x_{\overline{AB}} = 1 - x_{AB} - x_{BA}$ and $X = \langle x_{AB}, x_{BA} \rangle$. In addition, $x_{AB} \in [0, 1]$, $x_{BA} \in [0, 1]$ and $x_{\overline{AB}} \geq 0$, and thus $x_{AB} + x_{BA} \leq 1$.
Without any additional information, the prior distribution $f(X)$ is a uniform distribution. As the cumulative probability of a distribution within [0, 1] equals 1, the density of a PCDF has mean value 1 within [0, 1], which makes $f(X) = 1$.
As the ranking sample O conforms to a multinomial distribution [16, 22], the likelihood is
$f(O \mid X) = \frac{(n_{AB} + n_{BA} + n_{\overline{AB}})!}{n_{AB}!\, n_{BA}!\, n_{\overline{AB}}!}\, (x_{AB})^{n_{AB}} (x_{BA})^{n_{BA}} (x_{\overline{AB}})^{n_{\overline{AB}}}$
The posterior distribution $f(X \mid O)$ can then be estimated as [16, 22]:
$f(X \mid O) = \frac{f(O \mid X)\, f(X)}{\int_0^1 f(O \mid X)\, f(X)\, dX} = \frac{(x_{AB})^{n_{AB}} (x_{BA})^{n_{BA}} (x_{\overline{AB}})^{n_{\overline{AB}}}}{\int_0^1 (x_{AB})^{n_{AB}} (x_{BA})^{n_{BA}} (x_{\overline{AB}})^{n_{\overline{AB}}}\, dX}$
Then, the certainty can be determined by the deviation of the posterior distribution from the prior, i.e., uniform, distribution. Hence, we have the following definition of certainty.
Definition 3. The certainty $C_{AB}^{+}$ of the rankings $\langle n_{AB}, n_{BA}, n_{\overline{AB}} \rangle$ can be estimated as
$C_{AB}^{+} = \frac{1}{2} \int_0^1 \left| f(X \mid O) - f(X) \right| dX = \frac{1}{2} \int_0^1 \left| \frac{(x_{AB})^{n_{AB}} (x_{BA})^{n_{BA}} (x_{\overline{AB}})^{n_{\overline{AB}}}}{\int_0^1 (x_{AB})^{n_{AB}} (x_{BA})^{n_{BA}} (x_{\overline{AB}})^{n_{\overline{AB}}}\, dX} - 1 \right| dX$
where the factor $\frac{1}{2}$ removes the double counting of the deviations.
From this definition, we have $C_{AB}^{+} = C_{BA}^{+}$.
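A rough numerical sketch of Definition 3 (a midpoint-grid integration over the region $x_{AB}, x_{BA} \geq 0$, $x_{AB} + x_{BA} \leq 1$, with the uniform prior density taken to be 1 as above) behaves as Properties 1 & 2 demand:

import numpy as np

def certainty(n_ab, n_ba, n_un, steps=400):
    h = 1.0 / steps
    xs = (np.arange(steps) + 0.5) * h
    xa, xb = np.meshgrid(xs, xs, indexing="ij")
    mask = xa + xb < 1.0
    dens = np.where(mask, xa**n_ab * xb**n_ba * (1.0 - xa - xb)**n_un, 0.0)
    dens /= dens.sum() * h * h      # normalize the posterior to integrate to 1
    # half the total deviation of the posterior from the uniform prior
    return 0.5 * np.abs(dens[mask] - 1.0).sum() * h * h

print(certainty(8, 2, 0))   # many, mostly agreeing rankings: higher certainty
print(certainty(1, 1, 0))   # few, conflicting rankings: lower certainty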
3.4 Conflict of Rankings in Alternative Pairs
The conflict can be determined by the relative difference between the weighted rankings $n_{AB}$ and $n_{BA}$, as in [17]. More specifically,
• the conflict is largest when the weighted rankings satisfy $n_{AB} = n_{BA}$;
• the conflict is smallest when $n_{AB} = 0$ or $n_{BA} = 0$.
Hence, we have the following definition of conflict.
Definition 4. The conflict $c_{AB}$ of the rankings $\langle n_{AB}, n_{BA}, n_{\overline{AB}} \rangle$ can be estimated as
$c_{AB} = \min\left\{ \frac{n_{AB}}{n_{AB} + n_{BA}},\ \frac{n_{BA}}{n_{AB} + n_{BA}} \right\}$
From this definition, we have $c_{AB} = c_{BA}$.
3.5 Bijection from Ranking Space to Preference Space
With Definitions 1, 2, 3 and 4, the following definition can be introduced.
Definition 5. The bijection from the ranking space $\langle n_{AB}, n_{BA}, n_{\overline{AB}} \rangle$ to the preference space $\langle P_{AB}^{+}, P_{AB}^{-}, C_{AB}^{-} \rangle$ can be estimated as
$P_{AB}^{+} = \frac{n_{AB}}{n_{AB} + n_{BA} + n_{\overline{AB}}}\, C_{AB}^{+}$
$P_{AB}^{-} = \frac{n_{BA}}{n_{AB} + n_{BA} + n_{\overline{AB}}}\, C_{AB}^{+}$
$C_{AB}^{-} = 1 - P_{AB}^{+} - P_{AB}^{-}$
4. CERTAINTY-BASED PREFERENCE COMPLETION
This section proposes the certainty-based preference completion approach. The framework of our approach is shown in Figure 1. It includes two processes. One is to find the k-nearest neighbors for user i with the anchor-kNN algorithm Liu [8] proposed. The other is to produce a linear ranking for user i over all alternatives; this section focuses on the latter. With the neighbors' partial rankings, a certainty-based voting algorithm is introduced to estimate the pairwise preference for all alternative pairs, and these pairwise preferences then form a linear ranking for user i.
Certainty-based preference completion process.
4.1 Certainty-based Voting Algorithm
First, let us introduce a definition.
Definition 6. With the preference space $\langle P_{AB}^{+}, P_{AB}^{-}, C_{AB}^{-} \rangle$, the following conclusions can be obtained:
• if the uncertainty $C_{AB}^{-} \geq \epsilon_1$, alternatives A and B are unpreferred;
• if $C_{AB}^{-} < \epsilon_1$,
□ - if $P_{AB}^{+} - P_{AB}^{-} \geq \epsilon_2$, user i prefers A to B;
□ - if $P_{AB}^{-} - P_{AB}^{+} \geq \epsilon_2$, user i prefers B to A;
□ - otherwise, A and B are unpreferred;
where $\epsilon_1$ and $\epsilon_2$ are thresholds to rule out the fuzziness of the comparison.
In existing work, with the rankings of neighbors obtained by the k-nearest neighbors algorithm, common voting rules, such as majority voting, can be used to estimate the pairwise preference for conducting the preference completion.
In contrast, in this paper, we use a certainty-based voting rule with certainty and conflict to obtain the pairwise preference. The certainty and conflict measure the trustworthiness with which the pair of alternatives can be preferred or compared. If the certainty satisfies a defined threshold, we can then evaluate the degree to which user i prefers one alternative to the other, denoted by $P_{AB}^{+}$ and $P_{AB}^{-}$. Then, only if the difference between the two-way preferences reaches a threshold value do we make a preference decision on the two alternatives. Technically, for the alternative pair A and B with $C_{AB}^{-} < \epsilon_1$ and $|P_{AB}^{+} - P_{AB}^{-}| \geq \epsilon_2$, a preference decision between A and B can be made. The process for estimating the pairwise preference is shown in Algorithm 2. We apply this algorithm to all alternative pairs to obtain all the pairwise preferences.
Certainty-based voting algorithm for estimating pairwise preference.
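A sketch of the decision step in Algorithm 2, reusing the certainty() sketch above and the bijection of Definition 5 (the threshold values here are placeholders, not the paper's):

def pairwise_preference(n_ab, n_ba, n_un, eps1=0.5, eps2=0.1):
    total = n_ab + n_ba + n_un
    c_plus = certainty(n_ab, n_ba, n_un)   # C+_{AB}
    if 1.0 - c_plus >= eps1:               # uncertainty C-_{AB} too high
        return "unpreferred"
    p_plus = n_ab / total * c_plus         # preference    P+_{AB}
    p_minus = n_ba / total * c_plus        # dispreference P-_{AB}
    if p_plus - p_minus >= eps2:
        return "A over B"
    if p_minus - p_plus >= eps2:
        return "B over A"
    return "unpreferred"                   # conflict too high to decide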
4.2 Greedy Order Algorithm
Next, let us combine all the pairwise preferences to form a linear ranking over all alternatives. One possible approach is the greedy order algorithm [25]. This algorithm follows a greedy idea: it always picks the alternative that currently has the maximum potential value in the alternatives pool I and ranks it above all the other remaining items. Here, for item i, the potential value $v_i$ is equal to $\sum_{j \in I} \psi_{i,j} - \sum_{j \in I} \psi_{j,i}$. This value aggregates all the pairwise preferences obtained in the previous subsection and represents the preference for item i among all the neighbors' rankings. The algorithm then deletes the picked item from the alternatives pool and updates the potential values of the remaining items by removing the effects of the picked one. The picking process is repeated until the alternatives pool is empty, at which point a linear ranking for user i has been produced. See Algorithm 3.
Greedy order algorithm.
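A sketch of Algorithm 3, where psi[i][j] holds the aggregated pairwise preference for i over j obtained in the previous subsection:

def greedy_order(items, psi):
    pool = set(items)
    ranking = []
    while pool:
        # the item with maximal potential v_i = sum_j psi[i][j] - psi[j][i]
        best = max(pool, key=lambda i: sum(psi[i][j] - psi[j][i]
                                           for j in pool if j != i))
        ranking.append(best)
        pool.remove(best)   # recomputing over the pool removes its effects
    return ranking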
5. EMPIRICAL STUDY
In this section, we study the properties of certainty and conflict in our proposed model.
5.1 Increasing Rankings with Fixed Conflict
Figure 2 plots how the certainty $C_{AB}^{+}$ varies with the weighted rankings $n_{AB}$ and $n_{\overline{AB}}$ under fixed conflict $c_{AB}$.
Certainty increases with $n_{AB} + n_{BA}$ when $\frac{n_{AB}}{n_{AB} + n_{BA}}$ and $n_{\overline{AB}}$ are fixed.
This should confirm Property 1.
Theorem 1. For fixed $\frac{n_{AB}}{n_{AB} + n_{BA}}$ and $n_{\overline{AB}}$, the certainty $C_{AB}^{+}$ increases with $n_{AB} + n_{BA}$.
Proof: Let $\frac{n_{AB}}{n_{AB} + n_{BA}} = \alpha$, $n_{AB} + n_{BA} = \beta$, and
$f(\cdot) = \frac{(x_{AB})^{n_{AB}} (x_{BA})^{n_{BA}} (1 - x_{AB} - x_{BA})^{n_{\overline{AB}}}}{\int_0^1 (x_{AB})^{n_{AB}} (x_{BA})^{n_{BA}} (1 - x_{AB} - x_{BA})^{n_{\overline{AB}}}\, dX}$
As in [17], $x_1$, $x_2$, $x_3$, $x_4$ can be defined such that $f(x_1) = f(x_2) = f(x_3) = f(x_4) = 1$ and
$C_{AB}^{+} = \int_{x_1}^{x_2} \int_{x_3}^{x_4} [f(\cdot) - 1]\, dx_{AB}\, dx_{BA}$
where $x_1$, $x_2$, $x_3$, and $x_4$ are functions of $\beta$. Then
$\frac{\partial C_{AB}^{+}}{\partial \beta} = \frac{\partial x_2}{\partial \beta} \int_{x_3}^{x_4} [f(x_2) - 1]\, dx_{AB} - \frac{\partial x_1}{\partial \beta} \int_{x_3}^{x_4} [f(x_1) - 1]\, dx_{AB} + \int_{x_1}^{x_2} \frac{\partial}{\partial \beta} \int_{x_3}^{x_4} [f(\cdot) - 1]\, dx_{AB}\, dx_{BA} = \int_{x_1}^{x_2} \frac{\partial}{\partial \beta} \int_{x_3}^{x_4} [f(\cdot) - 1]\, dx_{AB}\, dx_{BA}$
$\frac{\partial}{\partial \beta} \int_{x_3}^{x_4} [f(\cdot) - 1]\, dx_{AB} = \frac{\partial x_4}{\partial \beta} [f(x_4) - 1] - \frac{\partial x_3}{\partial \beta} [f(x_3) - 1] + \int_{x_3}^{x_4} \frac{\partial}{\partial \beta} [f(\cdot) - 1]\, dx_{AB} = \int_{x_3}^{x_4} \frac{\partial}{\partial \beta} [f(\cdot) - 1]\, dx_{AB}$
Following Lemma 9 in [17], we have
$\int_{x_3}^{x_4} \frac{\partial}{\partial \beta} [f(\cdot) - 1]\, dx_{AB} > 0$
Combining the above, we have $\frac{\partial C_{AB}^{+}}{\partial \beta} > 0$.
This confirms the result of Theorem 1.
5.2 Increasing Conflict with Fixed Rankings
Figure 3 plots how the certainty $C_{AB}^{+}$ varies with the weighted rankings $n_{AB}$ and $n_{BA}$ under a fixed sum $n_{AB} + n_{BA} + n_{\overline{AB}}$ and fixed $n_{\overline{AB}}$. This should confirm Property 2.
Certainty attains its minimum at $n_{AB} = n_{BA}$ when $n_{AB} + n_{BA} + n_{\overline{AB}}$ and $n_{\overline{AB}}$ are fixed.
Theorem 2. For fixed $n_{AB} + n_{BA}$ and fixed $n_{\overline{AB}}$, the certainty $C_{AB}^{+}$ is decreasing in $n_{AB}$ for $n_{AB} \leq n_{BA}$, and increasing for $n_{AB} \geq n_{BA}$.
Proof: The details of the validation process are omitted here, as it is similar to the proof of Theorem 1. More specifically, after removing the absolute sign and differentiating, it can be shown that the derivative is negative for $n_{AB} \leq n_{BA}$ and positive for $n_{AB} \geq n_{BA}$.
6. EXPERIMENTS
In this section, we examine the empirical performance of the certainty-based preference completion algorithm. In the experiments, we compare our certainty-based preference completion algorithm with the common majority voting algorithm [8] and the classic collaborative filtering algorithm (CF) [26]. Both our certainty-based preference completion algorithm and the majority voting algorithm use the anchor-kNN algorithm to find the k-nearest neighbors' rankings and utilize these rankings to conduct the preference completion for the target user. The collaborative filtering algorithm, in contrast, is a rating-oriented algorithm different from the other two: it computes user similarity to find a user's neighbors and uses their ratings to generate item predictions.
6.1 Datasets
The experiments adopt two forms of datasets to evaluate algorithms' performance.
• One type of dataset is the synthetic one created by sampling from a Plackett-Luce model with Algorithm 1. The produced synthetic dataset has over 20,000 rankings from agents over a set of 20 alternatives; each ranking is generated with Gumbel-distributed noise.
• The other type of dataset is the Flixster dataset, which collects movie ratings by users with social trust. It has over 8,000,000 ratings on over 2,000 movies. For the experiments, we convert the ratings to rankings and select over 9,000 rankings on over 50 movies.
6.2 Evaluation Metrics
We evaluate the performance with three metrics: (a) prediction error, (b) Spearman correlation coefficient, and (c) Kendall rank correlation coefficient. The first measures the quality of the predicted ranking, and the other two measure the degree of correlation between the predicted ranking and the original one. Please refer to Pearson [27] and Liu et al. [2] for more details.
• Evaluation Metric 1: This metric estimates the accuracy of the predicted ranking against the original true one:
$\Phi_{\text{Prediction Error}} = \frac{1}{M} \sum X_{i,j,k}\, I^{-}(Y_{i,j,k})$
where M is the maximum of the pairwise error, $Y_{i,j,k} = 1$ means that in the predicted ranking user i prefers alternative j to alternative k, and $X_{i,j,k} = 1$ means that user i prefers alternative j to alternative k in the original ranking. $I^{-}(v)$ equals 1 when $v < 0$, and 0 otherwise.
• Evaluation Metric 2: The Spearman correlation coefficient measures the difference in position of every alternative between the predicted ranking and the original one, to evaluate the similarity between the two. The greater its value, the more precise the predicted ranking:
$\Phi_{\text{Spearman CC}} = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\, \sqrt{\sum_i (y_i - \bar{y})^2}}$
which, for rankings, simplifies to
$\Phi_{\text{Spearman CC}} = 1 - \frac{6 \sum_i d_i^2}{n(n^2 - 1)}$
where $d_i$ represents the difference in position of alternative i between the predicted ranking and the original one.
• Evaluation Metric 3: The Kendall rank correlation coefficient is very similar to Evaluation Metric 2, except that it uses the Kendall distance to measure the correlation:
$\Phi_{\text{Kendall CC}} = 1 - \frac{4 \sum I^{-}(X_{i,j,k}\, Y_{i,j,k})}{|I_x \cap I_y| \cdot (|I_x \cap I_y| - 1)}$
where the symbols have the same meaning as in Evaluation Metric 1, $I_x$ represents the alternative set in the original ranking, and $I_y$ represents the alternative set in the predicted ranking.
6.3 Experimental Results on Synthetic Dataset and Flixster Dataset
In this section, we conduct experiments on a synthetic dataset and the Flixster dataset, and present comparison results for the different approaches under each evaluation metric. The prediction error measures the difference in pairwise preference between the predicted ranking and the original ranking; the goal is to reduce the prediction error as far as possible. The Spearman correlation coefficient and the Kendall rank correlation coefficient measure the similarity between the predicted ranking and the original ranking; we expect the values of these two metrics to be as high as possible.
(a) Synthetic dataset
• As shown in Figure 4, the prediction error tends to be smaller with the certainty-based algorithm than with the CF algorithm and the majority voting algorithm. In addition, the two ranking-oriented approaches outperform the rating-oriented approach. For one thing, a ranking contains more preference-relation information over alternatives than a rating score, so it may be easier and more accurate to find a user's neighbors and complete the preference; as a result, the ranking-oriented approaches have a lower prediction error. For another, the comparison between the certainty-based voting algorithm and the majority voting algorithm shows the superiority of the certainty-based one: the preference completion algorithm that takes certainty into account does reduce the effect of randomness.
• Figure 5(a) shows the performance on the Spearman correlation coefficient. On this evaluation metric, the certainty-based voting algorithm performs better than the other two algorithms. This is because our approach, with the preference space and certainty considered, can filter out those pairwise preferences which have close votes and low certainty. This makes the predicted ranking much more trustworthy.
• Figure 5(b) shows the performance on the Kendall rank correlation coefficient. We can draw a similar conclusion to that for the Spearman correlation coefficient in Figure 5(a), so we do not repeat the explanation here.
Prediction error on synthetic dataset: x-axis denotes the number of neighbors. Plots show the prediction error. For this evaluation metric, smaller values are better.
Performance on synthetic dataset: x-axis denotes the number of neighbors. Plots show the Spearman correlation coefficient (Spearman CC) and Kendall rank correlation coefficient (Kendall CC). For both
evaluation metrics, higher values are better.
Roughly speaking, from the experiments on the synthetic dataset, we verify the effectiveness of our proposed certainty-based preference completion algorithm.
(b) Flixster dataset The performance of the three approaches is examined on a real-world dataset, Flixster dataset, which contains the rating information. Because the proposed algorithm and the
majority voting algorithm both use the anchor-kNN algorithm which need ranking data instead of rating data, we need to convert rating data to ranking data first.
• As shown in Figure 6, when the number of neighbors k > 300, our approach outperforms the other two, and the ranking-oriented methods still perform better than the rating-oriented method. When k < 300, however, the results are not as expected. A possible reason is that the process of converting rating data to ranking data inevitably introduces errors into the pairwise preferences. With more neighbors considered, our proposed algorithm shows its superiority, and thus the prediction error decreases as the number of neighbors grows.
• In Figure 7(a), as we can observe, the certainty-based approach significantly outperforms the other two approaches, which is consistent with the experiments on the synthetic dataset.
• Figure 7(b) shows a similar performance to Figure 7(a).
Figure 6. Prediction error on the Flixster dataset: the x-axis denotes the number of neighbors, and the plots show the prediction error. For this evaluation metric, smaller values are better.
Figure 7. Performance on the Flixster dataset: the x-axis denotes the number of neighbors, and the plots show the Spearman correlation coefficient (Spearman CC) and the Kendall rank correlation coefficient (Kendall CC). For both evaluation metrics, higher values are better.
In general, the experiments on the synthetic dataset and the Flixster dataset together validate our proposed certainty-based preference completion algorithm.
7. CONCLUSION AND FUTURE WORK
Because agents' rankings are nondeterministic, in that agents may provide their rankings under noisy environments, it is necessary and important to conduct certainty-based preference completion. Hence, in this paper, a bijection has first been built from the ranking space to the preference space for alternative pairs, and its certainty and conflict have been evaluated based on a well-built statistical measurement, the Probability-Certainty Density Function. Then, a certainty-based voting algorithm built on the certainty and conflict has been used to conduct the preference completion; more specifically, the proposed algorithm completes preferences with rankings of high certainty and low conflict. Moreover, the properties of the proposed approach with respect to certainty and conflict have been studied empirically, and the proposed approach has been experimentally validated against state-of-the-art approaches with several evaluation metrics on both a synthetic and a real-world dataset.
Since in real applications the data is usually unbalanced [28], i.e., some alternative pairs have many rankings while others have only a few, in our future work we will propose algorithms to handle unbalanced preference completion both effectively and efficiently.
All authors, including L. Li ([email protected]), M.H. Xue ([email protected]), Z. Zhang (zanzhang@hfut.edu.cn), H.H. Chen ([email protected]), and X.D. Wu ([email protected]), took part in writing the paper. In addition, L. Li designed the algorithm and experiments and provided the funding; M.H. Xue designed and conducted the experiments and analyzed the data; Z. Zhang analyzed the data.
This work has been supported by the National Natural Science Foundation of China (No. 62076087, No. 61906059 & No. 62120106008) and the Program for Changjiang Scholars and Innovative Research Team in
University (PCSIRT) of the Ministry of Education of China under grant IRT17R32.
The first author would like to thank his wife Jun Zhang, his parents and friends during his fight with lung adenocarcinoma. “I leave no trace of wings in the air, but I am glad I have had my flight.”
Common voting rules include positional scoring rules, maximin, and Bucklin; for more details, please refer to [21].
et al.: Weighted partial order oriented three-way decisions under score-based common voting rules. International Journal of Approximate Reasoning.
Learning to rank for information retrieval.
et al.: Deep learning for community detection: Progress, challenges and opportunities. In: Proceedings of the 29th International Joint Conference on Artificial Intelligence (IJCAI 2020).
et al.: A comprehensive survey on community detection with deep learning. arXiv preprint arXiv:2105.12584.
et al.: A comprehensive survey on graph anomaly detection with deep learning. arXiv preprint arXiv:2106.07178.
Nonparametric preference completion. In: Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS 2018).
EigenRank: A ranking-oriented approach to collaborative filtering. In: Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval.
et al.: Near-neighbor methods in random preference completion. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI 2019).
Three-way granular computing, rough sets, and formal concept analysis. International Journal of Approximate Reasoning.
Context based trust normalization in service-oriented environments. In: Proceedings of the IEEE Conference on Autonomic and Trusted Computing.
Individual choice behavior: A theoretical analysis. Dover Publications, New York.
The analysis of permutations. Applied Statistics.
et al.: Learning Plackett-Luce mixtures from partial preferences. In: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI 2019).
A subjective metric of authentication. In: Proceedings of the 5th European Symposium on Research in Computer Security (ESORICS 98).
Subjective trust inference in composite services. In: Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI 2010).
Evidence-based trust: A mathematical model geared for multiagent systems. ACM Transactions on Autonomous and Adaptive Systems, Article No. 14.
Three-way decision: An interpretation of rules in rough set theory. In: Proceedings of the 4th International Conference on Rough Sets and Knowledge Technology (RSKT 2009).
Probabilistic classification vector machines. IEEE Transactions on Neural Networks.
Predictive ensemble pruning by expectation propagation. IEEE Transactions on Knowledge and Data Engineering.
et al.: Bayesian reliability.
et al.: Probability and statistics in engineering. John Wiley & Sons.
Efficient probabilistic classification vector machine with incremental basis function selection. IEEE Transactions on Neural Networks and Learning Systems.
et al.: Scalable graph-based semi-supervised learning through sparse Bayesian model. IEEE Transactions on Knowledge and Data Engineering.
Learning to order things. Journal of Artificial Intelligence Research.
et al.: Using collaborative filtering to weave an information tapestry. Communications of the ACM.
Tests for rank correlation coefficients. I.
Model-based oversampling for imbalanced sequence classification. In: Proceedings of the 25th ACM International Conference on Information and Knowledge Management (CIKM'16).
© 2022 Chinese Academy of Sciences. Published under a Creative Commons Attribution 4.0 International (CC BY 4.0) license.
This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly cited. For a full description of the license, please visit | {"url":"https://direct.mit.edu/dint/article/4/1/112/109194/Certainty-based-Preference-Completion","timestamp":"2024-11-06T09:41:03Z","content_type":"text/html","content_length":"358830","record_id":"<urn:uuid:a6b98c92-e962-47ad-80af-97a0232e4683>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00288.warc.gz"} |
Generate Useful Random Numbers in JavaScript - JavaScript Tutorial | Spicy Yoghurt
Spicy Yoghurt | Last updated: 17 September 2019 | JavaScript tutorial
Generate useful random numbers in JavaScript
Learn all about generating random numbers in JavaScript you can actually use. Get numbers between a certain range, generate random integer- and boolean values and learn about the use of seeds.
Generate a basic random number with Math.random()
In JavaScript, generating random numbers is done through the Math object. It contains many helpful mathematical functions, one of which is the random() function. This function generates a floating-point number between 0 and 1 (including 0, excluding 1) and can be used as the base for calculating other random values. You can access the Math object from anywhere in your code.
Here's an example of how to use the basic random() function on the Math object to generate a random decimal number:
Math.random()
//Possible output
0.5408145050563944
As you can see, it just generates numbers between 0 and 1 and isn't very applicable in its current state.
The numbers generated by the random() function are just the start and need to be processed in other calculations to really make them useful. This way you can generate numbers within a certain range
or meet other requirements.
Get a random number between two values
On their own, the results from the random() function aren't very useful, but they can be scaled to generate other types of random values.
Imagine you need a random number within a certain range, let's say somewhere between 0 and 10. The basic random() function won't go beyond 1 (it will never reach 1, to be precise), but by multiplying
the result you can generate larger numbers.
In the following example random() is used to generate a random decimal number between a min- and max value (including min, excluding max).
function getRandomNumber(min, max) {
    return Math.random() * (max - min) + min;
}

getRandomNumber(0, 10)
//Possible output
6.0723471644767
In the example, the range is set between 0 and 10, but you can use any range you like and the function will generate numbers within it. Be careful though, the output will always be smaller than (not equal to) the max value.
Get an integer within a range
In some cases you need an whole number instead of a decimal number. In math a whole number that can be both positive or negative is called an integer Most programming languages have an integer data
type for this occasion, but JavaScript doesn't. It uses the Number data type, containing floating point values, for every type of number. To go from decimal to whole number you'll need to apply a
form of rounding.
The next function uses Math.floor() to turn a decimal floating point number into an integer. It returns numbers between a min- and max value. You can read more about the floor() function here.
This time, the results will include both the min and the max.
function getRandomInt(minInt, maxInt) {
    return Math.floor(Math.random() * (maxInt - minInt + 1)) + minInt;
}

getRandomInt(0, 10)
//Possible output
7
Generate a random boolean
From random integers it is only a small step to random boolean values. A boolean in JavaScript can have a value of true or false. If you apply the integer technique to generate a number between 0 and
1, you can easily use the outcome to generate random booleans. This can be helpful if you're just looking for a randomized true/false value.
function getRandomBoolean() {
    return getRandomInt(0, 1) > 0;
}
Convert an integer to boolean in JavaScript
There are many ways to convert an integer to a boolean. You could also use the == operator, or the Boolean object. Here's an alternative approach to the previous example using a double bang (!!) operator:
function getRandomBoolean() {
    return !!getRandomInt(0, 1);
}
//Possible output
true
Creative ways to use random values
When you create a game or animation you can't really go without random values. You can apply the techniques covered in this tutorial to spicy up your work and make it feel less scripted. Here are
some fun and practical ways in which random values can be applied:
• Use randomness in animations to make small differences in size, color or motion. This will make the animation less static.
• Play sounds with a random pitch, so no sound sounds the same, but you can still use the same source files.
• Generate particle effects by creating particles with random locations, speed and decay time.
• Randomize boss behaviour to make boss battles less artificial.
Can you seed the random number generator?
In some programming languages it is possible to manually set the seed for the random numbers. You can tell the random() function where to start getting its random numbers, so to say. So, when you
have a piece of code that generates ten random numbers, you can start back at the top by setting the same seed and get the exact same ten 'random' numbers again when you re-run the script.
In JavaScript however, setting your own seeds is not natively supported. You could use (or build) an external random number generator to do the job. But always be sure to check if the generator is
truly random and the numbers are uniformly distributed.
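If you do reach for your own generator, one well-known tiny option is mulberry32, shown here purely as an illustration (it is not part of JavaScript or of this tutorial). Given the same seed it always produces the same sequence of numbers between 0 and 1:

function mulberry32(a) {
    return function() {
        var t = a += 0x6D2B79F5;
        t = Math.imul(t ^ t >>> 15, t | 1);
        t ^= t + Math.imul(t ^ t >>> 7, t | 61);
        return ((t ^ t >>> 14) >>> 0) / 4294967296;
    };
}

const rand = mulberry32(42);
rand(); // always the same first value for seed 42
rand(); // and the same second value, and so on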
As you can see it's actually quite easy to generate random values with the Math object. With some effort you can transform them into useful numbers or even booleans. Always be sure to check if your
functions return values that are uniformly distributed.
Random numbers are essential for animations and games and make them feel less scripted. With the examples given in this tutorial you can try to find new ways to apply random numbers. If you have any
questions feel free to ask them in the comment section below.
| {"url":"https://spicyyoghurt.com/tutorials/javascript/generate-random-number","timestamp":"2024-11-10T01:02:29Z","content_type":"text/html","content_length":"130698","record_id":"<urn:uuid:63c39953-44ba-48c1-bf02-cf765359bd87>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00293.warc.gz"}
The Fibonacci Sequence: Unveiling Nature's Mathematical Patterns
Scott Britton
Science, with its systematic approach of questioning, experimenting, and discovering, has unraveled the mysteries of the natural world. Applied sciences, such as water treatment, draw inspiration
from various disciplines like mathematics, limnology, hydraulics, microbiology, genetics, kinetics, chemistry, and more. However, it is often through observing nature that we gain valuable insights
to address the challenges we face. In this article, we explore the intriguing relationship between the Fibonacci sequence and its relevance in understanding the wonders of nature. Let's embark on a
journey through history, where the Fibonacci sequence acts as a guiding principle in uncovering nature's hidden secrets.
1. The Golden Ratio and the Nautilus Shell:
In the 13th century, mathematician Leonardo Fibonacci introduced the Fibonacci sequence to the Western world. Little did he know that this sequence would hold an astonishing connection to nature. The
Fibonacci sequence, where each number is the sum of the two preceding numbers (0, 1, 1, 2, 3, 5, 8, 13, and so on), led to the discovery of the Golden Ratio.
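One quick way to see this connection (a small illustrative script added here, not part of the original article) is to print the ratios of consecutive Fibonacci numbers and watch them settle toward 1.618:

a, b = 1, 1
for _ in range(12):
    a, b = b, a + b   # step to the next pair of Fibonacci numbers
    print(b / a)      # the ratios approach the Golden Ratio, ~1.6180339887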
The Golden Ratio, approximately equal to 1.618, is a mathematical phenomenon found in various natural phenomena, including the mesmerizing spiral pattern of the Nautilus shell. As the Nautilus grows,
it adds new chambers to its shell in a logarithmic spiral, adhering to the Fibonacci sequence. The ratio of the size of each chamber to the previous one approaches the Golden Ratio, creating a
visually stunning masterpiece of nature.
2. Sunflowers and the Fibonacci Spiral:
Have you ever wondered why the seeds of a sunflower form such a perfectly intricate pattern? It turns out that the Fibonacci sequence plays a significant role here too. Sunflowers exhibit a
fascinating arrangement of seeds in tightly packed spirals, following the Fibonacci sequence. The number of clockwise and counterclockwise spirals often corresponds to two consecutive Fibonacci
numbers, such as 21 and 34, or 34 and 55.
This arrangement maximizes the number of seeds that can fit within the flower head, ensuring efficient packing and growth. The Fibonacci spiral, derived from connecting arcs within squares based on
Fibonacci numbers, perfectly encapsulates this natural phenomenon, providing a mathematical explanation for the mesmerizing beauty of sunflower seed patterns.
3. The Pinecone's Perfect Harmony:
The Fibonacci sequence continues to unveil its influence in the intricate patterns found in pinecones. As we examine the scales of a pinecone, we notice that the number of spirals in one direction
corresponds to a Fibonacci number, while the number of spirals in the opposite direction corresponds to the succeeding Fibonacci number. This phenomenon ensures optimal seed arrangement and efficient packing.
The Fibonacci sequence's presence in pinecones, like sunflowers and nautilus shells, demonstrates nature's inclination towards mathematical harmony. It is as if nature follows an unseen mathematical blueprint (drafted by the hand of God), utilizing the Fibonacci sequence to create balance and efficiency in His design.
The Fibonacci sequence, initially introduced as a mathematical curiosity, has found a profound connection to the natural world. From the mesmerizing spiral of the nautilus shell to the intricate seed
patterns of sunflowers and the harmonious scales of pinecones, nature embraces the principles of the Fibonacci sequence to optimize growth, efficiency, and beauty.
By understanding and appreciating these connections, scientists, engineers, and innovators can draw inspiration from nature's wisdom to solve complex problems and create a better world. The Fibonacci
sequence serves as a timeless reminder of the intricate relationships between mathematics, science, and the wonders of nature that continue to amaze and inspire us. | {"url":"https://www.hamblywater.com/post/the-fibonacci-sequence-unveiling-nature-s-mathematical-patterns","timestamp":"2024-11-14T13:56:53Z","content_type":"text/html","content_length":"1050587","record_id":"<urn:uuid:7941fc64-aef6-4779-b411-d9491bc49403>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00055.warc.gz"} |
Inflaton or Curvaton? Constraints on Bimodal Primordial Spectra from Mixed Perturbations
Cite as:
W. H. Kinney, A. Moradinezhad Dizgah, B. Powell, A. Riotto, Phys. Rev. D 86 (2012) 023527
We consider Cosmic Microwave Background constraints on inflation models for which the primordial power spectrum is a mixture of perturbations generated by inflaton fluctuations and fluctuations in a
curvaton field. If future experiments do not detect isocurvature modes or large non-Gaussianity, it will not be possible to directly distinguish inflaton and curvaton contributions. We investigate
whether current and future data can instead constrain the relative contributions of the two sources. We model the spectrum with a bimodal form consisting of a sum of two independent power laws, with
different spectral indices. We quantify the ability of current and upcoming data sets to constrain the difference Δn in spectral indices, and relative fraction f of the subdominant power spectrum at
a pivot scale of k=0.017 Mpc^-1 h. Data sets selected are the WMAP 7-year data, alone and in conjunction with South Pole Telescope data, and a synthetic data set comparable to the upcoming Planck
data set. We find that current data show no increase in quality of fit for a mixed inflaton/curvaton power spectrum, and a pure power-law spectrum is favored. The ability to constrain independent
parameters such as the tensor/scalar ratio is not substantially affected by the additional parameters in the fit. Planck will be capable of placing significant constraints on the parameter space for
a bimodal spectrum. | {"url":"https://cosmology.unige.ch/content/inflaton-or-curvaton-constraints-bimodal-primordial-spectra-mixed-perturbations-0","timestamp":"2024-11-10T09:36:58Z","content_type":"text/html","content_length":"36730","record_id":"<urn:uuid:815bcc5c-7cfd-4987-b402-00ee57e66ab5>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00461.warc.gz"} |
Postfix to Infix Conversion Algorithm, example and program
In this tutorial We have explored an algorithm to convert a given Postfix expression to Infix expression using Stack.
Algorithm For Postfix to Infix Conversion
Iterate the given expression from left to right, one character at a time:
Step 1: If the character is an operand, push it onto the stack.
Step 2: If the character is an operator:
if there are fewer than 2 values on the stack,
give the error "insufficient values in expression" and go to Step 4;
pop 2 operands from the stack,
create a new string by putting the operator between the operands,
and push this string onto the stack.
Repeat Steps 1 and 2 until the expression is exhausted.
Step 3: At the end there will be only one string on the stack, which is our infix expression.
Step 4: Exit
Some important terminology
Postfix Expression
In a postfix expression, the operator appears after its operands.
Infix Expression
In an infix expression, the operator appears between its operands.
Steps to Convert Postfix to Infix
• Start iterating over the given postfix expression from left to right.
• If the character is an operand, push it onto the stack.
• If the character is an operator, pop the top 2 operands from the stack.
• After popping, create a string in which the operator sits between the two operands.
• Push this newly created string back onto the stack.
• The above process continues until no characters are left in the expression.
• At the end, a single item remains on the stack: the infix expression (or a single value, if the expression consisted of numbers).
Example to Convert Postfix to Infix
Postfix Expression : abc-+de-+
| Token | Stack | Action |
|---|---|---|
| a | a | push a onto the stack |
| b | a, b | push b onto the stack |
| c | a, b, c | push c onto the stack |
| - | a, b-c | pop c and b, put - between them, push "b-c" |
| + | a+b-c | pop "b-c" and a, put + between them, push "a+b-c" |
| d | a+b-c, d | push d onto the stack |
| e | a+b-c, d, e | push e onto the stack |
| - | a+b-c, d-e | pop e and d, put - between them, push "d-e" |
| + | a+b-c+d-e | pop "d-e" and "a+b-c", put + between them, push "a+b-c+d-e" |
Solution for Postfix expression
postfix expression: 752+*415-/-
| Token | Stack | Action |
|---|---|---|
| 7 | 7 | push 7 onto the stack |
| 5 | 7, 5 | push 5 onto the stack |
| 2 | 7, 5, 2 | push 2 onto the stack |
| + | 7, 7 | pop 2 and 5, add them (5 + 2 = 7), push the result |
| * | 49 | pop 7 and 7, multiply them (7 * 7 = 49), push the result |
| 4 | 49, 4 | push 4 onto the stack |
| 1 | 49, 4, 1 | push 1 onto the stack |
| 5 | 49, 4, 1, 5 | push 5 onto the stack |
| - | 49, 4, -4 | pop 5 and 1, subtract (1 - 5 = -4), push the result |
| / | 49, -1 | pop -4 and 4, divide (4 / -4 = -1), push the result |
| - | 50 | pop -1 and 49, subtract (49 - (-1) = 50), push the result |
Java Program to Convert Postfix to Infix
import java.util.*;

public class Main {
    public static String convert(String exp) {
        int len = exp.length();
        Stack<String> stack = new Stack<>();
        for (int i = 0; i < len; i++) {
            char c = exp.charAt(i);
            if (c == '*' || c == '/' || c == '^' || c == '+' || c == '-') {
                // Operator: pop the two most recent operands...
                String s1 = stack.pop();
                String s2 = stack.pop();
                // ...and push back "(s2 c s1)"; s2 comes first because it was pushed earlier.
                String temp = "(" + s2 + c + s1 + ")";
                stack.push(temp);
            } else {
                // Operand: push it as a one-character string.
                stack.push(c + "");
            }
        }
        // The single remaining element is the full infix expression.
        String result = stack.pop();
        return result;
    }

    public static void main(String[] args) {
        Scanner sc = new Scanner(System.in);
        System.out.println("Please enter Postfix Expression: ");
        String exp = sc.nextLine();
        System.out.println("Infix Expression: " + Main.convert(exp));
    }
}
Please enter Postfix Expression:
Infix Expression: ((a+(b-c))+(d-e))
The Java program provided is designed to convert expressions from postfix notation to infix notation using a stack. It leverages the Stack class from the Java util package to perform this operation.
Here’s a breakdown of how the program works:
Import Statement
import java.util.*;
This line imports the Java utility package, which contains the Stack class used in this program.
The Main Class
The class named Main encapsulates the entire program.
The convert Method
This is a static method that takes a single String parameter (exp) representing the postfix expression and returns a String that represents the converted infix expression. It declares two locals:
len: stores the length of the postfix expression.
stack: a Stack<String> object used to hold parts of the expression during conversion.
The Conversion Loop:
for (int i = 0; i < len; i++) {
    char c = exp.charAt(i);
    ...
}
This loop iterates over each character in the postfix expression. Inside the loop, the program checks if the current character (c) is an operator (*, /, ^, +, -). If it is, the program pops the two most recent operands from the stack, joins them with the operator in between (wrapped in parentheses), and pushes the resulting string back onto the stack; otherwise the character is an operand and is pushed directly.
String result = stack.pop();
return result;
return result; After the loop completes, the only element left on the stack is the fully converted infix expression. This is popped from the stack and returned.
The main Method
public static void main(String[] args) {
    Scanner sc = new Scanner(System.in);
    System.out.println("Please enter Postfix Expression: ");
    String exp = sc.nextLine();
    System.out.println("Infix Expression: " + Main.convert(exp));
}
This is the entry point of the program. It prompts the user to input a postfix expression, reads the input using a Scanner object, and then calls the convert method to transform the input into infix
notation. Finally, it prints the resulting infix expression.
How the Stack Works in This Context
The stack is used to temporarily hold operands. When an operator is encountered, the two most recent operands are popped from the stack, combined with the operator in infix format, and the resulting
string is pushed back onto the stack. This process continues until the entire expression is converted to infix notation. The use of a stack is crucial for correctly handling the order of operations
and parentheses in the resulting infix expression. | {"url":"https://quescol.com/data-structure/postfix-to-infix","timestamp":"2024-11-05T15:54:09Z","content_type":"text/html","content_length":"89021","record_id":"<urn:uuid:18ab4daa-3507-464d-aa27-0062ee4b69ce>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00691.warc.gz"} |
Mathematics for Elementary Teachers
Our number system is a western adaptation of the Hindu-Arabic numeral system developed somewhere between the first and fourth centuries AD. However, numbers have been recorded with tally marks
throughout history. The Ishango Bone^[1] from Africa is about 25,000 years old. It’s the lower leg bone from a baboon, and contains tally marks. We know the marks were used for counting because they
appear in distinct groups.
This reindeer antler^[2] from France is about 15,000 years old, and also shows clearly grouped tally marks.
Of course, we still use tally marks today!^[3]
Base ten numbers (the ones you have probably been using your whole life), and base b numbers (the ones you’ve been learning about in this chapter) are both positional number systems.
A positional number system is one way of writing numbers. It has unique symbols for 1 through b – 1, where b is the base of the system. Modern positional number systems also include a symbol for 0.
The positional value of each symbol depends on its position in the number:
• The positional value of a symbol in the first position is just its face value.
• The positional value of a symbol in the second position is b times its value.
• The positional value of a symbol in the third position is b² times its value.
• And so on.
The value of a number is the sum of the positional values of its digits.
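For example (a worked illustration added here, not part of the original text): in base four, the numeral 2013 represents 2·4³ + 0·4² + 1·4 + 3 = 128 + 0 + 4 + 3 = 135 in base ten.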
In an additive number system, the value of a written number is the sum of the face values of the symbols that make up the number. The only symbol necessary for an additive number system is a symbol
for 1, however many additive number systems contain other symbols.
History: Roman numerals
The ancient Romans used a version of an additive number systems. The Romans represented numbers this way:
number Roman Numeral
1 I
5 V
10 X
50 L
100 C
500 D
1,000 M
So the number 2013 would be represented as MMXIII. This is read as 2,000 (two M’s), one ten (one X), and three ones (three I’s).
For any additive number system very large numbers become impractical to write. To represent the number one million in Roman numerals it would take one thousand M’s!
However, the Roman numerals did have one efficiency advantage: The order of the symbols mattered. If a symbol to the left was smaller than the symbol to the right, it would be subtracted instead of
added. So for example nine is represented as IX rather than VIIII.
Think / Pair / Share
If you don’t already know how to use Roman numerals, research it a little bit. Then answer these questions.
• Write the numbers 1–20 in Roman numerals.
• What is the maximum number of symbols needed to write any number between 1 and 1,000 in Roman Numerals? Justify your answer.
The earliest positional number systems are attributed to the Babylonians (base 60) and the Mayans (base 20). These positional systems were both developed before they had a symbol or a clear concept
for zero. Instead of using 0, a blank space was used to indicate skipping a particular place value. This could lead to ambiguity.
Suppose we didn’t have a symbol for 0, and someone wrote the number "2 3".
It would be impossible to tell if they mean 23, 203, 2003, or maybe two separate numbers (two and three).
Leonardo Pisano Bigollo, more commonly known as Fibonacci^[4], played a pivotal role in guiding Europe out of a long period in which the importance and development of math was in marked decline. He
was born in Italy around 1170 CE to Guglielmo Bonacci, a successful merchant. Guglielmo brought his son with him to what is now Algeria, and Leonardo was educated in mathematics there.
At the time, Roman Numerals dominated Europe, and the official means of calculations was the abacus. Muḥammad ibn Mūsā al-Khwārizmī^[5] described the use of Hindu-Arabic system in his book On the
Calculation with Hindu Numerals in 825 CE, but it was not well-known in Europe.
Statue of al-Khwarizmi at Amirkabir University of Technology
Fibonacci’s book Liber Abaci described the Hindu-Arabic system and its business applications for a European readership. His book was well-received throughout Europe, and it marked the beginning of a
reawakening of European mathematics.
History: Hawaiian numbers
The Hindu-Arabic number system is now used nearly exclusively throughout the globe. But many cultures had their own number systems before contact and trade with other countries spread the work
of al-Khwārizmī throughout the world.
There is evidence that pre-contact Hawaiians actually used two different number systems. Depending on what they were counting, they might use base 4 instead (or a mixed base-10 and base-4 system).
One theory is that certain objects (fish, taro, etc.) were often put in bundles of 4, so were more natural to count by 4’s than by 10’s. The number four also had spiritual significance in Hawaiian culture.
Humans have 5 fingers on each hand^[6], making base ten a natural choice for counting. But there are 4 gaps between the fingers, meaning that a hand can carry four fish or taro plants or similar
objects, making base four a natural choice for some cultures.
In the mixed base system, instead of powers of 10, numbers are broken down into sums of numbers that look like 4 times a power of 10 (40, 400, 4000, etc.).
1 ‘ekahi
2 ‘elua
3 ‘ekolu
4 ‘ehā (or kauna)
5 ‘elima
6 ‘eono
7 ‘ehiku
8 ‘ewalu
9 ‘eiwa
10 ‘umi
11–19 ‘umi kumamā {kahi, lua, kolu, hā, etc.}
20 iwakālua
21–29 Iwakālua kumamā {kahi, lua, kolu, hā, etc.}
30 kanakolu
31–39 kanakolu kumamā {kahi, lua, etc.}
40 kanahā
400 lau
4,000 mano
40,000 kini
400,000 lehu
Here are a few examples (refer to the table above for the Hawaiian names of the numbers):
‘ekolu kini, ‘ewalu lau me ‘ekahi
translates to three 40,000’s, eight 400’s, and one;
3 ⋅ 40000 + 8 ⋅ 400 + 1 = 123201
5207 = 1 ⋅ 4000 + 3 ⋅ 400 + 7
would be ‘ekahi mano, ‘ekolu lau me ‘ehiku
On Your Own
Work on the following exercises on your own or with a partner.
1. Translate this Hawaiian number to English and then write it in base ten.
‘ekahi kanahā me kanakolu kumamāiwa
2. Translate this base‐ten number to Hawaiian.
Think / Pair / Share
How is learning about different number systems (including representing numbers in different bases) valuable to you as a future teacher? | {"url":"https://pressbooks-dev.oer.hawaii.edu/math111/chapter/number-systems/","timestamp":"2024-11-11T18:09:30Z","content_type":"text/html","content_length":"87077","record_id":"<urn:uuid:9f829cac-7449-4193-aa7b-3f135685ebf6>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00008.warc.gz"} |
Normal-phase propagation with increasing temperature level in high-temperature superconductors
Three regimes are identified in the normal-phase propagation in a high-temperature superconductor. When the current i is less than the critical current i*, an ordinary bistable wave of finite amplitude propagates. When i is greater than i*, the system is 'monostable', with the propagation of an autowave having an exponentially increasing crest and a constant front velocity. However, while in the ranges i < i1 and i > i2 this velocity is determined by the entire form of the source, in the range i1 < i < i2 it is determined only by the increase in the resistance of the normal substrate.
Pisma v Zhurnal Tekhnischeskoi Fiziki
Pub Date:
August 1989
Keywords: High Temperature Superconductors; Superconductivity; Thermal Stability; Wave Propagation; Mathematical Models; Propagation Modes; Propagation Velocity; Temperature Profiles; Solid-State Physics | {"url":"https://ui.adsabs.harvard.edu/abs/1989PZhTF..15...39L/abstract","timestamp":"2024-11-07T01:59:37Z","content_type":"text/html","content_length":"34996","record_id":"<urn:uuid:0b007821-a1b4-4c20-9880-3c9f09bb4e14>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241107052447-20241107082447-00022.warc.gz"}
Make a 2D mesh object — fm_mesh_2d
Usage
fm_mesh_2d(
  loc = NULL,
  loc.domain = NULL,
  offset = NULL,
  n = NULL,
  boundary = NULL,
  interior = NULL,
  max.edge = NULL,
  min.angle = NULL,
  cutoff = 1e-12,
  max.n.strict = NULL,
  max.n = NULL,
  plot.delay = NULL,
  crs = NULL,
  ...
)
Arguments
...: Currently passed on to fm_mesh_2d_inla
loc: Matrix of point locations to be used as initial triangulation nodes. Can alternatively be a sf, sfc, SpatialPoints or SpatialPointsDataFrame object.
loc.domain: Matrix of point locations used to determine the domain extent. Can alternatively be a SpatialPoints or SpatialPointsDataFrame object.
offset: The automatic extension distance. One or two values, for an inner and an optional outer extension. If negative, interpreted as a factor relative to the approximate data diameter (default=…).
n: The number of initial nodes in the automatic extensions (default=16).
boundary: One or more (as list) fm_segm() objects, or objects supported by fm_as_segm().
interior: One object supported by fm_as_segm().
max.edge: The largest allowed triangle edge length. One or two values.
min.angle: The smallest allowed triangle angle. One or two values. (Default=21)
cutoff: The minimum allowed distance between points. Points at most as far apart as this are replaced by a single vertex prior to the mesh refinement step.
max.n.strict: The maximum number of vertices allowed, overriding min.angle and max.edge (default=-1, meaning no limit). One or two values, where the second value gives the number of additional vertices allowed for the extension.
max.n: The maximum number of vertices allowed, overriding max.edge only (default=-1, meaning no limit). One or two values, where the second value gives the number of additional vertices allowed for the extension.
plot.delay: If logical TRUE or a negative numeric value, activates displaying the result after each step of the multi-step domain extension algorithm.
crs: An optional fm_crs(), sf::crs or sp::CRS object.
• fm_mesh_2d_inla(): Legacy method for INLA::inla.mesh.2d() Create a triangle mesh based on initial point locations, specified or automatic boundaries, and mesh quality parameters.
INLA compatibility
For mesh and curve creation, the fm_rcdt_2d_inla(), fm_mesh_2d_inla(), and fm_nonconvex_hull_inla() methods will keep the interface syntax used by INLA::inla.mesh.create(), INLA::inla.mesh.2d(), and
INLA::inla.nonconvex.hull() functions, respectively, whereas the fm_rcdt_2d(), fm_mesh_2d(), and fm_nonconvex_hull() interfaces may be different, and potentially change in the future.
See also
fm_rcdt_2d(), fm_mesh_2d(), fm_delaunay_2d(), fm_nonconvex_hull(), fm_extensions(), fm_refine()
Other object creation and conversion: fm_as_fm(), fm_as_lattice_2d(), fm_as_mesh_1d(), fm_as_mesh_2d(), fm_as_segm(), fm_as_sfc(), fm_as_tensor(), fm_lattice_2d(), fm_mesh_1d(), fm_segm(),
fm_simplify(), fm_tensor()
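As a further, hypothetical usage sketch (not from the package documentation itself; the point coordinates are arbitrary), a mesh can also be built directly from a set of point locations, complementing the boundary-based example below:

# 10 arbitrary points in the unit square
loc <- matrix(runif(20), nrow = 10, ncol = 2)
# Finer triangles near the data (first max.edge value) and a coarser
# outer extension (second value); offset controls the extension widths.
mesh <- fm_mesh_2d(loc = loc, offset = c(0.1, 0.4), max.edge = c(0.1, 0.5))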
fm_mesh_2d_inla(boundary = fm_extensions(cbind(2, 1), convex = 1, 2))
#> fm_mesh_2d object:
#> Manifold: R2
#> V / E / T: 17 / 32 / 16
#> Euler char.: 1
#> Constraints: 16 boundary edges (1 group: 1), 0 boundary edges
#> Bounding box: (1.000151,2.999849) x (0.0001506077,1.9998493923)
#> Basis d.o.f.: 17 | {"url":"https://inlabru-org.github.io/fmesher/reference/fm_mesh_2d.html","timestamp":"2024-11-07T06:37:02Z","content_type":"text/html","content_length":"15303","record_id":"<urn:uuid:db15b2d7-9a3b-4459-b2bf-63a503372bc5>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00616.warc.gz"} |
4th grade binary number worksheet
4th grade binary number worksheet Related topics: basic algebra online
www.math and algebra signs.com
math 1 - sections 16 & 17 handout for chapter 5
answer key to prentice hall nys chemistry the physical setting
algebra problem creator
can someone give me a example on how to solve fractions
calculator complex fractions
adding and subtracting rational expressions
how to solve a matrix
finding real zeros of a polynomial
compass math sample
Author Message
sj26 Posted: Sunday 08th of Jun 19:09
Guys , I need some help with my algebra homework. It’s a really long one having almost 30 questions and it covers topics such as 4th grade binary number worksheet,
4th grade binary number worksheet and 4th grade binary number worksheet. I’ve been trying to solve those questions since the past 4 days now and still haven’t been
able to solve even a single one of them. Our teacher gave us this assignment and went for a vacation, so basically we are all on our own now. Can anyone show me the
way? Can anyone solve some sample questions for me based on those topics; such solutions would help me solve my own questions as well.
Registered: 14.12.2001
From: /dev/cpu/[0-9]+ :D
Back to top
Jahm Xjardx Posted: Monday 09th of Jun 21:20
You might want to take a look at Algebrator. I bought it some time back to help me with my College Algebra course and I can say that it was a wise decision. There are
so many demos present which you can go through. You can also try out the questions related to linear equations and function range by just typing them in. Algebrator
provides detailed description to the problems which helps to make difficult concepts very clear. I would say that this program is absolutely the best that money can
Registered: 07.08.2005
From: Odense, Denmark, EU
Back to top
Dxi_Sysdech Posted: Tuesday 10th of Jun 17:43
I agree. Algebrator not only gets your assignment done faster, it actually improves your understanding of the subject by providing very useful information on how to
solve similar questions. It is a very popular software among students so you should try it out.
Registered: 05.07.2001
From: Right here, can't you see
Back to top
Troigonis Posted: Wednesday 11th of Jun 07:12
Algebrator is a very user friendly software and is definitely worth a try. You will find lot of exciting stuff there. I use it as reference software for my math
problems and can swear that it has made learning math more fun .
Registered: 22.04.2002
From: Kvlt of Ø
Back to top
Dlokmeh Posted: Thursday 12th of Jun 12:18
Thank you for your assistance . How do I get this software?
Registered: 27.05.2004
From: Coventry, England
Back to top
Paubaume Posted: Friday 13th of Jun 19:16
Check out this link https://softmath.com/comparison-algebra-homework.html. I hope your math will get better and you will do a great job on the test! Good luck!
Registered: 18.04.2004
From: In the stars... where you
left me, and where I will wait
for you... always...
Back to top | {"url":"https://www.softmath.com/algebra-software-5/4th-grade-binary-number.html","timestamp":"2024-11-09T21:54:56Z","content_type":"text/html","content_length":"43198","record_id":"<urn:uuid:8be3de64-ea29-434e-951b-dc426ac885fc>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00787.warc.gz"} |
Merge Sort: stack-safe, tail-recursive, in pure immutable Scala, N-way
2023-08-20 https://www.scala-algorithms.com 2023-08-20
Print a binary tree vertically
Check Sudoku board
Merge intervals
Check word in grid (stack-safe)
Check word in grid (depth-first search)
Median of two sorted arrays
Find the longest palindrome within a string
Print a binary tree
Count dist intersections
Maximum wait at a fuel station
Binary heap (min-heap)
Compute missing ranges
Find kth largest element in a List
Find the minimum item in a rotated sorted array
Compute the steps to transform an anagram only using swaps
Compute minimum number of Fibonacci numbers to reach sum
Fixed Window Rate Limiter
Token Bucket Rate Limiter
Leaky Bucket Rate Limiter
Least-recently used cache (MRU)
Game of Life
Find triplets that sum to a target ('3Sum')
Reverse Polish Notation calculator
Mars Rover
Find indices of tuples that sum to a target (Two Sum)
Sliding Window Rate Limiter
Compute modulo of an exponent without exponentiation
Count pairs of a given expected sum
Establish execution order from dependencies
Check a binary tree is balanced
Compute keypad possibilities
Reverse first n elements of a queue
Least-recently used cache (LRU)
Find height of binary tree
Check a binary tree is a search tree
Make a binary search tree (Red-Black tree)
Check a directed graph has a routing between two nodes (depth-first search)
Longest common prefix of strings
Check if a directed graph has cycles
Pure-functional double linked list
Count passing cars
Remove duplicates from an unsorted List
Make a queue using Maps
Make a queue using stacks (Lists in Scala)
Single-elimination tournament tree
Compute the length of longest valid parentheses
Binary search in a rotated sorted array
Find combinations adding up to N (unique)
Find the index of a substring ('indexOf')
Reverse bits of an integer
Find k closest elements to a value in a sorted Array
Reshape a matrix
Remove duplicates from a sorted list (Sliding)
Remove duplicates from a sorted list (state machine)
Find combinations adding up to N (non-unique)
Compute single-digit sum of digits
Add numbers without using addition (plus sign)
Tic Tac Toe MinMax solve
Compute nth row of Pascal's triangle
Print Alphabet Diamond
Reverse a String's words efficiently
Monitor success rate of a process that may fail
QuickSelect Selection Algorithm (kth smallest item/order statistic)
Rotate a matrix by 90 degrees clockwise
Read a matrix as a spiral
Count number of contiguous countries by colors
Length of the longest common substring
Find minimum missing positive number in a sequence
Binary search a generic Array
Run-length encoding (RLE) Decoder
Run-length encoding (RLE) Encoder
Find the contiguous slice with the minimum average
Tic Tac Toe board check
Find the minimum absolute difference of two partitions
Compute a Roman numeral for an Integer, and vice-versa
Quick Sort sorting algorithm in pure immutable Scala
Fibonacci in purely functional immutable Scala
In a range of numbers, count the numbers divisible by a specific integer
Is an Array a permutation?
Merge Sort: stack-safe, tail-recursive, in pure immutable Scala, N-way
Closest pair of coordinates in a 2D plane
Counting inversions of a sequence (array) using a Merge Sort
Rotate Array right in pure-functional Scala - using an unusual immutable efficient approach
Count factors/divisors of an integer
Merge Sort: in pure immutable Scala
Longest increasing sub-sequence length | {"url":"https://www.scala-algorithms.com/feed.atom","timestamp":"2024-11-10T19:05:57Z","content_type":"application/atom+xml","content_length":"42866","record_id":"<urn:uuid:8848d9c9-9234-4e62-8b4b-453dadc1af0d>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00535.warc.gz"} |
Escaping the Path
January 14, 2019
There’s a lovely forest near my house. It’s a wonderful place that looks exceptional in the autumn, where the fallen leaves of the trees cover the path in a flurry of orange, red, and yellow. I love
running there because it’s so peaceful.
Imagine that I told you I would show you this forest. After hearing me wax poetic about it, you’re excited to see it. We get to the forest, and I show you the path that goes through. We walk along
it, and after a while you ask if we can get off the path to see the forest in its more “natural” state.
Puzzled, I ask, “But this path is the forest. There’s nothing else of interest other than what’s on the path anyway.”
We might not use the same words, but this is how a lot of us view mathematics. There’s a path (the curriculum), and following it is the only way to learn about mathematics. Forget about going
off-path. That’s not even a thought that crosses your mind!
Unless you are really into mathematics, chances are you haven’t seen the wonderful little niches that the subject has to offer. This is unfortunate, but it’s a consequence of the fact that we tend to
look at mathematics in terms of the path forged by the curriculum. It’s also not a problem which is limited to mathematics. Almost any subject will have this standard “path” that most people end up
associating with the subject itself.
If I could send one message to my younger self, it would be this. Don't make the mistake of seeing the path as the subject itself. It's only one particular way of looking at a subject, but there are so
many more available. It just takes a willingness to look past the usual offering.
Unlike what we’re taught in school, mathematics isn’t a linear subject. Sure, it’s probably a good idea to learn about arithmetic before you learn algebra, but it’s not always as clear. The web of
mathematics is thick and highly-connected, which means there are many paths you can take through the subject. Just because there’s a clear trail that has been created by countless curriculums does
not mean you are forced to take that same path. In fact, I would encourage you to explore more. Look for those smaller connections. They can be as interesting as the regular path.
My hope here is to encourage you that mathematics is not only the curriculum you learn in school. It has so many other aspects that are off the path, if only you start exploring.
To me, this indicates two things. First, it means that we need to spread the message through our educational institutions, because it’s important that students see mathematics as more than only a
curriculum. Second, it suggests that a way to get people interested in mathematics is to find something that they are attracted to. The key point is that this may not lie on the main path, but who
cares? I’m more concerned with getting people to see mathematics as it is: an ensemble of many ideas, not just a linear path.
It’s worth wandering off the path every so often to see what else is on offer. | {"url":"https://jeremycote.net/escaping-the-path","timestamp":"2024-11-08T17:55:27Z","content_type":"text/html","content_length":"5659","record_id":"<urn:uuid:e67acf4d-a897-464a-ba8a-f525d1f176a3>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00195.warc.gz"} |
Dimensions can be confusing. Today we will be learning about 0D, 1D, 2D, 3D and also 4D properly in detail. 4D is a new concept that not many people have actually heard of. It is different from the others. 3D, 2D and 1D we have all learnt about. Even though 0D is not heard of everywhere, it is relatively simple compared to 4D. Let's begin.
We learn about dimensions in physics and math as well. This is popularly used in space and astronomy as well, to learn more about the universe. Dimension in math
means the measurement of any object’s length, width or height. It also means the measure of the distance or size of a thing or a place or just empty space in a direction. So, there are many types of
dimensions in the universe, some of which I will explain to you today.
Everybody knows about 1D, 2D and 3D. To recap, one dimension is just a line, without area or width or height. It only has length to measure. The second dimension is a flat surface with
area, perimeter, width, length, and maybe even height (in triangles and parallelograms). 3D is where things have width, length, height, and volume. Stuff like you, planets, laptops and books have
three dimensions. Very few things in the universe have zero dimensions. It means that they have no length, no breadth, no height and no area. Only a point can be of zero dimensions. For example, a
quark is treated in physics as a point particle, so it has zero dimensions.
The fourth dimension is where it gets interesting. We all know that 3D is where objects have volume. 4D is where objects have length, breadth, height, area, volume and time. Time is
considered to be the fourth dimension. When an object is at a certain place at a certain time, it is in the 4th Dimension. When explaining our position, we have to mention length, breadth, height and
the time when we are there. | {"url":"https://www.kevinjacob.in/post/dimensions","timestamp":"2024-11-11T05:01:47Z","content_type":"text/html","content_length":"1050036","record_id":"<urn:uuid:1b7da0d5-529c-48b5-87a6-fa21d55d4c27>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00895.warc.gz"}
Holt,rinehart and winston algebra 2 tests
holt,rinehart and winston algebra 2 tests Related topics: free online algebra calculator+show work
graphing hyperbola in program calculator
dummy "math book"
yr 8 maths
solve logarithmic equations +calculater
cheat math answers
maths simplification cube
rationalize the denominator
how to do cubic root on ti-84 silver plus
substitution method with fractions
Author Message
Dea Posted: Friday 29th of Dec 09:32
Well there are just two people who can guide me right now , either it has to be some math guru or it has to be God himself. I’m fed up of trying to solve problems on holt,rinehart and
winston algebra 2 tests and some related topics such as reducing fractions and rational expressions. I have my midterms coming up in a week from now and I don’t know what to do ? Is there
anyone out there who can actually take out some time and help me with my problems ? Any sort of help would be highly appreciated .
kfir Posted: Friday 29th of Dec 11:44
How about some more particulars about your problem with holt,rinehart and winston algebra 2 tests? I might be able to suggest . If you are not able to get a good assistance or some one to
sit and sort out your problem or if if it is too expensive , then there might be another way out . There are some good algebra software that you can check out . I tried them out myself.
It came across to me as fine as any tutor can be. I would choose Algebrator for the kind of answer that you are looking out for . What is pretty about it is that it assists you step by
step to the solutions rather than plainly giving the answer. Why not try it out?
From: egypt
pcaDFX Posted: Saturday 30th of Dec 10:56
It would really be nice if you could let us know about a software that can offer both. If you could get us a home tutoring software that would give a step-by-step solution to our problem,
it would really be good . Please let us know the authentic websites from where we can get the software.
velihirom Posted: Sunday 31st of Dec 16:28
Great! I think that’s what I need . Can you tell me where to get it?
Mov Posted: Monday 01st of Jan 13:52
You can order this software online: https://algebra-cheat.com/equations-with-parentheses.html. You won’t regret spending money on it, besides it’s so cheap considering the depth of
knowledge you gain from using it. They even offer an ‘no catch’ money back guarantee. All the best for your assignment. | {"url":"https://algebra-cheat.com/algebra-cheat/algebra-cheat-sheets/holtrinehart-and-winston.html","timestamp":"2024-11-04T12:19:44Z","content_type":"text/html","content_length":"162018","record_id":"<urn:uuid:e357919d-80e2-492d-887f-9299347e4300>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00311.warc.gz"} |
Applications of Exponential Functions (AlgebraLAB: Lessons)
The best thing about exponential functions is that they are so useful in real world situations. Exponential functions are used to model populations, carbon date artifacts, help coroners determine
time of death, compute investments, as well as many other applications.
We will discuss in this lesson three of the most common applications: population growth, exponential decay, and compound interest. For other applications, consult your textbook or ask your teacher for additional examples.
1. Population
Many times scientists will start with a certain number of bacteria or animals and watch how the population grows. For example, if the population doubles every 5 days, this can be represented as
an exponential function. Most population models involve using the number e.
Population models can occur two ways. One way is if we are given an exponential function. The second way involves coming up with an exponential equation based on information given. Let’s look at
each of these separately.
Let's Practice:
i. The population of a city is P = 250,342e^0.012t where t = 0 represents the population in the year 2000.
a. Find the population of the city in the year 2010.
To find the population in the year 2010, we need to let t = 10 in our given equation.
P = 250,342e^0.012(10) = 250,342e^0.12 = 282,259.82
Since we are dealing with the population of a city, we normally round to a whole number, in this case 282,260 people.
b. Find the population of the city in the year 2015.
To find the population in the year 2015, we need to let t = 15.
P = 250,342e^0.012(15) = 250,342e^0.18 = 299,713.8
We’ll round this answer to 299,714 people.
c. Find when the population will be 320,000.
We know the population in the year 2015 is almost 300,000 from our work in part (b). So it makes sense that the answer has to be higher than 2015. Remember that P in the equation
represents the population value, which we are given to be 320,000. Only now we do not know the time value t. The equation we need to solve is
320,000 = 250,342e^0.012(t)
To review solving exponential equation, click here. So it will take between 20 and 21 years for the population to reach 320,000. This means between the years 2020 and 2021 the population
will be 320,000.
Summary: Before we do the next example, let’s look at a general form for population models. Most of the time, we start with an equation that looks like
P = P[o]e^kt
□ P represents the population after a certain amount of time
□ P[o] represents the initial population or the population at the beginning
□ k represents the growth (or decay) rate
□ t represents the amount of time
□ Remember that e is not a variable, it has a numeric value. We do not replace it with information given to us in the problem.
Let's Practice this once more:
ii. A scientist starts with 100 bacteria in an experiment. After 5 days, she discovers that the population has grown to 350.
a. Determine an equation for this bacteria population.
To find the equation, we need to know values for P[o] and k. Remember the equation is in the form P = P[o]e^kt. where P, e, and t are all parts of the equation we will come up with. We
only need values for P[o] and k. P[o] is given by the amount the scientist starts with which is 100. Finding k requires a little more work.
We know that P[o] is 100 and that after t = 5 days the population P is 350. Substituting into the general form gives 350 = 100e^5k, so e^5k = 3.5, 5k = ln(3.5) ≈ 1.25276, and k ≈ 0.25055. Now that we know k, we go back to our general form and replace P[o] and k.
So our equation is
P = 100e^0.25055t
b. Use the equation to find out the population after 15 days.
We will substitute the value of 15 for t in P = 100e^0.25055t.
P = 100e^0.25055(15) = 100e^3.75825 = 4287.33
or approximately 4287 bacteria after 15 days.
c. Use the equation to find out when the population is 1000.
We will set our equation equal to 1000 to get 1000 = 100e^0.25055t and solve. Dividing by 100 and taking the natural log gives 0.25055t = ln(10) ≈ 2.30259, so t ≈ 9.2. So between 9 and 10 days, the bacteria population will be 1000.
2. Exponential Decay
Solving an exponential decay problem is very similar to working with population growth. In fact, certain populations may decrease instead of increase and we could still use the general formula we
used for growth. But in the case of decrease or decay, the value of k will be negative.
Let's Practice:
iii. The number of milligrams of a drug in a person's system after t hours is given by the function D = 20e^(-0.4t).
a. Find the amount of the drug after 2 hours.
To solve the problem we let t = 2 in the original equation.
D = 20e^(-0.4·2) = 20e^-0.8 ≈ 8.987
After 2 hours, 8.987 milligrams of the drug are left in the system.
b. Find the amount of the drug after 5 hours.
Replace t with 5 in the equation to get
D = 20e^(-0.4·5) = 20e^-2.0 ≈ 2.707
After 5 hours, 2.707 milligrams remain in the body.
c. When will the amount of the drug be 0.1 milligram (or almost completely gone from the system)?
We need to let D = 0.1 and solve the equation 0.1 = 20e^(-0.4t). Dividing by 20 gives e^(-0.4t) = 0.005, so t = ln(0.005)/(-0.4) ≈ 13.25. After approximately 13 hours and 15 minutes, the amount of the drug will be almost gone, with only 0.1 milligrams remaining in the body.
3. Compound Interest
The formula for interest that is compounded n times per year is
A = P(1 + r/n)^(nt)
□ A represents the amount of money after a certain amount of time
□ P represents the principal or the amount of money you start with
□ r represents the interest rate and is always represented as a decimal
□ n is the number of times interest is compounded in one year
if interest is compounded annually then n = 1
if interest is compounded quarterly then n = 4
if interest is compounded monthly then n = 12
□ t represents the amount of time in years
Let's Practice:
iv. Suppose your parents invest $1000 in a savings account for college at the time you are born. The average interest rate is 4% and is compounded quarterly. How much money will be in the college account when you are 18 years old?
We will use our formula with P = 1000, r = 0.04, n = 4, and t = 18:
A = 1000(1 + 0.04/4)^(4·18) = 1000(1.01)^72 ≈ $2,047.10
v. Suppose your parents had invested that same $1000 in a money market account that averages 8% interest compounded monthly. How much would you have for college after 18 years?
P = 1000, r = 0.08, n = 12 and t = 18, giving
A = 1000(1 + 0.08/12)^(12·18) ≈ $4,200.57
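Both scenarios can be verified with a short script (Python; a sketch using the formula above):

# Compound interest: A = P * (1 + r/n)**(n*t)
def compound(P, r, n, t):
    return P * (1 + r / n) ** (n * t)

print(compound(1000, 0.04, 4, 18))   # ~2047.10 -> 4% compounded quarterly
print(compound(1000, 0.08, 12, 18))  # ~4200.57 -> 8% compounded monthly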
The population of a town is modeled by P = 12,500e^(0.015t). When will the population be 25,000?
The initial population of rabbits in a lab is 20. After 100 days, the population is 80. When will the rabbit population be 200?
Wounds heal at a certain rate which is given by the equation W = W₀e^(-0.25t), where W is the size of the wound after a certain amount of time, W₀ is the initial size of the wound, and t is the amount of time in days. Initially the wound is 25 square millimeters. What would be the size of the wound after 4 days?
How much money should be invested at 5% compounded quarterly for 20 years so that you have $20000 at the end of the 20 years?
Cycle stealing (grid/distributed computing)
Cycle stealing or CPU scavenging is a concept in distributed computing that relates to utilizing networked resources to accomplish a common computational goal. Some computational tasks can easily overwhelm the limits of a single dedicated computer, and cycle stealing is a method for providing additional computing horsepower to a task.
Banks, insurance companies, hedge funds, and many other businesses use so-called distributed or grid computing for their modeling efforts. Imagine a bank with a portfolio of tens of thousands of
stocks, bonds, options, and other instruments. For the bank to manage risk, finance and economic analysts need to run models to try to estimate how their portfolio will react to market and economic
changes. Running such models on such portfolios needs much more horsepower and data than a single stand-alone computer can handle. Banks and insurance companies often build
so-called distributed and/or grid networks.
Cycle-stealing is the first step in building distributed networks; it works by connecting additional resources through the corporate network to the main processing unit.
Where do the other computers and resources in a grid or distributed computing system come from? This can be illustrated with the following example. Let's start off with a question:
How much do you use your computer?
Some may say, "all the time." Some may say, "less than you think." However, in the perspective of a 24 hour day, most workplace users use their computers for less than 50% of their work day, which
roughly equates to 4 hours per day out of a total of 24 hours. This works out to a daily utilization of 17%. This figure may seem low, but it is actually higher than what the latest studies suggest.
Studies show that the average PC user at work utilizes his or her machine at a rate of 13% per day with an average consumption of less than 8%. This consumption figure points to the amount of load
placed on the PC’s processor (you can view this by right-clicking on the Windows Taskbar and selecting Task Manager and clicking the Performance tab). A large number of users use their machines only
for email, web traffic, and word/excel processing. With the ever-increasing speed and capabilities of today's desktop hardware, this leaves IT professionals with a disproportionate share of unused processing capacity.
This leads us into grid computing in a shared environment. If the average user uses his or her PC for 4 hours per day, then there are 20 hours left in the day for this machine to still have value.
What about your lunch hour, what is your PC doing then? How many meetings do you have a day? What about when you go home at night? What if we could harness these unused clock cycles for the common
good? This is also called cycle-stealing.
What is cycle stealing?
Cycle-stealing is a grid-related policy-driven service which transparently recruits unused resources from the network for the grid or distributed computing system based on system criteria. When these
grid criteria determine that you are not currently using your PC, your computer starts processing grid tasks. To prevent the grid from using the computer while the user is working on it, and so slowing them down, the cycle-stealing mechanism is intelligent enough to distinguish between a trip to the water cooler and your trip home based on these policies.
How does cycle stealing relate to grid computing?
The term "grid computing" relates to joining geographically and politically independent distributed computing environments, but many business people use the term "grid" for any complex cluster of
computational resources. For the simplicity of our explanation, we will use the term "grid" in the narrower sense now as well, that is in the meaning of distributed computing to be compatible with
the language business people use.
Grid computing virtualizes the processing resources of multiple grid engines. A grid engine is just another name for your computer; it is the object of the cycle-stealing method. Grid engines are
hardware resources with a small grid application running in the background. These grid engines establish themselves as available to a grid server. A grid server is a central management unit which
authenticates the grid engines and distributes work.
When you have a running grid, next you have to supply work for it to process. This is done through client applications. Users run grid-enabled applications from their desktop which submit jobs to the
grid server. Authenticated grid engines then pull this job’s tasks. A grid engine pulls work from the grid server until it is disrupted. If the engine does not successfully return the completed work,
the grid server offers the work to another grid engine.
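The pull loop just described can be pictured with a short sketch. Everything here is hypothetical — the function names, the server object, and the idle policy are invented; real grid products implement their own protocols:

import random
import time

IDLE_THRESHOLD = 0.10   # volunteer only when local CPU load is below 10%
IDLE_WAIT = 300         # seconds of sustained idleness before joining the grid

def local_cpu_load():
    # Stand-in for a real load probe (e.g. psutil.cpu_percent() / 100).
    return random.uniform(0.0, 0.3)

def host_is_idle():
    """The 'water cooler vs. going home' policy: require sustained idleness."""
    start = time.time()
    while time.time() - start < IDLE_WAIT:
        if local_cpu_load() > IDLE_THRESHOLD:
            return False            # the user is back; stay off the grid
        time.sleep(1)
    return True

def engine_loop(server):
    """Pull tasks while the host is idle; work that is not returned
    is re-offered by the grid server to another engine."""
    while True:
        if not host_is_idle():
            time.sleep(30)          # check again later
            continue
        task = server.pull_task()   # authenticated pull from the grid server
        if task is not None:
            server.return_result(task.id, task.run())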
A collection of networked grid engines that work on a cycle-stealing basis with one or more grid servers is what business people often call a grid network.
How large can a grid be?
There is probably no limit to how many engines you can have in a grid. Depending on the application, banks and insurance companies often run grids with 100 - 200 grid engines (dedicated machines) for
their modeling. Companies can have hundreds of cycle-stealing engines in their grids.
How fast can a grid be?
It is not unusual to work with a grid that runs at 500 gigaflops or more. A gigaflop is a measure of one billion floating point operations (instructions) per second.
I am a power user who needs a lot of performance, will my PC appear sluggish?
No. The idea of grid cycle-stealing is that the engine is only used when no one is using the host computer. The engine waits a given period of time and determines that you are not currently running
an intensive job before it volunteers for the grid.
What about grid computing security?
The grid acts only as a service on your local machine. Engines volunteer for the grid; there is no outside intervention to force your PC to participate in the grid.
Local security is addressed by user privileges. Jobs run on the grid engines as guest users, so they have no rights to user information on the machine.
Server security is addressed by user authentication and encryption. Only those users setup for access to the server are able to gain access. It is also possible to encrypt all engine-server
Data security is addressed through the actual data being passed between the grid server and its engines. Each task submitted to a grid engine is only a small unit of work and each grid engine only
witnesses a small percentage of the total execution.
Can I ever notice the grid on my PC?
No. The grid service runs in the background. Users are not prompted with informational or error messages. Also, the grid engine only requires 15 seconds or so to return the PC for local use, so by the time a user presses CTRL-ALT-DELETE and logs in, the grid engine execution should have halted.
Does grid leave anything behind on my machine? Does grid fill my hard drive?
No. The grid has the ability to synchronize each engine with whatever files it needs to execute its jobs. These files are stored in a temporary directory. When jobs are completed the output is
immediately returned to the submitter’s machine before another set of work is pulled from the grid server. Whenever the engine removes itself from the grid, all temporary storage is deleted.
What applications can be used on the Grid?
Applications to be used on a grid must be grid-enabled to be able to take advantage of grid computing. Software must be designed to work with the grid, either through software drivers or scripts.
How Do You Calculate Betting Odds? - Winners Wire
Betting odds express the probability of a certain outcome occurring in a given event. They can be expressed in a variety of ways, such as fractions, decimals, or money lines. Betting odds are used by bookmakers to determine the returns on a bet placed by a gambler. The higher the odds, the less likely the outcome but the larger the payout; the lower the odds, the more likely the outcome but the smaller the payout.
Why Do Betting Odds Matter?
Betting odds are important to gamblers, as they allow them to understand the likelihood of a certain outcome occurring in any given event. They also allow them to calculate their potential winnings
should they choose to place a bet on that outcome.
Types of Betting Odds
Betting odds can be expressed in a variety of ways, including:
• Fractional Odds: These are expressed as a fraction, and represent the amount of money you will win if you place a bet of a given size. For example, if the odds are 2/1, you will win two times the
amount of your bet if you are successful.
• Decimal Odds: These are expressed as a decimal, and represent the amount of money you will win, including your original stake, if you place a bet of a given size. For example, if the odds are
3.00, you will win three times the amount of your bet if you are successful.
• Moneyline Odds: These are expressed as a positive or negative number and represent the amount of money you will win or lose if you place a bet of a given size. For example, if the odds are -150,
you will need to bet $150 to win $100.
How to Calculate Betting Odds
Calculating betting odds can be done in a few simple steps:
• First, decide which type of betting odds you would like to use.
• Then, decide how much you would like to bet.
• Once you have decided on the type of betting odds and the amount you would like to bet, you can then calculate your potential winnings using the following formulas:
For Fractional Odds:
Winning Amount = Stake x (Numerator / Denominator)
For Decimal Odds:
Winning Amount = Stake x Decimal Odds
For Moneyline Odds:
Winning Amount = Stake x (Moneyline Odds / 100) when the odds are positive
Winning Amount = Stake x (100 / |Moneyline Odds|) when the odds are negative
Understanding the Odds
Once you have calculated the betting odds, you will need to understand them in order to make an informed decision about whether or not to place a bet. Generally speaking, the higher the odds, the
less likely it is that the bet will win, and the lower the odds, the more likely it is that the bet will win.
Calculating Probability from Odds
Once you understand the betting odds, you may also want to calculate the probability of a certain outcome occurring. This can be done using the following formula:
Probability = (1 / Decimal Odds) x 100
For example, if the odds are 3.00, the probability of that outcome occurring is 33.3%.
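The formulas above translate directly into code. A small sketch (Python):

# Winnings for each odds format. Decimal odds include the returned stake;
# fractional and moneyline results below are profit only.
def profit_fractional(stake, numerator, denominator):
    return stake * numerator / denominator

def payout_decimal(stake, decimal_odds):
    return stake * decimal_odds          # includes the original stake

def profit_moneyline(stake, odds):
    return stake * (odds / 100 if odds > 0 else 100 / abs(odds))

def implied_probability(decimal_odds):
    return 100 / decimal_odds            # as a percentage

print(profit_fractional(10, 2, 1))   # 20.0 -> 2/1 odds
print(profit_moneyline(150, -150))   # 100.0 -> bet $150 to win $100
print(implied_probability(3.00))     # 33.33...%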
Calculating betting odds is an important part of gambling, as it allows you to determine the potential winnings from a given bet and understand the likelihood of a certain outcome occurring. It is
also possible to calculate the probability of a certain outcome occurring from the betting odds. Understanding betting odds is essential for making informed decisions when gambling.
Python Program to Sort a Dictionary by Value in 3 Ways
In this example, you will learn to sort a dictionary by value in Python. (Another common task is sorting a dictionary by its key, which works in much the same way.)
To sort a dictionary by value in Python we will see three different methods which are as follows.
• Sorting using the sorted() function and a lambda expression
• Sorting using the items() method and a lambda expression
• Sorting using the operator module
Sort a Dictionary Using sorted() function and lambda expression
In this technique, we use Python's built-in sorted() function together with a lambda function. Lambda functions are also known as anonymous functions.
The lambda function extracts the value from each key-value pair so the pairs can be sorted in ascending order.
The sorted() function returns a list of tuples, where each tuple contains a key-value pair of the original dictionary, sorted by value.
Source code
# Sample dictionary
my_dict = {'apple': 10, 'banana': 5, 'cherry': 20, 'orange': 15}
# Sort the dictionary by values in ascending order
sorted_dict = sorted(my_dict.items(), key=lambda x: x[1])
# Print the sorted list of key-value pairs
print(sorted_dict)
Here in the output you can see the key-value pairs sorted by value in ascending order. Note that sorted() returns a list of tuples, not a dictionary:
[('banana', 5), ('apple', 10), ('orange', 15), ('cherry', 20)]
Python program to Sort a Dictionary using items() method and lambda expression
In this method we again use sorted() with the items() method and a lambda function to order the key-value pairs by value in ascending order.
The difference from the first method is that we convert the sorted list of key-value pairs back into a dictionary with dict().
Source code
The source code to sort a dictionary by value in Python using items() and a lambda function is shown below.
my_dict = {'apple': 10, 'banana': 5, 'cherry': 20, 'orange': 15}
# Convert the dictionary into a list of key-value pairs and sort it by values
sorted_dict = sorted(my_dict.items(), key=lambda x: x[1])
# Convert the sorted list back into a dictionary
sorted_dict = dict(sorted_dict)
# Print the sorted dictionary
print(sorted_dict)
The output of the program is given below.
C:\Users\user\tutcoach.com>python program.py
{'banana': 5, 'apple': 10, 'orange': 15, 'cherry': 20}
Python program to Sort a Dictionary Using python operator module
In this example, we will sort the dictionary by value using Python's operator module. To use the operator module, we need to import it with import operator, as in the source code below.
Source code
The itemgetter() function from the operator module extracts the value from each key-value pair, and the sorted() function orders the pairs in ascending order.
import operator
my_dict = {'apple': 10, 'banana': 5, 'cherry': 20, 'orange': 15}
# Sort the dictionary by values in ascending order using the operator module
sorted_dict = dict(sorted(my_dict.items(), key=operator.itemgetter(1)))
# Print the sorted dictionary
print(sorted_dict)
C:\Users\user\tutcoach.com>python program.py
{'banana': 5, 'apple': 10, 'orange': 15, 'cherry': 20}
To sort the dictionary by value in descending order, just reverse the sort order by passing reverse=True to the sorted() function.
Here is the source code to sort a Python dictionary by value in descending order.
Sort a dictionary by value in descending order
import operator
my_dict = {'apple': 10, 'banana': 5, 'cherry': 20, 'orange': 15}
# Sort the dictionary by values in descending order using the operator module
sorted_dict = dict(sorted(my_dict.items(), key=operator.itemgetter(1), reverse=True))
# Print the sorted dictionary
print(sorted_dict)
C:\Users\user\tutcoach.com>python program.py
{'cherry': 20, 'orange': 15, 'apple': 10, 'banana': 5}
Sorting a Python dictionary by value can be done in multiple ways; their relative efficiency is similar:
1. Using sorted() and a lambda expression: simple and easy to understand. It builds a list of key-value tuples and sorts it, with a time complexity of O(n log n).
2. Using items() and a lambda expression, then converting back with dict(): the same O(n log n) sort as the first method, with a small extra cost to rebuild the dictionary. Since Python 3.7, dictionaries preserve insertion order, which is why the rebuilt dictionary stays sorted.
3. Using the operator module: also O(n log n), but operator.itemgetter(1) is typically slightly faster than an equivalent lambda, so it is often preferred for large dictionaries.
Solar Azimuth Angle Calculator & Solar Panels - SolarSena
The solar azimuth angle is a way to identify the position of the sun in the sky. It defines the horizontal position of the sun on the local horizon. Knowing the azimuth angle of the sun is an
essential aspect in deciding the orientation of solar panels.
Solar azimuth angle calculator
Select your date & time of the day, your time zone from UTC and enter your longitude & latitude to calculate the solar azimuth angle.
What is the solar azimuth angle?
The solar azimuth angle defines the horizontal coordinates of the sun relative to the observer. It is defined as the angular distance between the projection of the sun on the imaginary horizontal
plane on which the observer is standing and the reference direction. In solar technology, the reference direction is north.
There are other conventions, but here we will stick with National Renewable Energy Laboratory (NREL) standards. Thus, the azimuth angle is the angle between the north and the sun on the local horizon
with the observer. The angle is positive clockwise and negative counterclockwise—see the figure below.
The solar azimuth angle is the angle between the north and the sun on the local horizon.
As per the convention in the above diagram, the azimuth angle is 90° when the sun is along the east direction and is 180° along the south. When the sun is west to the observer, the azimuth angle is
270° (or −90°).
The azimuth angle varies throughout the day. In the morning, the sun is east; the azimuth angle will be close to 90°. It can be greater or less than 90°; that depends upon your location (latitude and longitude).
As the sun ascends in the sky, the azimuth angle may increase or decrease depending upon your latitude and longitude and the day of the year.
In the evening, during sunset, the azimuth angle will approach toward 270°.
Solar azimuth angle formula
The azimuth angle is calculated using the following formula:
A = cos⁻¹[(sin δ · cos φ − cos δ · sin φ · cos h) / cos α]
Here, A is the azimuth angle, δ is the declination angle, φ is the latitude, h is the hour angle, and α is the solar elevation angle.
The hour angle (h) can be positive (after solar noon) or negative (before solar noon). When h is positive, we have to subtract A from 360°.
You can find your latitude from any standardized online maps, e.g., Google Maps.
How do you find the azimuth angle?
You need to first estimate the declination angle, hour angle, and solar elevation angle to find the azimuth angle.
Quick and simple equations to estimate these variables are as follows:
For the declination angle,
δ = −23.45° · cos[(360/365) · (d + 10)]
Here, d is the number of days since January 1st UTC (00:00 hr).
For the hour angle,
h = 15° · (LST − 12)
Here, LST is the local solar time in hours.
For the solar elevation angle,
α = sin⁻¹(sin δ · sin φ + cos δ · cos φ · cos h)
Once we find all three variables, we can substitute them into the above-mentioned formula to determine the solar azimuth angle.
Let us take an example to make things more clear. Tucson, Arizona, is at 32.22° N latitude. We want to find the solar azimuth angle at 10:00 AM, 12:00 noon, and 2:00 PM on March 3rd.
The solar hour angle at 10:00 AM will be 15°× (10−12) = −30°. Similarly, at 12:00 noon & 2:00 PM will be 0° & 30°.
The number of days from January 1st to March 3rd is 31 + 28 + 2 = 61. Substituting d = 61 days,
δ = −23.45° · cos[(360/365) · 71] ≈ −8.01°
Now, we have δ = −8.01°; φ = 32.22°; and h = −30°, 0°, 30°.
α for h = −30° is calculated as:
α = sin⁻¹[sin(−8.01°) · sin(32.22°) + cos(−8.01°) · cos(32.22°) · cos(−30°)] = sin⁻¹(0.6513) ≈ 40.63°
The elevation angles for the other two timings are 49.77° (12:00 noon) and 40.63° (2:00 PM).
Finally, we can find the azimuth angle as follows:
A = cos⁻¹[(sin(−8.01°) · cos(32.22°) − cos(−8.01°) · sin(32.22°) · cos(−30°)) / cos(40.63°)] = cos⁻¹(−0.758) ≈ 139.27° (10:00 AM)
Similarly, A for the other two timings is 176.92° (12:00 noon) and 139.27° (2:00 PM).
2:00 PM is past solar noon, so A = 360−139.27 = 220.73°.
According to the above calculator, A = 129.40° (10:00 AM), 165.86° (12:00 noon), and 210.57° (2:00 PM).
These angles from the calculator differ from those calculated manually. That is because the equations used in the above calculation are approximated and always give an error of a few degrees, whereas
the calculator uses more rigorous equations.
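The simplified equations above are easy to code up. A sketch (Python; expect the same few-degree error as the manual calculation):

import math

def solar_azimuth(d, lst, latitude):
    """Approximate azimuth in degrees from north.
    d: days since Jan 1, lst: local solar time in hours, latitude in degrees."""
    decl = -23.45 * math.cos(math.radians(360 / 365 * (d + 10)))
    h = 15 * (lst - 12)                                  # hour angle
    phi, dl, hr = (math.radians(x) for x in (latitude, decl, h))
    alpha = math.asin(math.sin(dl) * math.sin(phi)
                      + math.cos(dl) * math.cos(phi) * math.cos(hr))
    cos_a = (math.sin(dl) * math.cos(phi)
             - math.cos(dl) * math.sin(phi) * math.cos(hr)) / math.cos(alpha)
    a = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
    return 360 - a if h > 0 else a

# Tucson (32.22 N) on March 3rd (d = 61):
print(solar_azimuth(61, 10, 32.22))   # ~139.3 deg at 10:00 AM
print(solar_azimuth(61, 14, 32.22))   # ~220.7 deg at 2:00 PM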
Variation in the azimuth angle
The solar azimuth angle changes every single second. In the morning, it will always be around 90°, and in the evening, the angle will approach 270°. From morning to evening, the angle may decrease or
increase depending upon your location and time of the year.
The graph below represents the daily variation of the azimuth angle of Tucson and Sydney on March 3rd. Tucson is a city in Arizona, US, and in the northern hemisphere, while Sydney is in the southern
hemisphere. We can see how dramatic the difference is in the solar azimuth angle for different locations.
Daily variation in the azimuth angle in Tucson and Sydney on March 3rd, 2020
As we see from the previous graph, in Tucson, the azimuth angle increases gradually throughout the day.
On the other side, in Sydney, there is a gradual decrease in the angle, but in the afternoon, we see a spike from 0° to 360°. This rapid shift in the graph does not imply a sudden change in the sun’s
position in the sky. In fact, the sun keeps moving at its normal pace, and the azimuth angle keeps gradually decreasing since 360° is equivalent to 0°.
The solar azimuth angle does not only vary throughout the day for different locations but also changes monthly for the same location.
The graph below depicts the monthly variation in the azimuth angle for Denver, Sydney, Macapa, and Austin at 12:00 noon.
Monthly variation in solar azimuth angle (Denver, Sydney, Macapa, and Austin)
Sydney, which is in the southern hemisphere, has a lower azimuth angle than Austin and Denver, which are in the northern hemisphere. Macapa, which is close to the equator, fits in between the other
curves and has a drastic up and down.
What is the difference between the elevation and azimuth angles?
The elevation and azimuth angles are important in deciding the position of the sun, but both angles are different measurements. Azimuth tells the horizontal coordinates of the sun, while elevation is
about the vertical coordinates. The elevation angle gives us the altitude of the sun in the sky.
The diagram below explains the same.
The azimuth angle is the angle between the north and the sun on the local horizon, while the elevation angle is the vertical angle between the horizon and the sun.
The relation between both is not simple. The graphs below give daily and monthly variations in the angles at Tucson.
Daily variation of azimuth angle and elevation angle
Monthly variation in solar azimuth angle and solar elevation angle
Solar azimuth angle and solar panels
The understanding of the solar azimuth angle is a vital aspect of photovoltaic and thermal design. Solar power production is maximum when solar panels are right in front of the sun.
Since the azimuth angle dictates the horizontal coordinates of the sun, our solar panels must be angled at the azimuth angle to get maximum solar power.
The solar panel angled at the solar azimuth angle
The position of the sun in the sky changes continuously. And it is impossible to synchronize the direction of solar panels with the position of the sun unless you are using a solar tracking system.
Solar tracking is an unaffordable and uncommon option for homeowners and small projects.
Most times, solar panels are permanently fixed in a particular direction. This optimal direction of solar panels is decided by the solar azimuth angle.
If you have carefully looked at one of the preceding monthly variation graphs, you must have noticed a relationship between the solar azimuth angle and location. For Sydney, which lies in the
southern hemisphere, the azimuth angle always remains below 90° at noon over the entire year. Thus, people in Sydney will always see the sun in the northern sky. It is also true for other cities in
the southern hemisphere. That means the sun appears in the north for regions in the southern hemisphere, so the best direction for solar panels there will be north.
With the same reasoning, solar panels in Austin, Tucson, and Denver must be oriented toward the south direction. The azimuth angle always remains more than 90° for these cities at noon; they are in
the northern hemisphere. People in the northern hemisphere will see the sun in the southern sky.
What is the solar azimuth angle?
The solar azimuth angle is the angle between the sun and the reference direction (usually north) with the observer on the local horizon.
What is the difference between the solar azimuth angle and the solar elevation angle?
The solar azimuth angle defines the horizontal coordinates of the sun, whereas the solar elevation angle decides the vertical position of the sun or its altitude.
What is the solar azimuth angle for sunrise and sunset?
The solar azimuth angle for sunrise will be close to 90° and for sunset will be close to 270°.
Developmental Math
This badge demonstrates the earner's ability to solve problems involving basic geometry, including lines, angles, perimeter, area, and circumference, and to solve problems involving geometry including volume, square roots, the Pythagorean Theorem, and triangles.
Demonstration of success on this exam will result in the achievement of the digital badge shown here, available for display on the earner's digital portfolio or profile on CampusEd.
Make the Graph Complete - Algorithms and Data Structures
Make the Graph Complete
Given an undirected graph with v vertices and e edges, you are asked to find all the edges that need to be added to the graph to make it complete.
The first line of the input contains two integers v (1 ≤ v ≤ 500) and e (1 ≤ e ≤ ).
The following e lines contain pairs of integers v1, v2 (1 ≤ v1, v2 ≤ v) representing an edge between v1 and v2.
The program should print all the edges that need to be added to the graph in lexicographical order (from the smallest vertex id to the largest). All the edges need to be printed on separate lines,
while the vertices should be separated by a space.
Note on the sample test: vertex 4 was not initially connected to any of the vertices in the graph, so we should connect it to all the other vertices.
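A direct solution fits in a few lines (Python; the sample graph below is made up, chosen to match the note about vertex 4):

# Mark existing edges, then print every absent pair (a, b) with a < b;
# iterating a first, then b, yields lexicographical order automatically.
def missing_edges(v, edges):
    adjacent = [[False] * (v + 1) for _ in range(v + 1)]
    for a, b in edges:
        adjacent[a][b] = adjacent[b][a] = True
    return [(a, b) for a in range(1, v + 1)
                   for b in range(a + 1, v + 1) if not adjacent[a][b]]

for a, b in missing_edges(4, [(1, 2), (1, 3), (2, 3)]):
    print(a, b)   # prints: 1 4 / 2 4 / 3 4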
A question about limits
Please help evaluate the following expression. It seems that it is 0. But I have no clue how that happens.
(log is based on e. don't know about the constraints about a and b. Let's assume they are not 0s).
$$\lim_{x \rightarrow \infty}\ b\cdot log(e^{ax}+1)-a\cdot (e^{bx}+1)$$
Re: A question about limits
I assume that the second term has a log in front of it as well?
The quickest solution is probably to observe that $$\lim_{x \rightarrow \infty} log (e^{ax} + 1 ) = ax$$, which shows that the limit you care about is zero.
Alternatively, you could start by rewriting the expression as the log of a fraction, interchange the limit and the log and then show that the limit of the fraction is 1:
$$\lim_{x \rightarrow \infty }log \frac{ (e^{ax}+1)^b}{(e^{bx}+1)^a }= log \lim_{x \rightarrow \infty } \frac{ (e^{ax}+1)^b}{(e^{bx}+1)^a }$$
Re: A question about limits
owlpride wrote:I assume that the second term has a log in front of it as well?
The quickest solution is probably to observe that $$\lim_{x \rightarrow \infty} log (e^{ax} + 1 ) = ax$$, which shows that the limit you care about is zero.
Alternatively, you could start by rewriting the expression as the log of a fraction, interchange the limit and the log and then show that the limit of the fraction is 1:
$$\lim_{x \rightarrow \infty }log \frac{ (e^{ax}+1)^b}{(e^{bx}+1)^a }= log \lim_{x \rightarrow \infty } \frac{ (e^{ax}+1)^b}{(e^{bx}+1)^a }$$
Thanks for helping me out again, owlpride. I should have put a big parentheses for these two terms.
The key lies in this thing $$\lim_{x \rightarrow \infty} log (e^{ax} + 1 ) = ax$$. Is this a common rule or something that obvious? That's where I got stuck. Could you also show some intermediate steps?
Re: A question about limits
Hom wrote:
owlpride wrote:I assume that the second term has a log in front of it as well?
The quickest solution is probably to observe that $$\lim_{x \rightarrow \infty} log (e^{ax} + 1 ) = ax$$, which shows that the limit you care about is zero.
Alternatively, you could start by rewriting the expression as the log of a fraction, interchange the limit and the log and then show that the limit of the fraction is 1:
$$\lim_{x \rightarrow \infty }log \frac{ (e^{ax}+1)^b}{(e^{bx}+1)^a }= log \lim_{x \rightarrow \infty } \frac{ (e^{ax}+1)^b}{(e^{bx}+1)^a }$$
Thanks for helping me out again, owlpride. I should have put a big parentheses for these two terms.
The key lies in this thing $$\lim_{x \rightarrow \infty} log (e^{ax} + 1 ) = ax$$. Is this a common rule or something that obvious? That's where I got stuck. Could you also show some intermediate steps?
Sorry, my bad. There is another condition, a > 0 and b > 0. In this case it's obvious, since $$log(e^{ax}+1) = ax + log(1+e^{-ax})$$ and the second term goes to 0 as x grows.
I think that you meant log(e^{ax}+1) ~ ax, not lim log(e^{ax}+1) = ax
load.f dependent on region
8 Apr 2014
2:01 p.m.
I have another question relating to the thermal diffusion problem I am working on. There is a background heat production distributed throughout the volume, but the heat production for any given cell
will depend on which region the cell is in, where region is determined by cell group. I.e., all cells of group 1 are defined to comprise region 1, and they have a heat production that is different
from that in region 2.
So far the examples I've seen have a load that depends only on coordinate. Is there any example in which the load depends on the region?
Many thanks,
New subject: [sfepy-devel] load.f dependent on region
On 04/08/2014 04:01 PM, seismo_phil wrote:
I have another question relating to the thermal diffusion problem I am working on. There is a background heat production distributed throughout the volume, but the heat production for any given
cell will depend on which region the cell is in, where region is determined by cell group. I.e., all cells of group 1 are defined to comprise region 1, and they have a heat production that is
different from that in region 2.
So far the examples I've seen have a load that depends only on coordinate. Is there any example in which the load depends on the region?
You can do it in the exactly same way as you define the diffusion coefficient. The 'load.f' is a material parameter just like 'coef.val', so use a function similar to get_coef().
Best regards, r.
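For illustration only, a rough problem-description fragment in the spirit of Robert's suggestion (the region names, group numbers, coefficient values, and quadrature orders below are invented, and the exact syntax should be checked against the sfepy docs for your version):

# Two regions selected by cell group, each with its own constant heat
# production feeding a volume load term; the diffusion term spans Omega.
regions = {
    'Omega': 'all',
    'Region1': 'cells of group 1',
    'Region2': 'cells of group 2',
}
materials = {
    'coef': ({'val': 2.0},),    # diffusion coefficient
    'heat1': ({'val': 1.5},),   # heat production in region 1
    'heat2': ({'val': 0.5},),   # heat production in region 2
}
equations = {
    'balance': """dw_laplace.2.Omega(coef.val, s, T)
                = dw_volume_lvf.2.Region1(heat1.val, s)
                + dw_volume_lvf.2.Region2(heat2.val, s)""",
}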
Fruit Basket - OpenQuant
You have a basket with an assortment of fruits. Inside this basket, there are $10$ apples, $20$ oranges, and $30$ peaches. You take out fruits one by one and at random. What is the probability that
there will be at least $1$ orange and $1$ peach in the basket after you've taken out all the apples?
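A quick Monte Carlo check of the answer (Python; the simulation is mine, not part of the original question):

import random

# The event holds iff the last apple is drawn before the last orange
# and before the last peach.
def trial():
    basket = ['a'] * 10 + ['o'] * 20 + ['p'] * 30
    random.shuffle(basket)
    last = {fruit: i for i, fruit in enumerate(basket)}  # last position of each
    return last['a'] < last['o'] and last['a'] < last['p']

n = 200_000
print(sum(trial() for _ in range(n)) / n)   # ~0.583, i.e. 7/12 analytically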
Encryption and Hashing Explained
Encryption and Hashing Application
Encryption techniques have been around for thousands of years. Still, the advent of computing devices has meant that ever more complex encryption mechanisms are required to resist brute force attacks
of encoded information. Modern encryption mechanisms revolve around using cryptographic keys and mathematical-based algorithms to encode data resistant to decoding without access to the keys. The
terminology we will use is that the original uncoded information is referred to as plaintext, and the output of the encryption process is referred to as ciphertext.
Encryption Terminology
Hashing is a technique that uses a defined algorithm to generate a value, based on the contents of a piece of information, that can be used to indicate whether the information is subsequently changed. This can be used to
protect messages in transit between a sender and recipient or data at rest in a storage device. Hash values are not secret. They can be recreated by anyone that knows which hashing algorithm to use.
Hashing is used alongside encryption techniques to provide data integrity and non-repudiation checks.
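As a quick illustration of that integrity check (Python's standard hashlib; the messages are invented):

import hashlib

# Anyone who knows the algorithm can recompute the digest, which is
# exactly what makes hashes useful for detecting modification.
message = b"transfer $100 to account 12345"
digest = hashlib.sha256(message).hexdigest()

tampered = b"transfer $900 to account 12345"
print(digest == hashlib.sha256(tampered).hexdigest())  # False -> changed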
This article will explain both encryption and hashing, along with digital signatures that utilize hashing principles. We will look at different forms of encryption and hashing, along with some
standard algorithms.
Encryption and hashing between them can offer multiple benefits:
• Encryption prevents eavesdroppers on an insecure communications channel such as the internet from reading information passing over that channel.
• Encryption prevents malicious actors from accessing or modifying sensitive information on network-connected shared or personal storage devices.
• Encryption and hashing prevent malicious actors from intercepting and altering information passing across an insecure communications channel.
• Hashing prevents malicious actors from modifying sensitive information on network-connected shared or personal storage devices.
• Encryption and hashing provide a level of proof that the sender of the information is who they say they are by authenticating their identity.
• Encryption and hashing provide an assurance to the recipient that the information’s sender cannot subsequently alter or deny sending that information.
Encryption Methods
There are two primary forms of encryption in common usage, symmetric and asymmetric encryption.
• Symmetric encryption is an encoding technique where the encryption and decryption processes use a single shared key. Cryptographic keys are, in essence, strings of bits of a set length, often represented as alphanumeric characters; they can be manually chosen or randomly generated. Symmetric encryption is also known as secret key encryption. The security of communications is dependent on the protection of the encryption key.
protection of the encryption key.
• Asymmetric encryption is an encoding technique where the encryption process uses an openly shared public key. The decryption process uses a second secret key known only to the recipient.
Asymmetric encryption is also known as public-key encryption.
We will now look at these two types of encryption in more detail.
Symmetric Encryption
Symmetric encryption relies on using a mathematically based algorithm that can encrypt and decrypt data using the same secret key, hence the symmetry from which the name is derived. The main
advantage of symmetric encryption is that using a single key is simpler to implement and requires less processing than other more secure encryption forms.
The use of a single shared key necessitates implementing a secure method of sharing the key between the sender and receiver. Symmetric encryption relies on the sender and recipient having access to
the same cryptographic key while ensuring that no other third-party such as an eavesdropper or interceptor has access to that key. This technique will only be secure if the sender and recipient can
securely share the key and guarantee that no other party can access that key.
Traditional approaches relied on the physical transfer of keys between the two parties. Electronic exchange techniques such as the Diffie–Hellman key exchange have been developed to enable the secure
exchange of cryptographic keys over a public network. One limitation of the Diffie–Hellman key exchange is that it does not prevent a malicious actor from pretending to be the intended recipient.
This would allow the attacker to complete the key exchange using their own public and private keys.
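A toy Diffie–Hellman exchange shows the idea in a few lines (deliberately tiny numbers; real deployments use 2048-bit or larger parameters, or elliptic curves):

p, g = 23, 5          # public parameters agreed in the open
a, b = 6, 15          # each party's private value, never transmitted

A = pow(g, a, p)      # sent in the clear
B = pow(g, b, p)      # sent in the clear

# Both sides derive the same shared secret; an eavesdropper sees only
# p, g, A and B, and cannot feasibly recover a or b at real key sizes.
assert pow(B, a, p) == pow(A, b, p) == 2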
Symmetric encryption is straightforward to implement. The sender takes plaintext, uses the secret key to encode the plaintext into ciphertext using an encryption algorithm, and then sends the
ciphertext over a communications channel. The recipient receives the ciphertext and uses the secret key to decode the ciphertext back into plaintext using the same encryption algorithm to access its
The Encryption Algorithm
A cipher function takes the plaintext and breaks it up into pre-set blocks of data. The encryption algorithm then takes each block and performs a mathematically based calculation using the plaintext
block and the secret key to create a ciphertext block. The size of the blocks will depend on the particular encryption algorithm selected. This technique is known as a block cipher for evident reasons.
The implemented symmetric encryption will then concatenate the ciphertext blocks and transmit them as an encrypted message in its simplest form. However, a weakness with symmetric encryption is that
it is deterministic in nature, so that if two or more plaintext blocks are the same, then the corresponding ciphertext blocks will also be the same. This opens up the encrypted message to analysis to
look for patterns and deduce the secret key by guessing what information those repeated blocks may contain. Typically, messages contain header information or standard blocks of text, such as a
disclaimer or copyright statement that an attacker can use to reverse engineer the secret key from the ciphertext blocks. While session keys can minimize the risks of messages containing sufficient
repeated text to allow deduction of the private key, it still remains an avoidable risk to the communications’ security.
Simple block encryption
One solution is to use a plaintext block’s contents to generate a seed value used to alter the next plaintext block’s encryption. Any subsequently repeated plaintext blocks will not generate the same
corresponding ciphertext blocks. By adding a dependency on previous data into the encryption algorithm, effectively chaining the blocks together, any attempt to analyze the encrypted data to deduce
the secret key will become considerably more complex. The initial seed value does not need to be secure; the sender can share this data with the recipient in plaintext as part of the message stream.
Using a randomly generated seed data block that changes with each message offers the best protection against attacks based on the analysis of the encrypted data.
A downside to this approach is that the message’s corruption in transit will prevent decryption of the message from where the corruption occurred to the end of the message. With the simple block
encryption approach, only the block affected by the corruption would be lost. This can place a significant bandwidth overhead if the transmission route is subject to interference.
Cipher block chaining
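A toy sketch of the chaining structure (the "cipher" here is a stand-in XOR with the key so the chaining is visible; a real implementation would use AES or similar):

BLOCK = 8  # bytes

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_encrypt(plaintext, key, iv):
    """Each plaintext block is mixed with the previous ciphertext block,
    so repeated plaintext blocks no longer repeat in the ciphertext."""
    assert len(plaintext) % BLOCK == 0 and len(key) == len(iv) == BLOCK
    prev, out = iv, b""
    for i in range(0, len(plaintext), BLOCK):
        mixed = xor(plaintext[i:i + BLOCK], prev)  # chain in previous block
        ct = xor(mixed, key)                       # stand-in cipher function
        out += ct
        prev = ct
    return out

msg = b"HEADER__" * 3                                 # three identical blocks
print(cbc_encrypt(msg, b"K" * 8, b"IVIVIVIV").hex())  # no repeated blocks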
When generating the final data block, padding data will generally be required unless the plaintext size happens by chance to be an exact multiple block size. A common technique is for the padding to
be the numerical value of the padding’s size to ensure that the decryption process correctly recognizes what part of the block is padding. This ensures that the decryption algorithm does not
incorrectly identify message data as padding data.
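The length-valued padding just described (PKCS#7-style) is simple to implement. A sketch:

def pad(data, block=16):
    """Append n bytes each holding the value n, where n is the pad length."""
    n = block - len(data) % block
    return data + bytes([n]) * n

def unpad(data):
    """The last byte says how many pad bytes to strip -- no ambiguity."""
    return data[:-data[-1]]

assert unpad(pad(b"hello")) == b"hello"
print(pad(b"hello"))   # b'hello' followed by eleven bytes of value 0x0b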
Decoding the ciphertext simply requires breaking the received message into the correct pre-set blocks of encrypted data and performing a reversal of the mathematically based calculation with the
secret key to recreate the plaintext blocks. These blocks are then joined back together to recreate the original plaintext.
Stream Ciphers
One downside of using a block cipher technique is that it does not provide a practical solution where plaintext is generated on a random or pseudo-random basis in real-time and requires encryption
and transmission on an as-required basis. Collating the plaintext data into the pre-set blocks can add significant latency to the data and needs data to be stored in a vulnerable unencrypted state
while waiting for sufficient data to form a block to be available. The solution is to use a cipher feedback technique where data is encrypted as soon as it is available.
This technique basically starts with a randomly generated seed data block. Then as real data becomes available, it is added to the end of the block, and the corresponding number of bits are removed
from the start of the data block. The data block can then be encrypted and transmitted as for the block encryption mode. As new data becomes available, the process is repeated, with the bits in the
data block being shifted to the left as new data is added to the right-hand end. This technique can accommodate the transmission of data of any size. However, the smaller the size of data added each
time a new message is transmitted, the greater the processing overhead due to performing the encryption algorithm processing each time.
Asymmetric Encryption
Asymmetric encryption is an encoding technique where the information sender encrypts the data using the recipient’s public key. Hence, it is also known as public-key encryption. The encrypted data
can then only be decoded by the recipient using a second private key known only to them. This allows the encrypted data to be sent over any open channel where interception by a third party will not
compromise the data as long as the encryption method is sufficiently strong to resist attack. The strength will be dependent both on the encryption algorithm used and the length of the keys.
Asymmetric encryption relies on using a mathematically based algorithm that can encrypt information using one key but requires a second, different key to decrypt the information. The key used for
encryption cannot be used for decryption, hence the asymmetry from which the name is derived. The two keys are mathematically related, the public and private keys being generated as a pair. However,
the nature of the relationship is such that an attacker cannot feasibly derive the private key from the public key.
All parties involved in the communications have a private key known only to them and a public key known to everyone. The sender takes plaintext, uses the recipient’s public key to encode the
plaintext into ciphertext, and then sends the ciphertext over a communications channel. The recipient receives the ciphertext, uses their private key to decode the ciphertext back into plaintext to
access the information it contains.
The advantage of asymmetric encryption is that it does not require the secure sharing of keys. The public encryption key can be openly shared and will not enable the decryption of the data. The
private decryption key is only required by the receiving party and so will never be shared. The main disadvantage of asymmetric encryption is the encryption and decryption processes are relatively
slow and can cause significant latency issues. Another disadvantage of asymmetric encryption is that once a private key is discovered, the confidentiality of all communications past and present using
that key is compromised. However, it is straightforward for the individual to generate a new public/private key pair and then publish the new public key for any subsequent communications. This is far
simpler than sharing a new symmetric encryption key.
Asymmetric Key Generation
Typically, the public and private keys are long numbers generated from a pair of shorter prime numbers. In a simplified example based on the key generation for the RSA algorithm,
two initial prime numbers A and B, are selected. The public key comprises two parts. The first part of the public key C1 is simply calculated from A*B. The second part of the public key C2 is
selected using the criterion that C2 and the result of (A−1)*(B−1) are relatively prime; that is, the two numbers share no common factors. In practice, C2 is often chosen to be a small prime such as 65537. Typically, the RSA algorithm uses either 512-bit or 1024-bit keys.
Once a value for C2 is chosen, the private key D can then be calculated as D = C2^(−1) MOD ((A−1)*(B−1)), the modular multiplicative inverse of C2. The public key C (C1 and C2) can then be published, and the private key D stored securely on the
device that will perform the decryption process. The original numbers A and B are no longer required, but they must never be disclosed or retained as their compromise would allow an attacker to
deduce the private key D's value. Given the mathematical relationship between the keys, it takes far less processing power to derive the private key corresponding to a known public key (by factoring C1) than to brute force a symmetric key of the same length. This is the reason why key lengths for asymmetric encryption are significantly longer than for symmetric encryption.
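A toy run of this key generation with tiny primes (Python 3.8+ for the modular inverse; real keys use primes hundreds of digits long):

from math import gcd

A, B = 61, 53                  # the two secret primes
C1 = A * B                     # first part of the public key: 3233
phi = (A - 1) * (B - 1)        # 3120
C2 = 17                        # second part: relatively prime to phi
assert gcd(C2, phi) == 1
D = pow(C2, -1, phi)           # private key: modular inverse, here 2753

message = 42
ciphertext = pow(message, C2, C1)         # ciphertext = plaintext^C2 MOD C1
assert pow(ciphertext, D, C1) == message  # plaintext = ciphertext^D MOD C1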
Typically, a 512-bit asymmetric encryption key offers a similar resistance to brute force attack as a 64-bit symmetric encryption key. The asymmetric encryption key would need to be over 2,304 bits
in length to achieve an equivalent resistance as a 128-bit symmetric encryption key. However, the main vulnerability for asymmetric encryption keys is weaknesses in the algorithm used to create the
key pair that would allow the private key to be derived from the public key using a mathematical calculation rather than a brute force attack. Therefore, asymmetric encryption protection relies on
the integrity of the algorithm and the absence of any weaknesses.
The Encryption Algorithm
As for symmetric encryption, a cipher function using a block cipher technique takes the plaintext and breaks it up into pre-set blocks of data. The encryption algorithm then takes each block and
performs a mathematically based calculation using the plaintext block and the recipient’s public key to create a ciphertext block. The size of the blocks will depend on the particular encryption
algorithm selected, specifically the key length. In the case of the RSA algorithm, the block size in bytes is simply the integer result of dividing the key length in bits less one by eight. That is,
for a 512-bit key length, the block size is INT ((512 – 1) / 8) bytes.
In the case of RSA, the encryption algorithm is simply calculated from:
ciphertext = plaintext^C2 MOD C1
where C1 and C2 are the two parts of the public key
The ciphertext block is the remainder when the plaintext block is converted to a numerical value, raised to the power of the second part of the recipient’s public key, and then divided by the first
part of the recipient’s public key. A key difference between asymmetric and symmetric encryption is that the ciphertext block’s length is the same as the length of the plaintext block for symmetric
encryption. For asymmetric encryption, the ciphertext block's length is the same as the length of the encryption key modulus and so will be longer than the plaintext block from which it is derived.
The implemented asymmetric encryption will then concatenate the ciphertext blocks together and transmit them as an encrypted message. Padding is added to the plaintext to ensure the message is an
exact multiple of the block size.
Asymmetric encryption
The security of asymmetric encryption is such that it is unnecessary to introduce dependencies between a ciphertext block and the preceding block. The advantage is that a ciphertext block corrupted
in transmission will not prevent decryption of subsequent ciphertext blocks, making this technique more resilient than symmetric encryption that uses a cipher block chaining technique.
In this RSA example, the ciphertext can be decrypted by the recipient simply using the following calculation:
plaintext = ciphertext^D MOD C1
where C1 is the first part of the public key and D is the recipient’s private key
For symmetric encryption, by contrast, the security of the encrypted information remains directly dependent on the sender's and recipient's ability to share the secret key and keep it secure while in their possession.
Brute Force
Without access to the secret key, an eavesdropper or interceptor can only access the information if they can steal or deduce the key. One attack option is to try every possible key until the correct
one is found, known as a brute force attack. Modern processing facilities can undertake millions of operations per second, but using a secret key of 64 bits or greater would make the average time
required sufficiently long to guarantee security.
For example, a 32-bit key has 232 possible values, which equals 4,294,967,296. If an attacker has access to sophisticated processing facilities that can test 1000 keys per second, it will take around
50 days to try every key. The law of probabilities states that, on average, it would take 25 days to find the correct key. By changing to use a 64-bit key, this figure changes to an average of around
290,000,000 years.
Dictionary Attack
Another attack option is to guess the secret key based on the assumption that it was logically derived rather than randomly generated. Also known as a dictionary attack, the secret key's security is now down to its complexity. A key based on words found in a dictionary would be simple to break within a reasonable period, rendering communications insecure.
The mathematical encryption algorithms themselves may also contain weaknesses and flaws that an attacker could exploit to either deduce the keys used or directly decode the encrypted information
without the need for the key. All popular encryption algorithms are subject to extensive cryptoanalysis by users seeking to identify and resolve issues and attackers seeking to find a previously
unidentified flaw. Attackers, in this case, will be highly organized and sophisticated nation-state actors such as intelligence agencies. Finding and exploiting a flaw in an encryption algorithm used
worldwide offers a significant incentive to invest vast sums of time and large budgets.
For example, the original Wired Equivalent Privacy (WEP) algorithm used for Wi-Fi communications should not be used due to the number of flaws that have been identified that render this algorithm
insecure. The nature of the weaknesses meant that an attacker could deduce the encryption key within a few minutes by simply monitoring the encrypted traffic.
The simplest method of compromising encryption is to steal a copy of any cryptographic keys from one or more of the parties involved in the secure communications exchanges. Malware introduced onto a
computer used for encryption processing can steal key information from the repository where keys are stored, from dynamic memory during the encryption process, or by monitoring the user as they enter key material into the computer. This requires significantly less knowledge, resources, and time than attempting to undertake crypto-analysis on message data or perform a brute force attack. Users undertaking
encryption to protect sensitive information should be carrying out regular checks of devices, including anti-virus scans and log file reviews for suspicious security events.
Application and Algorithm Weaknesses
The last attack option is to use known flaws or weaknesses in the encryption algorithm to deduce the secret key based upon analysis of the encrypted data. The majority of software applications
contain weaknesses, flaws, and vulnerabilities that can be exploited once identified. This is no different for applications used to implement encryption and decryption processes.
Several popular encryption algorithms have been found to contain defects, either in the underlying mathematical algorithms or in the software that implements the algorithms. Using knowledge of which
algorithm has been used, it may be possible to deduce the secret key within a reasonable period. Common faults include errors in the key generation that restrict the total possible number of keys
that can be generated for a given key length. This reduces the time necessary to undertake a brute force attack should the attacker be aware of the flaw and restrict the guessing process to use only
that subset of keys that the key generation algorithm will allow. Such defects are rare but not impossible; the Sweet32 attack identified cipher design weaknesses that could be exploited, and an interesting article on how this was achieved can be found here.
Session Keys
We have seen how symmetric encryption is vulnerable to unauthorized decoding by either analyzing the encrypted data to deduce the secret key or through an attack on the sender or recipient to steal
the private key. Using the same key for all communications between the sender and recipient means all communications will be compromised if the key is compromised. A standard solution is to create a
new secret key for every communications session established between the sender and recipient. If a key is then compromised, only those communications undertaken during the session in which that key was in use are at risk of unauthorized access. This concept is known as perfect forward secrecy: a compromised key only allows the attacker to see communications protected by that key. This solution places the emphasis on making the method of sharing the key at the start of each session sufficiently secure.
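One common way to establish a fresh secret per session is an ephemeral Diffie-Hellman exchange. The sketch below uses the third-party Python cryptography package with X25519, one of several suitable key agreement schemes; the info label is an arbitrary placeholder:

    # Ephemeral key agreement: both parties generate throwaway key pairs per session
    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF
    from cryptography.hazmat.primitives import hashes

    sender_eph = X25519PrivateKey.generate()      # discarded when the session ends
    recipient_eph = X25519PrivateKey.generate()

    # Each side combines its own private key with the other side's public key
    shared_secret = sender_eph.exchange(recipient_eph.public_key())
    assert shared_secret == recipient_eph.exchange(sender_eph.public_key())

    # Derive the actual session key from the raw shared secret
    session_key = HKDF(
        algorithm=hashes.SHA256(), length=32, salt=None, info=b"session v1",
    ).derive(shared_secret)

    # Because the ephemeral private keys are deleted after use, a later compromise
    # of long-term keys cannot recover this session's traffic: forward secrecy.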
Practical Implementation of Encryption
The mathematical algorithms for asymmetric encryption and decryption are different and significantly more complex than the mathematical algorithms required for symmetric encryption/decryption. As a
result, asymmetric encryption requires more processing power to achieve the same protection level as symmetric encryption. Also, where a sender needs to send a message to multiple recipients, using
asymmetric encryption would require the sender to separately encode the data sent to each recipient using their public keys. With large numbers of recipients, this places an enormous processing
overhead on the sender.
In practice, secure information communications protocols utilize both asymmetric and symmetric encryption. They combine the advantages of each to deliver an optimum balance of security and
throughput. While symmetric encryption with lower processing and latency overheads is used for information encryption, asymmetric encryption is used to securely share the secret symmetric encryption
key used for the communications session. This delivers the required secure key sharing for all parties involved in the communications. Where there are many recipients, the sender is only required to
separately encode the secret key for the symmetric encryption sent to each recipient using their public keys. The symmetric encryption is then common for all recipients.
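A minimal sketch of this hybrid pattern, again assuming the Python cryptography package: RSA-OAEP wraps a randomly generated AES key, and that key protects the bulk message.

    import os
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    recipient = rsa.generate_private_key(public_exponent=65537, key_size=2048)

    # Symmetric part: fast bulk encryption of the message with a one-off AES key
    session_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    ciphertext = AESGCM(session_key).encrypt(nonce, b"the bulk message", None)

    # Asymmetric part: only the short session key is encrypted per recipient
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = recipient.public_key().encrypt(session_key, oaep)

    # Recipient unwraps the session key, then decrypts the shared ciphertext
    recovered_key = recipient.decrypt(wrapped_key, oaep)
    plaintext = AESGCM(recovered_key).decrypt(nonce, ciphertext, None)
    assert plaintext == b"the bulk message"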
Where encryption is used for protecting information at rest in local storage, such as whole disk encryption, there is no requirement to share the encryption key. In this case, the key, typically
based on a memorable password, is known only to the storage device user. Here, symmetric encryption is utilized to provide adequate protection with minimum access latency. For example, the BitLocker application, which Microsoft has integrated into the currently supported versions of the Windows operating system, uses the AES symmetric encryption algorithm with options for an efficient 128-bit key or a more secure but slower 256-bit key.
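For at-rest protection the symmetric key is typically derived from the user’s password rather than shared. A sketch using PBKDF2, one of several common derivation functions, again assuming the Python cryptography package; the password and iteration count are illustrative:

    import os
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    password = b"correct horse battery staple"   # known only to the device user
    salt = os.urandom(16)                        # stored alongside the ciphertext

    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    disk_key = kdf.derive(password)              # 256-bit AES key from the password

    nonce = os.urandom(12)
    encrypted = AESGCM(disk_key).encrypt(nonce, b"file contents", None)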
The Dark Side of Encryption
Encryption algorithms are not only used to protect the confidentiality of information. Ransomware uses symmetric encryption to encode information on an infected device, preventing that device’s authorized user from accessing the data. Typically, the malware that implements the ransomware function will employ a key stored on a remote server under the control of the attacker responsible for distributing the malware. Following a ransom payment, the attacker may provide the victim with the key to decode and recover their information. If the victim is lucky and the attacker is lax, the encryption key may be hard-coded as part of the malware, and analysis of the software can uncover the key without resorting to paying the ransom. If the victim is unlucky and the attacker is callous, the encryption key may be randomly generated and not stored, meaning that there will be no practical method of decoding the data, even if the ransom is paid.
Analysis of the ‘Ransom Warrior’ ransomware identified that there were 1,000 keys hardcoded in the software. When allowed to execute, this malware then chose one of these keys at random. This allowed
security researchers to develop a decoding utility that simply tried each of the uncovered keys in turn until the correct key was found.
Analysis of the ‘WannaCry’ ransomware identified that the prime numbers used to generate the encryption key were not cleared from memory after the encryption key was generated, allowing the key to be
recreated by inspection of the device’s memory.
Of course, the simple solution to recover from a ransomware attack is to restore the infected device’s contents from a backup taken from before the infection occurred. Even if the malware is
proficiently written to retain and store the encryption key on the attacker’s remote server, there is no guarantee that they will release that key after the ransom is paid.
Hashing Functions
Hashing functions are used to create a unique fingerprint of a message that can be used to identify if that message is subsequently altered. This provides assurance of the integrity of the
information contained in the message, preventing an attacker from intercepting the message, changing the content, and then forwarding the message to the recipient. Hashing functions work by using a
mathematical algorithm that takes the message’s contents and generates a hash code representing the content.
Hashing functions can be used for encrypted communications to provide additional assurance. Still, they are equally helpful for unencrypted communications where the information shared is not
sensitive, but it must be correct. An example is downloading software, either in the form of a complete application or an update, particularly security patches. It is imperative that the downloaded
software is not intercepted and altered in transit by an attacker looking to introduce malware into the software. Malware successfully introduced into a security patch is a sure-fire method of
infecting a target computer. It avoids the majority of security controls intended to block such an attack vector. By publishing a hash code for the software alongside the download, the original software provider enables users to verify that the downloaded software has not been altered: regenerate a hash code for the downloaded file and cross-check it against the published hash code.
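In practice the check is a few lines. This sketch hashes a downloaded file in chunks using Python’s standard hashlib and compares the result against the published hash code; the file name and expected digest are placeholders:

    import hashlib

    EXPECTED = "d2a8..."  # hypothetical digest copied from the provider's website

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):  # hash in 64 KiB chunks
                h.update(chunk)
        return h.hexdigest()

    if sha256_of("security-patch.bin") != EXPECTED:
        raise SystemExit("Download altered or corrupted: do not install")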
A hashing function takes the message and breaks it up into pre-set blocks of data. The hashing algorithm then takes each block and performs a mathematically based calculation using the message block
to create a hash code. The size of the blocks will depend on the particular hashing algorithm selected. Like symmetric encryption techniques, the hashing function adds a dependency on previous data
into the hashing algorithm, chaining the blocks together to generate a single hash code for the entire chain. This technique ensures that any message block changes will cascade through to the final
calculated hash code. The hashing algorithm will define the initial seed value to ensure that different users generate identical hash codes for the same message. When generating the final message
block, padding data will usually be required unless the message size happens by chance to be an exact multiple of the block size.
[Figure: Hashing function schematic]
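The chaining idea can be sketched with a toy construction in which each block’s output feeds the next block’s calculation. The block size, seed, and padding below are simplified stand-ins for what real algorithms specify precisely:

    import hashlib

    BLOCK = 64                 # assumed block size in bytes for this toy sketch
    SEED = b"\x00" * 32        # fixed initial value so all users agree on the result

    def toy_chained_hash(message: bytes) -> bytes:
        # Simplified padding: append 0x80 then zeros up to a whole block
        padded = message + b"\x80" + b"\x00" * (-(len(message) + 1) % BLOCK)
        state = SEED
        for i in range(0, len(padded), BLOCK):
            # Each step mixes the previous state with the current block,
            # so a change in any block cascades into the final hash code
            state = hashlib.sha256(state + padded[i:i + BLOCK]).digest()
        return state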
One limitation of hashing algorithms is that, because the hash code has a fixed size, the longer the original message, the more likely it is that the hash code will not be unique. For very long messages, in the order of 2^64 bits or more, an attacker can alter the message to generate the same hash code relatively simply. This is because there are potentially infinite different messages but only 2^L possible hash codes, where L is the length in bits of the hash code. The question then becomes how quickly an attacker can subtly change their altered message until they create a message with an identical hash code to the original message. A common approach to guard against this possibility is to use a secret key shared between the sender and recipient as the first message block passed to the hashing algorithm.
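The standard realization of this keyed approach is an HMAC, which Python ships in its standard library; the shared key below is a placeholder:

    import hashlib
    import hmac

    shared_key = b"pre-shared secret"            # hypothetical shared secret key
    message = b"transfer 100 units to account 42"

    tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

    # The recipient recomputes the tag; constant-time comparison avoids
    # leaking information through timing differences
    received_tag = tag
    assert hmac.compare_digest(tag, received_tag)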
There may also be a situation where the original message’s creator seeks to create an altered version with the same hash code for nefarious purposes, such as changing a contract’s terms after
acceptance by the recipient. Creating two different messages and adjusting their content until they produce identical hash codes will take significantly less time than attempting to change the
altered message to match the original fixed message. This can be demonstrated using the analogy of the birthday paradox. Given a group of people in a room, the probability of any two people sharing
the same birthday is significantly greater than the likelihood of one person sharing the same birthday as a fixed date.
A simple solution would be for the recipient to request two hash codes be produced using two different hashing algorithms. This renders the likelihood of an altered message producing both hash codes
correctly improbable. An alternative is the use of a digital signature.
Digital Signatures
A digital signature is a technique based on the hashing function that can authenticate a message, providing assurance that it was sent by the person claiming to have sent it and assuring its integrity. The signature is generated by first hashing the message’s contents and then encoding the resulting hash code with the sender’s private key from their asymmetric encryption key pair. This fulfills two functions. It assures the message’s integrity, as any change to the message content would invalidate the digital signature. It also confirms the identity of the sender as long as their private key remains secure. This technique prevents an attacker from attempting to deceive the recipient by intercepting and changing a message while in transit from the sender to the recipient. It also prevents an attacker from trying to trick the recipient into thinking a message sent by the attacker is from the sender. As previously covered in the section on asymmetric encryption, all parties involved in the communications have a private key known only to them and a public key known to everyone.
The sender takes plaintext and uses the recipient’s public key to encode the plaintext into ciphertext. The sender then uses a hashing algorithm to create a hash code for the ciphertext and uses
their private key to create a digital signature based on the hash code. The ciphertext and digital signature are then sent to the recipient over the communications channel. The recipient receives the
ciphertext and digital signature and uses the sender’s public key to verify that the received digital signature and ciphertext are valid by recreating the hash code for the received ciphertext. The
recipient can then use their private key to decode the ciphertext back into plaintext to access the information it contains. In this process, both the sender and recipient must have a public and
private key pair. Microsoft has produced a more detailed guide for cryptographic digital signatures here.
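A sketch of the sign-and-verify steps, assuming the Python cryptography package and RSA-PSS as the signature padding; the message below stands in for the ciphertext described above:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    sender = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    ciphertext = b"ciphertext produced in the previous step"

    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)

    # Sender: hash the ciphertext and sign the hash with their private key
    signature = sender.sign(ciphertext, pss, hashes.SHA256())

    # Recipient: verify with the sender's public key; any alteration fails
    try:
        sender.public_key().verify(signature, ciphertext, pss, hashes.SHA256())
    except InvalidSignature:
        raise SystemExit("Message altered or not from the claimed sender")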
Digital signatures are based on a hash code of the ciphertext rather than the message itself. This is significantly more efficient than using asymmetric encryption techniques to digitally sign the
entire ciphertext message. Digital signatures can be used to authenticate multiple parties’ agreement to the contents of a message if it represents a contract or similar legal agreement. A copy of
the message is shared with all parties who each digitally sign the message by generating a hash code and using their private key to create their signature. These signatures can then be shared between
all parties, where they can be validated using the public keys for the parties. This provides assurance that the digitally signed message is identical for all parties and that each has identified
themselves using their private key. This prevents any attempt by any party to either alter the agreed message or deny signing the message.
This solution’s security depends on all parties keeping their private keys secure and on using a hashing algorithm that is sufficiently secure that the message cannot be altered without changing its hash code. Given that the typical application of digital signatures is for contractual agreements that may remain valid for many years, the asymmetric key generation algorithm and hashing algorithm must be sufficiently robust to resist brute force attack or cryptanalysis techniques for the duration of the contract.
Popular Algorithms
Symmetric Encryption Algorithms
The DES (Data Encryption Standard) algorithm is one of the oldest algorithms still in fairly common usage. It was created by IBM to protect US federal information and entered usage in the 1970s. It
went on to be used in versions 1.0 and 1.1 of the TLS (Transport Layer Security) protocol. DES uses a 56-bit encryption key, operating on 64-bit data blocks and producing 64-bit ciphertext blocks.
The relatively short key length means that DES is no longer considered sufficiently secure and has been superseded by the 3DES algorithm and, in most applications, replaced by the AES algorithm.
The 3DES (Triple Data Encryption Standard) algorithm is an updated version of DES that entered usage in the 1990s. It works by effectively encrypting each data block three times using the DES algorithm with three 56-bit keys to make brute force cracking more difficult relative to the DES algorithm. While the three 56-bit keys, in theory, represent a total 168-bit key length, commonalities in the repeated algorithms reduce the effective strength to being broadly equivalent to a 112-bit key. Following the identification of vulnerabilities in the 3DES algorithm, its use is being deprecated. While still used in hardware encryption products, it is no longer used in version 1.3 of the TLS protocol, and its use is expected to formally end in the 2020s.
The AES (Advanced Encryption Standard) algorithm has been the standard symmetric encryption algorithm in everyday use worldwide since the early 2000s. The main difference from DES is that the algorithm can accommodate a range of key lengths. The most common key length is 128 bits, which delivers the most efficient encryption speeds, but 192-bit and 256-bit keys are commonly used to provide more robust security. The block size for the plaintext and ciphertext is fixed at 128 bits regardless of key size, but the number of processing rounds undertaken by the algorithm increases with key length. This enables the balance of processing overhead and security protection to be tuned to the specific application the algorithm is used for.
The Blowfish algorithm is an open-source replacement for DES developed by Bruce Schneier, using the same 64-bit block size. Its simple block cipher design delivers very fast encryption speeds, which, along with its license-free availability, have made its use popular in eCommerce applications.
The Twofish algorithm was developed by a team led by Bruce Schneier as a successor to the Blowfish algorithm. The main difference is that it accommodates keys of up to 256 bits in length while maintaining very fast encryption speeds.
Asymmetric Encryption Algorithms
The RSA algorithm is named after its developers, Ron Rivest, Adi Shamir, and Leonard Adleman. Created in the 1970s, it is one of the oldest asymmetric encryption algorithms still in common usage. Its longevity is due to the principle of using two large random prime numbers to generate keys. The RSA algorithm can accommodate a range of key lengths, including 768-bit,
1024-bit, 2048-bit, 4096-bit, and onwards. A study in 2010 concluded that breaking a 768-bit RSA key would require more than half a million hours of processing time. The current standard key length
is 2048 bits. RSA can be found in a broad range of applications, including SSL/TLS certificates, message encryption, and crypto-currencies.
Stream Cipher Encryption Algorithms
The RC4 (Rivest Cipher version 4) algorithm is named after its developer Ron Rivest, a member of the team behind RSA. The RC4 algorithm was created in the 1980s and was designed to operate on a data stream on a byte-by-byte basis. It gained almost universal use for stream cipher applications due to its rapid encryption speeds, appearing in Secure Socket Layer (SSL), Transport Layer Security (TLS), and IEEE 802.11 wireless networks, although statistical biases later identified in its output have since led to its prohibition in TLS. The RC4 algorithm is flexible in that it can use either a 64-bit or a 128-bit key length.
Hashing Algorithms
The MD5 (Message Digest version 5) algorithm was developed by Ron Rivest, a member of the team behind RSA. This algorithm produces a 128-bit message digest (another name for a hash code) by applying a block algorithm technique, dividing the message into 512-bit blocks and then mathematically operating on 32-bit sub-blocks. While originally intended to provide checksums for application integrity checks, it became popular for hashing functions. However, the short hash code length means that this algorithm is susceptible to hash collision weaknesses, where two different messages can produce the same hash code. This has led to the replacement of MD5 with the more secure SHA family of algorithms.
The SHA-1 (Secure Hash Algorithm version 1) is based on the MD5 algorithm but generates a 160-bit hash code to reduce hash collision weakness. It was developed in the 1990s by the US National Security Agency (NSA) but was replaced in the 2010s following the identification of cryptanalytic weaknesses that made practical collision attacks feasible.
The SHA-2 (Secure Hash Algorithm version 2) has been developed by the US NSA as the replacement for SHA-1 and covers a family of hashing function algorithms of varying hash code length. The most
commonly used variants are SHA-256, SHA-384, and SHA-512. The number in the hashing function name equates to the size in bits of the generated hash code.
The SHA-3 (Secure Hash Algorithm version 3) family was developed outside of the US NSA and is built on a different internal construction from SHA-2, providing a ready alternative should weaknesses be identified in SHA-2. The variants all produce the same hash code length options as the SHA-2 family but use different internal algorithms to generate the hash codes. While not yet in common usage, deprecation of SHA-2 in favor of SHA-3 would be expected if practical exploits against SHA-2 ever become available.
Future Trends
The processing power available to nation-states and organized criminal syndicates continues to grow. There will come a point where the time required to brute force the encryption techniques available is sufficiently short to render these techniques insecure. Traditionally, encryption techniques have evolved by using ever-longer keys and more robust algorithms, but this brings an additional processing overhead that adversely affects latency.
Going forward, quantum cryptography, or quantum key distribution (QKD), is seen as a potential breakthrough solution that could deliver unbreakable encryption. It relies on a fundamental principle of
quantum mechanics in that observing an elementary particle’s state will change that particle’s state. This means, in essence, that it is impossible to eavesdrop on a message without detection. For
further reading, QuantumXC explains Quantum Cryptography on their website, and they have an interesting article published in Forbes.
Another interesting development is the concept of Honey Encryption, where an attacker attempting a brute force attack is presented with believable but false decoy data for each incorrect key tried. Any attempt to use such decoy results can trigger actions such as raising the alarm, blocking logical access, or other control measures that impede the brute force attack. McAfee has produced a blog post that provides more information here. A more detailed explanation can be found in this research paper covering Honey Encryption, Security Beyond the Brute-Force Bound.
Cisco basic explanation of encryption algorithms: What Is Encryption? Explanation and Types - Cisco
Cloudflare basic explanation of encryption algorithms: What is encryption? | Types of encryption | Cloudflare UK
Electronic Frontier Foundation overview of how asymmetric encryption works
UK’s Information Commissioner’s Office practical guidance for implementing encryption and links to further reading: Encryption scenarios | ICO