| content | meta |
|---|---|
How to solve an implicit differential equation numerically?
I tried Mathematica for this, but didn't see how to do it. Is it possible to solve an equation of the following kind?
diff(R(t),t) == C1*(C2 - C3*1/R(t))*(1/R(t) + 1/sqrt(C4*t))
where t is a variable, R(t) is a function of t and C1 to C4 are constants
Any help would be appreciated.
1 Answer
One way to approach this differential equation numerically is:
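The answer's code is not preserved in this capture. A minimal sketch of one numerical approach, written here with a classical RK4 integrator in plain Python (the constant values, the initial condition, and the time window below are placeholder assumptions, not values from the question; in Sage itself, `desolve_rk4` offers a similar approach):

```python
import math

# Placeholder constants and initial condition (assumed for illustration)
C1, C2, C3, C4 = 1.0, 2.0, 1.0, 1.0

def f(t, R):
    # dR/dt = C1*(C2 - C3/R)*(1/R + 1/sqrt(C4*t))
    return C1 * (C2 - C3 / R) * (1.0 / R + 1.0 / math.sqrt(C4 * t))

def rk4(f, t0, R0, t_end, n_steps):
    """Classical 4th-order Runge-Kutta integration of dR/dt = f(t, R)."""
    h = (t_end - t0) / n_steps
    t, R = t0, R0
    for _ in range(n_steps):
        k1 = f(t, R)
        k2 = f(t + h / 2, R + h * k1 / 2)
        k3 = f(t + h / 2, R + h * k2 / 2)
        k4 = f(t + h, R + h * k3)
        R += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return R

# Start slightly after t = 0 to avoid the 1/sqrt(t) singularity
R_final = rk4(f, 0.01, 1.0, 10.0, 10000)
print(R_final)  # R grows monotonically here, since C2 - C3/R stays positive
```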
From a local installation of Sage, you can also use maxima to launch a window that contains the direction field and allows you to click in the window to indicate your initial condition.
Finally, for an option outside of Sage that is very nice, I also recommend using pplane and dfield: http://math.rice.edu/~dfield/dfpp.html
Great, that works! Thank you for the competent answer!
clenz (2012-11-07 04:27:25 +0100)
|
{"url":"https://ask.sagemath.org/question/9506/how-to-solve-an-implicit-differential-equation-numerically/?sort=votes","timestamp":"2024-11-11T04:47:57Z","content_type":"application/xhtml+xml","content_length":"55684","record_id":"<urn:uuid:3ecf76a1-46e5-422a-80a5-5c5908950002>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00053.warc.gz"}
|
question Archives : JoJo Teacher - Korean mathematics
[태그:] question
Can you factorize? – Korean math question Hello, I’m Jojo, a math teacher in South Korea. Today, I have a factorization question. It’s a problem that middle school students learn. Would you like to give it a try? [Can you factorize?] Factorize the given expression, please. Take your time to solve it. There are various…
|
{"url":"https://www.jojoteacher.com/tag/question/","timestamp":"2024-11-15T02:26:07Z","content_type":"text/html","content_length":"50355","record_id":"<urn:uuid:a007fb36-b240-4483-962a-f6a58f663cb1>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00849.warc.gz"}
|
[CH] Switzerland
Université de Genève / University of Geneva [UNIGE]
Name University of Geneva
Name (original vn) Université de Genève
Acronym UNIGE
Country Switzerland
Address 24 quai Ernest Ansermet, CH-1211 Geneva, Switzerland
Type University (and its Library)
Status Active
ROR id (link) https://ror.org/01swzsf04
Crossref Org ID (link) No Crossref Org ID found
ROR information
ROR ID (link) https://ror.org/01swzsf04
Primary Name University of Geneva
Alternative Names Schola Genevensis, Université de Genève, Università di Ginevra
Country Switzerland
Website(s) https://www.unige.ch/
Funder Registry instances associated to this Organization:
31 Publications associated to this Organization.
Fellows affiliated to this Organization
Currently active
• Mariño, Marcos
• Sonner, Julian
Support history
List of the subsidies (in one form or another) which
SciPost has received from this Organization. Click on a row
to see more details.
Type Amount Date
Sponsorship Agreement €4300 2021-01-01 until 2025-12-31
Total support obtained: €4300
Balance of SciPost expenditures versus support received
The following Expenditures (University of Geneva) table compiles the expenditures by SciPost to publish all papers which are associated to University of Geneva, weighted by this Organization's PubFracs.
Help! What do these terms mean?
• Associated Publications: An Organization's Associated Publications is the set of papers in which the Organization (or any of its children) is mentioned in author affiliations, or in the acknowledgements as grant-giver or funder.
• Number of Associated Publications (NAP): Number of Associated Publications, compiled (depending on context) for a given year or over many years, for a specific Journal or for many, etc.
• Publication Fraction (PubFrac): A fraction of a unit representing an Organization's "weight" for a given Publication. The weight is given by the following simple algorithm: first, the unit is split equally among each of the authors; then, for each author, their part is split equally among their affiliations; finally, the author parts are binned per Organization. By construction, any individual paper's PubFracs sum up to 1.
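The PubFrac algorithm above can be sketched in a few lines of Python (the paper, author names, and affiliations below are hypothetical, made up for illustration):

```python
from collections import defaultdict

def pubfracs(affiliations):
    """Per-organization PubFracs for a single paper.

    `affiliations` maps each author to their list of organizations.
    """
    fracs = defaultdict(float)
    author_share = 1.0 / len(affiliations)    # unit split equally among authors
    for orgs in affiliations.values():
        for org in orgs:                      # each author's part split among affiliations
            fracs[org] += author_share / len(orgs)
    return dict(fracs)

# Hypothetical two-author paper
paper = {"Author A": ["UNIGE"], "Author B": ["UNIGE", "Other Org"]}
print(pubfracs(paper))  # {'UNIGE': 0.75, 'Other Org': 0.25}
```

By construction the returned fractions always sum to 1 for a single paper.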
• Expenditures: We use the term Expenditures to represent the sum of all outflows of money required by our initiative to achieve a certain output (depending on context).
• Average Publication Expenditures (APEX): For a given Journal for a given year, the average expenditures per Publication which our initiative has faced. All our APEX are listed on our APEX page.
• Total Associated Expenditures: Total expenditures ascribed to an Organization's Associated Publications (given for one or many years, Journals etc. depending on context).
• PubFrac share: The fraction of expenditures which can be associated to an Organization, based on PubFracs. This is defined as APEX times PubFrac, summed over the set of Publications defined by the context (e.g. all Associated Publications of a given Organization for a given Journal in a given year).
• Subsidy support: Sum of the values of all Subsidies relevant to a given context (for example: from a given Organization in a given year).
• Impact on reserves: Difference between incoming and outgoing financial resources for the activities under consideration (again defined depending on context). A positive impact on reserves means that our initiative is sustainable (and perhaps even able to grow); a negative impact on reserves means that these activities are effectively depleting our available resources and threatening our sustainability.
| Year (click to toggle details) | NAP | Total associated expenditures | PubFrac share | Subsidy support | Impact on reserves |
|---|---|---|---|---|---|
| Cumulative | 31 | €17041 | €9178 | €3440 | €-5738 |
| 2024 (ongoing) | 6 | €4800 | €2932 | €860 | €-2072 |
| 2023 | 10 | €4950 | €2624 | €860 | €-1764 |
| 2022 | 6 | €2664 | €1532 | €860 | €-672 |
| 2021 | 3 | €1926 | €771 | €860 | €89 |
| 2020 | 1 | €620 | €310 | €0 | €-310 |
| 2019 | 4 | €1760 | €769 | €0 | €-769 |
| 2018 | 0 | €0 | €0 | €0 | €0 |
| 2017 | 1 | €321 | €240 | €0 | €-240 |

The following tables give an overview of expenditures, compiled per year for all Publications which are associated to University of Geneva. You can see the list of associated publications under the Publications tab.

Expenditures (University of Geneva), 2024 (ongoing):

| Journal | APEX | NAP | Total associated expenditures | PubFracs | PubFrac share |
|---|---|---|---|---|---|
| SciPostPhys | €800 | 6 | €4800 | 3.666 | €2932 |

Expenditures (University of Geneva), 2023:

| Journal | APEX | NAP | Total associated expenditures | PubFracs | PubFrac share |
|---|---|---|---|---|---|
| SciPostPhys | €495 | 10 | €4950 | 5.303 | €2624 |

Expenditures (University of Geneva), 2022:

| Journal | APEX | NAP | Total associated expenditures | PubFracs | PubFrac share |
|---|---|---|---|---|---|
| SciPostPhys | €444 | 6 | €2664 | 3.451 | €1532 |

Expenditures (University of Geneva), 2021:

| Journal | APEX | NAP | Total associated expenditures | PubFracs | PubFrac share |
|---|---|---|---|---|---|
| SciPostPhys | €642 | 3 | €1926 | 1.201 | €771 |

Expenditures (University of Geneva), 2020:

| Journal | APEX | NAP | Total associated expenditures | PubFracs | PubFrac share |
|---|---|---|---|---|---|
| SciPostPhys | €620 | 1 | €620 | 0.5 | €310 |

Expenditures (University of Geneva), 2019:

| Journal | APEX | NAP | Total associated expenditures | PubFracs | PubFrac share |
|---|---|---|---|---|---|
| SciPostPhys | €440 | 2 | €880 | 1.0 | €440 |
| SciPostPhysProc | €220 | 1 | €220 | 1.0 | €220 |
| SciPostPhysLectNotes | €660 | 1 | €660 | 0.166 | €109 |

Expenditures (University of Geneva), 2018: no associated publications.

Expenditures (University of Geneva), 2017:

| Journal | APEX | NAP | Total associated expenditures | PubFracs | PubFrac share |
|---|---|---|---|---|---|
| SciPostPhys | €321 | 1 | €321 | 0.75 | €240 |
|
{"url":"https://scipost.org/organizations/35/","timestamp":"2024-11-13T13:00:16Z","content_type":"text/html","content_length":"82803","record_id":"<urn:uuid:c64ed2eb-f005-4d66-90c6-cdf2c6d9e667>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00026.warc.gz"}
|
A connected planar graph having 6 vertices and 7 edges contains _____ regions.
(a) 15  (b) 3  (c) 1  (d) 11
Solution 1
The number of regions in a connected planar graph can be calculated using Euler's formula, which states that:
V - E + F = 2
where V is the number of vertices, E is the number of edges, and F is the number of faces (regions).
Given that the graph has 6 vertices (V = 6) and 7 edges (E = 7), we can solve for F: F = 2 - V + E = 2 - 6 + 7 = 3. The graph therefore has 3 regions (counting the unbounded outer region), so the correct answer is option (b).
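The same computation as a quick sketch:

```python
# Euler's formula for a connected planar graph: V - E + F = 2
V, E = 6, 7
F = 2 - V + E
print(F)  # 3 (this count includes the unbounded outer region)
```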
|
{"url":"https://knowee.ai/questions/5298759-a-connected-planar-graph-having-vertices-and-edges-contains","timestamp":"2024-11-05T03:02:55Z","content_type":"text/html","content_length":"352859","record_id":"<urn:uuid:a27cdf81-b3af-4101-9f2c-956b0335c229>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00854.warc.gz"}
|
NCERT Solutions for Class 7 Maths Ch 2 Fractions and Decimals Exercise 2.6
You can find Chapter 2 Fractions and Decimals Exercise 2.6 Class 7 Mathematics NCERT Solutions here; they are key to scoring better marks in the exams. With the help of NCERT Solutions for Class 7, you can easily complete your homework, and the solutions guide students in a way that builds confidence.
You can view NCERT Solutions for Class 7 Maths on Studyrankers, prepared by subject experts who have provided detailed and accurate solutions to all the questions. In Exercise 2.6, you have to multiply the given decimals and solve various word problems.
|
{"url":"https://www.studyrankers.com/2020/11/ncert-solutions-for-class7-maths-fractions-and-decimals-exercise2.6.html","timestamp":"2024-11-04T05:47:09Z","content_type":"application/xhtml+xml","content_length":"292806","record_id":"<urn:uuid:9f70943c-2099-4980-8012-054b223ef753>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00290.warc.gz"}
|
Noah G.
What do you want to work on?
About Noah G.
Elementary (3-6) Math
Bachelors in Communication, General from Saint Cloud State University
Career Experience
10+ years in competitive speech and debate, national semi-finalist. I also have experience as a speech judge and coach for high school students.
I Love Tutoring Because
I am fascinated by epistemology, or the study of knowledge. I enjoy the process of learning.
Other Interests
Color Guard, Music, Photography, Piano, Playing Music
Math - Elementary (3-6) Math
Communication - Public Speaking
did a good job explaining the process
Math - Elementary (3-6) Math
he was soo nice
Roshan Thapa
Math - Elementary (3-6) Math
Best tutor ever-he is so nice,I totally recommend to others!
|
{"url":"https://testprepservices.princetonreview.com/academic-tutoring/tutor/noah%20g--8447382","timestamp":"2024-11-03T23:13:24Z","content_type":"application/xhtml+xml","content_length":"217784","record_id":"<urn:uuid:ba804c62-57ac-4364-981a-6a1e7c01b3f9>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00629.warc.gz"}
|
gonum/plot vs dataframe-go - compare differences and reviews? | LibHunt
A repository for plotting and visualizing data (by gonum)
DataFrames for Go: For statistics, machine-learning, and data manipulation/exploration (by rocketlaunchr)
| | gonum/plot | dataframe-go |
|---|---|---|
| Stars | 2,742 | 1,165 |
| Growth | 0.5% | 0.0% |
| Activity | 5.6 | 0.0 |
| Last commit | 23 days ago | over 2 years ago |
| Language | Go | Go |
| License | BSD 3-clause "New" or "Revised" License | GNU General Public License v3.0 or later |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
Posts with mentions or reviews of gonum/plot. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-16.
• The Golang Saga: A Coder’s Journey There and Back Again. Part 3: The Graphing Conundrum
3 projects | dev.to | 16 Aug 2023
And with this map now we are ready to create a group bar chart for each station to find out which station is the best for each type of value. I found a helpful tutorial on gonum/plot, so I’m
going to use plotter.NewBarChart for my purposes.
Posts with mentions or reviews of dataframe-go. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-10.
• packages similar to Pandas
Numpy functionality is largely covered by https://www.gonum.org/ but for pandas I'm not sure if there is an equivalent as widely accepted. However, you might try https://github.com/rocketlaunchr/dataframe-go which I have not tried but it looks like it covers some of what you're looking for
• Machine Learning
• Dynamic Structs
For guidance on how to use it: https://github.com/rocketlaunchr/dataframe-go/blob/master/exports/parquet.go
What are some alternatives?
When comparing gonum/plot and dataframe-go you can also consider the following projects:
chart - Provide basic charts in go
gonum - Gonum is a set of numeric libraries for the Go programming language. It contains libraries for matrices, statistics, optimization, and more
gosl - Linear algebra, eigenvalues, FFT, Bessel, elliptic, orthogonal polys, geometry, NURBS, numerical quadrature, 3D transfinite interpolation, random numbers, Mersenne twister, probability
distributions, optimisation, differential equations.
ydata-profiling - 1 Line of code data quality profiling & exploratory data analysis for Pandas and Spark DataFrames.
qframe - Immutable data frame for Go
Stats - A well tested and comprehensive Golang statistics library package with no dependencies.
gostat - Collection of statistical routines in golang
|
{"url":"https://www.libhunt.com/compare-gonum--plot-vs-dataframe-go","timestamp":"2024-11-13T22:11:25Z","content_type":"text/html","content_length":"36997","record_id":"<urn:uuid:8abc5044-d76a-45ee-adbb-e9bcb791d939>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00069.warc.gz"}
|
Using Coq's evaluation mechanisms in anger
Coq offers several internal evaluation mechanisms that have many uses, from glorious proofs by reflection to mundane testing of functions. Read on for an account of my quest to get large parts of the
CompCert verified C compiler to execute from within Coq. It was a mighty battle between the forces of transparency and opacity, with the innocent-looking “decide equality” tactic as the surprise villain.
Recently, I spent significant time trying to execute parts of the CompCert verified C compiler directly within Coq. The normal execution path for CompCert is to extract (using Coq’s extraction
mechanism) OCaml code from the Coq function definitions of CompCert, then compile this OCaml code and link it with hand-written OCaml code. Executing Coq definitions from within Coq itself bypasses
extraction and provides a more lightweight way to run them on sample inputs.
That’s the theory; in practice, I ran into many problems, some specific to CompCert, others of a more general nature. In this post, I describe some of the “gotcha’s” I ran into. Most of this material
is old news for expert Coq users, but writing it down could help others avoiding the pitfalls of Coq’s execution mechanisms.
Coq’s evaluation mechanisms
Readers unfamiliar with Coq may wonder why a proof assistant should be able to evaluate functional programs at all. Owing to the conversion rule (types/propositions are identified up to reductions),
evaluation is an integral part of Coq’s logic. The Coq implementation provides several evaluation mechanisms:
• compute, an interpreter that supports several evaluation strategies (lazy, call-by-value, etc).
• vm_compute, which relies on compilation to the bytecode of a virtual machine, implements call-by-value evaluation, and delivers performance comparable to that of the OCaml bytecode compiler.
• native_compute, introduced in the upcoming 8.5 release of Coq, relies on the OCaml native-code compiler to produce even higher performance.
Here is an example of evaluation:
Require Import Zarith.
Open Scope Z_scope.
Fixpoint fib (n: nat) : Z :=
match n with 0%nat => 1 | 1%nat => 1 | S (S n2 as n1) => fib n1 + fib n2 end.
Compute (fib 30%nat).
The Coq toplevel prints = 1346269 : Z, taking about 0.2s for the evaluation. The Compute command is shorthand for Eval vm_compute in, and therefore uses the virtual machine evaluator. We can also use
the interpreter instead, obtaining the same results, only slower:
Eval cbv in (fib 30%nat). (* takes 2.3s *)
Eval lazy in (fib 30%nat). (* takes 16s *)
On the “ML subset” of Coq, like the example above, Coq’s evaluators behave exactly as one would expect. This “ML subset” comprises non-dependent data types and plain function definitions. Other
features of Coq are more problematic evaluation-wise, as we now illustrate.
Beware opacity!
The most common source of incomplete evaluations is opaque names. An obvious example is names that are declared using Parameter or Axiom but not given a definition. Consider:
Parameter oracle: bool.
Compute (if oracle then 2+2 else 3+3).
(* = if oracle then 4 else 6 : Z *)
Coq is doing its best here, evaluating the then and else branches to 4 and 6, respectively. Since it does not know the value of the oracle Boolean, it is of course unable to reduce the if entirely.
A less obvious source of opacity is definitions conducted in interactive proof mode and terminated by Qed:
Definition nat_eq (x y: nat) : {x=y} + {x<>y}.
Proof. decide equality. Qed.
Compute (nat_eq 2 2).
(* = nat_eq 2 2 : {2%nat = 2%nat} + {2%nat <> 2%nat} *)
Qed always creates opaque definitions. To obtain a transparent definition that evaluates properly, the proof script must be terminated by Defined instead:
Definition nat_eq (x y: nat) : {x=y} + {x<>y}.
Proof. decide equality. Defined.
Compute (nat_eq 2 2).
(* = left eq_refl : {2%nat = 2%nat} + {2%nat <> 2%nat} *)
The Print Assumptions command can be used to check for opaque names. However, as we see later, an opaque name does not always prevent full evaluation, and opaque definitions are sometimes preferable
to transparent ones.
Another source of opacity, of the “after the fact” kind, is the Opaque command, which makes opaque a previous transparent definition. Its effect can be undone with the Transparent command. The
virtual machine-based evaluator ignores opacity coming from Opaque, but the interpreter-based evaluator honors it:
Definition x := 2.
Opaque x.
Compute (x + x). (* = 4 : Z *)
Eval cbv in (x + x). (* = huge useless term *)
Dependently-typed data structures
A very interesting feature of Coq, from a functional programming standpoint, is its support for dependently-typed data structures, containing both data and proofs of properties about them. Such data
structures can be a challenge for evaluation: intuitively, we want to evaluate the data parts fully, but may not want to evaluate the proof parts fully (because proof terms can be huge and their
evaluation takes too much time), or may not be able to evaluate the proof parts fully (because they use theorems that were defined opaquely).
A paradigmatic example of dependently-typed data structure is the subset type { x : A | P x }, which is shorthand for sig A P and defined in the Coq standard library as:
Inductive sig (A:Type) (P:A -> Prop) : Type :=
exist : forall x:A, P x -> sig P.
Intuitively, terms of type { x : A | P x } are pairs of a value of type A and of a proof that this value satisfies the predicate P.
Let us use a subset type to work with integers greater than 1:
Definition t := { n: Z | n > 1 }.
Program Definition two : t := 2.
Next Obligation. omega. Qed.
Program Definition succ (n: t) : t := n + 1.
Next Obligation. destruct n; simpl; omega. Qed.
The Program facility that we used above makes it easy to work with subset types: to build terms of such types, the programmer specifies the data part (e.g. 2 or n + 1 above), and the proof parts are
left as proof obligations, which can be solved using proof scripts. There are other ways, for example by writing proof terms by hand:
Definition two : t := exist _ 2 (refl_equal Gt).
or by using interactive proof mode for the whole term:
Definition two : t.
Proof. exists 2. omega. Defined.
But how well does this compute? Let’s compute succ two:
Compute (succ two).
(* = exist (fun n : Z => n > 1) 3
(succ_obligation_1 (exist (fun n : Z => n > 1) 2 two_obligation_1)) : t *)
This is not too bad: the value part of the result was completely evaluated to 3, while the proof part got stuck on the opaque lemmas introduced by Program. The reason why this is not too bad is that,
often, subset types are used locally to transport invariants on data structures, but the final result we are interested in is just the data part of the subset type, as obtained with the proj1_sig projection:
Compute (proj1_sig (succ two)).
(* = 3 : Z *)
In other words, the proof parts of values of type t are carried around during computation, but do not contribute to the final result obtained with proj1_sig. So, it’s not a problem to have opaque
names in the proof parts. Indeed, making these names transparent (e.g. by using Defined instead of Qed to terminate Next Obligation) just creates bigger proof terms that will be discarded eventually.
Another classic example of dependent data type is the type {P} + {Q} of informative Booleans. Values of this type are either the left constructor carrying a proof of P, or the right constructor
carrying a proof of Q. A typical use is for decidable equality functions that return not just a Boolean “equal/not equal”, but also a proof of equality or disequality. For example, here is a
decidable equality for the subset type t above:
Require Import Eqdep_dec.
Program Definition t_eq (x y: t) : {x=y} + {x<>y} :=
if Z.eq_dec (proj1_sig x) (proj1_sig y) then left _ else right _.
Next Obligation.
destruct x as [x Px], y as [y Py]. simpl in H; subst y.
f_equal. apply UIP_dec. decide equality.
Qed.
Next Obligation.
red; intros; elim H; congruence.
Qed.
Again, such definitions compute relatively well:
Compute (t_eq two two).
(* = left
(t_eq_obligation_1 (exist (fun n : Z => n > 1) 2 eq_refl)
(exist (fun n : Z => n > 1) 2 eq_refl) eq_refl)
: {two = two} + {two <> two} *)
The proof part blocks, again, on an opaque name, but, more importantly, evaluation went far enough to determine the head constructor left, meaning that the two arguments are equal. Typically, we use
decidable equality functions like t_eq in the context of an if expression that just looks at the head constructor and discards the proof parts:
Compute (if t_eq two two then 1 else 2).
(* = 1 : Z *)
Bottom line: dependently-typed data structures such as subset types or rich Booleans compute quite well indeed, even if their proof parts are defined opaquely. This is due to a phase distinction that
functions over those types naturally obey: the data part of the result depends only on the data part of the argument, while the proof part of the argument is used only to produce the proof part of
the result. Consider again the succ function above, with type
succ: { x : Z | x > 1 } -> { y : Z | y > 1 }
The y data part of the result depends only on the x data part of the argument, via y = x + 1. The x > 1 proof part of the argument contributes only to proving the y > 1 part of the result.
The odd “decide equality” tactic
At first sight, the phase distinction outlined above is a natural consequence of Coq’s sorting rules, which, to a first approximation, prevent a term of sort Type to depend on a term of sort Prop.
But there are exceptions to this sorting rule, which result in completely mysterious failures of evaluation. As I learned through painful debugging sessions, the decide equality tactic violates the
phase distinction in a mysterious way.
Consider a decidable equality over the type list t, autogenerated by decide equality:
Definition t_list_eq: forall (x y: list t), {x=y} + {x<>y}.
Proof. decide equality. apply t_eq. Defined.
Compute (if t_list_eq (two::nil) (two::nil) then 1 else 2).
(* = if match
         ... (* stuck term elided *)
         (exist (fun n : Z => n > 1) 2 two_obligation_1)
         (exist (fun n : Z => n > 1) 2 two_obligation_1) eq_refl
       in (_ = x) ... with
       | eq_refl => left eq_refl
       end ...
     then 1
     else 2 : Z *)
Whazzat? The normal form is 40 lines long! Clearly, t_list_eq (two::nil) (two::nil) failed to reduce to left of some equality proof. Apparently, it got stuck on the opaque proof t_eq_obligation_1
before reaching the point where it can decide between left (equal) and right (not equal). But that violates the phase distinction! The left/right data part of the result should not depend on the
proof term t_eq_obligation_1!
Something fishy is going on. But maybe we can circumvent it by using Defined instead of Qed in the proof obligations of t_eq? Doing so only delays failure: computation goes further but produces a
200-line term that is blocked on the opaque lemma UIP_dec from Coq’s standard library… I played this “whack-a-mole” game for hours, copying parts of the Coq standard library to make lemmas more
transparent, in the hope that functions produced by decide equality would compute eventually.
Then I realized that the problem lies with decide equality. The term it produces is roughly the same one would get with the following proof script:
Definition bad_t_list_eq: forall (x y: list t), {x=y} + {x<>y}.
induction x as [ | xh xt]; destruct y as [ | yh yt].
- left; auto.
- right; congruence.
- right; congruence.
- destruct (t_eq xh yh).
+ subst yh. (* HERE IS THE PROBLEM *)
destruct (IHxt yt).
* left; congruence.
* right; congruence.
+ right; congruence.
Defined.
Notice the subst in the first + bullet? In the case where x and y are not empty and their heads are equal, it eliminates the proof of equality between the heads before recursing over the tails and
finally deciding whether to produce a left or a right. This makes the left/right data part of the final result dependent on a proof term, which in general does not reduce!
In this particular example of lists, and in all cases involving ML-like data types, this early elimination of an equality proof is useless: if we just remove the subst yh, we get a perfectly good
decidable equality that respects the phase distinction and computes just fine:
Definition good_t_list_eq: forall (x y: list t), {x=y} + {x<>y}.
induction x as [ | xh xt]; destruct y as [ | yh yt].
- left; auto.
- right; congruence.
- right; congruence.
- destruct (t_eq xh yh).
+ destruct (IHxt yt).
* left; congruence.
* right; congruence.
+ right; congruence.
Defined.
Compute (if (good_t_list_eq (two::nil) (two::nil)) then 1 else 2).
(* = 1 : Z *)
The only case where the current behavior of decide equality would be warranted is for dependently-typed data types like the following:
Inductive bitvect : Type := Bitvect (n: nat) (v: Vector.t bool n).
When comparing two values of bitvect type, after checking that their n components are equal, we must substitute one n by the other so that the two v components have the same type and can be compared
in turn. However, decide equality just does not work on the bitvect type above, producing an ill-typed term… So much for my sympathetic explanation of the odd behavior of decide equality!
After reimplementing decide equality in 20 lines of Ltac that generate phase-distinction-correct functions (thank you very much), and performing a zillion other changes in CompCert’s Coq sources, I
was finally able to execute whole CompCert passes from within Coq. If you are wondering about performance, Coq’s VM runs CompCert at 2/3 of the speed of CompCert extracted to OCaml then compiled to
bytecode, and 15% of the speed of the “real” CompCert, extracted to OCaml then compiled to native code. Happy end for a terrible hack!
|
{"url":"https://gallium.inria.fr/blog/coq-eval/","timestamp":"2024-11-12T03:51:47Z","content_type":"text/html","content_length":"49666","record_id":"<urn:uuid:42135de6-c76a-48cb-a5d6-74754902bb1b>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00685.warc.gz"}
|
Teaching Students to Write Word Problems
I love word problems. There I said it, I really do. Anytime I can squeeze in some extra practice for my kiddos, I do! I especially love tasking my students with writing their own word problems. This
encourages them to think about word problems more critically than when they simply read and solve a problem.
We recently started practicing this skill in my classroom. Typically, I start this much earlier in the school year, but we just weren't ready to tackle this skill until recently. Some years are like
that, right? Now that the skill has been introduced, we can practice it each week (and I'm pretty excited about that).
A Few Things You Need to Know
When I task my students with writing their own word problems there are two things you need to know.
1. I give them a starting point. Meaning, the activity isn't totally open ended. I give them an answer, and it is their job to write a word problem to match.
2. I start out by keeping it simple because I want them to grasp the concept and feel successful with something new. I encourage the use of key words, and I encourage them to write straightforward
problems without "extra" information. When the time is right, they will be encouraged to write tougher problems.
So, to introduce the skill of writing word problems, I use this chart.
And, this mini book (or some variation of it....more on that in a moment).
Here's a look at that chart again.
Keep in mind that these guidelines work for us because we write word problems based on a given answer.
Once we go over the chart, we write at least one word problem together, using the mini book from above and the chart to guide us. Then, I have the students work in pairs to write a second word
problem. I check their stories as they finish. Finally, the students write one word problem independently. Again, I check it when they are finished writing because I like to help them make any
necessary corrections/changes on the spot.
As we revisit the skill each week, the students will write one story at a time. Independently.
I keep it simple at first, encouraging them to use key words, and to stick to simple stories (two statements and a question) like the one shown below. Once they have these steps down, I will begin to
encourage them to add extra information to their problems.
I love using my What's the Problem? mini books for practicing this skill (shown above, and below). I made an entire series of these mini books several years ago, and they are still a useful resource.
Did I mention that they are a freebie? ;) This one is The Lucky Edition, but I've made one for just about every month of the year.
As mentioned above, the first day that we practice this skill the students write three word problems. After that, they write just a few a week. We bring the book out as part of our math warm up, or
at the end of our math lesson. They end up writing about two word problems a week. This gives them continued practice, but they also don't get burned out as easily as they would if we did it daily.
A Few Final Thoughts
It is always harder for students to write a word problem than it is to solve one. And, they learn this pretty quickly. However, they seem to enjoy the challenge of getting it right. As we say in my
classroom, if you don't challenge your brain, it won't grow. So, bring on the challenge!
When you're first starting out, you'll notice that some students "get it" very quickly, whereas others need repeated practice with the skill before it begins to click. Some students will need more
scaffolding than others, and some will need to be encouraged to write "tougher" problems. I've even changed some of the answers for students who needed to work with either bigger or smaller numbers.
As always, do what's best for your students and differentiate as needed.
Below are links to the What's the Problem mini books that I've shared in the past. I hope you can use them. :)
Have fun challenging your learners!
Class 10 Maths NCERT Solutions for Similar Triangles
On this page we have Class 10 Maths NCERT Solutions for Similar Triangles, Exercise 6.3, on pages 138–141. Hope you like them, and do not forget to share and comment at the end of the page.
Formula Used in this exercise
(i) AA or AAA Similarity Criterion:
If two angles (and hence all three) of one triangle are equal to the corresponding angles of another, the triangles are similar and their corresponding sides are in proportion.
(ii) SSS Similarity Criterion:
If the sides of one triangle are proportional to the sides of another triangle, the triangles are similar and their corresponding angles are equal.
(iii) SAS Similarity Criterion:
If one angle of a triangle is equal to one angle of another triangle and the sides including those angles are proportional, the triangles are similar.
Triangles EXERCISE 6.3
Question 1
State which pairs of triangles in below figure are similar. Write the similarity criterion used by you for answering the question and write the pairs of similar triangles in the symbolic form:
(i) In ΔABC and ΔPQR, we have
∠A = ∠P = 60°
∠B = ∠Q = 80°
∠C = ∠R = 40°
So as per AAA similarity criterion
We have ΔABC ~ ΔPQR
(ii) In ΔABC and ΔPQR, we have
AB/QR = BC/RP = CA/PQ= 1/2
So, SSS similarity criterion
We have ΔABC ~ ΔQRP
(iii) In ΔLMP and ΔDEF, we have
LM = 2.7, MP = 2, LP = 3, EF = 5, DE = 4, DF = 6
$ \frac {PM}{DE} = \frac {2}{4} = \frac {1}{2}$
$\frac {PL}{DF} = \frac {3}{6} = \frac {1}{2}$
$\frac {LM}{EF}= \frac {2.7}{5} = \frac {27}{50}$
Here,$ \frac {PM}{DE} =\frac {PL}{DF} \ne \frac {LM}{EF}$
Hence, ΔLMP and ΔDEF are not similar.
(iv) In ΔMNL and ΔQPR, we have
∠M = ∠Q = 70°, but
$ \frac {MN}{QP} =\frac {2.5}{6}$ while $ \frac {ML}{QR} = \frac {1}{2}$
Since $ \frac {MN}{QP} \ne \frac {ML}{QR}$, the sides including the equal angles are not proportional.
Hence ΔMNL and ΔQPR are not similar.
(v) In ΔABC and ΔDEF, we have
AB = 2.5, BC = 3, ∠A = 80°, EF = 6, DF = 5, ∠F = 80°
Here, $ \frac {AB}{DF} = \frac {2.5}{5} = \frac {1}{2}$
And, $\frac {BC}{EF} = \frac {3}{6} = \frac {1}{2}$
But the included angles of the proportional sides cannot be shown equal: the sides AB, BC include ∠B, while DF, EF include ∠F = 80°, and only ∠A = 80° is given (∠B is unknown), so the SAS criterion does not apply.
Hence, ΔABC and ΔDEF are not similar.
(vi) In ΔDEF, two angles are given as 70° and 80°, so the third angle is
∠F = 180° − 70° − 80° = 30°
In ΔPQR, two angles are given as 80° and 30°, so the third angle is
∠P = 180° − 30° − 80° = 70°
In ΔDEF and ΔPQR, we have
∠D = ∠P = 70°
∠E = ∠Q = 80°
∠F = ∠R = 30°
By AAA similarity criterion
Hence, ΔDEF ~ ΔPQR
Question 2
In the below figure, ΔODC ~ ΔOBA, ∠ BOC = 125° and ∠ CDO = 70°. Find ∠ DOC, ∠ DCO and ∠ OAB.
DOB is a straight line.
Therefore, ∠DOC + ∠ COB = 180°
⇒ ∠DOC = 180° - 125°
= 55°
In ΔDOC,
Sum of the measures of the angles of a triangle is 180º
So ∠DCO + ∠CDO + ∠DOC = 180°
Therefore ∠DCO = 180° − 70° − 55° = 55°
It is given that ΔODC ~ ΔOBA.
Since Corresponding angles are equal in similar triangles
∠OAB = ∠OCD
So, ∠ OAB = 55°
Question 3
Diagonals AC and BD of a trapezium ABCD with AB || DC intersect each other at the point O. Using a similarity criterion for two triangles, show that AO/OC = OB/OD
In ΔDOC and ΔBOA,
∠CDO = ∠ABO (alternate interior angles, AB || CD)
∠DCO = ∠BAO (alternate interior angles, AB || CD)
∠DOC = ∠BOA (vertically opposite angles)
By AAA similarity criterion
ΔDOC ~ ΔBOA
Now Corresponding sides are proportional as similar triangles
$ \frac {DO}{BO} = \frac {OC}{OA}$
Therefore, $ \frac {OA}{OC} = \frac {OB}{OD}$
Question 4.
In the below figure, QR/QS = QT/PR and ∠1 = ∠2. Show that ΔPQS ~ ΔTQR.
In ΔPQR,
∠1 = ∠2, i.e. ∠PQR = ∠PRQ
Sides opposite to equal angles are equal, so
PQ = PR ... (i)
It is given that
$ \frac {QR}{QS} = \frac {QT}{PR}$
Using equation (i), we get
$ \frac {QR}{QS} =\frac {QT}{QP}$ ... (ii)
In ΔPQS and ΔTQR,
$ \frac {QT}{QP} = \frac {QR}{QS}$ [using equation (ii)]
∠Q = ∠Q (common angle)
So, by the SAS similarity criterion,
ΔPQS ~ ΔTQR
Question 5.
S and T are point on sides PR and QR of ΔPQR such that ∠P = ∠RTS. Show that ΔRPQ ~ ΔRTS.
In ΔRPQ and ΔRST,
∠RTS = ∠QPS (Given)
∠R = ∠R (Common angle)
As two angles are equal
∠RST = ∠RQP
By AAA similarity criterion
we get
ΔRPQ ~ ΔRTS
Question 6.
In the below figure, if ΔABE ≅ ΔACD, show that ΔADE ~ ΔABC.
It is given that ΔABE ≅ ΔACD.
By corresponding parts of congruent triangles (CPCT),
AB = AC ... (a)
and AD = AE ... (b)
In ΔADE and ΔABC,
dividing equation (b) by equation (a) gives
AD/AB = AE/AC
∠A = ∠A [common angle]
By the SAS similarity criterion we get
ΔADE ~ ΔABC
Question 7
In the below figure, altitudes AD and CE of ΔABC intersect each other at the point P. Show that:
(i) ΔAEP ~ ΔCDP
(ii) ΔABD ~ ΔCBE
(iii) ΔAEP ~ ΔADB
(iv) ΔPDC ~ ΔBEC
(i) In ΔAEP and ΔCDP,
∠AEP = ∠CDP (each 90°)
∠APE = ∠CPD (vertically opposite angles)
As two pairs of angles are equal, the third pair is also equal: ∠PAE = ∠DCP
By the AAA similarity criterion,
ΔAEP ~ ΔCDP
(ii) In ΔABD and ΔCBE,
∠ADB = ∠CEB (Each 90°)
∠ABD = ∠CBE (Common)
As two pairs of angles are equal, the third pair is also equal:
∠DAB = ∠BCE
By using AAA similarity criterion,
ΔABD ~ ΔCBE
(iii) In ΔAEP and ΔADB,
∠AEP = ∠ADB (Each 90°)
∠PAE = ∠DAB (Common)
As two pairs of angles are equal, the third pair is also equal
Hence, by using AAA similarity criterion,
ΔAEP ~ ΔADB
(iv) In ΔPDC and ΔBEC,
∠PDC = ∠BEC (Each 90°)
∠PCD = ∠BCE (Common angle)
As two pairs of angles are equal, the third pair is also equal
Hence, by using AAA similarity criterion,
ΔPDC ~ ΔBEC
Question 8.
E is a point on the side AD produced of a parallelogram ABCD and BE intersects CD at F. Show that ΔABE ~ ΔCFB.
In ΔABE and ΔCFB,
∠A = ∠C (Opposite angles of a parallelogram)
∠AEB = ∠CBF (Alternate interior angles as AE || BC)
Since two pairs of angles are equal, the third pair is also equal
By AAA similarity criterion
Therefore, ΔABE ~ ΔCFB
Question 9
In the below figure, ABC and AMP are two right triangles, right angled at B and M respectively, prove that:
(i) ΔABC ~ ΔAMP
(ii) $ \frac {CA}{PA} = \frac {BC}{MP}$
(i) In ΔABC and ΔAMP, we have
∠A = ∠A (common angle)
∠ABC = ∠AMP = 90° (each 90°)
As two pairs of angles are equal, the third pair is also equal
By AAA similarity criterion
ΔABC ~ ΔAMP
(ii) As, ΔABC ~ ΔAMP
If two triangles are similar, then the corresponding sides are proportional
Hence, $ \frac {CA}{PA} = \frac {BC}{MP}$
Question 10.
CD and GH are respectively the bisectors of ∠ACB and ∠EGF such that D and H lie on sides AB and FE of ΔABC and ΔEFG respectively. If ΔABC ~ ΔFEG, Show that:
(i) CD/GH = AC/FG
(ii) ΔDCB ~ ΔHGE
(iii) ΔDCA ~ ΔHGF
(i) It is given that ΔABC ~ ΔFEG.
Therefore, all corresponding angles are equal:
∠A = ∠F, ∠B = ∠E, and ∠ACB = ∠FGE
Now, since ∠ACB = ∠FGE and CD, GH are their bisectors, the half-angles are equal:
∠ACD = ∠FGH
and ∠DCB = ∠HGE
In ΔACD and ΔFGH,
∠A = ∠F
∠ACD = ∠FGH
By AA similarity criterion
Therefore ΔACD ~ ΔFGH
Now the corresponding sides are proportional
$ \frac {CD}{GH} = \frac {AC}{FG}$
(ii) In ΔDCB and ΔHGE,
∠DCB = ∠HGE
∠B = ∠E
By AA similarity criterion
Therefore ΔDCB ~ ΔHGE
(iii) In ΔDCA and ΔHGF,
∠ACD = ∠FGH
∠A = ∠F
By AA similarity criterion
Therefore ΔDCA ~ ΔHGF
Question 11.
In the following figure, E is a point on side CB produced of an isosceles triangle ABC with AB = AC. If AD ⊥ BC and EF ⊥ AC, prove that ΔABD ~ ΔECF.
It is given that ABC is an isosceles triangle with AB = AC.
Angles opposite to equal sides are equal, so ∠ABC = ∠ACB.
Since ∠ABD = ∠ABC (D lies on BC) and ∠ECF = ∠ACB (CE lies along ray CB and CF along ray CA),
∠ABD = ∠ECF ... (i)
Now in ΔABD and ΔECF,
∠ADB = ∠EFC (each 90°)
∠ABD = ∠ECF [from equation (i)]
So, by the AA similarity criterion,
ΔABD ~ ΔECF
Question 12.
Sides AB and BC and median AD of a triangle ABC are respectively proportional to sides PQ and QR and median PM of ΔPQR. Show that ΔABC ~ ΔPQR.
Given: ΔABC and ΔPQR, AB, BC and median AD of ΔABC are proportional to sides PQ, QR and median PM of ΔPQR
i.e., AB/PQ = BC/QR = AD/PM
Now it can be written as
$ \frac {AB}{PQ} = \frac {(1/2) BC}{ (1/2) QR} = \frac {AD}{PM}$
$ \frac {AB}{PQ} = \frac {BD}{QM} = \frac {AD}{PM}$ (D is the mid-point of BC. M is the midpoint of QR)
So, As per SSS similarity criterion
ΔABD ~ ΔPQM
Now we know that
Corresponding angles of two similar triangles are equal
So, ∠ABD = ∠PQM
⇒ ∠ABC = ∠PQR
Now In ΔABC and ΔPQR
$ \frac {AB}{PQ} = \frac {BC}{QR}$ .
∠ABC = ∠PQR
By SAS similarity criterion
ΔABC ~ ΔPQR
Question 13.
D is a point on the side BC of a triangle ABC such that ∠ADC = ∠BAC. Show that $CA^2 = CB \cdot CD$.
In ΔADC and ΔBAC,
Given ∠ADC = ∠BAC
∠ACD = ∠BCA (Common angle)
By AA similarity criterion
Therefore, ΔADC ~ ΔBAC
We know that corresponding sides of similar triangles are in proportion.
$ \frac {CA}{CB} =\frac {CD}{CA}$
$CA^2 = CB \cdot CD$
Question 15.
A vertical pole of a length 6 m casts a shadow 4m long on the ground and at the same time a tower casts a shadow 28 m long. Find the height of the tower.
Length of the vertical pole = 6m (Given)
Shadow of the pole = 4 m (Given)
Let the height of the tower be h metres.
Length of shadow of the tower = 28 m (Given)
In ΔABC and ΔDFE,
∠C = ∠E (the angular elevation of the sun is the same for both)
∠B = ∠F = 90°
By the AA similarity criterion, ΔABC ~ ΔDFE
$ \frac {AB}{DF} = \frac {BC}{FE}$ (corresponding sides of similar triangles are proportional)
$ \frac {6}{h}= \frac {4}{28}$
h= 6×28/4
h=6 × 7
h= 42 m
Hence, the height of the tower is 42 m.
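The proportion is simple enough to verify numerically; a quick sketch (not part of the NCERT solution itself):

```python
# Similar triangles: corresponding sides are proportional, so
# pole_height / pole_shadow = tower_height / tower_shadow.
pole_height = 6.0    # m
pole_shadow = 4.0    # m
tower_shadow = 28.0  # m

tower_height = pole_height * tower_shadow / pole_shadow
print(tower_height)  # 42.0
```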
Question 16.
If AD and PM are medians of triangles ABC and PQR, respectively where ΔABC ~ ΔPQR prove that AB/PQ = AD/PM.
It is given that ΔABC ~ ΔPQR
We know that the corresponding sides of similar triangles are in proportion.
$ \frac {AB}{PQ} = \frac {AC}{PR} = \frac {BC}{QR}$ ... (i)
Also, ∠A = ∠P, ∠B = ∠Q, ∠C = ∠R ... (ii)
Since AD and PM are medians, they bisect their opposite sides:
BD = BC/2 and QM = QR/2 ... (iii)
From equations (i) and (iii), we get
AB/PQ = BD/QM ... (iv)
In ΔABD and ΔPQM,
∠B = ∠Q [using equation (ii)]
AB/PQ = BD/QM [using equation (iv)]
By the SAS similarity criterion,
ΔABD ~ ΔPQM
Since corresponding sides of similar triangles are proportional, AB/PQ = AD/PM.
1. NCERT Solutions for Class 10 Maths, Chapter 6 (Triangles), Exercise 6.3 has been prepared by experts with utmost care. If you find any mistake, please provide feedback by mail. You can download this as a PDF.
2. Chapter 6 has five exercises in total: 6.1, 6.2, 6.3, 6.4 and 6.5. This is the third exercise in the chapter. You can explore the previous exercises of this chapter by clicking the links below.
Practice Question
Question 1 The decimal expansion of $1 - \sqrt {3}$ is:
A) Non terminating repeating
B) Non terminating non repeating
C) Terminating
D) None of the above
Question 2 The volume of the largest right circular cone that can be cut out from a cube of edge 4.2 cm is?
A) 19.4 cm^3
B) 12 cm^3
C) 78.6 cm^3
D) 58.2 cm^3
Question 3 The sum of the first three terms of an AP is 33. If the product of the first and the third term exceeds the second term by 29, the AP is ?
A) 2 ,21,11
B) 1,10,19
C) -1 ,8,17
D) 2 ,11,20
Regime shift in Arctic Ocean sea ice thickness (Nature, Methods)
Sea ice draft data were obtained from upward looking sonar (ULS) moored in the East Greenland Current in the western Fram Strait. The dataset continuously covers the last three decades (1990–2019)
with some short temporal gaps. Four ULSs were zonally aligned at approximately 79°N from 3°W to 6.5°W (Fig. 1a). The latitude of the mooring array changed from 79°N to 78.8°N in 2001. The zonal
positions (names) of the moorings equipped with ULS are 3°W (F11), 4°W (F12), 5°W (F13) and 6.5°W (F14), respectively. There are three main temporal data gaps during the three decades of
measurements, that is, 1996, 2002 and 2008. Except for these gaps, ULSs were in operation although their number varied from time to time. The ULS measures the travel time of the sound reflected at
the bottom of the floating sea ice, from which we calculate the ice draft, the underwater fraction of sea ice^54. The raw data were processed to ice draft using procedures described in earlier
literature^55,56. The accuracy of each draft measurement ranges from 0.1m (ice profiling sonars (IPS) deployed after 2006) to 0.2m (ES300 instruments before 2006), while the uncertainty of each
individual measurement is not subject to bias errors and the summary error statistics of monthly values are less than 0.1m^57.
The daily mean sea ice motion product provided by the National Snow and Ice Data Center (Polar Pathfinder Daily 25km EASE-Grid Sea Ice Motion Vectors v.4.1, hereafter NSIDCv4 ice drift) was applied
for backward trajectory calculation of sea ice floes, estimating residence time of sea ice in the Arctic and analysis of spatial average of ice drift speed. The product was derived from a combination
of various ice motion estimates from remote sensing and tracks of ice-tethered buoys^52. We applied the motion vectors from 1984 to 2019 for the backward trajectory calculation. Sea ice concentration
data were taken from the Global Sea Ice Concentration Climate Data Records of the EUMETSAT OSI SAF^51 (OSI-409 v.1.2 for the period from January 1984 to April 2015 and OSI-430 v.1.2 for the period
after April 2015). The ice concentration data were used for the sea ice trajectory calculations and to analyse spatial and temporal changes of ice concentration.
Sea surface temperatures obtained from the NOAA/NESDIS/NCEI Daily Optimum Interpolation Sea Surface Temperature v.2.1 (ref. ^58) were used to examine temporal variation in sea surface temperature in
areas of ice formation (Extended Data Fig. 1a,b). A dataset of Arctic Ocean in situ hydrographic observations^59 was also used to examine the change in ocean surface temperature between 1990 and 2006
and between 2007 and 2019. The dataset was revised using recent observations^60 and gridded to 110×110km cells covering the whole Arctic Ocean. The mean temperature in each period (1990–2006 and
2007–2019) in each cell was defined by an average of all available measurements for every 3-month period (January–March, April–June, July–September and October–December). If the number of available
data in a cell was less than four, a missing value was assigned to the cell. The difference in summer sea surface temperature (July–September, 0–20m) between two periods (1990–2006 and 2007–2019)
was used for the analysis (Extended Data Fig. 1c). Atmospheric data were taken from the European Center for Medium-Range Weather Forecasts Reanalysis v.5 (ref. ^61) (ERA5). Daily and monthly mean
sea-level pressure and 10m wind were used in the analyses.
Ice thickness distribution
Ice thickness distributions were derived on a monthly basis. All sea ice draft measurements in the Fram Strait from 1990 to 2019 were classified into draft thickness bins of 0.1m, ranging from 0 to
8m (80 bins in total). The number of data samples (ice draft measurements) used to derive the distributions varied from time to time. From 1990 to 2005, O(10^4) samples were used to derive the
distribution functions on a monthly basis (measurement interval of 240s in most cases); after 2006, O(10^6) samples were used (interval of 2s). The number of samples used were sufficient to derive
thickness distribution on a monthly basis^57. The number of data in each bin were divided by the total number of measurements to derive the distribution function. The open water fraction (that is,
zero ice thickness) was excluded when deriving the function. Distribution functions, including the open water fraction, are shown in Extended Data Fig. 3. If the temporal coverage of the data samples
was less than 15% of a monthly coverage, the distribution function was not defined and removed from the analysis. The draft thickness distributions were converted to ice thickness distributions by an
average ratio of draft to thickness in the Fram Strait, 1.136 (ref. ^62). Although the ratio has some seasonal variability and might have slightly changed due to changes in ice density and snow load
in different seasons and years, we assumed that the change was not considerable for the aim of the current analyses. A composite time series of ice thickness distribution (Fig. 2a and Extended Data
Fig. 3) was obtained from a combination of available distribution functions from F11 to F14 . More specifically, one of the distribution functions from F11 to F14 was used with a priority order of
F13, F14, F12, F11. The composite time series is shown in Fig. 2a, while the time series at each site are shown in Extended Data Fig. 4. The fractions of sea ice thicker than two thresholds, 4 and
5m, were calculated as the cumulative function of all available distributions (that is, all distributions from F11 to F14) in each month and are shown in Fig. 2d.
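As an illustration, the binning described above can be sketched in a few lines (synthetic draft samples; only the 0.1 m bin width, the 0–8 m range and the 1.136 draft-to-thickness ratio come from the text, and normalising over ice-only samples is one possible reading of the procedure):

```python
import numpy as np

def thickness_distribution(drafts, bin_width=0.1, max_draft=8.0,
                           draft_to_thickness=1.136):
    """Monthly ice thickness distribution from ULS draft samples.

    Open-water returns (zero draft) are excluded before normalising,
    and draft bins are converted to thickness by a constant ratio.
    """
    drafts = np.asarray(drafts, dtype=float)
    ice = drafts[drafts > 0]                      # drop open water
    n_bins = int(round(max_draft / bin_width))    # 80 bins of 0.1 m
    edges = np.arange(n_bins + 1) * bin_width
    counts, _ = np.histogram(ice, bins=edges)
    pdf = counts / counts.sum()                   # fraction per bin
    thickness_bins = edges[:-1] * draft_to_thickness
    return thickness_bins, pdf

# Synthetic month of draft measurements, including some open water
rng = np.random.default_rng(0)
drafts = np.concatenate([np.zeros(200), rng.lognormal(0.5, 0.4, 10_000)])
bins, pdf = thickness_distribution(drafts)
print(len(bins), round(pdf.sum(), 6))  # 80 bins, pdf sums to 1
```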
The uncertainties on the estimates of monthly fractions of ice thicker than a threshold and position of the modal ice peak were assessed numerically using a moving block bootstrap approach^63.
Bootstrapping is a family of resampling techniques used to derive uncertainties on various complex estimators for large datasets and uses random sampling with replacement^64. The presence of
autocorrelation in ice draft series from IPS (ULSs deployed after 2006) suggested using the moving block bootstrap approach^63. The method splits the original monthly series of ice drafts O(10^6)
samples of length N into N−K+1 overlapping blocks of length K each. The block length was set to 30 samples that approximately corresponded to a distance of 10m covered by ice travelling at 0.3
knots. It roughly corresponded to the lower limit of a horizontal spatial scale of ice ridges. For ES300 instruments (ULSs deployed until 2005) with a lower sampling rate of 240s, ordinary
bootstrapping was used.
At each of M steps of bootstrap sampling, N/K blocks were drawn at random, with replacement, from the constructed set of N−K+1 blocks, making a new bootstrap data sample for the month. A Gaussian
noise of N(0,1) was further added to the data to account for measurement uncertainty. The mean and s.d. of the fractions of thicker ice and modal ice peak position were then calculated directly from
the M estimates derived at each step of the procedure. The results suggested that the monthly coefficient of variation or a ratio of the s.d. of the estimate to its mean, for the IPS data varies from
1% to 3% for both fractions, being on average lower (1–2%) for a fraction of ice greater than 4m thick. For ES300, the coefficient of variation was slightly higher at about 4(6)% for the fractions
of ice thicker than 4(5)m. The same applied to the position of the modal peak, which showed a coefficient of variation of 0–3% being typically closer to 0 for the IPS data. It suggested that the
selected bin width was large enough to accommodate uncertainties related to the approach and data. For the ES300 data, the monthly s.d. of the modal peak position was higher, up to 30cm, and the
coefficient of variation was within 9%. Therefore, we postulate that the inferred uncertainties are far too low to have any noticeable influence on the results of the shift detection analysis.
Modal peak and variance of ice thickness distributions
As statistical measures of ice thickness distributions, we examined modal thickness and variance. The monthly mean ice thickness distributions (740 samples in total) were fitted to log-normal
$$F(x)=\frac{1}{\sqrt{2\pi\sigma_{\mathrm{f}}^{2}}\,x}\exp\left(-\frac{(\ln x-\mu_{\mathrm{f}})^{2}}{2\sigma_{\mathrm{f}}^{2}}\right)$$
where \(\sigma_{\mathrm{f}}\) and \(\mu_{\mathrm{f}}\) are the fitting parameters, x is the ice thickness bin and F is the distribution function. To detect the second peak of the distributions that represents multi-year ice travelled
across the Arctic Basin, a cut-off threshold was introduced. The threshold was used to exclude thin sea ice fraction, which is supposedly formed in the vicinity of the Fram Strait and is not
representing basin-wide changes of ice properties in the Arctic. We defined the threshold by the minimum between the first and second peak of each monthly mean distribution. A set of two consecutive
negative gradients (towards thicker bins) followed by two consecutive positive gradients was used to detect the minimum (after applying 3-bin smoothing), while a threshold of 1.53m (corresponding to
1.3m of ice draft) was applied when an estimated threshold was thicker than 3m. Function values ranging lower than the threshold were set to zero (zero case) or excluded from the fitting (NaN
case). A least-square minimization was applied to fit a log-normal function to the distribution. In general, the fitted log-normal functions represent the distribution very well. The NaN case
slightly underestimates the modal peak, while the zero case captures the peak very well. Examples of distribution functions, together with cut-off thresholds and fitted log-normal functions, are
shown in Extended Data Fig. 5. The modal peak of the log-normal function roughly gives the thickness of thermodynamically grown sea ice, while the variance of the function quantifies the deformed
fraction of sea ice (dynamically thickened thickness). Changes in modal peak height and variance of the fitted log-normal functions, \(\mathrm{var}(x)=\exp(2\mu_{\mathrm{f}}+\sigma_{\mathrm{f}}^{2})\,(\exp(\sigma_{\mathrm{f}}^{2})-1)\), for the last three decades are summarized in Fig. 2b,c. The time series of modal thickness and the fitting parameters \(\sigma_{\mathrm{f}}\) and \(\mu_{\mathrm{f}}\) are summarized in Extended Data Fig. 6.
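One way to sketch the fit (the 1.53 m cut-off, the "NaN case" exclusion and the log-normal mode and variance formulas follow the text; the grid-search minimiser is a crude stand-in for the least-squares fit actually used):

```python
import numpy as np

def lognormal_pdf(x, mu, sigma):
    return np.exp(-(np.log(x) - mu) ** 2 / (2 * sigma ** 2)) / (
        np.sqrt(2 * np.pi * sigma ** 2) * x)

def fit_modal_and_variance(bins, pdf, cutoff=1.53):
    """Log-normal fit excluding bins below the thin-ice cut-off
    ("NaN case"); returns the modal thickness and the variance."""
    x, y = bins[bins >= cutoff], pdf[bins >= cutoff]
    best = (np.inf, None, None)
    for mu in np.linspace(0.2, 2.0, 181):        # crude grid search
        for sigma in np.linspace(0.1, 1.0, 91):
            sse = np.sum((lognormal_pdf(x, mu, sigma) - y) ** 2)
            if sse < best[0]:
                best = (sse, mu, sigma)
    _, mu, sigma = best
    modal = np.exp(mu - sigma ** 2)              # mode of a log-normal
    var = np.exp(2 * mu + sigma ** 2) * (np.exp(sigma ** 2) - 1)
    return modal, var

# Recovering the mode of an exact log-normal distribution
bins = 0.05 + 0.1 * np.arange(80)                # bin centres, 0.05-7.95 m
modal, var = fit_modal_and_variance(bins, lognormal_pdf(bins, 1.0, 0.4))
print(round(modal, 3))  # exp(1 - 0.4**2) ≈ 2.316
```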
Regime shift detection
A sequential algorithm for regime shift detection^23 was applied to all time series. The method identifies discontinuities in a time series using a data-driven approach that does not require an a
priori assumption on the timing of the regime shifts. The method first identifies potential change points sequentially by checking if the anomaly of the data point is statistically significant from
the mean value of the current regime. If it is significant, the following data points are sequentially used to assess the confidence of the shift, using a regime shift index (RSI). RSI represents a
cumulative sum of normalized deviations from the hypothetical mean level for the new regime, for which the difference from the mean level of the current regime is statistically significant according
to a Student’s t-test. If the RSI is positive for all points sequentially within the specified cut-off length, the null hypothesis of a constant mean is rejected. This led us to conclude that the
regime shift might have occurred at that point in time^65. If multiple data are available at a certain point in time (that is, multiple sites from F11 to F14), the mean value is applied in the time
series. Before testing, the temporal gaps of the time series were interpolated by the average of all available data (modal peak height (Fig. 2b), variance (Fig. 2c) and modal thickness (Extended Data
Fig. 6a) of ice thickness distributions, fraction of thick sea ice (Fig. 2d) and residence time of sea ice (Fig. 3a and Extended Data Fig. 2)). The cut-off length was set to 7 years (84 months) to
cover the advection timescale (travelling time across the Arctic) of sea ice, while at the same time, detecting shifts occurring at a timescale shorter than a decade. Other cut-off lengths (3, 4, 5,
6, 8 and 10 years) were also tested to see the sensitivity and robustness of the results. A summary of the test results is given in Extended Data Table 1; the timing of the detected shifts is shown
in all time series except for Fig. 2. The timing of the detected shifts of modal peak height and variance of ice thickness distributions are shown in Extended Data Fig. 6b,c, while those of the
fraction of thick sea ice are shown in Extended Data Fig. 8. RSIs, respective P values and the shift of the means are summarized in Extended Data Table 2.
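A heavily simplified sketch of the sequential test (this is not the full STARS algorithm: the t-test threshold is hard-coded, the regime-mean update is approximate, and the RSI normalisation is schematic):

```python
import numpy as np

def detect_shifts(x, cutoff=84, t_crit=1.98):
    """Flag change points whose regime shift index (RSI) stays positive
    over the full cut-off window following a candidate point."""
    x = np.asarray(x, dtype=float)
    sigma = np.std(x, ddof=1)                      # crude scale estimate
    diff = t_crit * sigma * np.sqrt(2.0 / cutoff)  # significant mean difference
    shifts, regime_mean, i = [], x[:cutoff].mean(), cutoff
    while i < len(x):
        dev = x[i] - regime_mean
        if abs(dev) > diff and i + cutoff <= len(x):   # candidate change point
            new_mean = regime_mean + np.sign(dev) * diff
            tail = x[i:i + cutoff]
            rsi = np.cumsum(np.sign(dev) * (tail - new_mean)) / (sigma * cutoff)
            if np.all(rsi > 0):                        # shift confirmed
                shifts.append(i)
                regime_mean, i = tail.mean(), i + cutoff
                continue
        regime_mean += (x[i] - regime_mean) / cutoff   # fold point into regime
        i += 1
    return shifts

# A step change at index 120 in a noisy monthly series
rng = np.random.default_rng(2)
series = np.concatenate([rng.normal(0.0, 0.3, 120), rng.normal(2.0, 0.3, 120)])
print(detect_shifts(series))
```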
Sea ice trajectory analysis
To investigate changes of pathways and residence time of sea ice in the Arctic basins, sea ice trajectories were calculated for the last three decades. Eight pseudo-ice floes were settled in the
western half of the Fram Strait section (from prime meridian to 10°W) at the same time and advected backwards in time. The calculations started on the 15th of every month from 1990 to 2019. Daily
sea ice motion vectors from the NSIDCv4 were used to update daily position of ice floes backwards in time. Ice motion vectors at the respective floe positions were calculated by interpolation of
surrounding points with Gaussian-type weighting (e-folding scale of 25km). Each trajectory calculation was performed 6 years back in time, while it was terminated if no motion vector was available
within a 25-km distance or sea ice concentration at the floe position was lower than 15%. The sea ice concentration at the ice floe position was obtained from OSI-409/OSI-430 with a Gaussian-type
weighting (e-folding scale of 12.5km). The position of each trajectory termination was used to define the location of ‘initial sea ice formation’. Trajectories shorter than three months were
excluded from the analysis because they represent ice floes formed in the vicinity of the Fram Strait.
Uncertainty of the daily position of the pseudo-floes was assessed by comparisons with ice-tethered buoy tracks obtained from the International Arctic Buoy Program (IABP)^53. We used 83 buoy tracks
that arrived in the Fram Strait from 2000 to 2018 and calculated the corresponding pseudo-buoy tracks backwards in time. The comparisons showed that the mean error of the daily pseudo-buoy positions
can be reasonably approximated by a linear function of backtracking days^19, error=50+(backtracking days)/2km. We applied this empirical formula as an error of the daily position of the backward
trajectories from 0 to 500 backtracking days, which corresponds to a 200 (300) km error after 300 (500) backtracking days. Note that this error estimate may underestimate the uncertainty because IABP
buoy tracks have been included in the NSIDCv4 ice motion product. However, comparisons between non-IABP buoys and pseudo-buoy tracks derived from the NSIDCv4 with error estimates by a bootstrap
method showed that pseudo-tracks are largely parallel to the corresponding buoys and the error does not monotonically increase over time^66. The estimated error circles (approximately 300km) of ice
formation location in the present study are sufficiently small compared to the polygons in Fig. 3b (greater than 1,500-km width), which guarantees the robustness of the analysis.
The residence time of sea ice in the Arctic basins was defined by the period from the start to the termination date of each trajectory. We calculated an average of residence time of eight pseudo-ice
floes that arrived in the Fram Strait at the same time and used it to define the mean residence time of ice floes for each month. The uncertainty of the residence time was defined by the s.d. of the
residence time of the eight pseudo-ice floes.
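The daily backtracking step can be sketched as follows (the 25 km e-folding scale and the 25 km termination radius follow the text; the uniform drift field and grid are purely illustrative, and real use would also check the ice-concentration criterion):

```python
import numpy as np

def interp_velocity(pos, grid_xy, grid_uv, efold=25.0, max_dist=25.0):
    """Gaussian-weighted interpolation of gridded daily drift vectors
    (km/day) to a floe position; returns None when no vector lies
    within max_dist, which terminates the trajectory."""
    d2 = np.sum((grid_xy - pos) ** 2, axis=1)
    if d2.min() > max_dist ** 2:
        return None
    w = np.exp(-d2 / efold ** 2)
    return (w[:, None] * grid_uv).sum(axis=0) / w.sum()

def backtrack(start, days, grid_xy, uv_by_day):
    """Advect a pseudo-floe backwards in time, one day per step."""
    track = [np.asarray(start, dtype=float)]
    for day in range(days):
        v = interp_velocity(track[-1], grid_xy, uv_by_day[day])
        if v is None:
            break                         # trajectory terminated
        track.append(track[-1] - v)       # backwards: subtract displacement
    return np.array(track)

# Uniform eastward drift of 1 km/day on a coarse grid
xs = np.arange(-20.0, 25.0, 5.0)
grid_xy = np.array([(x, y) for x in xs for y in xs])
uv = np.tile([1.0, 0.0], (len(grid_xy), 1))
track = backtrack((0.0, 0.0), 5, grid_xy, [uv] * 5)
print(track[-1])  # [-5.  0.] : five days upstream of the start point
```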
Stochastic model of dynamic ice thickening
The log-normal form of the ice thickness distribution can be obtained from a simple proportionate growth process, \(X_{m}=X_{0}\prod_{i=0}^{m-1}a_{i}\). If \(\ln X_{0}\) and \(\ln a_{0},\ln a_{1},\ldots,\ln a_{i}\) are uncorrelated, the probability function of \(X_{m}\) for large m is given by:

$$f(X_{m})=\frac{1}{\sqrt{2\pi\acute{\sigma}(m)^{2}}\,X_{m}}\exp\left(-\frac{\left(\ln\left(X_{m}/\acute{X}(m)\right)\right)^{2}}{2\acute{\sigma}(m)^{2}}\right)$$

where \(\acute{X}(m)=e^{\nu m}\), \(\acute{\sigma}(m)=\sigma m^{1/2}\), and \(\nu\) and \(\sigma^{2}\) are the mean and variance of the population distribution of \(\ln a_{m}\) (including \(X_{0}\)), respectively^67.
In this study, we provide a concept and description of a stochastic model that formulates sea ice thickening associated with dynamic ice deformation. The model formulates three features of dynamic
sea ice thickening by ridging and/or rafting: (1) dynamic ice thickening is a stochastic process (areal and thickening stochasticity); (2) thicker ice has a larger potential to get thicker than
thinner ice at a dynamic event (proportionate ice thickening); and (3) thinner ice has a higher probability of dynamic deformation due to its weaker ice strength (preferential deformation of thinner
over thicker ice types). The first point consists of two types of stochasticity in the dynamic ice thickening process. One is areal stochasticity, corresponding to the fact that ice deformation only
occurs for a small fraction of the pack ice while the rest of the ice is unchanged when a dynamic event occurs. The other is thickening stochasticity, representing the fact that the thickness gain by
ridging/rafting varies in space and differs between events. The second point represents a sea ice characteristic that thicker ice is tolerant and can exert stronger compressive force on the ice
forming ridges and/or rafts; hence, more energy is potentially available for the dynamic thickening^30,31. The third point represents the fact that the thinner part of the pack ice is preferentially
ridged/rafted when a dynamic event occurs^68,69. This also takes into account the effect of ice thickness changes on the dynamic thickening process, for example, the thinner ice condition in recent
years increases the likelihood of ice deformation.
The first two points can be formulated by a proportionate thickening of the stochastic ice thickness:
$$X_i = a_{i-1}\,X_{i-1}$$
where X is the stochastic ice thickness at a certain location, i denotes the time index counting sporadic dynamic events (for example, passage of a storm) and a[i−1] is the conditional proportionate
thickening increment due to the event at i−1. The increment a[i−1] is a stochastic variable representing both areal and thickening stochasticity. They are implemented as:
$$a_i=\begin{cases}1+b\,r_i & \text{with probability } \alpha_i\ (\%)\\ 1 & \text{with probability } 1-\alpha_i\ (\%)\end{cases}$$
where b is a proportionate thickening constant, r[i] is a stochastic thickening increment that represents the thickening stochasticity of i-th dynamic event and α[i] is the areal probability of
dynamic thickening that represents areal stochasticity and gives the probability of ice thickening occurrence. This formula indicates that when a dynamic event occurs, α(%) area of pack ice
experiences dynamic thickening (ridging/rafting), while the rest (1−α(%)) is unchanged (areal stochasticity). The thickness gain, br[i], in the dynamic thickening area, is also a stochastic
variable: the possible maximum gain is b while the minimum is 0 (r[i] is random, so 0≤r[i]<1) (thickening stochasticity).
We applied the proportionate thickening constant as b=0.4. The choice of b=0.4 implies that sea ice in the ridging/rafting area gains 0.4m thickness at maximum (0.2m on average) when the ice is
initially 1-m thick, while it gains a 1.2m thickness at maximum (0.6m on average) when initially the ice is 3-m thick. The value of the parameter b comes from a recent high-resolution observation
(5×5m resolution covering a 9-km^2 area) of single ice deformation event north of Svalbard, which describes changes in the sea ice freeboard just before and right after a storm event^70. According
to this study, the change in sea ice freeboard in a converging area is 0.07m (from 0.36m to 0.43m) on average, corresponding to 0.58m dynamic thickening of ice by ridging and/or rafting (assuming
the freeboard to thickness ratio=8.35)^62. The gain relative to the mean ice thickness is estimated by b=0.58/1.45=0.4 (the mean ice thickness in the survey area=1.45m). We applied the value
for the proportionate thickening constant, b, as the first approach to develop the model. Although the study captured detailed spatial change in the sea ice freeboard, the estimate of b comes from a
one-time event and hence needs further assessments by future observations. It should be noted that b=0.4 is an areal average estimated from the observations, whereas we applied b as the upper bound
of the proportionate thickening. This is because the probability function of the stochastic thickening increment, r[i], is not known so far; hence, we assumed a constant probability of the increment
between 0 and b, potentially causing excessive thickening near the upper bound. The effect of the choice of b is discussed below.
The areal probability of dynamic thickening, α, is included to take the third point into account, that is, thinner ice has more chance to be ridged and/or rafted than thicker ice when a dynamic event
occurs. To implement this feature, α is given by a function of ice thickness: the areal probability is inversely proportional to the stochastic ice thickness X[i]:
$$\alpha_i(X_i)=\frac{8}{X_i+1}\ (\%).$$
The formula indicates that 1-m thick ice experiences dynamic thickening at 4% areal probability, while 3-m ice experiences dynamic thickening at 2% areal probability. Our first implementation of this
formula is based on an observational estimate of areal fraction of dynamic thickening^70. According to the high-resolution survey of a single dynamic event, thickening occurred in 4% of the survey
area with a mean ice thickness of 1.45m. This formula also needs further evaluation by comparing with future observations that address the relationship between areal probability of deformation and
ice thickness.
Another parameter necessary for the model is the number of dynamic events, m, that is, external forcing that could cause mechanical fracturing of sea ice and consequent ridging and/or rafting. We
used the number of Arctic cyclones that pass over the ice pack as a first-order indicator of the number of dynamic events. Typically 90–130 cyclones per year occur in the Arctic Ocean (40–60
cyclones in winter, 50–70 cyclones in summer)^71. A typical size of an Arctic cyclone is approximately 3×10^6km^2 (mean radius of approximately 10^3km)^71, which covers approximately one third of
the ice-covered area of the Arctic Ocean. We therefore assumed that one-third of all cyclones hits the ice pack at a certain location in the Arctic, that is, the ice pack experiences approximately 40
dynamic events per year. This corresponds to approximately 80–240 dynamic deformation events for the typical residence time of sea ice in the Arctic (2–6 years; Fig. 3a).
In addition, examples shown in Fig. 5 contain a simple thermodynamic term to mimic the effect of modal peak shift of thickness distribution due to thermodynamic ice growth:
$$X_i = a_{i-1}\,X_{i-1} + \frac{c}{X_{i-1}}$$
where c is the thermodynamic ice growth coefficient. This term comes from a simplified thermodynamic process without thermal inertia of sea ice and heat flux from the ocean^72:
$$\frac{{\rm{d}}H}{{\rm{d}}t}=\frac{{\kappa }_{{\rm{ice}}}({T}_{{\rm{f}}}-{T}_{{\rm{s}}})}{{\rho }_{{\rm{ice}}}{L}_{{\rm{f}}}H}$$
where H is the ice thickness, κ[ice] is the heat conductivity of ice, ρ[ice] is the ice density, L[f] is the latent heat of freezing, T[f] is the freezing temperature of sea ice and T[s] is the
temperature at the ice surface. We applied this formula with a simplification, ΔH=c/H, where ΔH is the ice thickness change due to a thermodynamic process, c is a thermodynamic ice growth
coefficient corresponding to Δtκ[ice] (T[f]−T[s])/(ρ[ice]L[f]). As the model does not include a process that forms new thin ice by lead opening, inclusion of the thermal forcing term without a
compensating term makes the modal peak very steep after a few years, that is, no ice exists in thickness ranges thinner than the thermal equilibrium thickness. To alleviate such an excessive modal peak
generation and to take into account the insulating effect of the snow pack that substantially delays thermodynamic ice growth, we applied a moderate value, c=0.015, which is about one-third of the
value estimated from c=Δtκ[ice] (T[f]−T[s])/(ρ[ice]L[f]), where Δt ≅ 9d (corresponding to 40 dynamic events per year) and annual mean surface air temperature of T[s]=263K.
The initial condition of the ice thickness distribution in Fig. 5 is given by a thermodynamically grown sea ice without dynamic deformation, X[0]. This is also a stochastic variable, having a normal
distribution for simplicity:
$$g(X_0)=\frac{1}{\sqrt{2\pi}\,\sigma_0}\exp\left(\frac{-(X_0-\mu_0)^2}{2\sigma_0^2}\right)$$
where μ[0]=1.0 and σ[0]=0.25 are applied in the examples (that is, 1 m mean ice thickness with 0.25 s.d., shown by m=0 in Fig. 5), which roughly corresponds to the thickness of new ice three
months after its formation (based on Anderson’s freezing degree days law^25, with an assumption of T[s]=253K). Figure 5 shows examples of ice thickness distribution after 60, 120 and 180 dynamic
events, roughly corresponding to 1.5, 3 and 4.5 years of residence time of sea ice.
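The recursion above is simple enough to simulate directly. The following Monte Carlo sketch is our illustration, not the authors' code; it implements the proportionate increment a[i], the thickness-dependent areal probability α[i] and the simplified thermodynamic term with the parameter values quoted in the text (b=0.4, α[i]=8/(X[i]+1) %, c=0.015, X[0]~N(1.0, 0.25²), 180 events ≈ 4.5 years at 40 events per year). The 0.05 m floor on the initial draw is our own guard against non-physical negative thicknesses.

```python
# Illustrative Monte Carlo sketch of the stochastic thickening model
# described in the text (not the authors' code).
import random
import statistics

def simulate(m_events, n_floes=5000, b=0.4, c=0.015, seed=1):
    rng = random.Random(seed)
    # Initial thermodynamically grown ice: normal, mean 1.0 m, s.d. 0.25 m.
    X = [max(0.05, rng.gauss(1.0, 0.25)) for _ in range(n_floes)]
    for _ in range(m_events):
        for k in range(n_floes):
            x = X[k]
            # Areal probability of deformation: 8/(x+1) percent.
            if rng.random() < 0.08 / (x + 1.0):
                x *= 1.0 + b * rng.random()  # proportionate stochastic gain
            x += c / x                       # simplified thermodynamic growth
            X[k] = x
    return X

X180 = simulate(180)
mean_t = statistics.mean(X180)
```

With 180 events the simulated distribution develops the right-skewed, log-normal-like shape discussed above: the mean exceeds the median and the spread is far larger than the initial 0.25 m.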
The current formulation contains three parameters to describe the dynamic ice thickening process: b (the proportionate thickening constant); α (the areal probability of dynamic thickening); and m
(the number of dynamic events). In this study, we briefly describe the sensitivity of the ice thickness distributions to these parameters. In general, a smaller (larger) thickening constant b
decelerates (accelerates) the dynamic thickening process, that is, a smaller b gives a smaller variance and steeper modal peak of thickness distribution if α and m are fixed. However, a large value
of b (for example, b=0.8, indicating that ridged ice can be 1.8 times thicker than the ice before an event at maximum) makes the distribution bimodal because the possible thickness gain at each
dynamic event is far from the modal thickness and the ridged/rafted ice tends to generate another peak apart from the mode. Therefore, the possible and realistic range of b should be examined further
together with the probability density function of the thickening increment r by high-resolution observations in the future. The areal probability of dynamic thickening, α, also affects the evolution
of the dynamic thickening process. A larger α promotes dynamic thickening because a larger fraction of pack ice can be deformed at one event. The thickness dependency of the probability, equation
(8), decelerates further thickening of thick ice. Although values of b and α affect the progress of dynamic thickening in the model, we obtained similar ice thickness distributions with a log-normal
form sooner or later, that is, smaller b and α can be compensated by a large m, the number of deformation events, indicating a robustness of the formulation. The resulting shape of the distribution,
its temporal evolution (Fig. 5) and its comparison with the observed change in distribution (Fig. 1b) suggest that the proposed stochastic ice thickening model captures the essence of the dynamic
thickening process that resulted in the observed changes in ice thickness distribution.
Problem B
Vegagerðin is renewing the software they use to track maintenance and repair costs for roads. The first step towards that is implementing the new system for the most important road in the country, hringvegurinn. To simplify things, the circular road is divided into $N$ equally long sections and they are numbered $1, 2, \dots , N$. Section $1$ is the road heading west from Akureyri and the sections are in increasing order following that one going around the country. Then section $N$ is the section of road east of Akureyri.

The system needs to support the following operations. It needs to be possible to register a cost $x$ for each section in an interval. It needs to be possible to get the total cost for an interval thus far. Finally the system has to be able to rotate the numbering such that section $1$ is in a new location.

An interval means a contiguous section of the road. The interval $3, 6$ contains the sections $3, 4, 5, 6$. However, if $N = 6$ the interval $5, 2$ contains the sections $5, 6, 1, 2$. The interval $3, 3$ is just the section $3$.

After relabeling the sections the orientation is still the same, so the sections are still increasing when traveling counter-clockwise. Rotating by $t$ sections means that the section numbered $t + 1$ will be numbered $1$ and the section numbered $t$ will be numbered $N$.
The first line of the input contains two integers, the number of sections $N$ and the number of queries $q$ ($1 \leq N \leq 10^6$, $1 \leq q \leq 10^4$). Next there are $q$ lines, each with one
query. Each query is a single line and starts with the number $1, 2$ or $3$. If it starts with $1$ there will follow a single number $t$ ($1 \leq t \leq N$) which means the numbering should be
rotated cyclically by $t$ sections counter-clockwise. If the query starts with the number $2$ there will follow three integers $l, r, x$ ($1 \leq l, r \leq N$, $1 \leq x \leq 10^9$) which means that
the total cost on the sections from $l$ to $r$ should each be increased by $x$. Finally if the query starts with the number $3$ there will follow two integers $l, r$ ($1 \leq l, r \leq N$) in which
case the total cost of the sections from $l$ to $r$ should be printed.
One line for each query starting with the number $3$ should be printed as described above.
Group Points Constraints
1 20 $1 \leq n \leq 1000$
2 50 $l \leq r$ in all intervals and there are no rotation operations
3 30 No further constraints
Sample Input 1 Sample Output 1
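One way to meet these requirements (a sketch of ours, not an official solution) is a Fenwick tree supporting range update and range query, with the rotation stored as a lazy label offset and wrap-around intervals split into at most two physical intervals. The rotation convention below (after `rotate(t)`, label 1 refers to the section previously labeled t + 1) is our reading of the statement.

```python
class Fenwick:
    """Fenwick tree supporting range add and range sum (1-indexed)."""
    def __init__(self, n):
        self.n = n
        self.b1 = [0] * (n + 2)
        self.b2 = [0] * (n + 2)

    def _add(self, b, i, x):
        while i <= self.n:
            b[i] += x
            i += i & -i

    def range_add(self, l, r, x):
        self._add(self.b1, l, x)
        self._add(self.b1, r + 1, -x)
        self._add(self.b2, l, x * (l - 1))
        self._add(self.b2, r + 1, -x * r)

    def _prefix(self, i):
        s1 = s2 = 0
        j = i
        while j > 0:
            s1 += self.b1[j]
            s2 += self.b2[j]
            j -= j & -j
        return s1 * i - s2

    def range_sum(self, l, r):
        return self._prefix(r) - self._prefix(l - 1)


class Ring:
    """Circular road with lazily rotated labels on top of a Fenwick tree."""
    def __init__(self, n):
        self.n = n
        self.off = 0  # after rotate(t), label 1 maps to physical t + 1
        self.fw = Fenwick(n)

    def _phys(self, i):
        return (i - 1 + self.off) % self.n + 1

    def rotate(self, t):
        self.off = (self.off + t) % self.n

    def _pieces(self, l, r):
        pl, pr = self._phys(l), self._phys(r)
        # A wrap-around interval becomes two physical intervals.
        return [(pl, pr)] if pl <= pr else [(pl, self.n), (1, pr)]

    def add(self, l, r, x):
        for a, b in self._pieces(l, r):
            self.fw.range_add(a, b, x)

    def query(self, l, r):
        return sum(self.fw.range_sum(a, b) for a, b in self._pieces(l, r))


ring = Ring(6)
ring.add(5, 2, 10)               # sections 5, 6, 1, 2 each gain cost 10
total_before = ring.query(5, 2)  # expect 40
ring.rotate(2)                   # label 1 now names the old section 3
total_after = ring.query(1, 6)   # the whole ring is unaffected by relabeling
```

Every operation is O(log N), which is comfortable for N up to 10^6 and 10^4 queries.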
Re: [CSS3-mediaqueries]: Invalid test cases in test suite
On 29/09/2011 6:23 AM, Boris Zbarsky wrote:
> On 9/28/11 4:13 PM, Arron Eicholz wrote:
>> CSS 2.1 still doesn't cover 0 exactly. What is 0?
> The CSS 2.1 spec says its numbers are real numbers.
> Typical fairly equivalent definitions of 0 would then include:
> * The smallest element of the subset of the real numbers called the
> "Natura numbers".
> * The additive identity in the field structure of the real numbers.
> * "That thing defined in the first Peano axiom."
> and so forth. As in, this is the zero you learned about in grade school;
> nothing magic about it.
This is what I would have expected but the wording in the spec is a surprise
to me.
>> Since -0 is equivalent to 0 and is not a negative number that
>> completely explains -0. It does not however explicitly explain what 0
>> or +0 is or isn't. We must therefore draw the conclusion that +0 is
>> positive and 0 can be both positive and negative.
> I have no idea what you're talking about here.
-0 = 0 = +0
100px = +100px
My question is why implementations would allow something like
margin-left: +100px in the first place?
-100px is a negative number but does not equal 100px or +100px.
>> We need explicit text explaining this
> How much more explicit than "these are real numbers" can you get? Or do
> you want the CSS spec to include some subset of the field axioms for the
> reals, enough to prove that +0 == 0 == -0 (using the usual definitions
> of unary + and - for the reals, which would likewise need to be included
> in the CSS spec)?
I hope not since that would mean including all positive and negative
numbers to infinity with an extra subset (or sub equation) for all
negative numbers like so.
-100px (+100px x 2) == 100px == +100px
-nIDENT (+nIDENT x 2) == nIDENT == +nIDENT
> We can explicitly say that -0 == 0 == +0, of course, as an informative
> note or something, if you think that makes things clearer for people who
> are unfamiliar with the term "real number"....
> -Boris
I would recommend removing all of this.
# Both integers and real numbers may be preceded by
# a "-" or "+" to indicate the sign. -0 is equivalent
# to 0 and is not a negative number.
And replacing it with this:
| All negative integers and negative real numbers are
| preceded by a "-" to indicate the sign. All other
| integers and real numbers have no sign.
I see no reason for the '+' sign and zero is just zero.
Alan Gresley
Received on Thursday, 29 September 2011 04:42:32 UTC
Van der Pol oscillator
From Scholarpedia
Takashi Kanamaru (2007), Scholarpedia, 2(1):2202. doi:10.4249/scholarpedia.2202 revision #138698
The van der Pol oscillator is an oscillator with nonlinear damping governed by the second-order differential equation \[\tag{1} \ddot x - \epsilon (1-x^2) \dot x + x = 0 \ ,\]
where \(x\) is the dynamical variable and \(\epsilon>0\) a parameter. This model was proposed by Balthasar van der Pol (1889-1959) in 1920 when he was an engineer working for Philips Company (in the Netherlands).
When \(x\) is small, the quadratic term \(x^2\) is negligible and the system becomes a linear differential equation with a negative damping \(-\epsilon \dot{x}\ .\) Thus, the fixed point \((x=0,\dot
{x}=0)\) is unstable (an unstable focus when \(0 < \epsilon < 2\) and an unstable node, otherwise). On the other hand, when \(x\) is large, the term \(x^2\) becomes dominant and the damping becomes
positive. Therefore, the dynamics of the system is expected to be restricted in some area around the fixed point. Actually, the van der Pol system (1) satisfies the Liénard's theorem ensuring that
there is a stable limit cycle in the phase space.The van der Pol system is therefore a Liénard system.
Using the Liénard's transformation \(y = x - x^3/3 - \dot{x}/\epsilon\ ,\) equation (1) can be rewritten as \[\tag{2} \dot x = \epsilon \left( x - \frac{1}{3} x^3 - y \right) \]
\[\tag{3} \dot y = \frac{x}{\epsilon} \]
which can be regarded as a special case of the FitzHugh-Nagumo model (also known as Bonhoeffer-van der Pol model).
Small Damping
When \(\epsilon\) << 1, it is convenient to rewrite equation (1) as \[\tag{4} \dot x = \epsilon \left( x - \frac{1}{3} x^3\right) - y \]
\[\tag{5} \dot y = x \]
where the transformation \(y = \epsilon (x - x^3/3) - \dot{x}\) was used. When \(\epsilon = 0\ ,\) the system preserves the energy and has the solution \(x=A\cos(t+\phi)\) and \(y=A\sin(t+\phi)\ .\)
To obtain the approximated solution for small \(\epsilon\ ,\) new variables \((u,v)\) which rotate with the unperturbed solution, i.e., \[ u = x \cos t + y \sin t \] \[ v = -x \sin t + y \cos t \]
are considered. By substituting them into equations (4) and (5), we obtain \[\tag{6} \dot{u} = \epsilon \left[ u \cos t - v \sin t - \frac{1}{3}( u \cos t - v \sin t)^3 \right] \cos t \]
\[\tag{7} \dot{v} = - \epsilon \left[ u \cos t - v \sin t - \frac{1}{3}( u \cos t - v \sin t)^3 \right] \sin t \, . \]
Because \(\dot{u}\) and \(\dot{v}\) are \(O(\epsilon)\ ,\) the varying speed of \(u\) and \(v\) is much slower than \(\cos t\) and \(\sin t\ .\) Therefore, the averaging theory can be applied to
equations (6) and (7). Integrating the righthand sides of equations (6) and (7) with respect to \(t\) from \(0\) to \(T=2\pi\ ,\) keeping \(u\) and \(v\) fixed, \[ \dot{u} = \frac{\epsilon}{8}\,u\
left[ 4 - ( u^2+ v^2) \right] \] \[ \dot{v} = \frac{\epsilon}{8}\,v \left[ 4 - ( u^2+ v^2) \right] \] are obtained. Introducing \(r=\sqrt{u^2+v^2}\ ,\) a differential equation \[\tag{8} \dot{r} = \
frac{\epsilon}{8}\, r\, ( 4 - r^2 ) \]
which has a stable equilibrium with \(r=2\) is obtained. Therefore, the original system (4) and (5) has a stable limit cycle with \(r=2\) for small \(\epsilon\ .\)
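This prediction is easy to check numerically. The sketch below is an editorial addition (not part of the original article); it integrates equation (1) with a fixed-step RK4 scheme and measures the steady-state amplitude for \(\epsilon=0.1\), which should come out close to the value 2 obtained from the averaged equation (8).

```python
def rk4_step(x, v, eps, dt):
    """One RK4 step of x'' - eps*(1 - x^2)*x' + x = 0, as (x, x')."""
    def f(x, v):
        return v, eps * (1.0 - x * x) * v - x
    k1x, k1v = f(x, v)
    k2x, k2v = f(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
    k3x, k3v = f(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
    k4x, k4v = f(x + dt * k3x, v + dt * k3v)
    return (x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0,
            v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0)

def steady_amplitude(eps, t_end=400.0, dt=0.01):
    x, v, t, peak = 0.1, 0.0, 0.0, 0.0   # start near the unstable focus
    while t < t_end:
        x, v = rk4_step(x, v, eps, dt)
        t += dt
        if t > 0.5 * t_end:              # discard the transient
            peak = max(peak, abs(x))
    return peak

amp = steady_amplitude(0.1)
```

For small \(\epsilon\) the corrections to the amplitude are \(O(\epsilon^2)\), so `amp` lands very close to 2.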
Large Damping
When \(\epsilon\) >> 1, it is convenient to use equations (2) and (3). When the system is away from the curve \(y=x-x^3/3\ ,\) a relation \(|\dot{x}|\) >> \(|\dot{y}|=O(1/\epsilon)\) is obtained from
equations (2) and (3). Therefore, the system moves quickly in the horizontal direction. When the system enters the region where \(|x-x^3/3-y| = O(1/\epsilon^2)\ ,\) \(\dot{x}\) and \(\dot{y}\) are
comparable because both of them are \(O(1/\epsilon)\ .\) Then the system goes slowly along the curve, and eventually exits from this region. Such a situation is shown in Figure 3. It can be observed
that the system has a stable limit cycle.
It is also observed that the period of oscillation is determined mainly by the time during which the system stays around the cubic function where both \(\dot{x}\) and \(\dot{y}\) are \(O(1/\epsilon)\
.\) Thus, the period of oscillation is roughly estimated to be \(T\propto \epsilon\ .\)
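The scaling \(T\propto\epsilon\) can also be verified numerically. The following sketch (ours, not from the original article) integrates equation (1) with RK4 and estimates the period from successive upward zero crossings of \(x\). Doubling \(\epsilon\) from 10 to 20 should roughly double the period; the agreement is only approximate because of the well-known \(O(\epsilon^{-1/3})\) correction to the leading-order period \((3-2\ln 2)\epsilon\).

```python
def rk4_step(x, v, eps, dt):
    """One RK4 step of x'' - eps*(1 - x^2)*x' + x = 0, as (x, x')."""
    def f(x, v):
        return v, eps * (1.0 - x * x) * v - x
    k1x, k1v = f(x, v)
    k2x, k2v = f(x + 0.5 * dt * k1x, v + 0.5 * dt * k1v)
    k3x, k3v = f(x + 0.5 * dt * k2x, v + 0.5 * dt * k2v)
    k4x, k4v = f(x + dt * k3x, v + dt * k3v)
    return (x + dt * (k1x + 2 * k2x + 2 * k3x + k4x) / 6.0,
            v + dt * (k1v + 2 * k2v + 2 * k3v + k4v) / 6.0)

def period(eps, dt=0.002):
    """Mean spacing of upward zero crossings of x (early crossings discarded)."""
    x, v, t = 2.0, 0.0, 0.0
    crossings = []
    while len(crossings) < 6 and t < 500.0:
        xn, vn = rk4_step(x, v, eps, dt)
        t += dt
        if x <= 0.0 < xn:                # upward crossing within this step
            frac = -x / (xn - x)         # linear interpolation in time
            crossings.append(t - dt + frac * dt)
        x, v = xn, vn
    gaps = [b - a for a, b in zip(crossings[2:], crossings[3:])]
    return sum(gaps) / len(gaps)

T10 = period(10.0)
T20 = period(20.0)
```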
When van der Pol (1927) realized equation (1) with an electrical circuit composed of two resistances \(R\) and \(r\ ,\) a capacitance \(C\ ,\) an inductance, and a tetrode, the period of oscillation
was determined by \(\epsilon = RC\) in his circuit. Because \(RC\) is the time constant of relaxation in RC circuit, he named this oscillation as relaxation oscillation. The characteristics of the
relaxation oscillation are the slow asymptotic behavior and the sudden discontinuous jump to another value. Using few relaxation oscillations, van der Pol and van der Mark (1928) modeled the electric
activity of the heart.
Electrical Circuit
To make electrical circuits described by equation (1), active circuit elements with the cubic nonlinear property, \(i=\phi(v)= \gamma v^3 - \alpha v\ ,\) are required, where \(i\) and \(v\) are
current and voltage, respectively. In the 1920s, van der Pol built the oscillator using the triode or tetrode. After Reona Esaki (1925-) invented the tunnel diode in 1957, making the van der Pol
oscillator with electrical circuits became much simpler.
Using the tunnel diode with input-output relation \[ i=\phi_t(v) = \phi(v-E_0) + I_0 \] the equation for the circuit shown in Figure 5 is written as follows. \[ \dot{V} = \frac{1}{C}\left(- \phi(V) -
W\right) \] \[ \dot{W} = \frac{1}{L}V \] This can be rewritten as \[\tag{9} \ddot{V} - \frac{1}{C} (\alpha - 3\gamma V^2) \dot{V} + \frac{1}{LC} V = 0 \]
Introducing new variables \(x = \sqrt{3\gamma/\alpha} V\ ,\) \(t' =t/\sqrt{LC}\ ,\) and \(\epsilon = \sqrt{L/C} \alpha\ ,\) equation (9) can be transformed into equation (1). As shown in the previous
section, when \(\epsilon\) is large, the period of oscillation is proportional to \(\epsilon\ .\) Thus, the original system has a period \(T\propto \epsilon \sqrt{LC}= L\alpha\ .\) Because \(\alpha\)
has an order of the reciprocal of resistance \(r\ ,\) \(T\propto L/r\) is obtained. \(L/R\) is the time constant of relaxation in LR circuit; therefore, the name of "relaxation oscillation" is appropriate here as well.
The electrical circuit elements with the nonlinear property can also be realized using operational amplifiers. By this method, much research has been done to study the nonlinear dynamics in physical systems.
Periodic Forcing and Deterministic Chaos
Van der Pol had already examined the response of the van der Pol oscillator to a periodic forcing in his paper in 1920, which can be formulated as \[ \ddot x - \epsilon (1-x^2) \dot x + x = F \cos\
left( \frac{2 \pi t}{T_{in}}\right) \] There exist two frequencies in this system, namely, the frequency of self-oscillation determined by \(\epsilon\) and the frequency of the periodic forcing. The
response of the system is shown in Figure 6 (upper) for \(T_{in}=10\) and \(F=1.2\ .\) It is observed that the mean period \(T_{out}\) of \(x\) often locks to \(mT_{in}/n\ ,\) where \(m\) and \(n\)
are integers. It is also known that chaos can be found in the system when the nonlinearity of the system is sufficiently strong. Figure 6 (lower) shows the largest Lyapunov exponent, and it is
observed that chaos takes place in the narrow ranges of \(\epsilon\ .\)
Van der Pol and van der Mark (1927) considered an electrical circuit composed of a resistance, a capacitance, and a Ne lamp, and they heard the response of the system by inserting the telephone
receivers into their circuit. Besides the locking behaviors, they heard irregular noises before the period of the system jumps to the next value. They stated that this noise is a subsidiary
phenomenon, but today it is thought that they heard the deterministic chaos in 1927 before Yoshisuke Ueda (1961) and Edward Lorenz (1963). Nevertheless, van der Pol did not identify the structure
underlying a chaotic attractor in the phase space. Lorenz published a picture of a chaotic attractor in the phase space in the early 60s and Ueda did in the early 70s.
Typical sounds of the system can be heard in the following links (before clicking the link, please lower the volume of your speaker)
where (A), (B), and (C) correspond to the letters in Figure 6. A transformation of the timescale was applied so that the oscillation with \(T_{out}=10\) was transformed into the oscillation with 440
[Hz]. An irregular noise would be heard when chaos exists in the system.
The locking behaviors of the mean period can be understood using the circle map and related mappings. This was done in a series of papers by M.L. Cartwright and J.E. Littlewood (1945-1950) and in
work on an important piece-wise linear approximation by N. Levinson (1949). Both of these investigations uncovered "random-like" dynamics. Levinson's analysis led to S. Smale's introduction of the
horseshoe mapping, which was used by M. Levi (1981) to complete the picture of limit behavior of all solutions. van der Pol's model was simulated using high resolution computations by J.E. Flaherty
and F.C. Hoppensteadt (1978) who identified overlapping regions in the parameter domain where phase locking occurs, similar to Arnold's tongues. That work motivated a successful investigation of
phase-locking in neural tissue done by R. Guttman et al.(See Voltage-Controlled Oscillations in Neurons). As for chaos in the Arnold's tongues, please see Horita et al. (1988) and Ott (1993).
• B. van der Pol, A theory of the amplitude of free and forced triode vibrations, Radio Review, 1, 701-710, 754-762, 1920.
• E. V. Appleton and B. van der Pol, On the form of free triode vibrations, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science Ser.6, 42, 201-220, 1921.
• E. V. Appleton and B. van der Pol, On a type of oscillation-hysteresis in a simple triode generator, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science Ser.6, 43,
177-193, 1922.
• B. van der Pol, On oscillation hysteresis in a triode generator with two degrees of freedom, The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science Ser.6, 43, 700-719,
• B. van der Pol, On "relaxation-oscillations", The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science Ser.7, 2, 978-992, 1926.
• B. van der Pol, Forced oscillations in a circuit with non-linear resistance (reception with reactive triode), The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science Ser.7
, 3, 65-80, 1927.
• B. van der Pol and J. van der Mark, Frequency demultiplication, Nature, 120, 363-364, 1927
• B. van der Pol and J. van der Mark, The heartbeat considered as a relaxation oscillation, and an electrical model of the heart. The London, Edinburgh, and Dublin Philosophical Magazine and
Journal of Science Ser.7, 6, 763-775, 1928.
• B. van der Pol, The nonlinear theory of electric oscillations, Proceedings of the Institute of Radio Engineers, 22, 1051-1086, 1934.
• M. L. Cartwright and J. E. Littlewood, On non-linear differential equations of the second order: I. The equation \(\ddot y - k(1 - y^2 )\dot y + y = b\lambda k cos (\lambda t + a); k\) large,
Journal of the London Mathematical Society, 20, 180-189, 1945.
• N. Levinson, A second order differential equation with singular solutions, Ann. Math., 50, No. 1, 127-153, 1949.
• M.L. Cartwright, Forced oscillations in nonlinear systems, Contrib. to theory of nonlinear oscillations, Princeton University Press (Study 20) 149-241, 1950.
• R. FitzHugh, Impulses and physiological states in models of nerve membrane, Biophysical Journal, 1, 445-466, 1961.
• J. Nagumo, S. Arimoto, and S. Yoshizawa, An active pulse transmission line simulating nerve axon, Proceedings of the Institute of Radio Engineers, 50, 2061-2070, 1962.
• J.E. Flaherty, F.C. Hoppensteadt, Frequency entrainment of a forced van der Pol oscillator, Studies in Appl. Math., 58, 5-15, 1978.
• M. Levi, Qualitative analysis of the periodically forced relaxation oscillations, Memoirs of the Amer. Math. Soc., 32, No. 244, 1981.
• J. Guckenheimer and P. Holmes, Nonlinear oscillations, dynamical systems, and bifurcations of vector fields, Springer-Verlag, 1983.
• T. Horita, H. Hata, H. Mori, T. Morita, K. Tomita, S. Kuroki, and H. Okamoto, Local Structures of Chaotic Attractors and q-Phase Transitions at Attractor-Merging Crises in the Sine-Circle Maps,
Progress of Theoretical Physics, 80, 793-808, 1988
• E. Ott, Chaos in Dynamical Systems, Cambridge University Press, New York, 1993.
See Also
Averaging, Chaos, FitzHugh-Nagumo Model, Periodic Orbit, Relaxation Oscillator, Stability, Voltage-Controlled Oscillations in Neurons
Problem MP 19
Problems for
Intermediate Methods in Theoretical Physics
Edward F. Redish
Three Coupled Oscillators
Three identical carts are connected to exterior walls by four identical springs as shown in the figure.
(a) Using coordinates y[i] (i = 1, 2, 3) that are horizontal, have their positive direction to the right, and have their 0 at the i-th cart's equilibrium position, write the laws of motion for the
three carts starting with Newton's second law. Ignore friction and air resistance.
(b) Find the normal modes and natural frequencies for this system. Express your frequencies as a multiple of the natural frequency parameter, ω[0] = (k/m)^1/2.
(c) Explain the motion of the carts in each normal mode and, if you can, explain physically why the frequencies associated with each mode are what they are.
(d) How would your equations differ if you used the same coordinate system for all the masses? Say, choosing the origin at the left wall with the positive direction running to the right? The
equations in these coordinates should look dramatically different from those you found for (a). Are they in fact the same? If they are, show it. If they are not, explain why not.
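As a numerical cross-check on parts (a)–(c) — an editorial addition, not part of the assignment, and a spoiler for the answers — the equations of motion can be written as \(\ddot{y} = -\omega_0^2 K y\) with the stiffness matrix K below, and the claimed normal-mode frequencies \(\omega^2/\omega_0^2 = 2-\sqrt{2},\ 2,\ 2+\sqrt{2}\) can be verified by direct substitution of the mode shapes:

```python
import math

# Stiffness matrix (in units of k/m). From Newton's second law:
#   m*y1'' = -k*y1 + k*(y2 - y1)
#   m*y2'' = -k*(y2 - y1) + k*(y3 - y2)
#   m*y3'' = -k*(y3 - y2) - k*y3
# i.e. y'' = -w0^2 * K * y with
K = [[2.0, -1.0, 0.0],
     [-1.0, 2.0, -1.0],
     [0.0, -1.0, 2.0]]

def mat_vec(A, v):
    return [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]

s = math.sqrt(2.0)
# (omega/w0)^2 eigenvalues and the corresponding mode shapes.
modes = [
    (2.0 - s, [1.0, s, 1.0]),     # all carts in phase, middle swings most
    (2.0,     [1.0, 0.0, -1.0]),  # outer carts opposed, middle cart at rest
    (2.0 + s, [1.0, -s, 1.0]),    # adjacent carts out of phase
]

# Verify K v = lambda v for each claimed eigenpair.
for lam, v in modes:
    for kv, vi in zip(mat_vec(K, v), v):
        assert abs(kv - lam * vi) < 1e-12
```

The softest mode moves all carts together (only the wall springs fight the motion weakly), while the stiffest alternates neighbors so every spring is stretched.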
This page prepared by Edward F. Redish
Department of Physics
University of Maryland
College Park, MD 20742
Phone: (301) 405-6120
Email: redish@umd.edu
Last revision 28. December, 2010.
Splitting an object along the edges of a rectangular box (the shape of the portal in our game) is a complicated problem. To solve it, I decomposed it into smaller sub-problems, described below.
Our objects contain two essential components: a mesh renderer and a 2D polygon collider. Both of them need to be split in order for the object to function correctly. I decided that the first problem
to solve was to write code to split both of these components along an arbitrary line.
Splitting a mesh on a line
I based this algorithm on the one outlined in this Stack Overflow answer. First, transform every point in the mesh such that the line we're splitting on is the y-axis. This makes it easier to reason
about what we're splitting. We then create two empty meshes which will hold the two pieces resulting from the split.
For each triangle in the original mesh, we determine whether each vertex in the triangle is "left" (x < 0) or "right" (x >= 0). This gives us four cases: 1) every vertex is on the left, 2) every
vertex is on the right, 3) one vertex is on the left and two are on the right, or 4) one vertex is on the right and two are on the left. In the first two cases, we just add the triangle to the mesh
for the appropriate side. Otherwise, the triangle will be split by the line, so we need to do more geometry.
Assume for this example that there are two vertices on the left and one on the right. Let the vertices on the left be A and B, and let the vertex on the right side be C. Then let the point of
intersection between the vector AC and the y-axis be D, and the point of intersection between the vector BC and the y-axis be E:
This gives us three new triangles: ADB, BDE, and DCE. ADB and BDE will go in the left side's mesh, and DCE will go in the right side's mesh.
Once we've iterated through every triangle in the original mesh, we transform all the vertices back to their real local positions (i.e. the inverse of the transformation we applied at the beginning).
The mesh is now split!
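This case analysis can be sketched in Python for 2D vertices (a simplified illustration rather than the actual Unity C# code; winding order and per-vertex attributes such as UVs are ignored here):

```python
def intersect_y_axis(p, q):
    """Point where the segment p -> q crosses the y-axis (x = 0)."""
    t = p[0] / (p[0] - q[0])
    return (0.0, p[1] + t * (q[1] - p[1]))

def split_triangles(triangles):
    """Split triangles (three (x, y) tuples each) along the y-axis.

    Returns (left, right); a vertex with x >= 0 counts as "right", as in the text.
    """
    left, right = [], []
    for tri in triangles:
        on_right = [v[0] >= 0.0 for v in tri]
        if all(on_right):                  # case: whole triangle on the right
            right.append(tri)
            continue
        if not any(on_right):              # case: whole triangle on the left
            left.append(tri)
            continue
        # Mixed case: C is the lone vertex on its side, A and B are the pair.
        lone_on_right = on_right.count(True) == 1
        c = on_right.index(lone_on_right)
        A, B, C = tri[(c + 1) % 3], tri[(c + 2) % 3], tri[c]
        D = intersect_y_axis(A, C)         # AC meets the axis at D
        E = intersect_y_axis(B, C)         # BC meets the axis at E
        pair_side, lone_side = (left, right) if lone_on_right else (right, left)
        pair_side.extend([(A, D, B), (B, D, E)])   # ADB and BDE go with the pair
        lone_side.append((D, C, E))                # DCE goes with C's side
    return left, right
```

Splitting conserves area: the pieces of each cut triangle cover exactly the original. A production version would also interpolate normals and UVs at D and E and keep a consistent winding order.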
Splitting a collider on a line
We used Unity's PolygonCollider2D collider for all of our objects, which are defined by an arbitrary number of closed paths. As with the mesh, we first transform the points so that the splitting line
is the y-axis. We then apply the following splitting algorithm to each of the paths separately, so for each path in the original object, we have a left path and a right path.
To simplify the logic of this algorithm, I restricted our game to only use convex colliders. As a result, if we follow the path from point to point, it will either never cross the y-axis (meaning
it's entirely on one side of the y-axis, so we can directly copy the path to that side's collider), or it will cross over exactly twice: once from left to right, and once from right to left. For
instance, in the following diagram, we cross over on the edges BC and DA. Splitting along the y-axis, this would result in new points E and F, which form a new edge that is included in both paths.
With that in mind, our algorithm for collider splitting is straightforward:
1. Create two new empty paths (leftPath and rightPath).
2. Determine whether we've started on the left or the right of the y-axis. (Assume for this example that we start on the left, but note that if we start on the right, we can just replace "left" with
"right" in the following steps.)
3. Walk along the original path, adding each point to leftPath, until we reach a point on the right of the y-axis. Find the point of intersection between the y-axis and the edge between the previous
point and the current point; add this point to both leftPath and rightPath.
4. Continue walking along the original path, but now add each point we encounter to rightPath instead.
5. Again, when we find an edge that crosses the y-axis, find the point of intersection and add it to both paths.
6. Continue walking along the original path, adding each point we encounter to leftPath, until we reach the end of the original path.
As with the mesh, we then transform these points back into local space. The collider is now split!
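For plain coordinate tuples (standing in for a PolygonCollider2D path), the six steps above collapse into a single walk; this is a sketch under the same convexity assumption:

```python
def intersect_y_axis(p, q):
    """Point where the edge p -> q crosses the y-axis (x = 0)."""
    t = p[0] / (p[0] - q[0])
    return (0.0, p[1] + t * (q[1] - p[1]))

def split_convex_path(path):
    """Split a convex closed path (list of (x, y); last point connects to first).

    Each point is appended to its own side, and every edge that crosses the
    y-axis contributes the intersection point to both sides (steps 3 and 5).
    """
    left, right = [], []
    n = len(path)
    for i in range(n):
        p, q = path[i], path[(i + 1) % n]
        (left if p[0] < 0 else right).append(p)
        if (p[0] < 0) != (q[0] < 0):       # this edge crosses the y-axis
            cross = intersect_y_axis(p, q)
            left.append(cross)
            right.append(cross)
    return left, right
```

Because the path is convex, each resulting list is itself a valid closed path with the shared edge along the y-axis.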
Splitting a composite object
As mentioned earlier, we required that all collider paths be convex to simplify the splitting algorithm. However, if we cut a corner out of an object, this would result in a concave collider. To work
around the problem, we instead allowed splittable objects to made up of multiple colliders and meshes in child GameObjects, so what appears to be a concave shape can actually be made up of a group of
convex shapes. As a result, when a single splittable object is being split, we may have to split several child objects.
To accomplish this, we clone the original object. The original object will become the "left" side of the split, and the clone will become the "right" side. Then, for each child of the left object, we
attempt to split its mesh and its collider. If a split occurs, we update the corresponding child on both sides accordingly. If no split occurs, then the entire mesh/collider is on one side of the
splitting plane, so we leave the child on that side as it is and delete the corresponding child on the other side.
If, after splitting all of the children, one of the sides has no children, we delete it entirely.
Splitting on a rectangle's edge
Now that we've solved the problem of splitting on a line, we can tackle the problem of splitting on the edges of an axis-aligned rectangle, which is the actual shape of our portal. This rectangular
split can be decomposed into four line splits — one for each edge of the rectangle. Cutting along each of these lines results in 9 sections:
Section E will be transferred through the portal; all of the other sections form a rim around the rectangle and will need to be "glued" together into a single object (i.e. made children of the same
parent object), as we don't actually want it split into 8 parts.
Rather than wrestling with bounding boxes and imprecise floating point math to determine which piece is in section E, we can instead wind the direction of the line cuts counter-clockwise around the rectangle (as shown by the arrowheads in the diagram). This way, section E is the only section that is "left" of every line cut. Keep in mind that in our composite object splitting algorithm, the "left"
object is always the original object. This makes our algorithm very simple: cut the original object once for each of the lines, keeping a list of the newly-split "right" portions. If the original
object hasn't been deleted (i.e. it still has children), it was on the left of every line and thus in section E, so transfer it through the portal. Finally, glue all of the other objects together.
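The "left of every cut" trick comes down to a sign test on 2D cross products. A minimal sketch (the rectangle corners and names here are assumptions, not the game's code):

```python
def left_of(p, q, pt):
    """True if pt lies strictly left of the directed line p -> q (2D cross product)."""
    return (q[0] - p[0]) * (pt[1] - p[1]) - (q[1] - p[1]) * (pt[0] - p[0]) > 0

def in_section_e(lo, hi, pt):
    """pt is in the portal interior iff it is left of all four CCW-wound cut lines."""
    (x0, y0), (x1, y1) = lo, hi
    corners = [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]  # counter-clockwise order
    return all(left_of(corners[i], corners[(i + 1) % 4], pt) for i in range(4))
```

With the cuts wound counter-clockwise, a single "is it left of everything?" check replaces any bounding-box comparison.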
Reducing hitching
We noticed during playtesting that if the portal cut many objects at once, there was a noticeable hitch due to the processing time required for this algorithm. I optimized the algorithm as much as I
could using Unity's profiling tools, but despite making decent headway, I found that since the number of objects the player might be cutting at once is theoretically unbounded, no amount of
optimization would truly solve the problem.
Instead, I used a trick to hide the processing time. I re-fitted the splitting functions in PortalManager as coroutines, which allow a function's execution to be temporarily suspended. At key points
in the algorithm (primarily when an object is finished being split), we check to see if we've spent more than 10 ms of the current update step in the splitting algorithm. If we have, we suspend
execution and resume it on the next update step. This allows us to continue rendering at the full frame rate while the splitting algorithm works away in the background.
Normally this would wreak havoc if objects were moving and being split at the same time. However, one of our game mechanics is that time freezes while the player is creating a portal, so we just keep
time frozen until splitting is done. Even when cutting many objects at once, this is usually practically unnoticeable, taking only an extra frame or two to finish splitting. In extreme cases where
processing takes a long time, it's still better than having the game completely freeze for the entire duration, as some animations and particle effects can continue while time is frozen.
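Outside of Unity, the same time-slicing idea can be sketched with a Python generator standing in for the C# coroutine (the split_one callback and the 10 ms budget are illustrative):

```python
import time

def split_all(objects, split_one, budget_s=0.010):
    """Split objects cooperatively, yielding whenever the frame budget is spent.

    The caller advances the generator once per update step, so rendering can
    keep running while the splitting work is spread across frames.
    """
    start = time.perf_counter()
    for obj in objects:
        split_one(obj)                     # one object's worth of splitting work
        if time.perf_counter() - start > budget_s:
            yield                          # suspend until the next update step
            start = time.perf_counter()

# Driving it, one step per frame:
#   for _ in split_all(pending_objects, split_object):
#       render_frame()
```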
Potential improvements
Ultimately, we felt that this algorithm was good enough in its current state, as we had limited time to implement it and it worked well for our purposes. However, given more time, there are two main
things I would improve.
First of all, our "gluing" algorithm in the rectangle splitting step assumes that our object's overall shape is convex, which may not be true if the player cuts out part of the object (e.g. a "C"
shape). This can result in two disconnected objects being glued together (e.g. if the player cuts only the right half of a "C" shape off, the "arms" of the shape will be glued together). In our
experience, this case rarely comes up in gameplay unless the player intentionally tries to cause it, so we focused our limited development time on other problems. However, it could be resolved by
checking whether pieces have any vertex locations in common before merging them.
Secondly, if splitting happens exactly at the edge of an object, we may produce empty objects or tiny "slivers" of the original object. This can happen when the player uses the undo button, as it
redraws a portal at exactly the same location. Our workaround to the problem is just to check if each newly split object is below a certain size threshold and if it is, delete it. A better solution
would handle this at the mesh/collider splitting stage, as we currently treat a point exactly on the y-axis as being right of the y-axis, though debugging this would unfortunately have taken more
time than we had.
Introduction to ehymet
ehymet package goal
The ehymet package (Epigraph-Hypograph based methodologies for functional data) is an R package designed to extend popular multivariate data analysis methodologies to functional data using various
indices. The package introduces the epigraph and hypograph indices, along with their modified versions for functional datasets in one or multiple dimensions. These indices transform a functional
dataset into a multivariate one, enabling the application of existing multivariate data analysis methods.
More information about the theory behind these methodologies can be found in the following papers:
• Belén Pulido, Alba M. Franco-Pereira, Rosa E. Lillo (2023). “A fast epigraph and hypograph-based approach for clustering functional data.” Statistics and Computing, 33, 36. doi: 10.1007/
• Belén Pulido, Alba M. Franco-Pereira, Rosa E. Lillo (2023). “The epigraph and the hypograph indices as useful tools for clustering multivariate functional data.” doi: 10.48550/arXiv.2307.16720
DGPs introduced in the package
ehymet introduces two functions which generate different Data Generation Processes (DGPs) for functional data in one or multiple dimension. The first function, sim_model_ex1, generate DGPs in one
dimension as originally described by (Flores, Lillo, and Romo 2018). The second function, sim_model_ex2, produces DGPs in both one and two dimensions first introduced by (Martino et al. 2019).
These functions are useful for simulating functional data to test statistical methods or to understand the behavior of different models under controlled conditions. Each of the two functions generates different DGPs depending on the given parameters, each with different characteristics.
Type 1 DGPs
The first set of DGPs is generated by function sim_model_ex1, which produces eight different DGPs, each corresponding to a different value of i_sim. Here’s a breakdown of how each DGP is generated
and how the function works:
• n: The number of curves to generate for each group. By default, this is set to 50 per group, resulting in a total of 100 curves.
• p: The number of grid points at which the curves are evaluated over the interval \([0, 1]\). The default is 30 grid points.
• i_sim: An integer between 1 and 8 that specifies which model to use for generating the curves in the second group. The first group of curves is generated in the same way for all values of i_sim.
Function breakdown
The first group of functions is generated by a Gaussian process \[X_1(t)=E_1(t)+e(t),\] where \(E_1(t)=30t^{ \frac{3}{2}}(1-t)\) is the mean function and \(e(t)\) is a centered Gaussian process with
covariance matrix \[ Cov(e(t_i),e(t_j))=0.3 \exp\left(-\frac{\lvert t_i-t_j \rvert}{0.3}\right).\]
The second group of functions \(X_i\) are obtained from the first one by perturbing the generation process. Different values of i_sim generate different curves.
For i_sim values of 1, 2, and 3, the curves in the second group exhibit changes in the mean while the covariance matrix remains unchanged. The changes in the mean increase in magnitude as i_sim increases:
• i_sim = 2: \(X_i(t)=X_1(t)+0.75.\)
• i_sim = 3: \(X_i(t)=X_1(t)+1.\)
For i_sim 4 and 5, the curves in the second group are obtained by multiplying the covariance matrix by a constant.
• i_sim = 4: \(X_i(t)=E_1(t)+2 \ e(t).\)
• i_sim = 5: \(X_i(t)=E_1(t)+0.25 \ e(t).\)
For i_sim = 6, the curves in the second group are obtained by adding to \(E_1(t)\) a centered Gaussian process \(h(t)\) whose covariance matrix is given by \[ Cov(h(t_i),h(t_j))=0.5 \exp (-\frac{\
lvert t_i-t_j\rvert}{0.2})\].
• i_sim = 6: \(X_i(t)=E_1(t)+ \ h(t).\)
For i_sim 7 and 8, the curves in the second group are obtained by a different mean function \(E_2(t)=30t{(1-t)}^2\).
• i_sim = 7: \(X_i(t)=E_2(t)+ h(t).\)
• i_sim = 8: \(X_i(t)=E_2(t)+ e(t).\)
The function returns a data matrix of size \(2n \times p\), where the first \(n\) rows contain the curves from the first group, and the next \(n\) rows contain the curves from the second group,
generated according to the selected value of i_sim.
To simulate the curves, we can follow these steps:

curves <- sim_model_ex1(n = 5, p = 30, i_sim = 1)

This code generates 10 curves (5 per group) over the interval [0,1] with 30 grid points, using i_sim = 1.
Also, we can plot them. To do so, we are going to load external packages. This is just an example of how to plot the curves, but the end user can do it the way they want.
library(dplyr)
library(tidyr)
library(ggplot2)
library(ggsci)     # scale_color_d3()
library(cowplot)   # theme_half_open()

plot_curves <- function(curves, title = "", subtitle = "") {
  data <- as.data.frame(curves) |>
    mutate(curve_id = row_number())
  n <- dim(curves)[1] / 2
  data_long <- data |>
    pivot_longer(
      cols = -curve_id,
      names_to = "point",
      values_to = "value"
    ) |>
    mutate(
      point = as.numeric(sub("V", "", point)),
      group = if_else(curve_id %in% 1:n, 1, 2)
    )
  ggplot(data_long, aes(x = point, y = value, group = curve_id, color = as.factor(group))) +
    geom_line() +
    labs(
      title = title,
      subtitle = subtitle,
      color = "Group",
      x = "",
      y = ""
    ) +
    scale_color_d3() +
    theme_half_open() +
    theme(legend.position = "none")
}

plot_curves(curves, title = "Type 1 DGPs", subtitle = "i_sim = 1")
The case for i_sim = 7 has the following graph representation:
Clear differences can be seen between both plots. We leave it up to the end user to experiment as they see fit with the simulations.
Type 2 DGPs
The second set of DGPs is generated by function sim_model_ex2, which produces four different DGPs, each corresponding to a different value of i_sim. The first two DGPs are in one dimension, while the
remaining two DGPs are two-dimensions functional datasets. Here’s a breakdown of how each DGP is generated and how the function works:
• n: The number of curves to generate for each group. By default, this is set to 50 per group, resulting in a total of 100 curves.
• p: The number of grid points at which the curves are evaluated over the interval \([0, 1]\). The default is 150 grid points.
• i_sim: An integer between 1 and 4. 1 and 2 for one-dimensional functional data and 3 and 4 for the multidimensional case.
Function breakdown
For the one-dimensional case the first group of functions is generated by \[X_{1}(t)=E_3(t)+ A(t),\] where \(E_3(t)=t(1-t)\) is the mean function, and \[A(t) = \sum_{k=1}^{100} Z_k\sqrt{\rho_k}\
theta_k(t),\] with \(\{Z_k, k=1,...,100\}\) being independent standard normal variables, and \(\{ \rho_k,k\geq 1 \}\) a positive real numbers sequence defined as \[\rho_k = \left\{ \begin{array}{lll}
\frac{1}{k+1} & if & k \in \{1,2,3\}, \\ \frac{1}{{(k+1)}^2} & if & k \geq 4, \end{array} \right. \] in such a way that the values of \(\rho_k\) are chosen to decrease faster when \(k\geq 4\) in
order to have most of the variance explained by the first three principal components. The sequence \(\{\theta_k, k\geq 1\}\) is an orthonormal basis of \(L^2(I)\) defined as \[\theta_k(t) = \left\{ \
begin{array}{lllll} I_{[0,1]}(t) & if & k=1, & \\ \sqrt{2}\sin{(k\pi t)}I_{[0,1]}(t) & if & k \geq 2,\\ & & k \ even,\\ \sqrt{2}\cos{((k-1)\pi t)}I_{[0,1]}(t) & if & k \geq 3,\\ & & k \ odd, \end
{array} \right. \] where \(I_A(t)\) stands for the indicator function of set \(A\).
The second group of data is generated as follows, depending on the value of i_sim:
• i_sim = 1: \(X_{i}(t)=E_4(t)+ A(t),\) where \(E_4(t)=E_2(t)+\displaystyle \sum_{k=1}^3\sqrt{\rho_k}\theta_k(t).\)
• i_sim = 2: \(X_{i}(t)=E_5(t)+ A(t),\) where \(E_5(t)=E_2(t)+\displaystyle \sum_{k=4}^{100}\sqrt{\rho_k}\theta_k(t).\)
For the two-dimensional case the first group of functions is generated by \[\mathbf{X}_{1}(t)=\mathbf{E}_6(t)+ \mathbf{B}(t),\] where \(\mathbf{E}_6(t)= \begin{pmatrix} t(1-t)\\ 4t^2(1-t) \end{pmatrix}\) is the mean function of this process, and \[\mathbf{B}(t) = \sum_{k=1}^{100} \mathbf{Z}_k\sqrt{\rho_k}\theta_k(t),\] where \(\{\mathbf{Z}_k, k=1,...,100\}\) are independent bivariate normal random variables, with mean \(\mathbf{\mu = 0}\) and covariance matrix \[\Sigma= \begin{pmatrix} 1 & 0.5\\ 0.5 & 1 \end{pmatrix}.\] The second group of data is generated as follows, depending on the value of i_sim:
• i_sim = 3: \(X_{i}(t)=\mathbf{E}_7(t)+ B(t),\) where \(\mathbf{E}_7(t)=\mathbf{E}_6(t)+\mathbf{1}\displaystyle \sum_{k=1}^3\sqrt{\rho_k}\theta_k(t),\) is the mean function of this process, where
\(\mathbf{1}\) represents a vector of 1s.
• i_sim = 4: \(X_{i}(t)=\mathbf{E}_8(t)+ B(t),\) where \(\mathbf{E}_8(t)=\mathbf{E}_6(t)+\mathbf{1}\displaystyle \sum_{k=4}^{100}\sqrt{\rho_k}\theta_k(t).\)
The function returns for the one-dimensional case (i_sim=1,2) a data matrix of size \(2n \times p\). The output for the two-dimensional case (i_sim=3,4) is an array of dimensions \(2n \times p \times 2\).
This function generates functional curves in one dimension for values of i_sim equal to 1 and 2, and functional datasets in two dimensions for values of i_sim equal to 3 and 4. Here, we would like to
point out how to simulate a multivariate functional dataset with i_sim = 3 or i_sim = 4.
As can be seen, now we don’t have a matrix but a 3-dimensional array. We are going to plot the first dimension:
And now the second dimension:
For the sake of visualization, we can also obtain the derivatives of the curves using the funspline function, which is an internal function of the ehymet package. It can be accessed with ehymet:::funspline.
funspline_result <- ehymet:::funspline(curves)
deriv <- funspline_result$deriv
deriv2 <- funspline_result$deriv2
The plot for first dimension of the derivative is the following one:
And for the second dimensions:
We can observe a clear overlapping between first derivatives curves on both dimensions.
Indices computation
ehymet provides the implementation of the epigraph and the hypograph index, both in one and multiple dimensions. The function to generate the indices is generate_indices and it can compute:
• Epigraph Index (EI)
• Hypograph Index (HI)
• Modified Epigraph Index (MEI)
• Modified Hypograph Index (MHI)
The indices parameter can be specified to calculate a subset of the total indices, but by default it calculates all of them. Indices for one-dimensional functional data or for data in more dimensions are calculated depending on the size of the input array.
Let’s try this function to generate indices with multidimensional data generated with sim_model_ex2:
And now compute the indices for the multidimensional data:
We can check that all the indices are computed for each curve and for its first and second derivative:
#> [1] "dtaEI" "ddtaEI" "d2dtaEI" "dtaHI" "ddtaHI" "d2dtaHI" "dtaMEI" "ddtaMEI" "d2dtaMEI" "dtaMHI" "ddtaMHI"
#> [12] "d2dtaMHI"
We can also take a quick look at the generated indices:
head(indices_mult, 3)
#> dtaEI ddtaEI d2dtaEI dtaHI ddtaHI d2dtaHI dtaMEI ddtaMEI d2dtaMEI dtaMHI ddtaMHI d2dtaMHI
#> 1 0.99 0.99 0.99 0.01 0.01 0.01 0.8068 0.7372667 0.6912667 0.2582667 0.3046000 0.2504667
#> 2 0.99 0.99 0.99 0.01 0.01 0.01 0.7458 0.7348667 0.6760667 0.2315333 0.2957333 0.2698667
#> 3 0.98 0.99 0.99 0.01 0.01 0.01 0.5360 0.7300000 0.6956000 0.1463333 0.2895333 0.2426000
Now, for the sake of comparison, let’s see what happens if we calculate the indices separately for each dimension and check the first few rows:
head(indices_dim1, 3)
#> dtaEI ddtaEI d2dtaEI dtaHI ddtaHI d2dtaHI dtaMEI ddtaMEI d2dtaMEI dtaMHI ddtaMHI d2dtaMHI
#> 1 0.94 0.99 0.99 0.03 0.01 0.01 0.4260000 0.5051333 0.4685333 0.4360000 0.5151333 0.4785333
#> 2 0.91 0.99 0.99 0.01 0.01 0.01 0.3576667 0.5132667 0.4755333 0.3676667 0.5232667 0.4855333
#> 3 0.95 0.99 0.99 0.01 0.01 0.01 0.3458000 0.5030667 0.4674667 0.3558000 0.5130667 0.4774667
head(indices_dim2, 3)
#> dtaEI ddtaEI d2dtaEI dtaHI ddtaHI d2dtaHI dtaMEI ddtaMEI d2dtaMEI dtaMHI ddtaMHI d2dtaMHI
#> 1 0.99 0.99 0.99 0.04 0.01 0.01 0.6290667 0.5267333 0.4632000 0.6390667 0.5367333 0.4732000
#> 2 0.99 0.99 0.99 0.01 0.01 0.01 0.6096667 0.5073333 0.4604000 0.6196667 0.5173333 0.4704000
#> 3 0.95 0.99 0.99 0.01 0.01 0.01 0.3265333 0.5064667 0.4607333 0.3365333 0.5164667 0.4707333
One difference that is readily apparent is, for example, for “dtaMHI”. Let’s take a look at it:
head(data.frame(
  dim1 = unname(indices_dim1["dtaMHI"]),
  dim2 = unname(indices_dim2["dtaMHI"]),
  mult = unname(indices_mult["dtaMHI"])
), 5)
#> dim1 dim2 mult
#> 1 0.4360000 0.6390667 0.2582667
#> 2 0.3676667 0.6196667 0.2315333
#> 3 0.3558000 0.3365333 0.1463333
#> 4 0.3744000 0.7898000 0.3180000
#> 5 0.3574000 0.5160000 0.1821333
Let’s plot MHI vs MEI for both the original curves and both first and second derivatives:
For the original curves, no clear separation is evidenced. However, both in the first and second derivatives we see that the groups are clearly differentiated. This may indicate that using the MEI
and MHI indices computed for the first or second derivatives could yield great results in clustering problems.
Thursday, 17 February 2022
Factorising Algebraic Expressions
Video Lesson (14 Min)
Video Lesson Notes
Exercise and Solutions
Thursday, 20 January 2022
Intro to Indices
Lesson Notes
Indices Homework complete by Friday 28 January
Wednesday, 1 December 2021
This short video lesson is designed to help you with Q1 - Q3 of Assessment 1.
Video Lesson (8 min)
Lesson Notes
Assessment 1 Q1 - Q3
Watch the video and/or read the lesson notes before completing Assessment 1 Q1 - Q3.
Please hand in to Mr Herring next lesson on Wednesday 1 December.
Saturday, 6 November 2021
Mixed Number Addition and Subtraction
Lesson Notes
Saturday, 16 October 2021
Fractions - Finding, Multiplying and Dividing
Lesson Notes
Friday, 17 September 2021
Prime Factors, HCFs, LCMs
Lesson Notes
Saturday, 11 September 2021
An unfortunate start with half the class missing part of the lesson to take their lateral flow tests.
Here is the complete set of lesson notes which you can print off and stick in your exercise books.
Factors, Multiples, Primes
Lesson Notes
Combined Gas Law vs. Ideal Gas Law - What's the Difference? | This vs. That
Combined Gas Law vs. Ideal Gas Law
What's the Difference?
The Combined Gas Law and Ideal Gas Law are both equations used to describe the behavior of gases. However, they differ in terms of the variables they consider. The Combined Gas Law combines Boyle's
Law, Charles's Law, and Gay-Lussac's Law into a single equation, allowing for changes in pressure, volume, and temperature. It states that the product of pressure and volume is directly proportional
to the product of temperature and the number of moles of gas. On the other hand, the Ideal Gas Law incorporates all the variables mentioned in the Combined Gas Law, but also includes the ideal gas
constant. This law states that the product of pressure, volume, and temperature is directly proportional to the number of moles of gas and the ideal gas constant. In summary, while the Combined Gas
Law focuses on the relationship between pressure, volume, and temperature, the Ideal Gas Law encompasses all these variables along with the ideal gas constant.
| Attribute | Combined Gas Law | Ideal Gas Law |
|---|---|---|
| Formula | P1V1/T1 = P2V2/T2 | PV = nRT |
| Variables | P1, V1, T1, P2, V2, T2 | P, V, n, R, T |
| Pressure | Measured in Pascals (Pa) | Measured in Pascals (Pa) |
| Volume | Measured in cubic meters (m³) | Measured in cubic meters (m³) |
| Temperature | Measured in Kelvin (K) | Measured in Kelvin (K) |
| Number of moles | Not explicitly included | Measured in moles (mol) |
| Ideal gas constant | Not used | 8.314 J/(mol·K) |
| Applicability | Applicable to any gas | Applicable to ideal gases |
| Assumptions | Assumes constant amount of gas | Assumes ideal behavior of gas particles |
Further Detail
Gas laws are fundamental principles in the field of thermodynamics that describe the behavior of gases under different conditions. Two important gas laws are the Combined Gas Law and the Ideal Gas
Law. While both laws are used to analyze the properties of gases, they have distinct attributes that make them applicable in different scenarios. In this article, we will explore and compare the
attributes of the Combined Gas Law and the Ideal Gas Law.
Combined Gas Law
The Combined Gas Law is a mathematical relationship that combines Boyle's Law, Charles's Law, and Gay-Lussac's Law. It allows us to analyze the changes in pressure, volume, and temperature of a gas
sample when all three variables are altered simultaneously. The formula for the Combined Gas Law is:
P1V1/T1 = P2V2/T2
Here, P1 and P2 represent the initial and final pressures, V1 and V2 represent the initial and final volumes, and T1 and T2 represent the initial and final temperatures.
The Combined Gas Law is particularly useful when studying situations where all three variables change simultaneously. For example, if we have a gas sample in a closed container and we increase the
pressure while simultaneously decreasing the volume and increasing the temperature, we can use the Combined Gas Law to determine the final state of the gas.
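For instance, solving the Combined Gas Law for the final pressure (the numbers below are illustrative; temperatures must be in kelvin):

```python
def final_pressure(p1, v1, t1, v2, t2):
    """Solve P1*V1/T1 = P2*V2/T2 for P2 (temperatures in kelvin)."""
    return p1 * v1 * t2 / (t1 * v2)

# Halve the volume while heating the gas from 300 K to 360 K:
p2 = final_pressure(101_325, 2.0, 300.0, 1.0, 360.0)
print(p2)  # 243180.0 Pa: compression and heating both raise the pressure
```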
Ideal Gas Law
The Ideal Gas Law is a fundamental equation in thermodynamics that relates the pressure, volume, and temperature of an ideal gas. Unlike the Combined Gas Law, the Ideal Gas Law assumes that the gas
behaves ideally, meaning it follows certain assumptions such as negligible molecular size and no intermolecular forces. The formula for the Ideal Gas Law is:
PV = nRT
Here, P represents the pressure, V represents the volume, n represents the number of moles of gas, R is the ideal gas constant, and T represents the temperature in Kelvin.
The Ideal Gas Law is widely applicable in various scenarios, especially when dealing with ideal gases. It allows us to calculate the unknown variables of pressure, volume, temperature, or number of
moles of gas when the other variables are known. This law is particularly useful in the study of gases in open systems, where the number of moles of gas remains constant.
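A quick numeric check of the law (the molar volume at 0 °C and 1 atm is the familiar approximation of about 22.4 L):

```python
R = 8.314  # ideal gas constant, J/(mol*K)

def moles(p, v, t):
    """n = PV / (RT), with pressure in Pa, volume in m^3, temperature in K."""
    return p * v / (R * t)

# One mole of an ideal gas at 101325 Pa and 273.15 K occupies about 0.0224 m^3:
print(round(moles(101_325, 0.0224, 273.15), 3))  # 0.999, i.e. roughly one mole
```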
Comparison of Attributes
While both the Combined Gas Law and the Ideal Gas Law are used to analyze the behavior of gases, they have distinct attributes that make them suitable for different situations. Let's compare their
The Combined Gas Law considers the simultaneous changes in pressure, volume, and temperature. It allows us to determine the final state of a gas sample when all three variables change. On the other
hand, the Ideal Gas Law relates pressure, volume, and temperature assuming a constant number of moles of gas. It allows us to calculate the unknown variables when the others are known.
The Combined Gas Law does not make any specific assumptions about the behavior of gases. It combines the individual gas laws to provide a general relationship between the variables. In contrast, the
Ideal Gas Law assumes that the gas behaves ideally, meaning it follows certain assumptions such as negligible molecular size and no intermolecular forces. These assumptions make the Ideal Gas Law
applicable to ideal gases under specific conditions.
The Combined Gas Law is particularly useful when studying situations where all three variables change simultaneously. It allows us to analyze the behavior of gases in closed systems where pressure,
volume, and temperature are interrelated. On the other hand, the Ideal Gas Law is widely applicable in various scenarios, especially when dealing with ideal gases. It is commonly used in open systems
where the number of moles of gas remains constant.
The Combined Gas Law involves algebraic manipulations of the equation to solve for the unknown variables. It requires the initial and final values of pressure, volume, and temperature to determine
the relationship between them. On the other hand, the Ideal Gas Law requires the known values of three variables to calculate the unknown variable. It involves simple rearrangement of the equation to
solve for the desired variable.
The Combined Gas Law is limited to situations where all three variables change simultaneously. It cannot be used to analyze the behavior of gases when only one or two variables change while the
others remain constant. The Ideal Gas Law, although widely applicable, is limited to ideal gases that follow the assumptions of the law. Real gases may deviate from ideal behavior at high pressures
or low temperatures, requiring the use of more complex equations.
In conclusion, the Combined Gas Law and the Ideal Gas Law are both important tools in the study of gases. While the Combined Gas Law allows us to analyze the behavior of gases when pressure, volume,
and temperature change simultaneously, the Ideal Gas Law relates these variables assuming ideal gas behavior. The Combined Gas Law is useful in closed systems, while the Ideal Gas Law is applicable
in open systems. Understanding the attributes and limitations of these gas laws enables scientists and engineers to accurately analyze and predict the behavior of gases in various scenarios.
Todor Eliseev Milanov
Professor (from 2021/12/01)
Research Field
□ Mathematics
□ String Theory
<todor.milanov _at_ ipmu.jp>
Last Update 2023/04/26
The Korteweg-de Vries (KdV) equation is a mathematical model of the motion of a wave in shallow waters. It has been studied extensively from a number of different perspectives. In particular, it was
discovered that KdV is a reduction of a more universal equation, known as the Kadomtsev-Petviashvili (KP) equation. It turns out that the solutions of KP can be parameterized by the points of an infinite Grassmannian. The latter is a central object in both geometry and representation theory. I am deeply impressed by the unity of seemingly different areas of mathematics on one side and nature on the other.
on the other.
At the end of the 20th century it was discovered that the KdV equation governs the amplitudes of string motions in a vacuum. I have been interested in finding other equations, similar to KdV, which
characterize the string amplitudes in more interesting spaces that have non-trivial topologies. More precisely, I am using complex geometry and representation theory to obtain a characterization of
the string amplitudes. It seems that there are some new geometrical objects, as well as some new representation theories, that are still awaiting discovery.
Mathematics of a roller chain animation
Given two circular sprockets \(C_1(M_1, r_1)\) and \(C_2(M_2, r_2)\) with their midpoint and radius, we want to find a way to put a chain around them and animate the links like this:
Let's start with some trivial observations. The distance \(d\) between the two midpoints is \(d=|M_2-M_1|\). \(s\) is the difference between the two radii \(r_1\) and \(r_2\), which will be zero if
the circles are of the same size and negative if circle \(C_1\) has a greater radius than \(C_2\). \(\theta\) is the rotation angle between the two circles and can be calculated with \(\theta=\tan^
{-1}\left(\frac{M_2^y - M_1^y}{M_2^x - M_1^x}\right)\). The points \(T_1, T_2\) and the opposite points \(T_1', T_2'\) can be calculated as follows:
$$T_1:= M_1 + Rot(\frac{1}{2}\pi + \phi + \theta) r_1$$
$$T_2:= M_2 + Rot(\frac{1}{2}\pi + \phi + \theta) r_2$$
$$T_1':= M_1 + Rot(\frac{3}{2}\pi - \phi + \theta) r_1$$
$$T_2':= M_2 + Rot(\frac{3}{2}\pi - \phi + \theta) r_2$$
Calculating the chain length
Case 1) \(r_2 > r_1\): The triangle \((M_1, M_2, S)\) has a right angle at point \(S\) and the line \((M_1, S)\) is parallel to line \((T_1, T_2)\) and also has the same length \(t\), which is \(t=|
T_2 - T_1|\) or easier \(t=\sqrt{d^2 - s^2}\) using Pythagorean theorem. \(\phi\) can be calculated using the law of sines as \(\phi=\sin^{-1}\left(\frac{s}{d}\right)\) or with the inverse tangent
again, \(\phi=\tan^{-1}\left(\frac{s}{t}\right)\).
Having these parameters, the length of the chain can be calculated with the two straight lines plus the two arcs around the circles. The length of the arcs is
\[l_1 = 2\pi\cdot r_1 \cdot \frac{\pi - 2\phi}{2\pi} = r_1(\pi - 2\phi)\]
\[l_2 = 2\pi\cdot r_2 \cdot \frac{\pi + 2\phi}{2\pi} = r_2(\pi + 2\phi)\]
And the total length of the chain is given by
\[l = l_1 + l_2 + 2t\]
Case 2) \(r_1 > r_2\): This case is analogous to case 1 with the indices swapped.
Case 3) \(r_1 = r_2\): As stated before, if the radii have the same length, \(s\) will become zero. \(t\) equals \(d\), which can be seen with \(t=\sqrt{d^2 - s^2}\) from case 1. \(\phi\) is zero,
again analogous to case 1. Thus the length for case 3 is already handled by case 1.
Conclusion: Since \(s\) loses the negative sign when being squared, we only need case 1 to calculate the length.
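The length calculation above translates directly into code. A minimal Python sketch (function and variable names are my own, not from the post):

```python
import math

def chain_length(M1, r1, M2, r2):
    """Total chain length around two sprockets, following the derivation above."""
    d = math.hypot(M2[0] - M1[0], M2[1] - M1[1])  # distance between midpoints
    s = r2 - r1                                   # difference of the radii
    t = math.sqrt(d * d - s * s)                  # straight tangent segment
    phi = math.asin(s / d)                        # tilt angle of the tangents
    l1 = r1 * (math.pi - 2 * phi)                 # arc wrapped on circle 1
    l2 = r2 * (math.pi + 2 * phi)                 # arc wrapped on circle 2
    return l1 + l2 + 2 * t
```

For two equal sprockets of radius 1 with midpoints 10 apart, this gives \(2\pi + 20\): two half-circle arcs plus two straight runs of length 10. Note that \(s\) may be negative; since it is only squared and fed to \(\sin^{-1}\), the same code covers all three cases.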
Calculate the position of the links
Let's say we have \(n\) links. It follows that the length of every link is \(\overline{l} = \frac{l}{n}\). If we open the chain and put it straight on the table, we can segment the chain as
illustrated in this sketch:
That means we start at point \(T_2\) go down on the arc of length \(l_2\), continue for length \(t\), go up the arc on the left with length \(l_1\) and back to point \(T_2\). If we now get an
arbitrary point \(c'\), we can calculate \(c' \mod l\) to keep it circular on the chain. The initial distribution of all links on the line is \(k\cdot \overline{l}\, \forall k<n\).
We have four segments, which will be handled separately now. For each case, we calculate a parameter \(p\in[0,1]\), which represents the position in percent on the given segment. Additionally, we
have the current position \(c:= k\cdot\overline{l} + z\, \forall k<n, z\in\mathbb{R}\) to calculate the point \(N_k\), the position of the kth link. \(z\) is a free parameter to scroll.
Case 1) \(c < l_2\):
\(p:= \frac{c}{l_2}\)
\(N_k:= M_2 + Rot(\frac{1}{2}\pi + \phi + \theta - p (\pi + 2 \phi)) r_2\)
Case 2) \(l_2\leq c < l_2 + t\):
\(p:= \frac{c - l_2}{t}\)
\(N_k:= T_2' + Rot(\pi - \phi + \theta) \cdot t \cdot p\)
Case 3) \(l_2 + t\leq c < l_2 + t + l_1\):
\(p:= \frac{c - l_2 - t}{l_1}\)
\(N_k:= M_1 + Rot(\frac{3}{2}\pi - \phi + \theta - p (\pi - 2 \phi)) r_1\)
Case 4) \(l_2 + t + l_1\leq c < l\):
\(p:= \frac{c - l_2 - t - l_1}{t}\)
\(N_k:= T_1 + Rot(\phi + \theta) \cdot t \cdot p\)
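Putting the four cases together, here is a sketch of the link-placement routine (again, names are mine; \(Rot(a)\) applied to a radius \(r\) is realized as \((r\cos a, r\sin a)\)):

```python
import math

def link_positions(n, z, M1, r1, M2, r2):
    """Return the positions of n chain links, scrolled by the free parameter z."""
    dx, dy = M2[0] - M1[0], M2[1] - M1[1]
    d = math.hypot(dx, dy)
    theta = math.atan2(dy, dx)
    s = r2 - r1
    t = math.sqrt(d * d - s * s)
    phi = math.asin(s / d)
    l1 = r1 * (math.pi - 2 * phi)       # arc on circle 1
    l2 = r2 * (math.pi + 2 * phi)       # arc on circle 2
    l = l1 + l2 + 2 * t                 # total chain length

    def on_circle(M, r, a):
        return (M[0] + r * math.cos(a), M[1] + r * math.sin(a))

    T1 = on_circle(M1, r1, 0.5 * math.pi + phi + theta)
    T2p = on_circle(M2, r2, 1.5 * math.pi - phi + theta)

    points = []
    for k in range(n):
        c = (k * l / n + z) % l
        if c < l2:                                   # case 1: arc around circle 2
            p = c / l2
            a = 0.5 * math.pi + phi + theta - p * (math.pi + 2 * phi)
            points.append(on_circle(M2, r2, a))
        elif c < l2 + t:                             # case 2: lower tangent line
            p = (c - l2) / t
            a = math.pi - phi + theta
            points.append((T2p[0] + math.cos(a) * t * p,
                           T2p[1] + math.sin(a) * t * p))
        elif c < l2 + t + l1:                        # case 3: arc around circle 1
            p = (c - l2 - t) / l1
            a = 1.5 * math.pi - phi + theta - p * (math.pi - 2 * phi)
            points.append(on_circle(M1, r1, a))
        else:                                        # case 4: upper tangent line
            p = (c - l2 - t - l1) / t
            a = phi + theta
            points.append((T1[0] + math.cos(a) * t * p,
                           T1[1] + math.sin(a) * t * p))
    return points
```

A quick sanity check with two unit circles at \((0,0)\) and \((10,0)\): at \(c=0\) the link sits at \(T_2=(10,1)\), and halfway along the lower tangent it sits at \((5,-1)\), exactly as the segment sketch predicts. Animating is then just a matter of increasing \(z\) each frame.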
|
{"url":"https://raw.org/math/mathematics-of-a-roller-chain-animation/","timestamp":"2024-11-02T12:18:27Z","content_type":"text/html","content_length":"35478","record_id":"<urn:uuid:d388ebde-e3e4-4dd6-9e04-8a2ae4635b68>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00885.warc.gz"}
|
Axial Loading - S.B.A. Invent
Axial Loading
Axial loading occurs when a force or pressure is applied parallel and on the centroid axis of an object. Refer to the image below to see what axial loading is. With any strength of materials
problem, statics should be used to create a free body diagram, regardless of how simple the problem looks. This is done to make sure all forces sum to zero so that nothing is missed. A free body
diagram for an object under axial load is seen below. Notice that on the free body diagram the force P cancels itself out causing the forces to sum to zero.
Now that the free body diagram shows that the forces sum to zero, the stress for the axial loading can now be calculated. Recall from the stress section that a stress is calculated by dividing a
force by an area. For the above image the cross-sectional area that would be used is the cross-sectional area that is perpendicular to the load P. Refer to the equation below. The resulting stress
will be considered a normal stress not a shear stress.
(Eq 1) $σ=\frac{F}{A}$
σ = normal stress
F = force
A = area
When calculating the stress due to an axial load, the stress will be constant as long as the cross-sectional area remains the same. However, if the cross-sectional area changes, then the stress will change accordingly: the stress will be lower for a larger cross-sectional area and, conversely, higher for a smaller cross-sectional area.
Now there is a certain case where the stress will vary even though the above equation states that it should be uniform through the part for a constant cross-sectional area. This happens when the end of the object is fixed. This is known as Saint-Venant's principle and can only be seen when an FEA is run, as in the image below.
Now remember from the Hooke's Law section that the deflection can be calculated if the Young's Modulus of the material is known. To calculate the deflection of a part caused by an axial load, the equation below would be used.
(Eq 2) $δ=\frac{FL}{AE}$
δ = deflection
L = length
E = Young’s Modulus
In addition to a calculating deflection and stress, the stiffness of the part can also be calculated by using the equation below. Stiffness is used to relate the deflection to the force required to
deflect the object. Determining the stiffness will allow you to determine the deflection of a part that is made out of multiple materials. Also, it is used to calculate the natural frequencies of the part.
(Eq 3) $k=\frac{P}{δ}=\frac{AE}{L}$
k = stiffness
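As a quick illustration of Eq 1 through Eq 3, here is a short Python calculation for a hypothetical steel rod (the numbers are made up for the example, not taken from the article):

```python
def axial(F, A, L, E):
    """Normal stress, deflection, and stiffness for an axially loaded member."""
    sigma = F / A            # Eq 1: normal stress
    delta = F * L / (A * E)  # Eq 2: deflection
    k = A * E / L            # Eq 3: stiffness
    return sigma, delta, k

# Hypothetical case: 1 kN load on a 5 cm^2 rod, 2 m long, E = 200 GPa (steel)
sigma, delta, k = axial(F=1_000.0, A=5e-4, L=2.0, E=200e9)
```

This gives a stress of 2 MPa, a deflection of 0.02 mm, and a stiffness of 5e7 N/m; note that k times delta recovers the applied force, as Eq 3 requires.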
Multiple Loading
Now there are also cases where there could be multiple loads on a part instead of a single load. To solve this type of problem you will need to create a free body diagram; in this case, multiple free body diagrams that relate to each other would be used. Remember, the forces must sum to zero. To see how to do this, refer to the image below.
|
{"url":"https://sbainvent.com/strength-of-materials/axial-loading/","timestamp":"2024-11-11T14:26:04Z","content_type":"text/html","content_length":"71617","record_id":"<urn:uuid:28820697-4a17-4515-b12b-9feb56e740ee>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00140.warc.gz"}
|
The proof that A5 (and all An for n>5) is a non-abelian simple group is not given in the book. The book simply asks the reader to tacitly accept this fact. The proof, given in stages over
several pages and drawn together at the end of the section was supplied to me as an undergraduate by the lecturer for my 'Group Theory' course. Whilst not the only proof that A5 is
simple, it is an easy enough proof for me to understand, and so I present the same argument here.
Simple Groups
This page states the basic properties of Abelian groups (those groups with a commutative operation), which always factor into simple groups: a series of prime order cyclic groups. (I
state the fundamental theorem of finitely generated abelian groups.) Also stated is the relationship between the order of Sn and An, the symmetric group of degree n and its largest
non-trivial normal subgroup, the alternating group of degree n.
An Is Generated By 3-Cycles
A simple argument to show that every alternating group of degree n>3 is generated by products of 3-cycles. (3-cycles are themselves a product of an even number of transpositions.) Every
element of An (n>3) is a product of 3-cycles in its 'supergroup' Sn.
Every Normal Subgroup of An contains Every 3-Cycle in An
Another simple argument; this time to show that if H is a normal subgroup of An, (which includes An itself) then it is true that every 3-cycle in An is also in H.
Every Normal Subgroup Of An Contains A 3-Cycle
A somewhat lengthier argument to show that every normal subgroup H of An contains at least one 3-cycle from Sn is presented here. I state that this proof is the hardest of the proofs in
this section to remember, but is reasonably simple once you understand how the products of permutations operate in their cycle notation.
An (n>4) Is A Simple Group
In drawing together all the threads from this section it is almost a simple observation that An for all n>4 are simple groups. (Of course, A5 is simple - as is every alternating group of
higher degree.)
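As a quick computational sanity check (separate from the proof presented in this section), A5's simplicity can also be verified from its conjugacy class sizes, which are 1, 15, 20, 12 and 12: a normal subgroup must be a union of conjugacy classes containing the identity, and by Lagrange's theorem its order must divide |A5| = 60.

```python
from itertools import combinations

# Sizes of the nontrivial conjugacy classes of A5.
class_sizes = [15, 20, 12, 12]

# Candidate subgroup orders: the identity class plus any subset of the others.
candidates = {1 + sum(c) for r in range(len(class_sizes) + 1)
              for c in combinations(class_sizes, r)}

# Lagrange's theorem: a subgroup's order must divide 60.
normal_orders = {m for m in candidates if 60 % m == 0}
```

Only the orders 1 and 60 survive, so A5 has no proper nontrivial normal subgroup.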
|
{"url":"https://seveneyesopen.net/Mathematics/Simple0.php","timestamp":"2024-11-13T12:43:13Z","content_type":"application/xhtml+xml","content_length":"19468","record_id":"<urn:uuid:ea680845-3288-47e0-9061-5396b4ac2138>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00472.warc.gz"}
|
Sources of Self-reported Happiness: Logistic Regression
Analyze sources of self-reported happiness using logistic regression.
We'll cover the following
Conceptual preparation
We’re interested in finding out whether gender, religious belief, and income influence self-reported happiness or not. The dependent variable happy is dichotomous, with 1 being happy and 0 being
In this type of problem, OLS isn’t appropriate because it can generate predicted probabilities larger than one and smaller than zero. A widely used statistical technique is logistic regression.
Conceptually, the probability of a respondent being happy or not can be expressed as being a function of gender, religious belief, and income.
$\pi_i = P(happy_i = 1) = \text{probability of respondent } i \text{ being happy}$
$= \beta_0 + \beta_1 male + \beta_2 belief + \beta_3 income$
To keep the predicted probability bounded between 0 and 1, the logistic regression fits an S-shaped relationship between happy and other covariates with the following model:
$ln\bigg(\frac{\pi_i}{1-\pi_i}\bigg)=\beta_0 + \beta_1male+\beta_2 belief+\beta_3income$
Above, $\frac{\pi_i}{1-\pi_i}$ is the odds of a respondent being happy, which is the probability of being happy $(\pi_i)$ divided by the probability of being unhappy (1 − $\pi_i$), and $ln\bigg(\frac
{\pi_i}{1-\pi_i}\bigg)$is the log odds or logistic transformation of odds.
Two issues are worth clarification.
• First, the $\beta_s$ are regression parameters. Following previous chapters, we carry out hypothesis testing with respect to the null hypothesis on each $\beta$ being zero, indicating that a
variable has no statistical effect on the dependent variable in the population.
• Second, what is substantively most interesting is the value of πi, the probability of a respondent being happy, under different values of the independent variables. To obtain that value, we can
apply the following formula:
$\pi_i= \frac{e^{\beta_0 + \beta_1male+\beta_2 belief+\beta_3income}}{1+e^{\beta_0 + \beta_1male+\beta_2 belief+\beta_3income}}$
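Although the chapter's analysis is done in R, the inverse-logit transformation itself is easy to sketch in a few lines. Here is a Python version with hypothetical coefficient values (the real estimates would come from fitting the model to the data):

```python
import math

def predicted_prob(b0, b1, b2, b3, male, belief, income):
    """Inverse logit: map the linear predictor onto a probability in (0, 1)."""
    eta = b0 + b1 * male + b2 * belief + b3 * income
    return math.exp(eta) / (1 + math.exp(eta))

# Coefficients below are made up purely for illustration.
p = predicted_prob(-0.5, 0.1, 0.3, 0.08, male=1, belief=1, income=5)
```

When the linear predictor is zero, the formula returns exactly 0.5, and it stays strictly between 0 and 1 for any covariate values, which is precisely the property OLS lacks here.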
Data preparation
We already have all four variables in the above model prepared except for income. So, we now get the income variable ready for analysis. The codebook definition for the income variable is as follows:
“V239. On this card is an income scale on which 1 indicates the lowest income group and 10 the highest income group in your country. We would like to know in what group your household is.
Please, specify the appropriate number, counting all wages, salaries, pensions and other incomes that come in. (Code one number):”
The tabulation shows that the variable has several negative values indicating missing values. They will be recoded as NA.
|
{"url":"https://www.educative.io/courses/using-r-data-analysis-social-sciences/sources-of-self-reported-happiness-logistic-regression","timestamp":"2024-11-03T06:31:08Z","content_type":"text/html","content_length":"991176","record_id":"<urn:uuid:57d88899-6e14-4b3f-bed0-aa4e7192bc91>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00595.warc.gz"}
|
Pay Calculation Tutorial – Overloads
Warning: This page is out of date and outlines a contractual process that is now changed! This page is being left in place, however, as it contains other useful resources and can be used to calculate
retroactive payments. See Appendix B of our current CBA for updated information.
If your teaching load contains overload, as the examples on this page do, there should be a second page of your contract listed as an “Over Load Faculty Contract” (see below). You need both pages to
calculate the load and check the pay amount listed on the overload contract. Overload pay errors are one of the most consistent problem areas in our contracts. For more explanation of the load
calculation below, visit the parent page of this tutorial.
EXAMPLE #1
Regular contract:
Overload contract:
Course catalog entries:
The 120% load calculation in this contract falls above 106.67%, so overload pay applies (Appendix B of collective bargaining agreement). To calculate the amount of overload pay, you’ll again need
Appendix B. The formula that governs overload pay is somewhat obscure, but is as follows:
Where P = the overload percentage in excess of 100%
Where R = the pay rate in dollars per hour for overtime hours (Appendix D-3 Rate 1)
Where N = the total number of hours in the total teaching load calculation (the sum of the numerators in the teaching load equation above)
For this example,
R = $69.28 (as of December 1, 2021, this rate is $74.27)
N = 18 hours
P = 20%
Accordingly, monthly overload pay would be calculated as follows:
This number should be reflected in the “Amount/Pay Pd.” field in the upper right of the overload contract. As you can see from this example, this faculty member was slightly overpaid due to a
systematic rounding error in the District’s Colleague software. Finally, the “Total Stipend” field is calculated by multiplying this monthly overload rate by 5. The reason for this is that you are
paid for 5 of the 6 months during the fall/spring terms for overload, as you are not teaching your fall/spring courses in July/January.
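The worked formula image did not survive extraction above. Based on the three variables defined (P, R, N), the sketch below assumes the monthly amount is P × R × N; treat that as my reading of the example, and consult Appendix B of the CBA for the authoritative formula.

```python
def monthly_overload_pay(P, R, N):
    """Assumed formula: overload fraction x hourly rate x total load hours."""
    return round(P * R * N, 2)

# Example 1's inputs: 20% overload, $69.28/hr, 18 total load hours
monthly = monthly_overload_pay(P=0.20, R=69.28, N=18)
stipend = round(monthly * 5, 2)  # paid over 5 of the 6 months
```

Under this assumption, the monthly amount is $249.41 and the total stipend is $1,247.05, which is the right order of magnitude for the contract fields described above.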
EXAMPLE #2
In contract example #2, the instructor has the same percentage of overload (20%), but a likely human data entry error compounded the coding errors in the District’s Colleague software, leading to an
underpayment. Calculations are shown below.
Regular contract:
Overload contract:
Course catalog entries:
As was the case for example #1, for this example,
R = $69.28 (as of December 2021, this rate is $74.27)
N = 18 hours
P = 20%
In this example, this faculty member would have been underpaid by $214.75 in overload pay if they had not caught this error.
|
{"url":"https://aft1388.org/pay-calculation-tutorial-overloads/","timestamp":"2024-11-06T23:02:24Z","content_type":"text/html","content_length":"88385","record_id":"<urn:uuid:52926d03-e8c2-44ca-8f4d-f7e9359b62f3>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00508.warc.gz"}
|
Year-Over-Year Formula (with Calculator) (2024)
Year over year (YOY) is a financial formula that represents the annual increase or decrease for a particular metric (see examples of various metrics further on the page. Earnings per share is one
such example).
The Y-O-Y formula is often presented as a percentage. Occasionally, a nonpercentage number may be quoted, but in these examples, only simple subtraction is used and the writer is simply using this
phrase to convey "a difference from one year to the next". An example of this would be "Variable X is up $2.20 Y-o-Y".
The formula for YOY is found by dividing variable X at the "now" date by variable X one year prior to the "now" date, then subtracting the result by one. To illustrate what is meant by the "now"
date, suppose an article was written 6 months ago from today. The article mentions the earnings per share of a particular company, and states that EPS is up 20% YOY. Obviously, they are referencing
the date that earnings were released (let us assume 6 months ago at the time the article was written) and the earnings one year prior to that date. The "now" date would be 6 months ago.
The Year Over Year formula has multiple abbreviations, such as YOY, YoY, Y-O-Y, Y/Y, and Y/O/Y. No matter the abbreviation, it will be obvious that the author is referring to a year-over-year change
for whatever metric that is discussed.
This formula also applies to a decrease. For example, if this year's X is 800 and the prior year's X is 1,000, the decrease is 20% YOY. If you use the formula at the top of the page, this would be
-200 divided by 1000, which results in -20%.
Note: This site uses y and 'y-1' to illustrate that this formula is always used when looking back in the past. This formula may appear to be different than other financial formulas, such as present
value and future value, as those formulas look forward (e.g., period x, period x+1, period x+2, et cetera). Even the present value formula discounts future cashflows so it still looks forward when
discounting the cashflows to their present values.
Use of the Year Over Year Formula
The year-over-year formula is used with various economic and financial metrics to represent a change from one year to the next. It is primarily used by authors or in general discussion within a
company. There is no functional use in the year-over-year formula other than to convey the amount of growth that has occurred over that year.
Examples of financial and economic metrics where an author may use "YOY" include:
• GDP (Gross Domestic Product)
• CPI(Consumer Price Index, for inflation)
• PPI (Producer Price Index)
• EPS (Earnings Per Share)
• Revenues
The year-over-year formula is also sometimes used in non-financial topics.
Examples of the Year-Over-Year (YOY) Formula
1) Suppose that an individual is measuring inflation and using the CPI (Consumer Price Index) to do so. The 'rate of inflation' formula and the year-over-year formula are effectively the same formula.
In short, the 'rate of inflation' for one year is the growth in CPI, year-over-year. Both use the "rate of change" formula. Last year's CPI, in this example, had an index of 2,200. The year prior,
the same index was 2,000.
Putting these variables into the formula at the top of the page would be:
(2,200 / 2,000) - 1 = 0.10
which shows a growth of 10% year-over-year.
2) Suppose that an author is writing an article about a quarterly earnings release issued by a particular company. The author may mention the quarterly EPS and would like to express a frame of reference so the reader is not just seeing random numbers. Suppose that this year's 3rd quarter earnings were $2.25 per share and last year's 3rd quarter earnings were $1.50.
Using the formula at the top of the page with these variables, this would be:
($2.25 / $1.50) - 1 = 0.50
Which would result in a 50% change over the year. The author may casually state "the Q3 earnings were $2.25 which was a 50% increase YoY" and then go on to state changes in the company or economy
that led to the change.
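Both worked examples reduce to a two-line helper; a quick sketch (the function name is mine):

```python
def yoy(current, prior):
    """Year-over-year change as a fraction of the prior-year value."""
    return current / prior - 1

cpi_growth = yoy(2200, 2000)   # example 1: +10%
eps_growth = yoy(2.25, 1.50)   # example 2: +50%
decline = yoy(800, 1000)       # the decrease example: -20%
```

Multiply the result by 100 to quote it as a percentage, as the articles in these examples do.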
Alternative Year-Over-Year (YOY) Formulas
(X1 - X0) / X0
The formula shown directly above is the standard 'change in' formula. Both this formula and the formula at the top of the page are the same, only rearranged. The denominator of X0 can be factored out
to get the formula at the top of the page.
The formulas 'change in' and YoY are the same because YoY is basically a change in a variable from one period to the next. More specifically, the time period would be the change over one year
Some may find this formula easier to mentally compute compared to the one at the top of the page. For example, if 200 and 220 were the variables, respectively, this would be 20 divided by 200 which
would be easy to do the math mentally and get 10%. This is not to say that the other formula cannot be mentally computed.
2. The YOY formula at the top of page is not to be confused with "x percent of last year's numbers". Instead of representing a 'change in', this represents a percentage of last year's amount.
Using an extreme example, suppose the variables are 100 and 10, respectively. One could either say "Variable X is down 90% Y/Y" or they could say "This year's numbers are 10% of what they were last
year". These two statements obviously do not use the same math and the YoY formula is related to the former and not the latter.
Delta X = X1 - X0
This is 'Delta X'. Delta is used in various financial formulas and non-financial formulas to resemble a 'change in' a variable. Keep in mind that Delta X is not presented as a percentage. It is
simply the difference between the current year and prior year.
|
{"url":"https://turbokrecik.info/article/year-over-year-formula-with-calculator","timestamp":"2024-11-07T07:21:01Z","content_type":"text/html","content_length":"68859","record_id":"<urn:uuid:72fc4215-44d2-48b1-ac5e-75fa8f56f964>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00140.warc.gz"}
|
Diane L. Evans
According to our database, Diane L. Evans authored at least 8 papers between 1992 and 2008.
On csauthors.net:
The Distribution of the Kolmogorov-Smirnov, Cramer-von Mises, and Anderson-Darling Test Statistics for Exponential Populations with Estimated Parameters.
Commun. Stat. Simul. Comput., 2008
Conceptual Case for Assimilating Interferometric Synthetic Aperture Radar Data Into the HAZUS-MH Earthquake Module.
IEEE Trans. Geosci. Remote. Sens., 2007
The Distribution of Order Statistics for Discrete Random Variables with Applications to Bootstrapping.
INFORMS J. Comput., 2006
Algorithms for computing the distributions of sums of discrete random variables.
Math. Comput. Model., 2004
Input modeling using a computer algebra system.
Proceedings of the 32nd conference on Winter simulation, 2000
Overview of results of Spaceborne Imaging Radar-C, X-Band Synthetic Aperture Radar (SIR-C/X-SAR).
IEEE Trans. Geosci. Remote. Sens., 1995
Estimates of surface roughness derived from synthetic aperture radar (SAR) data.
IEEE Trans. Geosci. Remote. Sens., 1992
Approach to derivation of SIR-C science requirements for calibration.
IEEE Trans. Geosci. Remote. Sens., 1992
|
{"url":"https://www.csauthors.net/diane-l-evans/","timestamp":"2024-11-09T03:06:09Z","content_type":"text/html","content_length":"22471","record_id":"<urn:uuid:b1554751-eba7-48ad-917f-88e3555ff1ba>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00264.warc.gz"}
|
Module 2B: Water as a shared resource
Lesson 2 Activity 1 - Math Basics
In this activity, you will connect their prior understanding of coordinate space to the green Spaceland of StarLogo Nova. The coordinate system of Spaceland is 101 patches in the x-dimension by 101
patches in the y-dimension with (0,0) in the center. You will practice placing your turtles in specific quadrants of Spaceland using the set-traits block called 'set my…'. You will then use the 'set my
heading to’ block to explore having turtles move in specific directions in Spaceland. These commands will be used to make the water molecules move as if responding to gravity.
• Review coordinates on a graph; connect coordinate system to Spaceland.
• Create turtles in different quadrants of Spaceland and use new blocks to make turtles move in a specific direction.
Review coordinates on a graph; connect coordinate system to Spaceland.
• Review X and Y axes as horizontal and vertical and review where (0,0) is in Spaceland.
• Demonstrate to students how to set an agent at (0,0)
Create turtles in different quadrants of Spaceland and add new blocks to have the turtles move in a specific direction.
• Add code to the ‘when setup pushed’ block. First you must clear everything that was there before and then add a turtle with specific X and Y coordinates.
• Next, add to the setup code by adding 3 more ‘create 1 turtle’ with new X and Y coordinates.
• You should be able to put a turtle in each of the 4 quadrants using the ‘set my’ blocks.
• Now use the ‘when forever toggled’ block to get your turtles moving in a specific direction on Spaceland. The directions follow 360 degrees like a protractor.
• Get your turtles to all move towards the top of Spaceland.
Complete these short coding challenges that focus on improving your coding skills in StarLogo Nova
What challenges did you encounter with positioning and moving your turtles? Post your reflection in your portfolio in the section "Reflections->Computer Modeling and Simulation".
|
{"url":"https://guts-cs4hs.appspot.com/unit?unit=131&lesson=136","timestamp":"2024-11-08T22:09:26Z","content_type":"text/html","content_length":"67931","record_id":"<urn:uuid:00f03679-b440-47ff-a1a7-27791cf4b8f4>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00700.warc.gz"}
|
Polynomial Interpolation
〜 ♣ 〜
Cursory introductions to AI/ML often reference the black box—that is, the notion of a function whose inner workings are uninterpretable—in describing machine learning models. They're not wrong; we
truly don't understand how and why these models work—and not without reason. Real-world data is often multidimensional in nature and embeds complex relationships that can't be articulated using
conventional statistics.
So there's a reason why we turn to the black box model. It's got clear advantages, of course; we never could've made so much progress in ML research if we hadn't embraced the "it works because it
works" mentality. But there's also loads of drawbacks, especially when ML meets decision-making. When we turn to computers to make decisions, we suddenly hesitate to trust their judgement. That's
because, when real-world consequences are involved, it's not enough to say "the computer made me do it" to justify our actions; clear, founded reasoning must follow. The black box isn't going to cut
it for high-stakes usage.
So how do we begin to understand the models we've spent years building? It's a daunting task, but, as good scientific practice suggests, we could begin by simplifying the problem and solving it with
a foundation of solid mathematical principles. Enter the interpolation problem, a "predecessor" of sorts to machine learning. First, some notation:
Let $X = \{x_0, x_1, \dots, x_n\}$ and $Y = \{y_0, y_1, \dots, y_n\}$ be sets such that $|X| = |Y| = n + 1$ and the $x_i$ are pairwise distinct.
Now, suppose a function $f$ exists on the interval $[a, b]$ with $f(x_i) = y_i$.
The problem arises: how do we construct a tractable interpolant $p$ to approximate $f$ given the condition that $p(x_i) = y_i$ for all $x_i \in X$?
A reasonable first approach leverages the fact that there exists a unique polynomial of degree at most $n$ for any collection of $n + 1$ points with distinct abscissae. Thus, we can express $p(x) = \sum_{j=0}^{n} c_j x^j$.
Lagrange interpolation enables us to find explicit coefficients. Consider the following construction of a polynomial basis:
$$\ell_j(x) = \prod_{\substack{0 \leq m \leq n \\ m \neq j}} \frac{x - x_m}{x_j - x_m}$$
$f$ is interpolated by the linear combination
$$p(x) = \sum_{j=0}^{n} y_j \, \ell_j(x)$$
So how well does our interpolant perform? Let's put it to the test. I begin by sampling evenly-spaced points on $[-1, 1]$ from the Legendre polynomial of degree 3 and computing the Lagrange polynomial that
interpolates those points.
import scipy
import numpy as np
import matplotlib.pyplot as plt
f = scipy.special.legendre(3)
x = np.linspace(-1, 1, 16)
y = f(x)
p = scipy.interpolate.lagrange(x, y)
fig, ax = plt.subplots(1, 2, figsize=[10, 3])
for i in (0, 1):
ax[i].set_xlim([-1.25, 1.25])
ax[i].set_ylim([-1.25, 1.25])
ax[i].scatter(x, y)
d = np.linspace(-1.25, 1.25, 500)
ax[1].plot(d, p(d), 'g')
Here, the Lagrange interpolation works like a charm. But that's to be expected; we're using a polynomial to interpolate a polynomial. What if we test our method with a harder task, like interpolating
a transcendental function?
import scipy
import numpy as np
import matplotlib.pyplot as plt
f = lambda x: 1 / np.cosh(5 * x)
x = np.linspace(-1, 1, 16)
y = f(x)
p = scipy.interpolate.lagrange(x, y)
fig, ax = plt.subplots(1, 2, figsize=[10, 3])
for i in (0, 1):
ax[i].set_xlim([-1.25, 1.25])
ax[i].set_ylim([-0.25, 1.25])
ax[i].scatter(x, y)
d = np.linspace(-1.25, 1.25, 500)
ax[1].plot(d, p(d), 'g')
Here, I interpolate the function $f(x) = 1/\cosh(5x)$ using evenly-spaced points, as before. Notice that, unlike the first example, $p$ diverges aggressively from $f$ in an oscillatory fashion near the ends of the interpolation
points. This behavior is known as Runge's phenomenon. The oscillation is encountered often in naïve Lagrange interpolation and exacerbated by the use of equally-spaced points, although there are a
number of numerical methods that can mitigate the problem. Unfortunately, these topics are beyond the scope of a 1 am post.
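As a small taste of those mitigation methods (stepping just past the post's stated scope, and in pure Python so the interpolant is evaluated directly in Lagrange form): swapping the equispaced nodes for Chebyshev nodes, which cluster toward the endpoints, tames the oscillation for this same function.

```python
import math

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange-form interpolant through (xs, ys) at x."""
    total = 0.0
    for j, xj in enumerate(xs):
        term = ys[j]
        for m, xm in enumerate(xs):
            if m != j:
                term *= (x - xm) / (xj - xm)
        total += term
    return total

f = lambda t: 1 / math.cosh(5 * t)
n = 16
grid = [i / 250 - 1 for i in range(501)]  # dense evaluation grid on [-1, 1]

def max_error(nodes):
    vals = [f(x) for x in nodes]
    return max(abs(lagrange_eval(nodes, vals, x) - f(x)) for x in grid)

# The 16 equispaced nodes used above reproduce the Runge oscillation.
equi_err = max_error([2 * k / (n - 1) - 1 for k in range(n)])

# Chebyshev nodes of the first kind cluster toward the endpoints and tame it.
cheb_err = max_error([math.cos((2 * k + 1) * math.pi / (2 * n)) for k in range(n)])
```

On this grid the Chebyshev error is far below the equispaced one, which is exactly the behavior the divergent plot above hints at.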
The key takeaway is that, like in machine learning, unexpected behavior can arise as a natural consequence of the method in numerical analysis. By understanding the mathematical basis for these
errors, we can develop robust mitigation techniques.
- Aaron
|
{"url":"https://atian.me/writing/polynomial-interpolation","timestamp":"2024-11-05T02:48:35Z","content_type":"text/html","content_length":"136078","record_id":"<urn:uuid:0b36ab7b-b9e6-4a0b-98fa-fb9848ff8750>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00518.warc.gz"}
|
VAE in no time
A quick tour of Variational Autoencoder (VAE)
In recent times the generative model has gained huge attention due to its state-of-the-art performance, and it is now widely used in industry. Variational Autoencoders are a deep learning technique used to learn latent representations, and they are one of the finest approaches to unsupervised learning. VAEs show exceptional results in generating various kinds of data.
Autoencoder (AE) at a glance
Autoencoder comprises an encoder, decoder, and a bottleneck. The encoder simply transforms the input into a digital representation to the lowest dimension into the bottleneck to absorb its salient
features and the decoder reconstructs back the output from the representations nearly similar to the input.
The Autoencoder aims to minimize the reconstruction loss. The reconstruction loss is the difference between the original data and the reconstructed data.
L2 Loss function is used to calculate the loss in AE. i.e the sum of all the squared differences between the true value and the predicted value.
L2 loss = Σ (y_true - y_pred)^2
The applications of an Autoencoder include Denoising, Dimensionality Reduction, etc.
Variational Autoencoders at a glance
VAE is also a kind of Autoencoder which not only reconstructs the output but also generates new content. Stating explicitly, VAE is a generative model and Autoencoders are not. The Autoencoder learns
to transform an input into some vector representation by minimizing the reconstruction loss calculated from the input and the reconstructed image, VAE, on the other hand, generates output by
minimizing the reconstruction loss as well as the KL divergence loss, which measures the difference between two probability distributions. KL divergence is an asymmetric measure (not a true distance) of how one distribution diverges from another; in terms of the VAE, it tells whether the learned latent distribution is far from a normal distribution.
D_KL(P ‖ Q) = Σ_{x ∈ χ} P(x) log( P(x) / Q(x) )
The above is the K-L divergence between distributions P and Q over the space χ.
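For the diagonal-Gaussian latent used in a VAE, the KL term against the standard normal prior has a well-known closed form, ½ Σ (σ² + μ² − 1 − log σ²). A sketch, parameterized by log-variance as is common practice:

```python
import math

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, 1) ), summed over latent dimensions.

    Closed form for a diagonal Gaussian against the standard normal --
    the regularizer commonly used in VAE training.
    """
    return sum(0.5 * (math.exp(lv) + m * m - 1.0 - lv)
               for m, lv in zip(mu, log_var))
```

The term vanishes exactly when μ = 0 and σ = 1, i.e. when the learned latent distribution already matches the prior, and grows as it drifts away.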
A variational autoencoder can be defined as an autoencoder whose training is regularized to avoid overfitting and to make sure that the latent space has properties that let it generate distinctive and unique results.
fig 1.1 VAE Architecture
The variational autoencoder consists of an encoder, a decoder, and a loss function. The encoder and decoder are simple neural networks. When input data X is passed through the encoder, the encoder outputs the parameters of the latent state distribution (mean μ, variance σ²), from which a vector Z is sampled. We usually assume that the latent distribution is a Gaussian distribution. The input x is compressed by the encoder into a smaller dimension, typically referred to as the bottleneck or the latent space. From it a sample is randomly drawn, the sample is decoded, the reconstruction loss is backpropagated, and we get a newly generated variant.
Gaussian (Normal) Distribution [source]
Reparameterization Trick
After the encoder outputs the distribution, a sample is chosen by a random node, and backpropagation cannot flow through that random choice. We need to backpropagate through the encoder-decoder model to make it learn. To get around this, we combine an epsilon (ε), drawn from a standard normal, with the mean and variance, so the stochasticity is kept entirely in ε. This way we can still draw a random sample while also learning the latent distribution parameters. Across iterations, the epsilon remains the random part while the parameters producing the encoder's outputs are updated.
figure 1.2 [source]
Let try to implement the VAE into our code with MNIST data using PyTorch.
Install PyTorch with Torchvision
# command line
>> pip3 install torch==1.7.1+cpu torchvision==0.8.2+cpu torchaudio===0.7.2 -f https://download.pytorch.org/whl/torch_stable.html
>> pip3 install numpy
Import Libraries
import torch
import torch.nn as nn
import torch.optim as optim
import numpy as np
import torchvision
import torchvision.transforms as transforms
from torchvision.utils import save_image, make_grid
from torch.utils.data import DataLoader
from tqdm import tqdm
Prepare Data
We are using the MNIST dataset, so we'll transform it by resizing the images to 32×32 and converting them to tensors, then load the data with PyTorch's DataLoader in batches of 64.
transform = transforms.Compose([
    transforms.Resize((32, 32)),
    transforms.ToTensor(),
])
trainset = torchvision.datasets.MNIST(root='./', train=True, download=True, transform=transform)
trainloader = DataLoader(trainset, batch_size=64, shuffle=True)
testset = torchvision.datasets.MNIST(root='./', train=False, download=True, transform=transform)
testloader = DataLoader(testset, batch_size=64, shuffle=False)
MNIST Images have a single channel with 28×28 pixels.
Define the device to be used as per our requirements.
dev = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
Define Loss function
def final_loss(bce_loss, mu, logvar):
    BCE = bce_loss
    KLD = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return BCE + KLD
The Loss is the sum of the Kullback-Leibler divergence and Binary Cross-Entropy
Define parameters
z_dim = 20
lr = 0.001
criterion = nn.BCELoss(reduction='sum')
epochs = 1
batch_size = 64
Create Variational Autoencoder model
The encoder consists of convolution and batch normalization layers with LeakyReLU activations. The outputs of the encoder are the mean vector and the log variance vector.
class VAE(nn.Module):
    def __init__(self):
        super(VAE, self).__init__()
        # encoder
        self.conv1 = nn.Conv2d(1, 8, 4, stride=2, padding=1)
        self.BN1 = nn.BatchNorm2d(8)
        self.af1 = nn.LeakyReLU()
        self.conv2 = nn.Conv2d(8, 16, 4, stride=2, padding=1)
        self.BN2 = nn.BatchNorm2d(16)
        self.af2 = nn.LeakyReLU()
        self.conv3 = nn.Conv2d(16, 32, 4, stride=2, padding=1)
        self.BN3 = nn.BatchNorm2d(32)
        self.af3 = nn.LeakyReLU()
        self.conv4 = nn.Conv2d(32, 64, 4, stride=2, padding=0)
        self.BN4 = nn.BatchNorm2d(64)
        self.af4 = nn.LeakyReLU()
Fully connected layers provide the mean and log variance values (the bottleneck part):
        self.fc1 = nn.Linear(64, 128)
        self.fc_mu = nn.Linear(128, z_dim)
        self.fca1 = nn.LeakyReLU()
        self.fcd1 = nn.Dropout(0.2)
        self.fc_log_var = nn.Linear(128, z_dim)
        self.fca2 = nn.LeakyReLU()
        self.fcd2 = nn.Dropout(0.2)
The decoder reconstructs from the sampled latent vector representation and brings out a new variant of the original:
        self.fc2 = nn.Linear(z_dim, 64)
        self.da1 = nn.LeakyReLU()
        self.dd1 = nn.Dropout(0.2)
        self.deu1 = nn.UpsamplingNearest2d(scale_factor=2)
        self.dec1 = nn.ConvTranspose2d(64, 64, 4, stride=2, padding=0)
        self.deb1 = nn.BatchNorm2d(64)
        self.dea1 = nn.LeakyReLU()
        self.deu2 = nn.UpsamplingNearest2d(scale_factor=2)
        self.dec2 = nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1)
        self.deb2 = nn.BatchNorm2d(32)
        self.dea2 = nn.LeakyReLU()
        self.deu3 = nn.UpsamplingNearest2d(scale_factor=2)
        self.dec3 = nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1)
        self.deb3 = nn.BatchNorm2d(16)
        self.dea3 = nn.LeakyReLU()
        self.deu4 = nn.UpsamplingNearest2d(scale_factor=2)
        self.dec4 = nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1)
        self.dea4 = nn.Sigmoid()
The random vector is sampled using the mean vector and the standard deviation, and is then reconstructed by applying upsampling and ConvTranspose layers. I used both upsampling and ConvTranspose layers because together they gave me better results.
    def sampling(self, mu, log_var):
        std = torch.exp(log_var / 2)
        epsilon = torch.randn_like(std)
        return mu + epsilon * std

    def forward(self, x):
        # encoder
        x = self.conv1(x)
        x = self.BN1(x)
        x = self.af1(x)
        x = self.conv2(x)
        x = self.BN2(x)
        x = self.af2(x)
        x = self.conv3(x)
        x = self.BN3(x)
        x = self.af3(x)
        x = self.conv4(x)
        x = self.BN4(x)
        x = self.af4(x)
        x = x.view(x.size()[0], -1)
        x = self.fc1(x)
        mu = self.fc_mu(x)
        mu = self.fca1(mu)
        mu = self.fcd1(mu)
        log_var = self.fc_log_var(x)
        log_var = self.fca2(log_var)
        log_var = self.fcd2(log_var)
        # sampling
        z = self.fc2(self.sampling(mu, log_var))
        z = self.da1(z)
        z = self.dd1(z)
        z = z.view(-1, 64, 1, 1)
        # decoder
        d = self.dec1(z)
        d = self.deb1(d)
        d = self.dea1(d)
        d = self.dec2(d)
        d = self.deb2(d)
        d = self.dea2(d)
        d = self.dec3(d)
        d = self.deb3(d)
        d = self.dea3(d)
        d = self.dec4(d)
        reconstruction = self.dea4(d)
        return reconstruction, mu, log_var
Configure device
device = dev
model = VAE().to(device)
Define Optimizer
optimizer = optim.Adam(model.parameters(), lr=lr)
Start the training
grid_images = []
train_loss = []
valid_loss = []

def validate(model, dataloader, dataset, device, criterion):
    running_loss = 0.0
    counter = 0
    with torch.no_grad():
        for i, data in tqdm(enumerate(dataloader), total=int(len(dataset)/batch_size)):
            counter += 1
            data = data[0]
            data = data.to(device)
            reconstruction, mu, logvar = model(data)
            bce_loss = criterion(reconstruction, data)
            loss = final_loss(bce_loss, mu, logvar)
            running_loss += loss.item()
            # save the last batch input and output of every epoch
            if i == int(len(dataset)/batch_size) - 1:
                recon_images = reconstruction
    val_loss = running_loss / counter
    return val_loss, recon_images

def train(model, dataloader, dataset, device, optimizer, criterion):
    running_loss = 0.0
    counter = 0
    for i, data in tqdm(enumerate(dataloader), total=int(len(dataset)/batch_size)):
        counter += 1
        data = data[0]
        data = data.to(device)
        optimizer.zero_grad()
        reconstruction, mu, logvar = model(data)
        bce_loss = criterion(reconstruction, data)
        loss = final_loss(bce_loss, mu, logvar)
        # backpropagate the loss and update the weights
        loss.backward()
        optimizer.step()
        running_loss += loss.item()
    train_loss = running_loss / counter
    return train_loss

for epoch in range(epochs):
    train_epoch_loss = train(model, trainloader, trainset, device, optimizer, criterion)
    valid_epoch_loss, recon_images = validate(model, testloader, testset, device, criterion)
    save_image(recon_images.cpu(), f"./output{epoch}.jpg")
    image_grid = make_grid(recon_images.detach().cpu())
    grid_images.append(image_grid)
    print(f"Train Loss: {train_epoch_loss:.4f}")
    print(f"Val Loss: {valid_epoch_loss:.4f}")
Here our model is trained and our reconstructed images get saved to the defined path. The image below shows the reconstructed images I got after 5 epochs.
Model Prediction
fig. P1
fig. P2
Thing to Try
Try to reconstruct images using your own custom image data. You may get some surprising results; try tuning the model with different combinations. Try increasing the number of epochs, and play with the z dim, the learning rate, the convolution layers, strides, and much more.
VAE can perform much more if lots of data and proper computing power are used.
|
{"url":"https://havric.com/2021/06/24/vae-in-no-time/","timestamp":"2024-11-04T10:08:04Z","content_type":"text/html","content_length":"175777","record_id":"<urn:uuid:614abb4e-bbbf-42fa-ad66-5e7ad329e557>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00646.warc.gz"}
|
How to Perform the ANOVA Test in Stata | The Data Hall
How to Perform the ANOVA Test in Stata
ANOVA (Analysis of Variance) is an analysis tool used to see the effect of categorical independent variables on a dependent variable in regressions. This article is motivated by Chapter 9 of A Gentle
Introduction to Stata by Alan C. Acock.
There are two main assumptions that need to be met when using ANOVA.
1. The groups in a categorical variable must be independent.
2. Data must be normally distributed with equal variance.
To explore how to perform ANOVA in Stata, we will use Stata’s reading scores dataset (‘reading.dta’). This dataset can be loaded using the webuse command.
webuse reading.dta
This dataset has five variables related to students’ reading scores, reading program, skill enhancement technique, and their group.
We will drop the ‘class’ and ‘group’ variable and use only the ‘program’ and ‘skill’ variables as the independent categorical variables that have an effect on ‘score’.
drop class group
Checking Normality of Dependent Variable in Stata
First, let’s check whether ‘score’ is normally distributed or not. While there are several ways to do this, we will use a simple histogram.
histogram score
histogram score, by(program) normal
histogram score, by(skill) normal
In general, the variable ‘score’ does appear to tend to follow a normal distribution across all groups of the categorical variables.
Types of ANOVA
There are two main types of ANOVA:
• One-way (or unidirectional) ANOVA
• Two-way ANOVA
A one-way ANOVA helps evaluate the effect of a single, categorical independent variable on a single dependent variable. It can be used to see whether there exist any statistically significant
differences between the means of the independent (unrelated) groups/categories.
A two-way ANOVA is similar to a one-way ANOVA, with the only difference being that there are two categorical, independent variables. A two-way ANOVA is also used to study the interaction between the
two categorical variables. Similarly, it can also used to study their joint effect on the dependent variable.
The Null Hypothesis in ANOVA in Stata
The null hypothesis in an ANOVA test states that there is no statistically significant difference between the groups/categories being tested. In other words, the null hypothesis states that the means of all these groups are equal. If this is true, the ANOVA test's F-ratio statistic will be close to 1.
The alternative hypothesis would be that the mean of at least one of the groups/categories is not equal to the others.
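To make the F-ratio concrete, here is a from-scratch one-way ANOVA F-statistic in plain Python (toy data for illustration only, not the Stata reading dataset): the ratio of between-group to within-group mean squares.

```python
# One-way ANOVA F-ratio computed from scratch on toy data.
def one_way_f(groups):
    all_vals = [v for g in groups for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    # between-group sum of squares and its degrees of freedom
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    df_between = len(groups) - 1
    # within-group sum of squares and its degrees of freedom
    ss_within = sum((v - sum(g) / len(g)) ** 2 for g in groups for v in g)
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Identical group means give F = 0; well-separated means give a large F.
print(one_way_f([[1, 2, 3], [1, 2, 3]]))  # 0.0
print(one_way_f([[1, 2, 3], [7, 8, 9]]))  # 54.0
```

A large F (relative to the F distribution with those degrees of freedom) is what produces the small Prob > F values in Stata's output.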
One-way ANOVA in Stata
Let’s conduct a one-way ANOVA using ‘score’ (i.e. a student’s reading score) as the continuous, dependent variable and ‘program’ as the independent categorical variable. A one-way ANOVA can be
performed in Stata using the oneway command followed by the dependent and the independent variables.
oneway score program, tab
The table reports the sums of squares (column 'SS') and the mean squares (MS), along with the degrees of freedom (df). Each mean square is calculated by dividing the sum of squares by its degrees of freedom. The relevant test statistic here is the F-stat reported in the second-to-last column titled 'F', calculated by taking the ratio of the mean squares. The degrees of freedom in the numerator are 1, and in the denominator are 298.
Statistical significance of the F-stat can be ascertained from the p-value (column titled Prob > F). If the value is less than 0.05, we can say that at least one pair of means is not equal.
In this case, because the p-value is 0, it can be concluded that at least one pair of score means across the two groups of the reading program are not equal.
The Bartlett’s test for equal variance is reported at the bottom. Because the p-value (prob>chi2) is statistically insignificant, it suggests that the variances of the mean score across the groups
are not unequal.
So, which of the pairs of means is not equal then? We can check this by reporting all possible pairwise comparisons using the Bonferroni correction.
oneway score program, tab bonferroni
The Bonferroni correction reports the pairwise comparisons of means while also adjusting for multiple comparisons. The table at the bottom displays two statistics for each pair of means. The first
statistic (-7.74) is the difference of the row mean and the column mean (i.e. the difference between reading scores in program 2 and reading scores in program 1). Below this is the p-value for the
comparison that has already been adjusted for multiple comparisons by Stata. Wherever the p-value is less than 0.05, we can conclude that those pairs of means are not equal. In this case, the
difference of -7.74 is statistically significant and the difference in mean reading scores across both program groups can be concluded to be unequal.
If the independent variable had more than two categories, the pairwise mean differences between them would also be reported.
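As a sketch of what the Bonferroni adjustment does to raw pairwise p-values (plain Python, made-up p-values): each raw p-value is multiplied by the number of comparisons, capped at 1, which is the adjustment Stata applies before displaying the table.

```python
# Bonferroni adjustment: scale each raw p-value by the number of comparisons.
def bonferroni(p_values):
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Three pairwise comparisons among three groups:
adjusted = bonferroni([0.01, 0.04, 0.20])  # approx. [0.03, 0.12, 0.60]
```

This makes each individual comparison harder to pass, keeping the overall chance of a false positive across all comparisons under control.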
This exact one-way analysis of variance can be done using the ANOVA and the regress command together.
anova score program
regress, baselevels
The coefficient reported from the regress command is interpreted by treating it as a difference compared to the base category. In this case, program 1 is the base category. Reading scores of students
in program 2 tend to be 7.74 points lower than program 1 student. This difference is also statistically significant (p-value = 0.000 < 0.05).
Let’s also do a one-way ANOVA in Stata on the effect of ‘skill’ on ‘score’.
The F-stat in this case has a p-value of 0.1291 which is much higher than our significance level of 0.05. This leads us to conclude that across the three categories of skill enhancement techniques,
reading scores do not differ significantly. The pairwise comparisons reported in the second table are not very large, and all have a p-value of over 0.05, indicating that reading scores of students
that were subject to skill enhancement technique 1 are not different from those subject to skill enhancement technique 2 which in turn is also not significantly different from reading scores of
students from skill enhancement technique 3.
Two-way ANOVA in Stata
A two-way ANOVA test is used when we want to check whether our (continuous) dependent variable differs significantly over two independent categorical variables. We can also check whether there is an
interaction effect of the two independent variables on the dependent variable. Let’s check whether ‘score’ differs across ‘skill’ and ‘program’ together.
anova score program##skill
The two hash signs indicate that we would also like to study the joint effect of the two variables on 'score'.
This is a quicker way of writing the following command which will also produce the exact same output.
anova score program skill program#skill
The order in which the independent variables are specified does not matter.
It also does not matter if the data is balanced or not. Data is balanced if there are an equal number of observations for all the pairs of categories that can be present when interacting the two
independent variables. This can be checked through a simple tab command.
tab skill program
In this case, our data is balanced because the number of observations for each possible interaction/pair of the categorical variables is equal at 50. If they were unequal, there would be an unequal
number of observations for the program-skill cells here, but that would not have affected Stata’s two-way ANOVA execution.
Let’s run the two-way ANOVA.
The F-stats and their p-values reported for ‘program’ and ‘skill’ allow us to draw the same conclusions about the effect of the categories on ‘score’. The different categories in ‘program’ have a
statistically different effect on reading scores, while the three skill techniques in ‘skill’ do not appear to be associated with statistically significant differences in reading scores. The
interaction of these two variables, i.e. the joint effect of ‘program’ and ‘skill’ does appear to be statistically significant as indicated by a p-value of 0.000 with an F-stat of 11.84.
Let’s look at the regression coefficients for each pair of categories across the two groups. This time, we use the option allbaselevels so it is clear which pairs of categories the coefficients are
to be compared against. This is helpful when there are more than two categories in a variable which is being interacted with another.
regress, allbaselevels
The coefficient for category 2 of ‘program’ shows the same result as the one we found using one-way ANOVA. For the variable ‘skill’, we find that the mean reading scores in category 2 and category 3
differ significantly from the base category 1 (by being 15.3 points and 6.62 points lower respectively) once the joint effect of ‘program’ and ‘skill’ is accounted for.
The interaction effect of these two independent variables, when broken down, suggests that when a student is part of program 2 and skill enhancement technique 2, their reading score tends to be
higher by 21.14 points as compared to the four base categories.
0 Comments
Inline Feedbacks
View all comments
| Reply
|
{"url":"https://thedatahall.com/how-to-perform-the-anova-test-in-stata/","timestamp":"2024-11-03T16:45:06Z","content_type":"text/html","content_length":"191412","record_id":"<urn:uuid:30f8a4a3-3210-44d9-9ba2-ba15168c4c39>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00428.warc.gz"}
|
Power is Rate of Change of Work Done and Problem with Solution
Power is defined as the rate of change of work done. Work done is a form of energy: if we have energy, we can do work, and if we are doing work, it means we have energy. In other words, work and energy are closely related quantities used in different situations with a similar meaning.
Power measures the ability to do work in a given time: it is defined as work done divided by time. If we do work more quickly, that is, in less time, we deliver more power. Power is a kind of efficiency of doing work.
Power is measured in joules per second, a unit also called the watt. There is another unit of power called the horsepower, which is equal to 746 watts.
There is also a unit called the kilowatt-hour, but it is not a unit of power; rather, it is a unit of energy. It is the commercial unit of energy used in regular day-to-day life.
Increase in the power of a motor to increase the water coming out of pipe Problem and Solution
Let us consider a pipe with some cross-sectional area through which water is coming out. Let the water have some density, so that the mass coming out varies with time, as shown in the video below. We can write the volume of water as the product of the cross-sectional area and the length of the water column leaving the pipe, which varies with time.
We can then express the new power. Our aim is to increase the mass of water coming out of the pipe to n times its initial value. For this to happen, we must increase the power of the system by a factor of n to the power of three, as shown in the detailed proof in the following video.
|
{"url":"https://www.venkatsacademy.com/2015/12/power-is-rate-of-change-of-work-done.html","timestamp":"2024-11-14T20:20:57Z","content_type":"application/xhtml+xml","content_length":"83934","record_id":"<urn:uuid:aa748ecd-9ed6-4525-81f0-e14dc0efdc22>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00094.warc.gz"}
|
Quanta's 2022 in Review — LOGiCFACE
Quanta's 2022 in Review
Quanta has started publishing its yearly STEM reviews and I thought I’d make this post to showcase one highlight from each.
On memory formation:
Neuroscientists have long understood a lot about how memories form — in principle. They’ve known that as the brain perceives, feels and thinks, the neural activity that gives rise to those
experiences strengthens the synaptic connections between the neurons involved. Those lasting changes in our neural circuitry become the physical records of our memories, making it possible to
re-evoke the electrical patterns of our experiences when they are needed. The exact details of that process have nevertheless been cryptic. Early this year, that changed when researchers at the
University of Southern California described a technique for visualizing those changes as they occur in a living brain, which they used to watch a fish learn to associate unpleasant heat with a
light cue. To their surprise, while this process strengthened some synapses, it deleted others.
The information content of a memory is only part of what the brain stores. Memories are also encoded with an emotional “valence” that categorizes them as a positive or negative experience. Last
summer, researchers reported that levels of a single molecule released by neurons, called neurotensin, seem to act as flags for that labeling.
On breaking down cryptography:
The safety of online communications is based on the difficulty of various math problems — the harder a problem is to solve, the harder a hacker must work to break it. And because today’s
cryptography protocols would be easy work for a quantum computer, researchers have sought new problems to withstand them. But in July, one of the most promising leads fell after just an hour of
computation on a laptop. “It’s a bit of a bummer,” said Christopher Peikert, a cryptographer at the University of Michigan.
The failure highlights the difficulty of finding suitable questions. Researchers have shown that it’s only possible to create a provably secure code — one which could never fall — if you can
prove the existence of “one-way functions,” problems that are easy to do but hard to reverse. We still don’t know if they exist (a finding that would help tell us what kind of cryptographic
universe we live in), but a pair of researchers discovered that the question is equivalent to another problem called Kolmogorov complexity, which involves analyzing strings of numbers: One-way
functions and real cryptography are possible only if a certain version of Kolmogorov complexity is hard to compute.
On the W boson:
The Tevatron collider in Illinois smashed its last protons a decade ago, but its handlers have continued to analyze its detections of W bosons — particles that mediate the weak force. They
announced in April that, by painstakingly tracking down and eliminating sources of error in the data, they’d measured the mass of the W boson more precisely than ever before and found the
particle significantly heavier than predicted by the Standard Model of particle physics.
A true discrepancy with the Standard Model would be a monumental discovery, pointing to new particles or effects beyond the theory’s purview. But hold the applause. Other experiments weighing the
W — most notably the ATLAS experiment at Europe’s Large Hadron Collider — measured a mass much closer to the Standard Model prediction. The new Tevatron measurement purports to be more precise,
but one or both groups might have missed some subtle source of error.
The ATLAS experiment aims to resolve the matter. As Guillaume Unal, a member of ATLAS, said, “The W boson has to be the same on both sides of the Atlantic.”
On new number theory proofs:
It was a bumper year for number theorists of all ages, following a productive 2021. A high school student, Daniel Larsen, found a bound on the gaps between pseudoprimes called Carmichael numbers,
like 561, which resemble primes in a certain mathematical sense but can be factored (in this case 561 = 3 × 11 × 17).
Jared Lichtman, a graduate student at the University of Oxford, showed that actual primes are, according to a certain measure, the largest example of something called a primitive set.
Two mathematicians at the California Institute of Technology proved a 1978 conjecture predicting that cubic Gauss sums, which sum numbers of the form e^(2iπn³/p) for some prime number p, always add up to about p^(5/6). Their proof assumed the truth of something called the generalized Riemann hypothesis, which mathematicians widely believe to be true but have not yet proved. Meanwhile a simpler
analogue of the Riemann hypothesis called the subconvexity problem was solved.
|
{"url":"https://logicface.co.uk/quantas-2022-in-review/","timestamp":"2024-11-02T21:29:28Z","content_type":"text/html","content_length":"51133","record_id":"<urn:uuid:b220b527-544f-4ad9-97ef-d988df6a4e78>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00880.warc.gz"}
|
Mathematical Physics (B-SCI_MAJOR_28)
Mathematical Physics
Bachelor of ScienceMajorYear: 2021
You’re currently viewing the 2021 version of this component
This major combines the overviews of the Physics and the Mathematics and Statistics majors.
NOTE - Students undertaking this major may not be concurrently admitted to the Diploma in Mathematical Sciences (D-MATHSC).
Intended learning outcomes
On completion of this major, students should be able to demonstrate:
• Mastery of a broad spectrum of mathematical methods and ability to use these methods to solve diverse problems in the physical as well as the engineering sciences
• Appreciation of the distinction between mathematics which is a science based on rigorous proofs and physics which is a science based on experimental results
• Ability to move back and forth between abstract mathematical concepts and concrete physical results, transferring ideas from one subject to the other
• Aptitude at modelling physical phenomena in mathematical terms, as well proposing physical realizations of mathematical results
• Ability to communicate abstract mathematical ideas to an audience with a heterogeneous background in the physical and engineering sciences as well as communicating real-life experimental results
to mathematicians
• Intuition into applicability of mathematics in the real world and ability to extract a physical significance of mathematical facts
• Facility in successfully joining groups of researchers from diverse backgrounds including mathematicians as well as physicists and engineers.
Last updated: 3 May 2024
50 credit points
All of
Code | Name | Study period | Credit points
PHYC30018 | Quantum Physics | Semester 1 (Dual-Delivery - Parkville) | 12.5
MAST30021 | Complex Analysis | Semester 1 or Semester 2 (Dual-Delivery - Parkville) | 12.5

One of

PHYC30016 | Electrodynamics | Semester 1 (Dual-Delivery - Parkville) | 12.5
PHYC30017 | Statistical Physics | Semester 2 (Dual-Delivery - Parkville) | 12.5

One of
|
{"url":"https://handbook.unimelb.edu.au/2021/components/b-sci-major-28/print","timestamp":"2024-11-10T18:10:41Z","content_type":"text/html","content_length":"13831","record_id":"<urn:uuid:d8639da9-c4b8-4bc2-9f70-9f013b35c7b0>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00571.warc.gz"}
|
Shivika Garg tutor for Math, Math, Calculus, 10th Grade Math, 9th Grade Math, 8th Grade Math, Algebra and Algebra 2
Expert math tutor online with 10+ years of tutoring experience with school and college students. Provides 1-on-1 lessons, assignment help, and test prep in Algebra, Stats, Calculus, and Trigonometry.
I have a bachelor's degree in Information Technology and a master's degree in Mathematics. I've been interested in teaching since I was a child, and this led me to become a teacher; I have been tutoring for the last 10 years. I design my classes based on the needs of the students so that I can meet their learning requirements, which makes my tutoring sessions highly beneficial for them. Math is widely regarded as one of the most difficult subjects for students to master, yet it is also one of those subjects that offer a lot of practical applications in real life. It deals with topics such as Trigonometry, Algebra, Geometry, Calculus, etc. Learning math helps improve our reasoning ability, creativity, critical thinking, spatial thinking, problem-solving abilities, etc.
Can also teach
• 10th Grade Math
• Calculus
• Math
• +4 subjects more
Teaching methodology
The major goal of my online math tutoring classes is to make the subject highly interesting for the students and to overcome students' perception of math as a difficult subject to learn and understand. I teach my students quick ways to solve problems as well as tips on how to remember formulas and theories. Along with the tutoring sessions, I also help them with their assignments and homework, and provide extra classes in preparation for tests. I am a very approachable person, so my students can always come and speak to me whenever they need academic assistance.
|
{"url":"https://wiingy.com/tutors/000006860178/","timestamp":"2024-11-14T21:32:22Z","content_type":"text/html","content_length":"94719","record_id":"<urn:uuid:9091c03b-2602-4544-95ad-7971330eee76>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00230.warc.gz"}
|
Automatically Adjusting Rows in a Range | Microsoft Community Hub
Forum Discussion
Automatically Adjusting Rows in a Range
I run a daily golf league where the number of players is not consistent. The data is removed after every use. I use multiple tables that randomly creates teams using RAND function. It also sorts and
rank player performance and sum winnings from 3 tables (team performance, individual performance and skins). Every time I use the workbook, I need to manually adjust the table sizes in multiple pages
since any #N/A or #VALUE! in a table will interfere with sum, match, vlookup, sorting and rank functions. Is there a way to automatically adjust the number of rows in all of the tables based on the
the number of players (PLAYER COUNT)?
• Hi schlag58
Here are some possible solutions I can think of at the moment. You can try adapting one of these options to your case.
Option 1: Use Dynamic Formulas with the IFERROR Function
To avoid values like #N/A or #VALUE! from interfering with your formulas, you can use the IFERROR function. This allows the formula to continue working even if an error occurs.
=IFERROR(VLOOKUP(A2, Players_Table, 2, FALSE), "")
Here, the IFERROR function ensures that if the VLOOKUP generates an error, Excel will return a blank cell, preventing a negative impact on other formulas.
Option 2: Control the Number of Players with Formulas
Create a cell to automatically count the number of players present, for example:
=COUNTA(A2:A100)
Here, A2:A100 would be the range where the player names are listed.
Now, use this value to adjust your ranking and summing formulas. Combine this count with formulas that avoid processing empty or erroneous cells.
Option 3: Filter Valid Rows
Use the FILTER function (if available in your version of Excel) to create a dynamic range that automatically excludes cells with errors:
=FILTER(A2:B100, ISNUMBER(A2:A100))
This example filters all rows that have valid (numeric) values in column A.
Option 4: Using VBA
If you want to fully automate the table updates based on the number of players, you can use a small VBA script that checks the player count and adjusts the tables accordingly.
Sub AdjustTables()
Dim lastRow As Long
' Assuming the player names are in column A
lastRow = Cells(Rows.Count, 1).End(xlUp).Row
' Here you can adjust the table you want to resize dynamically
Range("Players_Table").Resize(lastRow, 2).Sort Key1:=Range("A2"), Order1:=xlAscending
End Sub
Thank You! These are solutions I had not thought of....I will give them a try!
|
{"url":"https://techcommunity.microsoft.com/discussions/excelgeneral/automatically-adjusting-rows-in-a-range/4246253","timestamp":"2024-11-08T17:51:25Z","content_type":"text/html","content_length":"218622","record_id":"<urn:uuid:846a2aa9-7a63-4a22-9f77-083db7c8dce9>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00453.warc.gz"}
|
Binary Coded Decimal - BCD - Electronics-Lab.com
Binary Coded Decimal – BCD
• Muhammad Shahid
Binary Coded Decimal
The Binary Coded Decimal is a 4-bit binary number coded to represent specifically a decimal number. The “coded” refers to the process of assigning a specific or unique binary code to a particular
decimal number. In the Binary Coded Decimal or shortly BCD, the decimal numbers from “0” to “9” are binary coded. The binary code representing each decimal number is called a Binary Coded Decimal.
The Binary Coded Decimals are used in digital systems for displaying decimal values, mainly.
The decimal numbers use a base-10 numbering system and, as such, there are a total of ten (10) decimal numbers from “0” to “9”. Likewise, binary numbers use a base-2 numbering system. In order to
code “0” to “9” decimal numbers in binary, ten (10) unique combinations of binary numbers are required, each representing a single decimal number. The number of combinations that can be produced by
binary digits or bits (n) is given by 2^n. Since 2^3 = 8 is less than 10 while 2^4 = 16 is not, representing ten (10) decimal numbers requires a binary code of at least four (4) binary digits or bits; Binary Coded Decimal uses exactly this minimum of four (4) bits to represent decimal numbers.
The famous Hexadecimal numbering system also uses the four (4) bits to represent equivalent binary numbers. The hexadecimal number uses a base-16 numbering system and there is a total of sixteen (16)
hexadecimal numbers. The Binary Coded Decimals are similar to Hexadecimal numbers. However, Binary Coded Decimal (BCD) encoding uses only “0” to “9” numbers, and the rest of the numbers i.e. from “A”
to “F” or from “10” to “15” are not required. The hexadecimal numbers from “0” to “9” are similar to the binary coded decimals, “0” to “9”, respectively.
The usage of Binary Coded Decimals to represent decimal numbers has many advantages in digital systems and, amongst these, the main advantage is the ease of conversion from and to decimals. However,
there is a wastage of six (6) numbers from “A” to “F” as discussed above. In Binary Coded Decimal, each decimal digit is represented by a four (4) bit binary number (BCD), and each decimal digit can
be represented by a weighted sum of binary values. It is known from previous articles, that the weight of a decimal digit, from right to left, increases by 10 times, whereas, of a binary digit (bit)
by 2 times. In four-bit BCD, the first, second, third, and fourth digit has a weight of 2^0 = 1, 2^1 = 2, 2^2 = 4, and 2^3 = 8, respectively. In the following table, the binary power or weight of
each BCD bit is shown.
Using the above table, the weighted sums of the bits of "0000" to "1001" equal the decimal numbers "0" to "9", respectively. From the leftmost (most significant) bit to the rightmost, the four bits of a BCD carry weights of 8, 4, 2, and 1, respectively. These weights sum up to constitute a decimal digit. For this reason, a BCD is also called an 8421 code, as it represents the relevant decimal digit in a 4-bit format.
The conversion of a decimal number consisting of multiple decimal digits requires obtaining an equivalent BCD for each decimal digit. For example, consider the decimal number 915[10], having three decimal digits, i.e. "9", "1", and "5". These decimal digits "9", "1", and "5" have the equivalent binary coded decimals "1001", "0001", and "0101", respectively. The combination of these binary coded decimals, 1001 0001 0101, is the BCD equivalent of 915[10].
The following table lists each decimal number against their respective binary coded decimal (BCD). The BCD or 8421 code is unique for “0” to “9” digits and for numbers greater than “9” such as “10”,
“11”, and “12” etc. each decimal digit is given the respective unique 8421 code, separately. For example, the “10” would make up an 8421 code of “0001 0000” where “0001” and “0000” are unique 8421
codes of “1” and “0”, respectively.
Decimal to BCD Conversion
There are multiple methods to obtain the BCD or 8421 code of a decimal number. Each method requires the processing of each decimal digit, separately, not the whole decimal number. First, and the
easiest, way is to memorize these ten (10) BCD codes or lookup the BCD truth table for each decimal digit and find the respective BCD/ 8421 code. The second method would require the application of
decimal to binary conversion on each (single) decimal digit i.e. repeated-division-by-2. The third method is to split each decimal digit into weights of bits summing up to desired decimal digit. The
weighted binary digits form the equivalent BCD/ 8421 code. A few examples of decimal to equivalent BCD conversions are given below.
Decimal to BCD Conversion Examples
The decimal numbers: 63[10], 869[10], and 4728[10] are converted into their equivalent BCD numbers by use of the above-given BCD truth table.
BCD to Decimal Conversion
As each decimal digit is represented by a 4-bit BCD and, therefore, it is necessary to split the given binary number into groups of 4-bits. The 4-bit groups are formed from the least significant side
(rightmost) to the most significant side (leftmost). Eventually, leading to the formation of the last 4-bit group which may require additional significant zero(s) to complete the 4-bit group. Each
4-bit group represents the respective BCD or 8421 code of that decimal digit. Using the above BCD truth table, equivalent decimal digits against respective BCD/ 8421 codes are obtained. The
combination of decimal digits ultimately gives the representation of the desired decimal number. In the following examples, the BCD to decimal conversion is carried out to explain the conversion
BCD to Decimal Conversion Examples
The binary numbers: 1000[2], 10011[2], and 10010110010101[2] are converted into their equivalent decimal numbers by splitting into 4-bit groups of 8421 (BCD) codes and then finding equivalent decimal
numbers against respective 8421 (BCD) codes using the above given BCD truth table.
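The two conversion procedures above can be sketched in a few lines of Python (a minimal illustration, not from the original article; the digit grouping and zero-padding follow the rules described in the text):

```python
def decimal_to_bcd(n: int) -> str:
    """Encode a non-negative decimal integer as BCD: one 4-bit group per digit."""
    return " ".join(format(int(d), "04b") for d in str(n))

def bcd_to_decimal(bits: str) -> int:
    """Decode a BCD bit string: pad to a multiple of 4, then read each group as a digit."""
    bits = bits.replace(" ", "")
    bits = bits.zfill(-(-len(bits) // 4) * 4)   # add leading zeros to complete the last group
    digits = [int(bits[i:i + 4], 2) for i in range(0, len(bits), 4)]
    if any(d > 9 for d in digits):
        raise ValueError("invalid BCD group (1010 to 1111 are unused codes)")
    return int("".join(str(d) for d in digits))
```

For example, `decimal_to_bcd(915)` yields `1001 0001 0101` as in the earlier example, and `bcd_to_decimal("10010110010101")` pads the 14-bit input to `0010 0101 1001 0101` before reading off the digits, exactly as the grouping procedure above describes.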
The Binary Coded Decimal is a mere representation of a single decimal digit and a decimal number represented by BCD encoding is not the actual binary equivalent of that decimal number. For example,
the BCD equivalent of 63[10] is 01100011[2], whereas, the pure binary equivalent of 63[10] is 00111111[2]. The BCD representation of decimals is useful for displaying decimal values etc. but is not
an efficient way of storing data and for performing arithmetic operations. The storage using BCD encoding would require an additional bit(s) compared to its equivalent true binary number. It is
because six (6) of the sixteen (16) binary codes are discarded, as described above. For example, representing a three-digit decimal number requires 12 bits in BCD, whereas a 10-bit pure binary number can accommodate any decimal number up to "1023". Moreover, BCD encoded binary numbers are not suitable for arithmetic operations. Consider a simple example: the addition of two BCD numbers that generates a carry bit. Adding this carry bit to the BCD code "1001" ("9") leads to the invalid BCD code "1010". The remedy requires conversion to decimal, i.e. 10[10], and then reverting to the BCD equivalent, i.e. "0001 0000". In practice, it is more suitable and convenient to convert BCD encoded numbers to pure binary numbers before performing any arithmetic operations.
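The carry problem described above can also be handled digit by digit with the classic decimal-adjust step used in hardware adders, a "+6" correction whenever a group exceeds 9. This technique is not described in the article (which converts via decimal instead), but it illustrates why "1001" plus a carry works out to "0001 0000":

```python
def bcd_add_digit(a: int, b: int, carry_in: int = 0) -> tuple:
    """Add two BCD digits (0-9); return (result digit, carry out).

    When the raw sum exceeds 9 it falls into the six unused codes,
    so 6 is added to skip over them and a carry is generated.
    """
    s = a + b + carry_in
    if s > 9:
        return (s + 6) & 0xF, 1
    return s, 0
```

Here `bcd_add_digit(9, 1)` returns `(0, 1)`, i.e. the digit "0000" with a carry into the next group, which together read "0001 0000" = 10.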
Binary Coded Decimal Decoder IC
The usage of Binary Coded Decimal is useful in applications requiring the display of information in decimals. The digital or electronic systems display this information on LCD or 7-segment LED
displays. In order to display decimal numbers, the binary numbers are converted to equivalent BCD numbers and a BCD decoder IC is used to display the decimal numbers on these displays. The widely
used 7-segment display uses a BCD to 7-segment decoder IC to display BCD numbers. The 7-segment displays come in two variants depending on the configuration of LEDs with the supply voltage i.e.
common anode, and common cathode. The common anode variant of the 7-segment requires a logic “LOW” at one of its segment’s inputs to turn it “ON”. Whereas, the common cathode 7-segment requires a
logic “HIGH” to lit a segment. The commercially available BCD to 7-segment decoder ICs are 74LS47 and 74LS48. The 74LS47 produces an active-low output and, as such, is suitable for a common anode
7-segment display. On the other hand, a common cathode 7-segment display requires an active-high output BCD to 7-segment decoder IC i.e. 74LS48. In the following figure, a 74LS48 (active-high output)
decoder IC with the common cathode 7-segment display is shown.
Figure 1: The common cathode 7-segment display with 74LS48 IC
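In software, the job of such a decoder is just a 10-entry lookup table from a BCD digit to segment outputs. The sketch below uses generic active-high common-cathode patterns (bit 0 = segment a through bit 6 = segment g, 1 = lit); note that the exact font of a real 74LS48 differs slightly for "6" and "9", so this is an illustration rather than that IC's truth table:

```python
# Active-high segment patterns for digits 0-9, bit order gfedcba (bit 0 = segment a).
SEGMENTS = [0x3F, 0x06, 0x5B, 0x4F, 0x66, 0x6D, 0x7D, 0x07, 0x7F, 0x6F]

def bcd_to_7seg(bcd: int) -> int:
    """Decode a 4-bit BCD value to common-cathode (active-high) segment outputs."""
    if not 0 <= bcd <= 9:
        raise ValueError("not a valid BCD code")
    return SEGMENTS[bcd]
```

For instance, the digit 8 lights all seven segments (0x7F), while the digit 1 lights only segments b and c (0x06).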
• The Binary Coded Decimal (BCD) is a 4-bit binary code meant to represent a decimal number. BCD has ten (10) unique binary codes, one for each decimal number from "0" to "9".
• The Binary Coded Decimal is also known as 8421 code where 8, 4, 2, and 1 represent the weight of the 4^th, 3^rd, 2^nd, and 1^st bit, respectively.
• The Binary Coded Decimal is similar to the Hexadecimal numbers but uses only “0” to “9” numbers and the rest of the numbers from “A” to “F” are wasted. Due to this, the storage of information in
BCD format is not efficient and requires an additional bit(s) compared to pure binary equivalent.
• The conversion from decimal to BCD requires obtaining equivalent BCD/ 8421 code from the truth table. Whereas, conversion from BCD to decimal is the exact opposite of the decimal to BCD
conversion process. However, this requires splitting the binary (BCD) number into 4-bit groups and may require additional significant zero(s) in the last (leftmost) group.
• It is appropriate to convert BCD numbers to pure binary numbers before performing any arithmetic operation.
• The BCD encoding is useful for displaying information in the form of decimal numbers and, as such, is widely used in 7-segment displays. The BCD to 7-segment decoder i.e. 74LS47, 74LS48, etc. is
used to display decimal numbers on 7-segment displays.
|
{"url":"https://www.electronics-lab.com/article/binary-coded-decimal-bcd/","timestamp":"2024-11-05T13:55:07Z","content_type":"text/html","content_length":"200176","record_id":"<urn:uuid:746f1373-f1ec-49b4-a604-fb39b0e91c21>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00489.warc.gz"}
|
Programme of Study for Mathematics
These are the areas of study for the English National Curriculum for Mathematics Key Stage 3.
Through the mathematics content, pupils should be taught to develop fluency, reason mathematically and solve problems. Click on an area below to find more information about the programme of study and
suggested activities from Transum.
In addition to the content above, through the learning of Mathematics pupils should develop more general thinking skills:
Ideal for conducting a skills inventory, curriculum map or any other type of sorting activity: have a look at the Statement Sorting page.
Checklists for various schemes and syllabuses can be found on the Objectives Checklists page.
See also US Common Core Standards for Mathematics
Schemes of Learning
The Schemes of Learning above were produced by White Rose Maths and are used here with permission.
|
{"url":"https://transum.org/Maths/National_Curriculum/Default.asp?KS=3","timestamp":"2024-11-14T18:07:43Z","content_type":"text/html","content_length":"16496","record_id":"<urn:uuid:5ce616fd-3fe9-4c59-b9f2-75c9794b7e17>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00768.warc.gz"}
|
The Ultimate Fractal Video Project!
...... where no man
has gone before ...... I guarantee you have never seen this part of the Mandelbrot before - the center point between the 2 main bulbs. Nobody goes there. Images take days to generate, iteration count
is too high (2,100,000,000 in this case). Even this short sequence took 4 months on 4 hi-end P3 systems!
This area has long intrigued me; it is very difficult to get there:
( REAL = -0.75000003190208660 , IMAG = +0.00022826353530089 )
Even FRACTINT could not distinguish the lakes from the canyon any further down than this (even at maximum iterations!) - and no other fractal program can come close. If you view it full size on a
normal computer monitor, at the midpoint stop the original Mandelbrot is approximately the size of the Earth, the canyon walls are exactly as you see them, and you are 0.35 miles from the center of
the Earth.
AVI 640x480 45 sec. 23 MEG ♦♦
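The extreme iteration counts quoted above reflect a well-known property of this pinch point (popularized by Dave Boll): a point c = -0.75 + εi takes roughly π/ε iterations of the escape-time loop to bail out, so the count grows without bound as the view approaches the real axis, which is what drives limits like the 2,100,000,000 quoted. A minimal escape-time sketch (an illustration, not the renderer used for the video):

```python
def escape_count(c: complex, maxiter: int = 10000) -> int:
    """Iterate z -> z*z + c from z = 0; return the step at which |z| exceeds 2."""
    z = 0j
    for n in range(maxiter):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return maxiter  # did not escape within the budget
```

With ε = 0.01, `escape_count(complex(-0.75, 0.01))` already takes a few hundred iterations; at the imaginary offset of the coordinates above (about 0.000228) the count is over ten thousand, and it climbs steeply from there.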
|
{"url":"http://www.fractal-animation.net/ufvp.html","timestamp":"2024-11-08T17:37:38Z","content_type":"text/html","content_length":"41345","record_id":"<urn:uuid:29b961c7-5e6e-4366-ab9d-b62348d4a02a>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00630.warc.gz"}
|
Efficiency and Effectiveness of State Transport Undertakings in India: A DEA Approach
Theoretical Economics Letters, Vol. 7, No. 6 (2017), Article ID: 79219, 14 pages
Efficiency and Effectiveness of State Transport Undertakings in India: A DEA Approach
Sanjay K. Singh, Amit P. Jha
Indian Institute of Management, Lucknow, India
Copyright © 2017 by authors and Scientific Research Publishing Inc.
This work is licensed under the Creative Commons Attribution International License (CC BY 4.0).
Received: August 21, 2017; Accepted: September 18, 2017; Published: September 21, 2017
The Indian bus transport industry is dominated by the publicly owned State Transport Undertakings (STUs). Most of the STUs have, over the years, accumulated financial losses. However, since STUs
offer their services with a social aim, financial losses faced by them may not be bad per se. For publicly owned organizations, efficiency and effectiveness are more important than mere
profitability. This paper attempts to measure the efficiency and effectiveness of fifteen major STUs in India for the period 2003-04 to 2013-14 using Data Envelopment Analysis (DEA). The paper also
examines STUs’ scale elasticity and its relationship with firm size. It is found that the STUs operating in the state of West Bengal are not only the least efficient but also the least effective
whereas Andhra Pradesh state road transport corporation, which is the largest bus transport operator in the world, is the most efficient and effective operator. In general, there is a strong positive
correlation between STUs’ efficiency and their effectiveness. On the other hand, there is a negative relationship between size of the STUs and returns to scale; large size firms are showing
decreasing returns to scale whereas small size ones are operating on increasing returns to scale. Therefore, a size correction through mergers, demergers or altering scale of operation, as the case
may be, will be economically prudent.
DEA, Efficiency, Effectiveness, Scale Elasticity, Indian STUs
1. Introduction
Road transport in India has gained importance over other modes of transportation during the last few decades. If we look at the history of development of modern modes of transportation in India, we
notice a departure in the trend between the first half of the previous century and the latter. In pre-independence India, railway was the dominant mode of transport. Market share of railways started
to decline since 1950-51 and the number of buses in India went up from a meagre 56,800 in 1960-61 to more than half a million by the turn of the century. The number of buses in India was about
1,887,000 during 2013-14, representing an average compounded annual growth rate of 6.8 percent since 1960-61.
In India, publicly owned bus transport companies, known as State Transport Undertakings (STUs), have an important role in passenger bus transport since private sector is highly fragmented. Presently,
STUs in India are operating with 137,000 buses and employing close to 700,000 people. During the year 2013-14, latest year for which data are available, the total bus-kilometres operated by them were
around 15 billion, the number of passengers carried was over 23 billion, and the volume of operations had crossed the mark of 500 billion passenger-kilometres. From the very beginning, STUs in India
faced huge financial losses from their operation. STUs’ total revenue during the year 2013-14 was just Rs. 431.19 billion in comparison to total cost of Rs. 502.31 billion. Due to this, they faced a
net loss of more than Rs. 71 billion during the year 2013-14. On an average, every bus-km operated by these undertakings resulted in a loss of around Rs. 5 during the same year.
However, since STUs in India offer their services with a social aim, financial losses faced by them may not be bad per se. For publicly owned public transport organizations, efficiency and
effectiveness are more important than mere profitability. Efficiency and effectiveness evaluation in public transportation is therefore an issue of foremost importance. There are several approaches
to measure transport operators’ efficiency and effectiveness. Parametric and non- parametric frontiers are the two main approaches for this (for comparison, see, [1] [2] [3] [4] [5] ). It is well
known that parametric techniques (such as, Stochastic Frontier Analysis or SFA) require a set of distributional assumptions which may or may not hold for the set of firms in question. SFA is based on
the work of Aigner et al. and further enriched by other researchers such as Battese and Coelli [6] [7] [8] . On the other hand, non-parametric techniques such as index number approach and Data
Envelopment Analysis (DEA) require no such assumptions regarding variable distribution. The DEA technique is based on the seminal work by Farrell [9] . DEA, in its present form, was developed almost
four decades ago by Charnes et al. and since then is refined by successive researchers such as Banker et al., Lovell and Rouse and various others [10] [11] [12] .
Benchmarking tools and productivity measuring methodologies are used by various researchers in transportation sector. Kumbhakar and Bhattacharya (1996) considered an econometric approach with a
translog cost function for production technology to measure total factor productivity growth and technical change for thirty one publicly owned passenger bus companies in India during 1983-87 [13] .
Jørgensen et al. (1997) estimated a stochastic cost frontier model for the Norwegian bus industry to estimate the efficiency. Interestingly, they found no significant differences in the efficiency
between privately and publicly owned operators, though it is hard to generalize the findings to developing economies [14] . Viton (1997) applied DEA on a relatively large sample of both privately and
publicly operated bus systems from the United States to study their efficiency. Two output and multiple input measures such as, average speed, average fleet age, fleet size, fuel consumed, staff
employed in various divisions, etc., were used and efficiencies were estimated, though, only returns to scale characteristics and not quantitative estimates of the same were reported [15] .
Using a frontier approach for cost inefficiencies in Indian state road transport undertakings, Jha and Singh (2001) made the analyses and concluded that smaller STUs are, in general, more efficient
[16] . Singh and Venkatesh (2003) compared efficiency across STUs using a production frontier approach [17] . As the dynamics of the industry is rapidly changing a renewed in-depth analysis of Indian
STUs is called for. Karlaftis (2003) concluded that the results of various analysis indicated that efficiency and returns to scale findings differ substantially depending on the evaluation
methodology used [18] . This necessitates further analysis of Indian STUs using the most general DEA models by incorporating newly developed theoretical frameworks to applied research.
Boame (2004) has studied technical efficiencies of urban transit systems of Canada by using DEA with bootstrapping and the average technical efficiency of transit systems was found to be 78 per cent.
Transit systems mostly were found to experience increasing returns to scale [19] .
Matthew G. Karlaftis (2004) used DEA approach for evaluating the efficiency and the effectiveness of urban transit systems in the US context and has further used goal programming technique by
utilising Charnes et al. (1996) methodology to estimate return to scale measures for groups of transit systems through multiplicative DEA. The method develops an empirical efficient production
function via a Robustly Efficient Parametric Frontier in a two-stage approach [20] [21] . Odeck and Alkadi (2004) applied DEA to Norwegian rural and urban bus operators and also used nonparametric
testing for efficiency and scale differences with respect to ownership, region of operation and scope of operation [22] .
Sampio et al. (2008) have analysed Brazilian and some European transport systems using CRS assumption of DEA with three inputs and one output. A causal link between efficiency and system of tariffs
was also established [23] . Saxena and Saxena (2010) used DEA to measure efficiencies of some of the Indian STUs. Scale efficiencies were calculated but no attempts were made to estimate scale
elasticities [24] .
Agarwal et al. (2010) estimated the technical efficiency of public transport sector in India for thirty five different STUs for the year 2004-2005 by employing CCR input-oriented DEA model. Fleet
size, number of staff, fuel consumption and a measure for accidents were inputs and bus utilization, passenger km and load factor were the outputs. On the basis of the status of technical efficiency
(TE), it was concluded that the performance of the STUs were good but not optimal. The mean overall TE was found to be 83.26 per cent [25] . Jordá et al. (2012) studied, by using slack-based measures
model, the technical efficiency of the Spanish urban bus companies for 2004-09 [26] .
Relative performance of twenty six Indian public urban transport organizations with 19 criteria―grouped in 3 heads; operations, finance, and accident- based―was carried out by Vaidya (2014). The
author computed efficiency using the CCR DEA approach. Analytical Hierarchy Process was used before applying DEA to assign weights to each criteria group and finally, a Transportation Efficiency
Number (TEN) was developed to quantify the overall performance [27] . Hanumappa et al. (2015) studied the premium bus services operated under Bangalore Metropolitan Transport Corporation using input
oriented CCR model of DEA. Analysis indicated that most depots were efficient, but some routes have significant opportunities for further improvement [28] . Venkatesh and Kushwaha (2017) is a recent
attempt to measure technical efficiency of passenger bus companies in India using non-radial DEA [29] .
Only a few studies evaluate Indian STUs using DEA, and very few exploit the full potential of DEA. This paper attempts to estimate the efficiency and effectiveness, along with the combined performance, of a fairly representative sample of Indian STUs by applying DEA to panel data. The paper further attempts to fill the existing literature gap by estimating scale elasticity measures for the STUs, which are little explored in the literature. Furthermore, the paper attempts to establish a connection between STU size and returns to scale. This may help managers and policymakers determine the optimal size of the STUs.
The remainder of this paper is organized in the following manner: Section 2 deals with the theoretical framework used in the study. Section 3 describes the data and the sample STUs. The results are
discussed in Section 4. Section 5 presents the conclusion of the study.
2. Theoretical Framework
DEA is a well-established analytical tool for making comparisons among Decision Making Units (DMUs). The methodology, with its extensions, has rich applicability in applied research. We have adopted the Variable Returns to Scale (VRS) DEA model for this study. This general model was proposed by Banker et al. [11] . Though well-known, the model is sparingly used by transportation researchers in the Indian context; most authors have preferred the original model proposed by Charnes et al. [10] , which works under the assumption of Constant Returns to Scale (CRS). One of the objectives of our study is to estimate the Returns to Scale (RTS) experienced by the STUs, and the VRS model can be extended to estimate RTS.
A Data Envelopment Analysis can be conducted either from the output orientation or from the input orientation. We have used the output orientation in our analysis. We assume that we have $j \in \{1,\cdots,n\}$ DMUs (STUs in our case). We further assume that the DMUs take $i \in \{1,\cdots,m\}$ inputs and produce $r \in \{1,\cdots,s\}$ outputs; let $x_{ij}$ be the $i$-th input of the $j$-th DMU and $y_{rj}$ the $r$-th output of the $j$-th DMU. All the DMUs are evaluated for each of the $T$ periods, $t \in \{1,\cdots,T\}$. The efficiency of the DMUs can be obtained from the following linear program, which is based on Banker et al. [11] .
$\text{minimize} \quad \sum_{i=1}^{m} v_i x_{i0} + \xi$

$\text{subject to} \quad \sum_{i=1}^{m} v_i x_{ij} - \sum_{r=1}^{s} u_r y_{rj} + \xi \ge 0, \quad j=1,\cdots,n$

$\sum_{r=1}^{s} u_r y_{r0} = 1$

$u_r, v_i \ge \varphi$ and $\xi$ is unrestricted,

where $\varphi$ is a small non-Archimedean positive number.
In the CRS model, the variable $\xi$ is dropped from the formulation. The performance indicator in the output orientation is the reciprocal of the objective function value. The relative performances thus measured are technical efficiencies.
For the dataset considered in our analysis, we have n = 15 DMUs, m = {1, 2, 3} inputs and s = {1, 2} outputs. We have considered two outputs: passenger-km and bus-km and three inputs: number of staff
employed, total fuel consumed and number of buses held. Output oriented DEAs are applied once for combined output, i.e., both passenger-km and bus-km as outputs and once each for one output scenario,
i.e., once each for passenger-km and bus-km. We have considered T = 11 time periods. All the linear programming problems are solved for each of the T = 11 time periods. We have coded the linear
programming problems in one of the standard statistical software, R. In sync with Karlaftis (2004), we have also obtained efficiency-effectiveness matrix [20] . The output measure passenger-km
captures effectiveness and the output measure bus-km corresponds to efficiency.
In the next stage we compute a returns-to-scale measure for each firm and each time period. To formalize the discussion on scale elasticity estimation, assume a firm employs a vector of inputs $X$ to produce a vector of outputs $Y$. Let all inputs be subjected to a proportional expansion $\alpha$, and let the corresponding maximum proportional expansion in all outputs be $\beta$, such that

$\varphi(\alpha X, \beta Y) = 0$
By definition, a measure of scale elasticity is

$\frac{\partial \beta}{\partial \alpha} = \epsilon(X,Y)$
Sahoo and Tone (2015) have utilized Panzar and Willig (1977) to obtain a quantitative estimate of scale elasticity based on the DEA approach [30] [31] . The measure of scale elasticity for firm $k$ is given by

$\epsilon_o(X_k, Y_k) = 1 - \frac{\xi}{D_o(x,y)}$
In the above equations, $\epsilon$ denotes the elasticity measure and $D$ denotes the value of the objective function in the corresponding linear programming formulation of the VRS model; the subscript $o$ stands for output orientation. Values of $\xi$ are obtained from the solutions of the linear program formulation of the DEA. We have calculated elasticities using the above formula for each firm, in each time period, for the two-output scenario. In this way, we obtained not only returns-to-scale characteristics but also a scale elasticity measure. A value of $\epsilon$ equal to (or very close to) 1 corresponds to constant returns to scale, a value less than 1 corresponds to decreasing returns to scale, and a value greater than 1 corresponds to increasing returns to scale. This is a quantitative estimate of returns to scale. We emphasize that for scale elasticity estimation we have used output-oriented DEA, because the formula for input-oriented DEA may fail to give a finite elasticity measure when the objective function value of the DEA linear program and the scale characteristic variable $\xi$ are equal in magnitude but $\xi$ is negative in sign [30] . Next, we partitioned our dataset, with two outputs, into three categories (Large, Medium and Small sized, respectively) using k-means clustering for each year, in order to establish the relationship between firm size and the RTS estimate.
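The output-oriented VRS multiplier program above, together with the elasticity formula, can be prototyped with an off-the-shelf LP solver. The sketch below is our illustration (the paper's authors coded their models in R; that code is not reproduced here). It solves the linear program for one DMU and returns the efficiency score, i.e. the reciprocal of the objective value, along with the scale elasticity $1 - \xi/D_o$:

```python
import numpy as np
from scipy.optimize import linprog

def bcc_output(X, Y, k, phi=1e-9):
    """Output-oriented BCC (VRS) multiplier model for DMU k.

    X: (n, m) input matrix, Y: (n, s) output matrix.
    Decision variables are stacked as [v_1..v_m, u_1..u_s, xi].
    Returns (efficiency, scale_elasticity).
    """
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([X[k], np.zeros(s), [1.0]])        # minimise v.x_k + xi
    # v.x_j - u.y_j + xi >= 0  rewritten as  -v.x_j + u.y_j - xi <= 0
    A_ub = np.hstack([-X, Y, -np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.concatenate([np.zeros(m), Y[k], [0.0]])[None, :]   # u.y_k = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=[(phi, None)] * (m + s) + [(None, None)],  # xi unrestricted
                  method="highs")
    D_o = res.fun                                          # optimal objective value
    xi = res.x[-1]
    return 1.0 / D_o, 1.0 - xi / D_o
```

On a toy data set with one input and one output, say DMUs at (x, y) = (2, 2), (4, 2) and (4, 4), the first and third units come out efficient under VRS, while the second scores 0.5, since its output could be doubled at the same input level.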
3. The Data and the Sample STUs
Annual data for a sample of fifteen STUs from 2003-04 to 2013-14 are used for this study. The primary source of data is Performance Statistics of STUs, 2003-04 to 2013-14 published for the
Association of State Road Transport Undertakings (ASRTU), New Delhi by the Central Institute of Road Transport, Pune, India. Sample is based on availability of consistent data. Sample STUs include
Andhra Pradesh State Road Transport Corporation (APSRTC), Maharashtra State Road Transport Corporation (MSRTC), Karnataka State Road Transport Corporation (KnSRTC), North West Karnataka Road
Transport Corporation (NWKnRTC), Gujarat State Road Transport Corporation (GSRTC), Uttar Pradesh State Road Transport Corporation (UPSRTC), Rajasthan State Road Transport Corporation (RSRTC), State
Transport Haryana (STHAR), South Bengal State Transport Corporation (SBSTC), Kadamba Transport Corporation Limited (KDTC), Orissa State Road Transport Corporation (OSRTC), Kerala State Road Transport
Corporation (KSRTC), North Eastern Karnataka Road Transport Corporation (NEKnRTC), North Bengal State Transport Corporation (NBSTC) and Bihar State Road Transport Corporation (BSRTC). The descriptive
statistics of the sample STUs for the period 2013-14 is presented in Table 1.
Sample STUs are publicly owned, operate throughout their respective jurisdiction (often throughout the state), mainly provide inter-city and mofussil (rural) bus transport services, and do business
in the field of passenger transportation only, but differ in size and the level of output produced. The size of the sample STUs, as measured by bus-kilometres (BKm) in 2013-14, ranges from 7 million
BKm for BSRTC to 2623 million BKm for APSRTC. Fleet strength of
Table 1. Descriptive statistics of the sample STUs during 2013-14.
STUs varies drastically, from 414 buses for BSRTC to 22,145 buses for APSRTC. Number of workers employed by STUs also varies from less than 1000 for BSRTC and OSRTC to more than 100,000 for APSRTC
and MSRTC. In almost all respect, BSRTC is the smallest STU whereas APSRTC is the largest one. In fact, APSRTC is the world’s largest bus transport operator.
The sample is fairly good representative of the publicly owned bus transport industry; sample STUs constitute two third of the publicly owned bus transport industry in India. In 2013-14, they
operated with 93,582 buses which is more than two third of the industry fleet size. During the same year, sample STUs consumed 2877 million litres of HSD which is more than 70% of the industry
consumption. The total staff employed by sample STUs was 464,661 in 2013-14, which is again nearly a two third of the total staff employed by all the STUs. Our sample thus covers almost two third of
the entire state owned public transport sector. Furthermore, our sample is fairly good representative of the entire state owned public transport sector in the sense that the firm size varies from
small STUs such as BSRTC and OSRTC to large STUs such as APSRTC and MSRTC. The sample also includes medium size STUs such as GSRTC, UPSRTC and RSRTC.
We have considered a two output and three input model. Passenger-km and bus-km are our two of the outputs and labour/staff employed, fuel consumed and number of vehicles used are our input measures.
All the data points are measured on a per year basis. Outputs are measured in million passenger-km and million bus-km, respectively. Inputs are measured as the number of staff employed; fuel
consumption, measured as HSD consumed in kilo-litres; and number of vehicles, measured as the number of buses held by the respective STUs. Some of the STUs such as those operating in north eastern
states and Tamil Nadu―where STUs are fragmented―were not taken in the sample because of at least two reasons. Firstly, all the required data fields for the entire time series under consideration were
not available, perhaps because of non-reporting by the concerned STU. Secondly, smaller STUs may lead to bias in the analysis.
Charnes et al. (1996) have suggested that the number of DMUs in a DEA should be at least thrice the number of variables considered. In the DEA literature, number of inputs, number of outputs and the
total number of DMUs considered are represented by m, s and n, respectively. In our case, m = 3, s = 2, and n = 15. So, the criterion $n\ge 3\left(m+s\right)$ is satisfied, since $15\ge 3\left(3+2\right)=15$ [21] .
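As a minimal illustration of how DEA assigns relative efficiency scores, the sketch below uses a single input and a single output under constant returns to scale, where the score reduces to a productivity ratio against the best performer. The firm names and figures are hypothetical; the paper's actual model (three inputs, two outputs, VRS) requires solving a linear program for each DMU and is not reproduced here:

```python
# Hypothetical (input, output) pairs: staff (thousands), bus-km (millions).
firms = {"A": (10, 300), "B": (20, 800), "C": (5, 100), "D": (40, 1200)}

def crs_efficiency(firms):
    """Single-input/single-output CRS DEA: a firm's score is its own
    productivity ratio divided by the best ratio in the sample."""
    ratios = {name: y / x for name, (x, y) in firms.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

scores = crs_efficiency(firms)
# "B" defines the frontier (score 1.0); the others are scored against it.
```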
4. The Results
Performance of the STUs can be viewed based on either passenger-km or bus-km (Karlaftis, 2004), along with an overall performance measure [20]. Whereas higher bus-km for a given input set may be called more efficient, a still better view is the passenger-km based one. It is the passenger-km which is of prime importance in public transport, and hence a higher value for a given input set corresponds to higher effectiveness. To evaluate these, we have employed DEA with one output―either passenger-km or bus-km―along with an overall performance measure based on a combined output model. Relative
performances along with the temporal changes in the cross sectional performances have been examined. Analysis has been performed based on the annual data for the entire time period considered. The
efficiency and the effectiveness scores are reported for selected years in Table 2. Mean efficiency and mean effectiveness scores are calculated based on arithmetic average of the annual scores and
STUs are ranked based on the same.
Table 2 results show that substantial inefficiencies exist in few STUs; average inefficiency in NBSTC is 33.48% followed by KSRTC (28.30%), SBSTC (21.06%), BSRTC (11.89%), and MSRTC (10.26%).
However, three STUs, APSRTC, RSRTC, and OSRTC, experienced efficiency scores of 100% during the sample period. Average inefficiency in the remaining seven STUs is less than 10%, varying from 1.04% for UPSRTC to 8.83% for KDTC. As far as effectiveness is concerned, two STUs, APSRTC and OSRTC, are the most effective ones with a 100% score during the sample period. As is the case with efficiency,
average effectiveness score also varied substantially; NBSTC is not only the least efficient but also the least effective STU. Average ineffectiveness score of NBSTC is 33.40% followed by MSRTC
(27.18%), SBSTC (25.80%), KDTC (15.66%), NWKnRTC (14.3%), BSRTC (12.82%), KSRTC (12.71%), GSRTC (11.22%), NEKnRTC (11.09%), and STHAR (10.49%). It is interesting to note that the average
ineffectiveness score exceeds 10% for ten STUs whereas average inefficiency score exceeds 10% for
Table 2. Efficiency and effectiveness scores of sample STUs based on a single output.
Table 3. Overall performance scores of the sample STUs for select years based on combined output.
only five STUs. In general, STUs’ effectiveness score is lower than their efficiency score. Table 2 shows that the average effectiveness score exceeds the average efficiency score only for two out of
fifteen sample STUs, KnSRTC and KSRTC. Three STUs have almost identical scores for both efficiency and effectiveness whereas remaining ten STUs have lower effectiveness score.
STUs are also evaluated based on overall relative performance scores, obtained in dual output scenario, across the temporal and cross sectional dimensions. The overall performance scores of the
sample STUs are calculated for all the years from 2003-04 to 2013-14 and reported for selected years in Table 3 along with mean performance scores, calculated as the arithmetic average of the annual
performance scores and STUs are ranked accordingly. Table 3 reveals that the three STUs, APSRTC, RSRTC, and OSRTC, achieved perfect score of 100% during every year of the sample period. NBSTC
(69.2%), SBSTC (82.7%), KSRTC (87.3%), BSRTC (88.4%), and MSRTC (89.7%) are among the worst performers having performance score of less than 90%. This shows that the state owned bus transport
operators operating in the state of West Bengal, Bihar, Kerala, and Maharashtra need to improve their performance significantly. They can learn from the operators operating in the state of Andhra
Pradesh, Rajasthan, and Orissa.
An important research question in transportation research, dealt in this paper, is whether there is a correlation between effectiveness and efficiency. Effectiveness and efficiency are, in fact,
positively correlated with a correlation coefficient of 0.768. Effectiveness and overall performance score are positively correlated with a correlation coefficient of 0.893. Efficiency, on the other
hand, is highly correlated with overall performance score with a correlation coefficient of 0.928. The results are summarized in correlation matrix presented in Table 4.
One of the most important research focus of this paper, not so well explored in the transportation sector research in India, is the establishment of a connect between quantitative measure of scale
elasticity and firm size. The results of scale elasticity or returns to scale estimation are discussed next followed by a discussion on firm size and scale elasticity relationship. A value of one
denotes constant returns to scale, a value greater than one indicates increasing returns to scale and a value less than one indicates decreasing returns to scale.
Table 5 reports the quantitative measures of scale elasticities for sample STUs for selected years. Figures show that seven out of fifteen STUs operate on increasing returns to scale, same number of
STUs operate on decreasing returns to scale, and one STU, RSRTC, operates on constant returns to scale. As expected, we found a negative relationship between size of the STUs, measured in terms of
passenger-km served, and the scale elasticity. The correlation coefficient between these two comes out to be −0.583. The relationship is moderate and is statistically significant with a t-statistic
of 2.585.
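The significance claim above can be checked directly. For a sample Pearson correlation r with n observations, the standard test statistic is t = r·√(n−2)/√(1−r²); with |r| = 0.583 and n = 15, this reproduces the reported value:

```python
import math

r, n = 0.583, 15                                  # |correlation|, number of STUs
t = r * math.sqrt(n - 2) / math.sqrt(1 - r**2)
# t comes out near 2.59, matching the reported 2.585 up to rounding of r
```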
We have also segregated the STUs, using k-means clustering in three sizes: Large, Medium and Small. A summary statistics for the scale elasticity estimates, with STUs being grouped based on their
size, is given in Table 6. It shows that the medium size STUs, such as RSRTC, are operating at close to optimal scale. Large size firms, such as APSRTC and MSRTC, are showing decreasing returns
whereas small size firms, such as BSRTC, NBSTC, SBSTC, and KDTC, are showing increasing returns to scale. This means that both large as well as small size STUs are operating at non-optimal scale.
Since RSRTC is operating at the most productive scale size, optimal fleet size for STUs would be the fleet size of
Table 4. Correlation statistics for performance measure.
Table 5. Scale elasticity measures for selected years and mean for 2003-14 for sample STUs.
Table 6. Summary statistics of scale elasticity, STUs grouped on size.
the RSRTC, that is around 4500 to 5000 buses. Since fleet strength of large size STUs, such as APSRTC (22,145) and MSRTC (18,055), is far more than the optimal one, the division of these STUs would
lead to higher level of productivity. On the other hand, smaller STUs operating in the same state, such as NBSTC and SBSTC of West Bengal, may be merged for the same reason.
5. Conclusions
DEA, as a technique of benchmarking and relative performance evaluation of state owned utilities, like STUs, is a frequently used tool in applied economic research. In this paper, we have used an
output oriented DEA methodology with VRS assumption to estimate the efficiency, effectiveness, and overall performance scores of fifteen major STUs in India. VRS is, in fact, an appropriate
assumption because our analysis revealed that most of the STUs are operating at scale elasticities different from unity. We found that the three STUs, APSRTC, RSRTC, and OSRTC, are the most efficient
ones with 100% efficiency score whereas NBSTC (66.52%), KSRTC (71.70%), and SBSTC (78.94%) are among the least efficient ones. Moreover, APSRTC and OSRTC not only achieved 100% efficiency score but
also 100% effectiveness score whereas NBSTC (66.60%) and SBSTC (74.20%) along with MSRTC (72.82%) are among the least effective firms. This shows that the STUs operating in West Bengal, NBSTC and
SBSTC, are not only the least efficient but also the least effective, whereas Andhra Pradesh State Road Transport Corporation, which is the largest bus transport operator in the world, is among the most efficient and effective operators. Consistent with this, there is, in general, a strong positive correlation between STUs' efficiency and their effectiveness. We also evaluated STUs based on overall relative
performance obtained in dual output scenario, bus-km as well as passenger-km. We found that the three STUs, APSRTC, RSRTC, and OSRTC, achieved perfect score of 100% whereas both the STUs of West
Bengal, NBSTC (69.2%) and SBSTC (82.7%) performed worse than others.
The second part of the research concentrated on estimating the returns to scale and its relationship with firm size. The main purpose is to look for those STUs which are operating at or close to
optimal size. We found that there is negative relationship between size of the STUs and returns to scale; large size firms are showing decreasing returns to scale whereas small size ones are
operating on increasing returns to scale. In general, medium size firms such as RSRTC, NWKnRTC, NEKnRTC, and STHAR are operating with constant returns to scale. This means that both large as well as
small size STUs are operating at non-optimal scale. We found that the optimal fleet size for STUs would be around 4500 to 5000 buses. Since fleet strength of some of the large size firms is far more
than the optimal one, their demerger would be desirable and likely to lead to higher level of productivity. On the other hand, smaller STUs operating in the same state, such as NBSTC and SBSTC of
West Bengal, should be merged. This will be in the larger interest of the public, as STUs are in general, continuously making substantial losses. A size correction through mergers, demergers or
altering scale of operation, as the case may be, will be economically prudent.
This paper is part of a seed money project sponsored by the Indian Institute of Management, Lucknow, India. We are thankful to the Director and Dean (Research) of the institute for providing grant
for the study.
Cite this paper
Singh, S.K. and Jha, A.P. (2017) Efficiency and Effectiveness of State Transport Undertakings in India: A DEA Approach. Theoretical Economics Letters, 7, 1646-1659. https://doi.org/10.4236/
|
{"url":"https://file.scirp.org/Html/7-1501239_79219.htm","timestamp":"2024-11-06T15:34:37Z","content_type":"application/xhtml+xml","content_length":"76289","record_id":"<urn:uuid:6901b736-5775-4308-9a1a-ef2b11a3e03a>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00001.warc.gz"}
|
Flow Regime Characterization in Emergency Relief System Design
By Benjamin Doup, Ph.D., Senior Nuclear and Chemical Engineer, Fauske & Associates, LLC
Flow regime characterization in emergency relief system (ERS) design is important because it can impact your required vent size and will impact the quantity and rate of liquid material that is vented.
The quantity and rate of liquid material that is vented will affect the design of downstream effluent handling equipment. The flow regime, in the context of emergency relief system design, refers to
the interplay between vapor (and/or gas) and liquid phases in a vessel. The flow regime is a characteristic of the venting material during an emergency relief.
The Design Institute for Emergency Relief Systems (DIERS) program [1] contributed to the understanding of the behavior of the vapor liquid two-phase flow in vessels and supplied reasonably easy to
use correlations that generally describe the two phase flow behavior. The resulting correlations are based upon a drift flux model [2-3] approach. The drift flux model treats the vapor and liquid
phases as a single homogeneous mixture, but then describes the difference in velocity and the non uniform distribution of the two phases using constitutive correlations. The drift velocity, which
models the velocity difference between the vapor and liquid, is defined as

⟨⟨V[gj]⟩⟩ = ⟨α(u[g] − j)⟩/⟨α⟩ (1)
where j = summation of the local vapor and liquid superficial velocities, m/s
u[g] = local vapor velocity, m/s
V[gj] = vapor drift velocity, m/s
α = local vapor void fraction, -
⟨ ⟩ = indicates flow area averaging, -
The distribution coefficient, which models the nonuniform distribution of the phases, is defined as

C[0] = ⟨αj⟩/(⟨α⟩⟨j⟩) (2)
where C[0] = distribution coefficient, -
Two flow regimes were initially defined in the DIERS program [1] utilizing the drift flux model. These two flow regimes are bubbly and churn-turbulent flow regimes. Other flow regimes such as the
wall boiling or foamy regimes exist, but these flow regimes are not discussed further in this article.
The churn-turbulent flow regime is characterized by larger bubbles that can be elongated and the flow structure is very turbulent partially due to bubble induced turbulence. These bubbles have a
smaller surface area to volume ratio. Figure 2 shows an image of churn-turbulent flow in a vertical air-water test section. This image was obtained in a 2” diameter cylindrical test section, which is much smaller than most vessels, and the wall can impact the flow structure. These wall effects are not as pronounced in the bubbly flow shown in Figure 1.

Figure 1: Image of Bubbly Flow in a Vertical Air-Water Test Section - Courtesy of Dr. B. Doup and Dr. X. Sun (The Ohio State University)
The form of the drift velocity used in the original DIERS program [1] is given by

⟨⟨V[gj]⟩⟩ = u[∞] (1 − α)^n/(1 − α^m) (3)
where m= 3 for the bubbly flow regime and approaches ∞ for the churn turbulent flow regime
n= 2 for the bubbly flow regime and 0 for the churn turbulent flow regime
u[∞] = bubble rise velocity, m/s
Figure 2: Image of Churn-turbulent Flow in a Vertical Air-Water Test Section - Courtesy of Dr. B. Doup and Dr. X. Sun (The Ohio State University)
They related the vapor superficial velocity to the average void fraction by assuming the average vessel void fraction is equal to the local void fraction for bubbly flow and by averaging the void
fraction in churn turbulent flow over the height of the two-phase mixture. Grolmes and Fisher [4] re-investigated these correlations and derived an alternative form of the bubbly correlation that was
obtained without assuming the average vessel void fraction is equal to the local void fraction. The vapor superficial velocity relations from the original DIERS program [1] are given in Equation 4.

j[g]/u[∞] = α‾(1 − α‾)^2/[(1 − α‾^3)(1 − C[0]α‾)] (bubbly)
j[g]/u[∞] = 2α‾/(1 − C[0]α‾) (churn-turbulent) (4)
where j[g] = vapor superficial velocity, m/s
α‾ = vessel average void fraction, -
The pressure force is defined in Equation 5

F[p] = (π/6) d[b]^3 ρ[f] g (5)
where d[b] = bubble diameter, m
F[p]= pressure force, kg∙m/s^2
g = acceleration due to gravity, m/s^2
ρ[f] = liquid density, kg/m^3
The body force is defined in Equation 6

F[g] = (π/6) d[b]^3 ρ[g] g (6)
where F[g] = body force, kg∙m/s^2
ρ[g] = vapor density, kg/m^3
The drag force is defined in Equation 7

F[D] = C[D] (π/4) d[b]^2 (ρ[f] u[∞]^2/2) (7)
where C[D] = drag coefficient
F[D] = drag force, kg∙m/s^2
The drag coefficient can be expressed as shown in Equation 8

C[D] = (2/3) d[b] [g(ρ[f] − ρ[g])/σ]^(1/2) (8)
where μ[f] = liquid dynamic viscosity, Pa∙s
σ = surface tension, N/m
The resulting bubble rise velocity is then

u[∞] = √2 [σ g (ρ[f] − ρ[g])/ρ[f]^2]^(1/4) (9)
Researchers have replaced the √2 factor by experimentally determined coefficients. Peebles and Garber [5] (according to Wallis [3]) present the bubble rise velocity as

u[∞] = 1.18 [σ g (ρ[f] − ρ[g])/ρ[f]^2]^(1/4) (10)
which was used in the DIERS program for the bubble rise velocity in the bubbly flow regime.
Harmathy [6] (according to Wallis [3]) presents the bubble rise velocity as

u[∞] = 1.53 [σ g (ρ[f] − ρ[g])/ρ[f]^2]^(1/4) (11)
which was used in the DIERS program for the bubble rise velocity in the churn-turbulent flow regime.
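As a quick numerical check of the two rise-velocity correlations above, the sketch below evaluates both expressions for air-water near room temperature. The property values are typical textbook figures, not from the article:

```python
g = 9.81          # m/s^2
sigma = 0.072     # N/m, air-water surface tension
rho_f = 997.0     # kg/m^3, liquid (water)
rho_g = 1.2       # kg/m^3, vapor (air)

# common factor [sigma * g * (rho_f - rho_g) / rho_f^2]^(1/4)
base = (sigma * g * (rho_f - rho_g) / rho_f**2) ** 0.25
u_bubbly = 1.18 * base   # Peebles & Garber coefficient (bubbly)
u_churn = 1.53 * base    # Harmathy coefficient (churn-turbulent)
# u_bubbly works out near 0.19 m/s and u_churn near 0.25 m/s
```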
The next logical question is how to determine the flow regime for a new material or a new mixture of materials. The only option at this point is to test your material under emergency relief conditions.
** See the fall 2018 Process Safety newsletter for a detailed flow regime testing approach and sample data.
1. Fisher, H.G., Forrest, H.S., Grossel, S.S., Huff , J.E., Muller, A.R., Noronha, J.A., Shaw, D.A., and Tilley, B.J., Emergency Relief System Design Using DIERS Technology, The Design Institute for
Emergency Relief Systems (DIERS ) – Project Manual, 1992.
2. Zuber, N. and Findlay, J.A., “Average Volumetric Concentration in Two-phase Flow Systems,” Journal of Heat Transfer, November, 1965.
3. Wallis, G.B., One-dimensional Two-phase Flow, 1969.
4. Grolmes, M.A. and Fisher, H.G. “Vapor-liquid Onset/Disengagement Modeling for Emergency Relief Discharge Evaluation,” Prepared for Presentation at the AIChE 1994 Summer National Meeting , 1994.
5. Peebles, F.N. and Garber, H.J., Chemical Engineering Progress, Vol. 49, pp. 88-97, 1953.
6. Harmathy, T.Z., AIChE Journal, Vol. 6, p. 281, 1960.
Dr. Benjamin Doup is a Senior Nuclear and Chemical Engineer in the Thermal Hazards department at Fauske & Associates, LLC. For more information or to discuss Emergency Relief System Design, DIERS,
two-phase flow regimes, risk based inspection and other process safety concerns, please contact info@fauske.com or 630-323-8750.
|
{"url":"https://www.fauske.com/blog/flow-regime-characterization-in-emergency-relief-system-design","timestamp":"2024-11-07T16:45:12Z","content_type":"text/html","content_length":"147412","record_id":"<urn:uuid:8d4ca216-41da-4169-b559-765aa4cbcde8>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00326.warc.gz"}
|
Factors of 37
The factors of 37 are 1 and 37 i.e. F[37] = {1, 37}. The factors of 37 are all the numbers that can divide 37 without leaving a remainder. 37 is a prime number, so it is divisible by 1 and 37 only.
We can check if these numbers are factors of 37 by dividing 37 by each of them. If the result is a whole number, then the number is a factor of 37. Let's do this for each of the numbers listed above:
· 1 is a factor of 37 because 37 divided by 1 is 37.
· 37 is a factor of 37 because 37 divided by 37 is 1.
Properties of the Factors of 37
The factors of 37 have some interesting properties. One of the properties is that the sum of the factors of 37 is equal to 38. We can see this by adding all the factors of 37 together:
1 + 37 = 38
Another property of the factors of 37 is that the only prime factor of 37 is 37 itself.
Applications of the Factors of 37
The factors of 37 have several applications in mathematics. One of the applications is in finding the highest common factor (HCF) of two or more numbers. The HCF is the largest factor that two or
more numbers have in common. For example, to find the HCF of 37 and 74, we need to find the factors of both numbers and identify the largest factor they have in common. The factors of 37 are 1, and
37. The factors of 74 are 1, 2, 37, and 74. The largest factor that they have in common is 37. Therefore, the HCF of 37 and 74 is 37.
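The factor test and the HCF computation described above are easy to express in code; this is a generic sketch, not tied to any particular library:

```python
def factors(n):
    """All positive divisors of n, found by trial division."""
    return [d for d in range(1, n + 1) if n % d == 0]

def hcf(a, b):
    """Highest common factor via the Euclidean algorithm."""
    while b:
        a, b = b, a % b
    return a

# factors(37) gives [1, 37]; factors(74) gives [1, 2, 37, 74];
# hcf(37, 74) gives 37, as worked out above.
```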
Another application of the factors of 37 is in prime factorization. Prime factorization is the process of expressing a number as the product of its prime factors. The prime factor of 37 is 37, since it is the only prime number that can divide 37 without leaving a remainder. Therefore, we can express 37 as:
37 = 37
We can do prime factorization by division method as given below,
∴ 37 = 37
Since 37 is a prime number, there is no factor tree of 37.
The factors of 37 are the numbers that can divide 37 without leaving a remainder. The factors of 37 are 1, and 37. The factors of 37 have some interesting properties, such as having a sum of 38. The
factors of 37 have several applications in mathematics, such as finding the highest common factor and prime factorization.
|
{"url":"https://www.10mathproblems.com/2023/04/factors-of-37.html","timestamp":"2024-11-13T00:03:10Z","content_type":"application/xhtml+xml","content_length":"167366","record_id":"<urn:uuid:97d69acb-b827-418a-b8c4-27e788fe72a3>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00105.warc.gz"}
|
Rate of Return (ROR) Calculator
ROR - Rate of Return Profit Loss Calculator is an online tool which allows any business or entrepreneur to know the ratio of money gained or lost, whether realized or unrealized, on an investment relative to the amount of money invested in the business enterprise, and will help you to make the right decisions on present value, future cash values, and initial investment.
Rate of return is an indicator used to measure the cash flow of an investment to the investor over a particular period of time, generally a year. It is also known as return on investment, often abbreviated as ROI, as it is a measure of the profitability of an investment, expressed as a percentage. The growth of the investment is driven by compound interest and dividend reinvestment, so that an investor can earn a higher return on the investment. Usually the risk is considered to be high for a greater rate of return on the investment, and vice versa. The cash flow of the investment to the investor normally compensates for the time value of money. The profit or loss that occurs over time is referred to as the interest profit or loss. The money invested may be referred to as the asset, principal, or investment. When it comes to online calculation, this return on investment calculator can assist you in calculating the ROR value, in percentage, for the respective input values of future value, present value, and the total time period. Thus you can determine the necessary investment decisions to be taken for the company, entrepreneur, or business.
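The page does not state the calculator's formula explicitly, but the standard annualized rate-of-return relationship between present value, future value, and time is ROR = (FV/PV)^(1/n) − 1. The sketch below is a generic illustration with made-up figures, not the site's actual implementation:

```python
def annualized_ror(present_value, future_value, years):
    """Annualized rate of return, as a fraction (multiply by 100 for %)."""
    return (future_value / present_value) ** (1.0 / years) - 1.0

# Example: 1000 grows to 1331 in 3 years -> 10% per year,
# since 1000 * 1.1**3 = 1331.
rate = annualized_ror(1000.0, 1331.0, 3)
```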
|
{"url":"https://dev.ncalculators.com/profit-loss/rate-of-return-calculator.htm","timestamp":"2024-11-12T18:38:36Z","content_type":"text/html","content_length":"34461","record_id":"<urn:uuid:cbe23e7c-9423-4593-8ddc-e42f23db85f0>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00382.warc.gz"}
|
How do you simplify 7(5 - 2) + 4 ÷ 2 times 2 +1 using order of operations? | HIX Tutor
How do you simplify 7(5 - 2) + 4 ÷ 2 × 2 + 1 using order of operations?
Answer 1
Use PEMDAS (with caution!!!)
Let's rewrite your expression as: 7(5-2) + 4/2 × 2 + 1
NOTE: this is NOT the same as 7(5-2) + 4/(2×2) + 1
In order to get the second expression, you would have to write it like this: 7(5−2) + 4 ÷ (2×2) + 1
It would be incorrect to divide 4 by (2×2), since there was not originally a set of parentheses there. PEMDAS is a little misleading, since you actually have to multiply and divide at the same time,
from left to right (if there is not a set of parentheses).
7(5-2) + 4/2 × 2 + 1
= 7×(3) + (4/2) × 2 + 1
= 21 + 4 + 1 = 26
Answer 3
To simplify the expression following the order of operations (PEMDAS/BODMAS), proceed as follows:
1. First, perform the operation inside the parentheses: 5 − 2 = 3
2. Next, perform multiplication and division from left to right: 7 × 3 = 21 and 4 ÷ 2 = 2
3. Then, continue with the remaining multiplication: 2 × 2 = 4
4. Finally, perform addition from left to right: 21 + 4 = 25, then 25 + 1 = 26
So, the simplified expression is 26.
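The left-to-right rule for multiplication and division can be verified directly in any language that follows standard operator precedence; Python is used here only as a convenient checker:

```python
result = 7 * (5 - 2) + 4 / 2 * 2 + 1    # division and multiplication bind
                                        # equally and evaluate left to right
wrong = 7 * (5 - 2) + 4 / (2 * 2) + 1   # what you'd get by dividing 4 by (2*2)
# result is 26.0, wrong is 23.0 -- the parentheses change the answer
```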
|
{"url":"https://tutor.hix.ai/question/how-do-you-simplify-7-5-2-4-2-times-2-1-using-order-of-operations-8f9af8d4c5","timestamp":"2024-11-06T23:58:57Z","content_type":"text/html","content_length":"579386","record_id":"<urn:uuid:42e21fd1-301e-4838-ab96-92919d8a3360>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00682.warc.gz"}
|
Chaos-Based Encryption of ECG Signals Experimental Results
1. Introduction
The need for ECG signal encryption cannot be overemphasized. In many countries, patient records often need to move from expert to expert on one hand. On the other hand, increasingly, portable ECG
recorders allowing patients to make their own recording are used. These recordings are then regularly reported to a medical centre for analysis. Recently, in order to reduce the cost and to improve
the service, electronic forms of medical records have been sent over networks from the laboratories to medical centres or to doctor’s offices. A form of remote assistance can thus be developed
between countries with deficiency of specialists, including underdeveloped countries, and cardiology experts in the world. In all these cases, the ECG signal has to be encrypted to protect privacy.
Since the proof by Pecora et al. [1] of synchronization of chaotic oscillators, many researchers have devised and proposed diverse applications in secure communications. Basically, the works of the
authors tackling this important problem in the literature can be roughly divided into two groups: the authors that use synchronization and those that do not use it. Just as its name implies,
synchronization of chaos denotes a process in which two (or many) chaotic systems achieve a common dynamical behaviour after a transient duration. Here, the common behaviour may be a complete
coincidence of the chaotic trajectories, or just a phase locking.
From the study by Kolumbán et al. [2], many notions of synchronization have been proposed for chaotic systems, the strongest and most widely-used of which is identical synchronization, where the
state of the receiver system converges asymptotically to that of the transmitter [3] . More recently, two weaker notions of synchronization, called generalized synchronization [1] , [4] and phase
synchronization [5] , [6] have been introduced.
The chaotic synchronization techniques which have been published to date are quite sensitive to both noise and distortion in the channel which makes signal recovery very difficult.
Mindful of the fact that synchronisation is very sensitive to noise, some authors have tried a number of techniques excluding any need for synchronization. The first of this type is chaos shift
keying (CSK) [7] [8] . CSK is a method of digital modulation. Depending on the current value of the N-ary message symbol, the signal x[i](t), (i = 1, ···N) from one of N chaos generators with
different characteristics is transmitted. The main drawback of the CSK is that the threshold level required by the decision circuit depends on the signal to noise ratio (SNR). A special case of CSK
is the chaotic on-off keying (COOK) [9] . COOK uses one chaotic oscillator, which is switched on or off according to a binary message symbol to be transmitted. The major disadvantage of the CSK
system, namely that the threshold value of the decision circuit depends on the noise level, also appears in COOK. This means that by using COOK it is possible to maximize the distance between the
elements of the signal set, but the threshold level required by the decision circuit depends on the SNR (Signal on Noise Ratio).
However, the threshold value can be kept constant and the distance can be doubled by applying the differential CSK (DCSK) [10] [11] . In DCSK, the two channels are formed by time division. For every
message symbol, the reference signal is first transmitted, followed by the modulated reference carrying the message symbol. The principal drawback of DCSK arises from the fact that every information bit is transmitted by two sample functions, so the bit rate is halved. In the literature, authors [12] [13] proposed frequency-modulated DCSK (FM-DCSK) to tackle this problem. The peculiarity of
this scheme is that the transmitted energy per bit belonging to one symbol, is kept constant. Both in the DCSK and FM-DCSK techniques, every information bit is transmitted by two sample functions,
where the first part serves as a reference, while the second part carries the information. The modulator works in the same way as in DCSK, the only difference being that not the chaotic, but the FM
modulated signal is the input of the DCSK modulator. The drawback of the standard FM-DCSK system is the fact that only one information-bearing signal is transmitted after the reference signal.
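To make the reference-then-modulated-reference structure of DCSK concrete, here is a minimal noise-free sketch. The logistic map is used as a stand-in chaos generator, and all parameter choices (spreading factor, map coefficient, seeds) are illustrative assumptions, not values from the cited schemes:

```python
def chaos(n, x=0.37, r=3.99):
    """Zero-mean chaotic samples from the logistic map (illustrative)."""
    out = []
    for _ in range(n):
        x = r * x * (1 - x)
        out.append(x - 0.5)
    return out

def dcsk_modulate(bits, spread=64):
    tx = []
    for i, b in enumerate(bits):
        ref = chaos(spread, x=0.37 + 0.001 * i)   # fresh reference per symbol
        tx += ref                                 # first half: reference
        tx += ref if b else [-s for s in ref]     # second half: +/- reference
    return tx

def dcsk_demodulate(tx, spread=64):
    bits = []
    for k in range(0, len(tx), 2 * spread):
        ref, data = tx[k:k + spread], tx[k + spread:k + 2 * spread]
        corr = sum(a * b for a, b in zip(ref, data))
        bits.append(1 if corr > 0 else 0)         # sign of the correlation
    return bits

msg = [1, 0, 1, 1, 0, 0, 1]
recovered = dcsk_demodulate(dcsk_modulate(msg))
# recovered equals msg in this noise-free sketch
```

Because the decision is the sign of a correlation between the two half-symbols, no synchronized chaotic receiver is needed, which is the property the text emphasizes.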
Several different methods have been proposed in the literature to increase the data rate of DCSK, of which one of the most efficient is the quadrature chaos shift keying (QCSK) [14] [15] scheme. The
basic idea underlying the QCSK scheme is the generation of chaotic signals which are orthogonal over a specified time interval. This allows the creation of a basis of chaotic functions from which
arbitrary constellations of chaotic signals can be constructed. For instance, in QCSK, a linear combination of two chaotic basis functions is used to encode four symbols. The key point for exploiting
this idea in a communication system is that one must be able to generate the chaotic basis functions starting from a single chaotic signal. The same concept holds for conventional digital
communication schemes such as QPSK, where the quadrature component can be obtained from the phase one by means of a simple phase shifter. The main drawback of this method is its high complexity.
Among several systems proposed, one of the best performances has been achieved by the differential chaos shift keying (DCSK) scheme and its variation utilizing frequency modulation, that is, FM-DCSK.
Schemes based on the use of the chaos synchronization principle, all suffer from some common weakness [16] :
• It is difficult to determine the synchronization time; therefore, the message during the transient period will be lost.
• Noise throughout the transmission significantly affects the intended synchronization. This means the synchronization noise intensity should be small compared to the signal level, or the desired
synchronization will not be achieved.
• Technically, it is difficult to implement two well-matched analog chaotic systems, which are required in synchronization, and if this is not required (i.e., with certain robustness) then the
opponent can also easily achieve the same synchronization for attack.
A close look at the two groups of methods reveals some drawbacks. The main drawback of the first group of methods boils down to inaccuracy in synchronization. For the second group, it is the fact that the decrypted signal is only estimated, which increases imprecision during recovery of the hidden signal.
In this work, we propose an encryption and decryption method for ECG signal, using simple electronics and whose principles and elements of novelty are described below. Our method is based on four
important concepts: encryption by adding chaos to the information to be hidden, multiplexing, demultiplexing and subtraction. In the literature, some authors have used transmission methods based on a non-coherent receiver. We adopted this non-coherent receiver principle. However, by multiplexing the signals, we use the same channel to carry both the encrypted message and the reference signal. This differs from other non-coherent receiver methods proposed in the literature, where the coding of the information is done without it being put on the line, as is the case with coherent receiver systems. Our approach also differs from the masking method encountered in coherent receiver systems. Indeed, with such a system, encryption is also done by addition, but it requires the use of
another chaotic oscillator at the reception and once synchronized, it serves as a reference for information retrieval. By carrying the reference signal, we bypass the stress of synchronization often
difficult to perform when using another chaotic generator at the reception. The system is therefore free from the setbacks inherent to coherent system. Moreover, unlike in the other non-coherent
systems presented in the literature where the recovered signal is only estimated, in our case, the decrypted ECG signal is deducted by the encrypted one. This adds to the accuracy of the proposed
scheme. It should also be noted that the multiplexed signal is chaotic, composite and therefore cannot be synthesized by any pirate. This adds to the security.
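The add/multiplex/demultiplex/subtract chain described above can be illustrated with a short numerical sketch. This is not the authors' circuit: the Colpitts chaotic waveform is stood in for by a logistic-map sequence, and an ideal noiseless channel with perfect time-division multiplexing is assumed.

```python
import numpy as np

def logistic_chaos(n, x0=0.3, r=3.99):
    """Placeholder chaotic sequence (stands in for the Colpitts waveform)."""
    x = np.empty(n)
    x[0] = x0
    for i in range(1, n):
        x[i] = r * x[i - 1] * (1.0 - x[i - 1])
    return x

def encrypt_multiplex(ecg, chaos):
    """Adder + 2-to-1 multiplexer: interleave reference and encrypted samples."""
    encrypted = ecg + chaos
    mux = np.empty(2 * len(ecg))
    mux[0::2] = chaos        # slot 1: reference chaotic signal
    mux[1::2] = encrypted    # slot 2: chaos + ECG
    return mux

def demultiplex_decrypt(mux):
    """1-to-2 demultiplexer + subtractor: deduct chaos from the encrypted part."""
    chaos, encrypted = mux[0::2], mux[1::2]
    return encrypted - chaos

ecg = 0.003 * np.sin(2 * np.pi * np.linspace(0, 1, 256))  # toy 3 mV waveform
mux = encrypt_multiplex(ecg, logistic_chaos(len(ecg)))
recovered = demultiplex_decrypt(mux)
```

Under these ideal assumptions the subtraction recovers the ECG to machine precision; in the physical system, channel noise and component mismatch set the limits reported in Section 3.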
In the next section, we shall describe the general organization of the system. Section III is devoted to experimental setting and result description followed by discussion of these results. This
gives way to conclusion and a list of references.
2. Description of the EDS
The general organization of the Encryption-Decryption-System (EDS) is given in Figure 1. It is made of the ECG generator unit, an encryption unit and the decryption unit.
2.1. ECG Generator Unit
In this work, we used the ECG generator developed in our laboratory by Tchimnoue et al. [17], which we succinctly describe below. The generator is built around a 16F84A microcontroller (MC) manufactured by MICROCHIP. The ECG generation is as follows. A single period of an ECG signal sampled at 255 Hz and digitized at 8 bits is stored in the Flash memory of the MC. The MC repeatedly sends these data at 255 Hz to the DAC0808 digital-to-analog converter, whose output is used by our system after passing through a voltage divider across resistor R6 that brings the generated signal down to 3 mV peak-to-peak. The circuitry is represented in Figure 2.
2.2. Encryption Unit
This unit is organized around two main sub-units which are the chaotic generator and the encrypting and multiplexing subunit.
2.2.1. The Chaotic Generator
The chaotic generator is a Colpitts oscillator. It is made of an LC circuit at the collector of NPN bipolar junction transistor, a voltage divider whose elements are two capacitors (C[1] and C[2])
connected to a bipolar junction transistor (BJT) output. In this oscillator, the non linear component of the circuit is the BJT Q2N2222. The circuit we used is represented in Figure 3.
Under certain circumstances that are discussed in [18] , the voltage across any of the two capacitors exhibits chaotic behaviour. This signal is used to encrypt the ECG in the EDS.
Figure 1. General organization of the EDS.
Let’s assume that U[1] is the voltage across C[1] and U[2] the voltage across C[2]. Applying Barkhausen criterium to this oscillator, the resonance frequency f[0] can be computed
Applying Kirchhoff current and voltage laws to the circuit, we have:
where α and β are the BJT parameters:
Let’s introduce some dimensionless variables for convenient numerical analysis:
The first equation of system (2) becomes:
We consider
Posing (5)
we transform Equation (4) into Equation (6).
Similarly, with these changes in variables, the second equation of the system (2) is transformed into equation (7).
The third equation becomes:
Finally, the set of Equations (2) is transformed to set of Equations (9)
where (.) denotes the partial derivative. A change of origin led to the set of Equations (10).
The nature of the solution of set of Equations (10) strongly depends on the control parameter μ.
Using fourth order Runge-Kutta to resolve the system (10), we realized that for
From Equation (5), we can see that μ is a function of the current and the elements of the Colpitts circuits. Deducing the current [2], L, C[1], C[2]).
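The fourth-order Runge-Kutta integration used to solve system (10) can be sketched generically as below. Since Equations (10) are not reproduced in this excerpt, the right-hand side `f_demo` is a placeholder three-variable system, purely to show the integrator; only `rk4_step` and `integrate` carry the method.

```python
import numpy as np

def rk4_step(f, x, h):
    """One fourth-order Runge-Kutta step for an autonomous system x' = f(x)."""
    k1 = f(x)
    k2 = f(x + 0.5 * h * k1)
    k3 = f(x + 0.5 * h * k2)
    k4 = f(x + h * k3)
    return x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, x0, h, n_steps):
    """Integrate from x0, returning the trajectory (n_steps + 1 states)."""
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(n_steps):
        traj.append(rk4_step(f, traj[-1], h))
    return np.array(traj)

# Placeholder three-variable right-hand side (NOT Equations (10)):
f_demo = lambda x: np.array([-x[1], x[0], -0.5 * x[2]])
traj = integrate(f_demo, [1.0, 0.0, 1.0], h=0.01, n_steps=1000)
```

Replacing `f_demo` by the actual right-hand side of Equations (10) and sweeping the control parameter μ would reproduce the bifurcation study described below.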
2.2.2. Encrypting and Multiplexing Subunit
This subunit is made of an adder whose inputs are the ECG signal and the chaotic signal from the Colpitts oscillator. The output of the adder is sent to one output of a 2-to-1 multiplexer while its
second input receives once more the chaotic signal from the Colpitts generator. The encrypting and multiplexing subunit is depicted in Figure 4.
2.3. Decryption Unit
The decryption unit is made of a 1-to-2 demultiplexer whose two inputs are connected to the two inputs of a subtractor. The output of the substractor is sent to a low-pass filter whose output yields
the decrypted ECG signal. Figure 5 gives the decryption unit.
Figure 4. The encrypting and multiplexing subunit.
3. Results and Discussion
3.1. Chaotic Behaviour
The Colpitts oscillator has been well researched in the literature. Its main role here is to generate the chaotic oscillations that are added to the signal to be hidden. In our experimental setting, the current is varied and the voltages across C[1] (V[C1]) and C[2] (V[C2]) are displayed on the oscilloscope. The value of C[1] and C[2] is 470 nF, the inductor L value is 2.2 mH, while the R[2] value is 36 Ω. The supply voltages are 7 V for
U[0] and −7.5 V for U[3]. During our experiments, we observed that the waveforms change according to the current’s value until the chaotic state is reached, as shown below by waveforms V[C1] and V[C2] for a current of 7.6 mA (Figure 6) and the phase diagram (Figure 7). The bifurcation diagram obtained [18] as the current varies is plotted below (Figure 8).
A usual test for chaos is calculation of Lyapunov exponents. It is common to refer to the largest one as the Maximal Lyapunov exponent (MLE), because it determines a notion of predictability for a
dynamical system. The Lyapunov exponents give the average exponential rates of divergence or convergence of nearby orbits in the phase-space. In systems exhibiting exponential orbital divergence,
small differences in initial conditions which we may not be able to resolve get magnified rapidly leading to loss of predictability. Such systems are chaotic. In Figure 9 is plotted the dynamic of
Lyapunov exponents for the Colpitts oscillator used. For initial conditions (x = 0.2, y = 0.5, z = 0.5), the system being solved by means of the 4th-order Runge-Kutta technique with step 0.01, three values of Lyapunov exponents (Lambda 1, Lambda 2, Lambda 3) are obtained: Lambda 1 = 0.1 (positive value), Lambda 2 = 0.0, Lambda 3 = −0.70 (negative value). These results validate the bifurcation diagram of Figure 8 and prove the chaotic nature of the oscillator [18] .
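As an illustration of how a maximal Lyapunov exponent can be estimated numerically, the sketch below uses the two-trajectory method with periodic renormalization (in the spirit of Benettin's algorithm). It is a generic sketch, not necessarily the procedure used by the authors; the Euler stepping and all parameter values are illustrative choices.

```python
import numpy as np

def euler_step(f, x, h):
    return x + h * f(x)  # simple integrator; RK4 could be substituted

def max_lyapunov(f, x0, d0=1e-6, h=0.01, n_renorm=100, steps_per_renorm=10):
    """Average exponential divergence rate of two initially close trajectories."""
    x = np.asarray(x0, dtype=float)
    y = x + d0                      # perturbed companion trajectory
    log_sum = 0.0
    for _ in range(n_renorm):
        for _ in range(steps_per_renorm):
            x = euler_step(f, x, h)
            y = euler_step(f, y, h)
        d = np.linalg.norm(y - x)
        log_sum += np.log(d / d0)
        y = x + (y - x) * (d0 / d)  # renormalize the separation back to d0
    return log_sum / (n_renorm * steps_per_renorm * h)

# Sanity check on the linear system x' = x, whose exact exponent is 1
# (Euler discretization yields ln(1 + h)/h, roughly 0.995 for h = 0.01):
print(max_lyapunov(lambda x: x, [1.0]))
```

A positive estimate, as for Lambda 1 above, signals exponential divergence of nearby orbits and hence chaos.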
3.2. Signal Encryption and Multiplexing
The signal yielded across resistor R[6] of the ECG generator (Figure 2) is injected at the input E[1] of the encryption unit (Figure 4) while the chaotic signal studied in the previous subsection is
injected at the inputs E[2]. At the output S[0], the encrypted ECG signal is collected. Figure 10 and Figure 11 display the original ECG signal and the encrypted ECG.
Figure 6. Voltage waveforms from V[C1] and V[C2].
Figure 9. Dynamics of Lyapunov exponents for the oscillator.
Figure 10. ECG original signal: the upper signal is the original ECG.
After encryption, the signal is sent to one input of a 2 to 1 multiplexer while the other input receives the chaotic signal. The output of the multiplexer is S[1] and is displayed in Figure 12.
The multiplexed signal is sent on the transmission line and gets to the decrypting subunit whose results are given in the next subsection.
3.3. Decrypting Unit
The signal enters this unit by a 1-to-2 demultiplexer (DMX) who receives the encrypted ECG. The two outputs of the DMX are sent to the substractor whose output is sent to a low-pass filter in order
to retrieve the hidden ECG signal. The result is shown in Figure 13.
3.4. Discussion
Visually, there is a good level of concordance between the original and the decrypted ECG, as can be seen from Figure 13. The mean quadratic error of the two signals was computed and we found a value of 1.33. The error committed during signal retrieval is therefore less than 2%. ECG signals are generally in the order of 1 to 5 mV before amplification. On average, there is therefore a difference of
20 to 100 μV between the two signals, which is acceptable. The quality of the recovered signal is linked to the filter. In our case, a low-pass filter is used. We obtained the results with two sets of values for resistor R[11] and capacitor C[3] of the filter. When R[11] = 159 Ω and C[3] = 10 nF, the decrypted signal is visually good, but it still contains noise (Figure 14). For R[11] = 1 kΩ and C[3] = 10 μF, the quality of the decrypted signal is visually very good, as we can observe in Figure 13 compared to the original ECG.
Figure 13. The upper signal is the retrieved (decrypted) ECG signal for R[11] = 1 kΩ and C[3] = 10 μF while the lower one is the original ECG.
Figure 14. The signal is the retrieved (decrypted) ECG for R[11] = 159 Ω and C[3] = 10 nF.
We also computed the signal to noise ratio and found 25.50 dB. This value is an indication of the level of corruption of the signal by noise. The noise is therefore about twenty times smaller than
the signal carrying the information. The system therefore yields a good margin. The last metric that was computed to numerically evaluate the resemblance is the frequency distortion, which is a measure
of how far the recovered signal has drifted from the original signal frequency-wise. We found a value of 6 × 10^−4. This value shows there is really no frequency drift between the two signals.
The three metrics computed permit us to conclude that, both visually and numerically, the concordance (resemblance) between the two signals can be termed good.
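The three resemblance metrics can be computed as sketched below. The formulas are common textbook definitions, assumed here since the paper's exact definitions (in particular for frequency distortion) are not given in this excerpt.

```python
import numpy as np

def mean_quadratic_error(original, recovered):
    return float(np.mean((np.asarray(original) - np.asarray(recovered)) ** 2))

def snr_db(original, recovered):
    """Ratio of signal power to residual (noise) power, in decibels."""
    s = np.asarray(original, dtype=float)
    noise = s - np.asarray(recovered, dtype=float)
    return float(10.0 * np.log10(np.sum(s ** 2) / np.sum(noise ** 2)))

def freq_distortion(original, recovered, fs=255.0):
    """Relative shift of the dominant FFT frequency between the two signals."""
    def peak_freq(x):
        spec = np.abs(np.fft.rfft(x - np.mean(x)))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        return freqs[np.argmax(spec)]
    f0, f1 = peak_freq(original), peak_freq(recovered)
    return abs(f1 - f0) / f0

t = np.linspace(0, 1, 256, endpoint=False)
s = np.sin(2 * np.pi * 5 * t)
r = s + 0.01 * np.random.default_rng(0).standard_normal(len(s))
print(snr_db(s, r))  # roughly 37 dB for this 1% noise level
```

For the hardware system above, these functions would be applied to the digitized original and decrypted ECG traces.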
During our experiments, for the value of C[1] = C[2] = 470 nF, we observed that when R[1] ≤ 300 Ω or R[1] ≥ 2000 Ω there is no chaos in our system. With the appropriate values of C[1], C[2] and R[1], we also lost chaos when L ≤ 2.1 mH or L ≥ 5 mH.
We noticed that the range of multiplexer/demultiplexer frequency for which the hidden signal is well decrypted is 53.3 kHz to 850 kHz. For frequencies out of this range, we had only noise at the receiver. Furthermore, it was completely impossible to retrieve the hidden signal when the working frequency of the multiplexer was different from the one used by the demultiplexer. This aspect enhances the security of our system. Transmission in this work was tested over a distance of 45 m.
4. Conclusion
In this paper, we have designed and tested a very simple chaos-based encryption system for a very delicate and common medical signal. The system was designed on the basis of some shortcomings of
earlier techniques. The results in terms of mean quadratic error, signal-to-noise ratio and frequency distortion are satisfactory. In future works, we wish to examine the effect of the transmission
conditions on the recovered signal, namely non-linearity in the propagation line, type and level of noise and then radiofrequency transmission.
Tarjan's Algorithm with Implementation in Java
In this article, we will look at a famous algorithm in Graph Theory, Tarjan Algorithm. We will also look at an interesting problem related to it, discuss the approach and analyze the complexities.
Tarjan’s Algorithm is mainly used to find Strongly Connected Components in a directed graph. A directed graph is a graph made up of a set of vertices connected by edges, where the edges have a
direction associated with them. A Strongly Connected Component in a graph is basically a self contained cycle within a directed graph where from each vertex in a given cycle we can reach every other
vertex in that cycle.
Let us understand this with help of an example, consider this graph:
In the above graph, the box A and B show the SCC or Strongly Connected Components of the graph. Let us look at a few terminologies before explaining why the above components are SCC.
• Back-Edge: An edge (u,v) is a Back-Edge if the edge from u to v has a Descendant-Ancestor relationship. The node u is the descendant node and node v is the ancestor node. In this case, it results in a cycle and is important in forming a Strongly Connected Component.
• Cross-Edge: An edge (u,v) is a Cross-Edge if the edge from u to v has no Ancestor-Descendant relationship. Cross-edges are not responsible for forming an SCC. They mainly connect two SCCs together.
• Tree-Edge: If an edge (u,v) has a Parent-Child relationship, such an edge is a Tree-Edge. It is obtained during the DFS traversal of the graph, which forms the DFS tree of the graph.
So, in the above graph edges -> (1 , 3), (3 , 2), (4 , 5), (5 , 6), (6 , 7) are the tree edges because they follow the Parent-Child Relationship. Edges -> (2 , 1) and (7 , 4) form the back edges
because from node 2 (Descendent) we go back to 1 (Ancestor) completing a cycle (1->3->2). Similarly, from edge 7 we go back to 4 completing a cycle ( 4-> 5 -> 6 ->7). Hence the components (1,3,2)
and (4,5,6,7) are the Strongly Connected Components of the graph. The edge (3 , 4) is a Cross edge because it follows no such relationship and connects the two SCC’s together.
Note: A Strongly Connected Component in a graph must have a Back-Edge to its head node.
Tarjan’s Algorithm
Now let us see how Tarjan’s Algorithm will help us find a Strongly Connected Component.
• The idea is to do a Single DFS traversal of the graph which produces a DFS tree.
• Strongly Connected Components are the subtrees of the DFS tree. If we find the head of each subtrees, we can get access to all the nodes in the subtree which is one SCC, then we can print the
SCC including the head.
• We will consider only the tree edges and back edges while traversing, we ignore the cross edges as it separates one SCC from another.
So now, let us look how to implement the above steps. We are going to assign each node a time value for when it is visited or discovered. At root or start node Time value is 0. For every node in the
graph, we assign a tuple with two time values: Disc and Low.
Disc: This indicates the time for when a particular node is discovered or visited during DFS traversal. For each node we increase the Disc time value by 1.
Low: This indicates the lowest discovery time reachable from a given node. If there is a back edge, then we update the low value based on the conditions below. The maximum value that Low can be assigned for a node is equal to the Disc value of that node, since the minimum discovery time reachable from a node is the time at which it discovers itself.
Note: The Disc value once assigned will not change while we keep on updating the low value traversing each node. We will discuss the condition next.
Implementation in Java
Step 1:
We use a Map (Hash-Map) to store the graph nodes and edges. The Key of map stores the nodes and in the value we have a list which represents the edges from that node. For the Disc and Low we use two
integer arrays of size same as a number of vertices. We fill both the arrays with -1, to indicate no nodes are visited initially. We use a Stack (for DFS) and a Boolean array inStack to check whether
an already discovered node is present in our Stack in O(1) time as checking in the stack will be a costly operation (O(n)).
Step 2:
So, for each node we process we add it into our stack and mark true in the array inStack. We maintain a static Timer variable initialized to 0. If for an edge (u,v) if v node is already present in
stack, then it is a back edge and (u,v) pair is Strongly Connected. So we change the low value as :
if(Back-Edge) then Low[u] = Min ( Low[u] , Disc[v] ).
After visiting this node, on returning the call to its parent node we will update the Low value for each node, to ensure that the Low value remains the same for all nodes in the Strongly Connected Component.
Step 3:
Now if for an edge (u,v) if v node is not present in stack then it is a tree edge or a neighboring edge. In such case, we update the low value for that particular node as :
if (Tree-Edge) then Low[u] = Min ( Low[u] , Low[v] ).
We determine the head or start node of each SCC when we get a node whose Disc[u] = Low[u], such a node is the head node of that SCC. Every SCC should have such a node maintaining this condition.
After this, we just print the nodes by popping them out of the stack marking the inStack as false for each popped value.
Note: A Strongly Connected Component must have all its low values same. We will print the nodes in reverse order.
Now, let us look at the code for this in Java:
import java.util.*;

public class TarjanSCC
{
    static HashMap<Integer,List<Integer>> adj = new HashMap<>();
    static int Disc[] = new int[8];
    static int Low[] = new int[8];
    static boolean inStack[] = new boolean[8];
    static Stack<Integer> stack = new Stack<>();
    static int time = 0;

    static void DFS(int u)
    {
        Disc[u] = time;
        Low[u] = time;
        time++;
        stack.push(u);
        inStack[u] = true;

        List<Integer> temp = adj.get(u); // get the list of edges from the node.
        for(int v: temp)
        {
            if(Disc[v]==-1) // If v is not visited
            {
                DFS(v);
                Low[u] = Math.min(Low[u],Low[v]); // Tree-edge case
            }
            // Differentiate back-edge and cross-edge
            else if(inStack[v]) // Back-edge case
            {
                Low[u] = Math.min(Low[u],Disc[v]);
            }
            // Cross-edges are ignored.
        }

        if(Low[u]==Disc[u]) // If u is head node of SCC
        {
            System.out.print("SCC is: ");
            while(stack.peek()!=u)
            {
                System.out.print(stack.peek()+" ");
                inStack[stack.peek()] = false;
                stack.pop();
            }
            System.out.println(stack.peek());
            inStack[stack.peek()] = false;
            stack.pop();
        }
    }

    static void findSCCs_Tarjan(int n)
    {
        for(int i=1;i<=n;i++) { Disc[i] = -1; Low[i] = -1; }
        for(int i=1;i<=n;++i)
            if(Disc[i]==-1)
                DFS(i); // call DFS for each undiscovered node.
    }

    public static void main(String args[])
    {
        for(int i=1;i<=7;i++)
            adj.put(i,new ArrayList<Integer>());
        // Edges of the example graph
        adj.get(1).add(3); adj.get(2).add(1); adj.get(3).add(2);
        adj.get(3).add(4); adj.get(4).add(5); adj.get(5).add(6);
        adj.get(6).add(7); adj.get(7).add(4);
        findSCCs_Tarjan(7);
    }
}

Output:
SCC is: 7 6 5 4
SCC is: 2 3 1
The code is written for the same example as discussed above. You can see the output showing the Strongly Connected Components in reverse order, since we use a Stack. Now let us look at the
complexities of our approach.
Time Complexity: We are basically doing a Single DFS Traversal of the graph so time complexity will be O( V+E ). Here, V is the number of vertices in the graph and E is the number of edges.
Space Complexity: We at the most store the total vertices in the graph in our map, stack, and arrays. So, the overall complexity is O(V).
So that’s it for the article. You can try out different examples and execute the code in your Java compiler for a better understanding.
Let us know any suggestions or doubts regarding the article in the comment section below.
Dynamics and Darboux Integrability of the D2 Polynomial Vector Fields of Degree 2 in ℝ³
We characterize the Darboux integrability and the global dynamics of the 3-dimensional polynomial differential systems of degree 2 which are invariant under the D2 symmetric group, which have been
partially studied previously by several authors.
Columns used to define observations
OBSERVATION: response
The OBSERVATION column-type can be used to record continuous (PKanalix and Monolix), categorical (Monolix), count (Monolix) or time-to-event (Monolix) data. For dose lines, the content is free and
will not be used. For response lines, the requirements depend on the type of data and are summarized below.
Continuous data
The value represents what has been measured (e.g concentrations) and can be any double value.
ID TIME AMT Y
1 0 50 .
1 0.5 . 1.1
1 1 . 9.2
1 1.5 . 8.5
1 2 . 6.3
1 2.5 . 5.5
Categorical data
In case of categorical data, the observations at each time point can only take values in a fixed and finite set of nominal categories. In the data set, the output categories must be coded as
consecutive integers.
ID TIME Y
1 0.5 3
1 1.5 2
1 2.5 3
• Full data set for joint continuous and categorical data: One can see the respiratory status data set and the warfarin data set for example for more practical examples on a categorical and a joint
continuous and categorical data set respectively.
Count data
Count data can take only non-negative integer values that come from counting something, e.g., the number of trials required for completing a given task. The task can for instance be repeated several
times and the individuals performance followed.
Count data can also represent the number of events happening in regularly spaced intervals, e.g the number of seizures every week. If the time intervals are not regular, the data may be considered as
repeated time-to-event interval censored, or the interval length can be given as regressor to be used to define the probability distribution in the model.
• Basic example: in the data set below, 10 trials are necessary the first day (t=0), 6 the second day (t=24), etc.
ID TIME Y
One can see the epilepsy attacks data set for a more practical example.
(Repeated) time-to-event data
In this case, the observations are the “times at which events occur“. An event may be one-off (e.g., death) or repeated (e.g., epileptic seizures, mechanical incidents, strikes). In addition, an
event can be exactly observed, interval censored or right censored.
For the formatting of time-to-event data, the column TIME should contain:
• the time of an event,
• the time at which the observation period starts (required by the MonolixSuite, contrary to other tools for survival analysis). This allows defining the data set using absolute times in addition to durations (if the start time is zero, the records represent durations between the start time and the event),
• the end of the observation period or time intervals for interval-censoring.
The column OBSERVATION contains an integer that indicates how to interpret the associated time. The different values for each type of event and observation are summarized in the table below:
The figure below summarizes the different situations with examples:
Single events exactly observed data
One must indicated the start time of the observation period with Y=0, and the time of event (Y=1) or the time of the end of the observation period if no event has occurred (Y=0).
• Basic example: in the following dataset, the observation period last from starting time t=0 to the final time t=80. For individual 1, the event is observed at t=34, and for individual 2, no event
is observed during the period. Thus it is noticed that at the final time (t=80), no event occurred.
ID TIME Y
1 0 0
1 34 1
2 0 0
2 80 0
Using absolute times instead of duration, we could equivalently write:
ID TIME Y
The duration between start time and event (or end of the observation period) are the same as before, but this time we record the day at which the patients enter the study and the days at which they
have events or leave the study. Different patients may enter the study at different times.
Repeated events exactly observed data
One must indicate the start time of the observation period (Y=0), the end time (Y=0) and the time of each event (Y=1).
• Basic example: below the observation period last from starting time t=0 to the final time t=80. For individual 1, two events are observed at t=34 and t=76, and for individual 2, no event is
observed during the period.
ID TIME Y
1 0 0
1 34 1
1 76 1
1 80 0
2 0 0
2 80 0
Single events interval censored data
When the exact time of the event is not known, but only an interval can be given, the start time of this interval is given with Y=0, and the end time with Y=1. As before, the start time of the
observation period must be given with Y=0.
• Basic example: we only know that the event has happened between t=32 and t=35.
ID TIME Y
1 0 0
1 32 0
1 35 1
Repeated events interval censored data
In this case, we do not know the exact event times, but only the number of events that occurred for each individual in each interval of time. The column-type Y can now take integer values greater
than 1, if several events occurred during an interval.
• Basic example: No event occurred between t=0 and t=32, 1 event occurred between t=32 and t=35, 1 between t=35 and t=50, none between t=50 and t=56, 2 between t=56 and t=78 and finally 1 between t
=78 and t=80.
ID TIME Y
1 0 0
1 32 0
1 35 1
1 50 1
1 56 0
1 78 2
1 80 1
• If a subject or a subject-occasion has no observations, a warning message arises telling which subjects or subjects-occasions have no measurements and will be ignored.
Format restrictions
• A data set shall not contain more than one column with column-type OBSERVATION. Multiple observation types need to be distinguished with the OBSERVATION ID column.
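As a small illustration of the convention above, a helper that encodes an observation period and a list of exactly observed (repeated) event times into (TIME, Y) rows could look like this; it is hypothetical example code, not part of the MonolixSuite:

```python
def encode_events(start, end, event_times):
    """Rows (TIME, Y) for exactly observed repeated events: the period start
    and end get Y=0, and each event time gets Y=1, per the convention above."""
    rows = [(start, 0)]
    rows += [(t, 1) for t in sorted(event_times)]
    rows.append((end, 0))
    return rows

# Individual 1 of the repeated-events example: period [0, 80], events at 34 and 76
print(encode_events(0, 80, [34, 76]))  # [(0, 0), (34, 1), (76, 1), (80, 0)]
```

For interval-censored data, the rows would instead carry interval bounds and event counts, as in the examples above.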
OBSERVATION ID: response identifier
The OBSERVATION ID column permits to distinguish several types of observations (several concentrations, effects, etc). The OBSERVATION ID column assigns an identifier to each observation of the
OBSERVATION column. Those identifiers are used to map the observations of the data set to the outputs of the model (in the OUTPUT block of the model file). The dot “.” is not considered as a
repetition of the previous line but as a different identifier.
There can be more OBSERVATION ID values than there are outputs in the model file. In that case only the observations corresponding to the first identifiers(s) in alphabetical order will be used
(example below).
For continuous data, in the Monolix graphical user interface, the data viewer and the plots, the observations will be called yX with X corresponding to the identifier given in the OBSERVATION ID
column (for instance y1 and y2 if identifiers 1 and 2 were used in the OBSERVATION ID column).
• Basic example with integers: with the following data set
0 . 12 1
5 . 6 2
10 . 4 1
15 . 3 2
and the following output block
output = {Cc, R}
the observations “12” and “4” which have identifier “1” will be matched to the output “Cc”, while observations ”6″ and “3” with identifier “2” will be matched to “R”.
• Basic example with strings (not recommended): with the following data set
0 . 12 PK
5 . 6 PK
10 . 4 PD
15 . 3 PD
and the following OUTPUT block in the Mlxtran model file:
output = {Cc, R}
the observations tagged with “PD” will be mapped to the first output “Cc” (which is probably not what is desired), and those tagged with “PK” will be mapped to the second output “R”, because in
alphabetical order “PD” comes before “PK”.
• Basic example with more OBSERVATION ID values than model outputs: with the following data set
0 . 12 1
5 . 6 2
10 . 4 1
15 . 3 2
and the following output block
output = {Cc}
the observations tagged with identifier “1” will be mapped to the model output “Cc” and the observations tagged with “2” will be ignored. If the user wants to use only the data tagged with “2”, he
can add an IGNORED OBSERVATION column which ignores all observations with identifier “1” (see below).
• Basic example with more OBSERVATION ID values than model outputs and an IGNORED OBSERVATION column: with the following data set
0 . 12 1 1
5 . 6 2 0
10 . 4 1 1
15 . 3 2 0
and the following output block
output = {Cc}
the observations tagged with identifier “1” will all be ignored (due to the MDV column tagged as IGNORED OBSERVATION) and the observations with identifier “2” will be mapped to the model output “Cc”.
Format restrictions
• A data set shall not contain more than one column with column-type OBSERVATION ID.
• The content of the OBSERVATION ID column can be strings or integers.
• The dot “.” is not considered as a repetition of the previous line but as a different identifier.
CENSORING: censored observation
The CENSORING column permits to mark censored data. When an observation is marked as censored, the (upper or lower) limit of quantification is given in the OBSERVATION column (not in a separate
• CENSORING = 1 means that the value in OBSERVATION column is a lower limit of quantification (LLOQ). The true observation y verifies y<LLOQ.
• CENSORING = 0 means the value in response-column corresponds to a valid observation (no interval associated).
• CENSORING = -1 means that the value in OBSERVATION column is an upper limit of quantification (ULOQ). The true observation y verifies y>ULOQ.
The mathematical handling of censored data is described here.
Format restrictions
• A data set shall not contain more than one column with column-type CENSORING.
• For dose lines, the content is free and will be ignored.
• For response lines, there are only four possible values : -1, 0, 1 and ‘.’ (interpreted as 0).
LIMIT: limit for censored values
When the column LIMIT contains a numeric value and CENSORING is different from 0, the value in the LIMIT column is interpreted as the second bound of the interval. Writing yobs the value in the
OBSERVATION column and ylimit the value in the LIMIT column, the true observation y verifies y∈[ylimit,yobs]. When LIMIT = ‘.’ , the value is interpreted as -infinity or +infinity depending on the
value of CENSORING (1 or -1 respectively) as if the LIMIT column would not be present.
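The interval semantics of CENSORING and LIMIT described above can be summarized in a short, hedged Python sketch. This is illustrative only, not Monolix code; the function name `censoring_interval` is invented for the example:

```python
import math

def censoring_interval(observation, censoring, limit=None):
    """Return (low, high) bounds for the true observation y,
    following the CENSORING/LIMIT conventions described above."""
    if censoring == 0:
        # Exact measurement: no interval associated
        return (observation, observation)
    if censoring == 1:
        # OBSERVATION is a lower limit of quantification (LLOQ): y < LLOQ,
        # bounded below by LIMIT when present, otherwise by -infinity
        low = limit if limit is not None else -math.inf
        return (low, observation)
    if censoring == -1:
        # OBSERVATION is an upper limit of quantification (ULOQ): y > ULOQ,
        # bounded above by LIMIT when present, otherwise by +infinity
        high = limit if limit is not None else math.inf
        return (observation, high)
    raise ValueError("CENSORING must be -1, 0 or 1")
```

For example, a row with CENSORING = 1, OBSERVATION = 0.5 and LIMIT = 0.1 corresponds to y ∈ [0.1, 0.5].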
Format restrictions
• A data set shall not contain more than one column with column-type LIMIT.
• A data set shall not contain any column with column-type LIMIT if no column with column-type CENSORING is present.
• Allowed values are doubles and the dot '.'; strings are not allowed (even when CENSORING = 0).
Example with CENSORING and LIMIT
It is possible to have both censoring types for the same individual, i.e. both upper and lower limits of quantification, and to have measurements with and without bounds. The example below gives an overview of the possible combinations:
Maximal elements algorithms
Versions of Kosaraju's algorithm and the Floyd-Warshall algorithm can each be used to calculate the Schwartz set and the Smith set for an election. When there are N candidates, Kosaraju's algorithm
runs in Θ(N^2) time with Θ(N^2) space, while the Floyd-Warshall algorithm runs in Θ(N^3) time with Θ(N^2) space.
The general problem is identifying the members of the maximal elements of an order generated from a binary relation. The order is the natural partial order generated from the preorder that is the
transitive-reflexive closure of the relation.
The Schwartz set is associated with the beatpath order on the set of beatpath cycle equivalence classes, which are generated from the Beat relation defined by:
(X,Y) is in Beat if and only if candidate X pair-wise beats candidate Y
The Smith set is associated with the beat-or-tie order on the set of beat-or-tie cycle equivalence classes, which are generated from the Beat-or-tie relation defined by:
(X,Y) is in Beat-or-tie if and only if candidate X pair-wise beats or ties candidate Y
From a graph-theoretic perspective, these algorithms identify which vertices are in the maximal strongly connected components of the directed graph of the given relation.
The algorithms are started with a typical approach to summarizing the pair-wise contests between candidates: with a voting matrix such that Votes[i,j] is the number of votes for candidate i in a
pair-wise contest against candidate j. The final result of each algorithm is a boolean array, such that IsInMaximal[i] is true if and only if candidate i is in (one of) the maximal cycle equivalence
classes, that is, if and only if the candidate is in the requested set, either the Schwartz set or the Smith Set.
Pseudo-code for the algorithms starts with:
function FindMaximalCandidates(number [1..N,1..N] Votes, string Set, string Algorithm)
    boolean [1..N,1..N] Relation
    // Initialize Relation
    for i from 1 to N
        for j from 1 to N
            if (Set == "Schwartz" and Votes[i,j] > Votes[j,i]) or
               (Set == "Smith" and i <> j and Votes[i,j] >= Votes[j,i])
                Relation[i,j] = true
            else
                Relation[i,j] = false
    boolean [1..N] IsInMaximal
    // Call the requested algorithm
    if Algorithm == "Kosaraju"
        IsInMaximal = KosarajuMaximal(Relation)
    if Algorithm == "Floyd-Warshall"
        IsInMaximal = FloydWarshallMaximal(Relation)
    return IsInMaximal
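For illustration, the relation-initialization step above can be written in Python. This is a sketch with 0-based indexing, not part of the reference pseudo-code, and `build_relation` is an invented name:

```python
def build_relation(votes, which):
    """votes[i][j] is the number of votes for candidate i against candidate j.
    Returns the boolean Beat (Schwartz) or Beat-or-tie (Smith) relation."""
    n = len(votes)
    if which == "Schwartz":
        # (i, j) is in Beat iff i pair-wise beats j
        return [[votes[i][j] > votes[j][i] for j in range(n)] for i in range(n)]
    if which == "Smith":
        # (i, j) is in Beat-or-tie iff i pair-wise beats or ties j (i != j)
        return [[i != j and votes[i][j] >= votes[j][i] for j in range(n)]
                for i in range(n)]
    raise ValueError("which must be 'Schwartz' or 'Smith'")
```

Note how a pair-wise tie appears in the Smith relation (in both directions) but not in the Schwartz relation.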
Kosaraju's algorithm
Kosaraju's algorithm does two depth-first searches (DFS) of the Relation. The first search records the search finish times, which are then reversed to be used as a search order for the second search
using the transpose of the Relation. The second search partitions the candidates into search trees which are actually cycle equivalence classes, and also identifies which candidates are in maximal
cycle equivalence classes. While the first and second searches have slightly different requirements, for the purposes of presentation here, the pseudo-code for both searches have been consolidated
into a single set of functions.
function KosarajuMaximal(boolean [1..N,1..N] Relation)
    // SearchOrder[i] == j means candidate j is in position i of the search order
    integer [1..N] SearchOrder
    for i from 1 to N
        SearchOrder[i] = i
    // Initialize the shared search state, then run the first DFS
    InitDFS(N)
    StartDFS(Relation, SearchOrder)
    // The second DFS is performed with
    // a search order that is the reverse finish order from the first search ...
    integer [1..N] SecondSearchOrder
    for i from 1 to N
        SecondSearchOrder[i] = FinishOrder[N+1 - i]
    // ... and with the transposed Relation
    boolean [1..N,1..N] RelationTranspose
    for i from 1 to N
        for j from 1 to N
            RelationTranspose[i,j] = Relation[j,i]
    // Re-initialize the shared search state before the second DFS
    InitDFS(N)
    StartDFS(RelationTranspose, SecondSearchOrder)
    boolean [1..N] IsInMaximal
    for i from 1 to N
        if TreeConnectsToAnyPriorTree[Tree[i]] == true
            IsInMaximal[i] = false
        else
            IsInMaximal[i] = true
    return IsInMaximal
// This function initializes several variables that are globally shared
// between the KosarajuMaximal, StartDFS, and VisitDFS functions
function InitDFS(int N)
    integer [1..N] FinishOrder
    FinishCnt = 0
    boolean [1..N] Visited
    integer [1..N] Tree
    // The variable TreeConnectsToAnyPriorTree is only relevant in the second DFS.
    // Since the second DFS is based on the transpose of the original relation,
    // and in the second search, trees correspond to cycle equivalence classes
    // aka strongly connected components, this is really tracking whether under
    // the original relation this tree is dominated by another one and hence
    // is not maximal.
    boolean [1..N] TreeConnectsToAnyPriorTree
    TreeCnt = 0
    for i from 1 to N
        Visited[i] = false
        TreeConnectsToAnyPriorTree[i] = false
function StartDFS(boolean [1..N,1..N] Relation, integer [1..N] SearchOrder)
    for srchIx from 1 to N
        rootIx = SearchOrder[srchIx]
        if Visited[rootIx] == false
            // Note: when translating to zero-based array indexes,
            // reverse the order of the next two lines
            TreeCnt = TreeCnt + 1
            VisitDFS(Relation, SearchOrder, rootIx)
function VisitDFS(boolean [1..N,1..N] Relation, integer [1..N] SearchOrder, integer visitIx)
    Tree[visitIx] = TreeCnt
    Visited[visitIx] = true
    for srchIx from 1 to N
        probeIx = SearchOrder[srchIx]
        if Relation[visitIx,probeIx] == true
            if Visited[probeIx] == false
                VisitDFS(Relation, SearchOrder, probeIx)
            else if Tree[probeIx] < TreeCnt
                TreeConnectsToAnyPriorTree[TreeCnt] = true
    FinishCnt = FinishCnt + 1
    FinishOrder[FinishCnt] = visitIx
Kosaraju's algorithm always starts the second depth-first search with a tree that is a maximal cycle equivalence class. So if conditions hold that guarantee that the relation-induced partial ordering
has exactly one maximal cycle equivalence class, and only the maximal candidates need to be identified, then a specialized version of StartDFS can be used for the second depth-first search that only
executes the for loop one time. In that case, the test for whether to set IsInMaximal[i] = true is simply whether Visited[i] == true.
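As an illustration, here is one possible Python rendering of the two-pass search described above (0-based indexing; the name `kosaraju_maximal` and the internal variable names are invented for this sketch):

```python
def kosaraju_maximal(relation):
    """relation[i][j] is True iff (i, j) is in the Beat or Beat-or-tie relation.
    Returns a list of booleans: True iff candidate i is in a maximal SCC."""
    n = len(relation)
    visited = [False] * n
    finish = []

    def dfs1(v):
        # First DFS: record finish order
        visited[v] = True
        for w in range(n):
            if relation[v][w] and not visited[w]:
                dfs1(w)
        finish.append(v)

    for v in range(n):
        if not visited[v]:
            dfs1(v)

    # Second DFS: transposed edges, reverse finish order
    visited = [False] * n
    tree = [0] * n
    dominated = []  # dominated[t]: tree t is reachable from an earlier tree

    def dfs2(v, t):
        visited[v] = True
        tree[v] = t
        for w in range(n):
            if relation[w][v]:            # edge of the transposed relation
                if not visited[w]:
                    dfs2(w, t)
                elif tree[w] < t:         # link to an earlier (dominating) tree
                    dominated[t] = True

    for v in reversed(finish):
        if not visited[v]:
            dominated.append(False)
            dfs2(v, len(dominated) - 1)

    return [not dominated[tree[v]] for v in range(n)]
```

For a three-candidate Condorcet cycle where every cycle member also beats a fourth candidate, the function reports the three cycle members as the maximal (Smith/Schwartz) set.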
Floyd-Warshall algorithm
This version of the Floyd-Warshall algorithm calculates whether there is a path between any two candidates, and then uses the results to see if each candidate is in a maximal cycle equivalence class.
Floyd-Warshall algorithm to calculate the maximal candidates:
function FloydWarshallMaximal(boolean [1..N,1..N] Relation)
    boolean [1..N] IsInMaximal
    for i from 1 to N
        IsInMaximal[i] = true
    // Eventually, HasPath[i,j] == true iff there is a path from i to j
    boolean [1..N,1..N] HasPath
    // Initialize HasPath for paths of length 1
    for i from 1 to N
        for j from 1 to N
            if i <> j and Relation[i,j] == true
                HasPath[i,j] = true
            else
                HasPath[i,j] = false
    // Expand consideration to paths that have intermediate nodes from 1 to k
    for k from 1 to N
        for i from 1 to N
            if k <> i
                for j from 1 to N
                    if k <> j and i <> j
                        if HasPath[i,k] == true and HasPath[k,j] == true
                            HasPath[i,j] = true
    // Disqualify as maximal any candidates that have paths to them,
    // but no path back to complete a cycle
    for i from 1 to N
        for j from 1 to N
            if i <> j
                if HasPath[j,i] == true and HasPath[i,j] == false
                    IsInMaximal[i] = false
    return IsInMaximal
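The same computation can be sketched in Python (illustrative, 0-based; `floyd_warshall_maximal` is an invented name), with the reachability closure followed by the disqualification step:

```python
def floyd_warshall_maximal(relation):
    """relation[i][j] is True iff (i, j) is in the Beat or Beat-or-tie relation.
    Returns a list of booleans: True iff candidate i is maximal."""
    n = len(relation)
    # Paths of length 1
    has_path = [[i != j and relation[i][j] for j in range(n)] for i in range(n)]
    # Expand to paths through intermediate vertices 0..k
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if has_path[i][k] and has_path[k][j]:
                    has_path[i][j] = True
    # i is maximal unless some j reaches i with no path back from i to j
    return [not any(has_path[j][i] and not has_path[i][j] for j in range(n))
            for i in range(n)]
```

On the same four-candidate example (a top cycle of three candidates that all beat a fourth), this agrees with the Kosaraju-based computation.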
• "[EM] New Voting mailing list: Politicians and Polytopes". Election-Methods mailing list. 2000-04-07. Retrieved 2020-02-17.
• "[EM] Beatpath Magnitude Algorithm Revision". Election-Methods mailing list. Retrieved 2020-02-17.
• "[EM] Re: bicameral design poll". Election-Methods mailing list. Retrieved 2020-02-17.
• Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L. (1990). Introduction to Algorithms, first edition, MIT Press and McGraw-Hill. ISBN 0-262-03141-8.
□ Section 23.5, "Strongly connected components", pp. 488–493;
□ Section 26.2, "The Floyd-Warshall algorithm", pp. 558–565;
If $y=\lim_{n\to\infty}\sum_{r=1}^{n}\frac{\sqrt{n}}{\sqrt{r}\,(3\sqrt{r}+4\sqrt{n})^{2}}$, then what is the value of $10y$?
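Reading the sum as a Riemann sum (the summand, reconstructed from the page URL, is $\sqrt{n}/\bigl(\sqrt{r}\,(3\sqrt{r}+4\sqrt{n})^{2}\bigr)$), one possible evaluation is the following sketch; it is not the site's posted solution:

```latex
y=\lim_{n\to\infty}\frac{1}{n}\sum_{r=1}^{n}
    \frac{1}{\sqrt{r/n}\,\bigl(3\sqrt{r/n}+4\bigr)^{2}}
 =\int_{0}^{1}\frac{dx}{\sqrt{x}\,(3\sqrt{x}+4)^{2}}
 \stackrel{u=\sqrt{x}}{=}\int_{0}^{1}\frac{2\,du}{(3u+4)^{2}}
 =\left[-\frac{2}{3(3u+4)}\right]_{0}^{1}
 =\frac{2}{3}\left(\frac{1}{4}-\frac{1}{7}\right)=\frac{1}{14},
\qquad\text{so}\quad 10y=\frac{5}{7}.
```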
Topic: Integrals
Subject: Mathematics
Class: Class 12
Updated on: May 1, 2023
TY  - JOUR
T1  - The Semantics of Empirical Unverifiability
JF  - Organon F
Y1  - 2015
A1  - Sedlár, Igor
AB  - Pavel Cmorej has argued that the existence of unverifiable and unfalsifiable empirical propositions follows from certain plausible assumptions concerning the notions of possibility and verification. Cmorej proves, in the context of a bi-modal alethic-epistemic axiom system AM4, that (1) p and it is not verified that p is unverifiable; (2) p or it is falsified that p is unfalsifiable; (3) every unverifiable p is logically equivalent to p and it is not verifiable that p; (4) every unverifiable p entails that p is unverifiable. This article elaborates on Cmorej's results in three ways. Firstly, we formulate a version of neighbourhood semantics for AM4 and prove completeness. This allows us to replace Cmorej's axiomatic derivations with simple model-theoretic arguments. Secondly, we link Cmorej's results to two well-known paradoxes, namely Moore's Paradox and the Knowability Paradox. Thirdly, we generalise Cmorej's results, show them to be independent of each other and argue that results (3) and (4) are independent of any assumptions concerning the notion of verification.
IS  - 3
VL  - 22
SP  - 358-377
UR  - http://www.klemens.sav.sk/fiusav/doc/organon/2015/3/358-377.pdf
U2  - Articles
U3  - 358377
TI  - The Semantics of Empirical Unverifiability
ER  -
1. Syllabus
Course Info
Title: MAT 526: Topics in Combinatorics
Semester: Spring 2021
Credits: 3
Section: 1
Time: 9:10-10:00AM MWF
Location: AMB 207
Prerequisites
MAT 226, MAT 316, and MAT 411 with grades of C or better.
Catalog Description
Topics in enumerative, algebraic, and geometric combinatorics, chosen at instructor’s discretion; may include advanced counting techniques, graph theory, combinatorial designs, matroids, and
error-correcting codes.
Textbook and Course Materials
We will be using the recently published textbook Eulerian Numbers by T. Petersen (DePaul University). All other necessary material (including homework) will be made available via handouts and
postings on the course webpage. You should seek clarification about the material whenever necessary by asking questions in class, working with other students, stopping by office hours, or emailing me. Here's one of my favorite quotes about reading mathematics. You can find the current errata for the book here.
Don’t just read it; fight it! Ask your own questions, look for your own examples, discover your own proofs. Is the hypothesis necessary? Is the converse true? What happens in the classical
special case? What about the degenerate cases? Where does the proof use the hypothesis?
Outline of Course
The tentative plan is to cover Chapters 1-6 and 11 of Eulerian Numbers, but we may cover more or less depending on time and interests. Here are the proposed topics:
• Eulerian numbers
□ Binomial coefficients
□ Generating functions
□ Classical Eulerian numbers
□ Eulerian polynomials
□ Two important identities
□ Exponential generating function
• Narayana numbers
□ Catalan numbers
□ Pattern-avoiding permutations
□ Narayana numbers
□ Dyck paths
□ Planar binary trees
□ Noncrossing partitions
• Partially ordered sets
□ Basic definitions and terminology
□ Labeled posets and P-partitions
□ The shard intersection order
□ The lattice of noncrossing partitions
□ Absolute order and Noncrossing partitions
• Weak order, hyperplane arrangements, and the Tamari lattice
□ Inversions
□ The weak order
□ The braid arrangement
□ Euclidean hyperplane arrangements
□ Products of faces and the weak order on chambers
□ Set compositions
□ The Tamari lattice
□ Rooted planar trees and faces of the associahedron
• Refined enumeration
□ The idea of a $q$-analogue
□ Lattice paths by area
□ Lattice paths by major index
□ Euler-Mahonian distributions
□ Descents and major index
□ $q$-Catalan numbers
□ $q$-Narayana numbers
□ Dyck paths by area
An ounce of practice is worth more than tons of preaching.
Rights of the Learner
As a student in this class, you have the right:
1. to be confused,
2. to make a mistake and to revise your thinking,
3. to speak, listen, and be heard, and
4. to enjoy doing mathematics.
You may encounter many defeats, but you must not be defeated.
Commitment to the Learning Community
In our classroom, diversity and individual differences are respected, appreciated, and recognized as a source of strength. Students in this class are encouraged and expected to speak up and
participate during class and to carefully and respectfully listen to each other. Every member of this class must show respect for every other member of this class. Any attitudes or actions that are
destructive to the sense of community that we strive to create are not welcome and will not be tolerated. In summary: Be good to each other. I would appreciate private responses to the following
question: Are there aspects of your identity that you would like me to attend to when forming groups, and if so, how?
Students are also expected to minimize distracting behaviors. In particular, every attempt should be made to arrive to class on time. If you must arrive late or leave early, please do not disrupt
class. Please turn off the ringer on your cell phone. I do not have a strict policy on the use of laptops, tablets, and cell phones. You are expected to be paying attention and engaging in class
discussions. If your cell phone, etc. is interfering with your ability (or that of another student) to do this, then put it away, or I will ask you to put it away.
Don’t fear failure. Not failure, but low aim, is the crime. In great attempts it is glorious even to fail.
Rules of the Game
Reviewing material from previous courses and looking up definitions and theorems you may have forgotten is fair game. However, when it comes to completing assignments for this course, you should not
look to resources outside the context of this course for help. That is, you should not be consulting the web, other texts, other faculty, or students outside of our course in an attempt to find
solutions to the problems you are assigned. This includes Chegg and Course Hero. On the other hand, you may use each other, the textbook, me, and your own intuition. If you feel you need additional
resources, please come talk to me and we will come up with an appropriate plan of action. Please read NAU’s Academic Integrity Policy.
You will become clever through your mistakes.
You are allowed and encouraged to work together on homework. However, each student is expected to turn in their own work. Prior to the start of class, you will need to capture your handwritten work
digitally and then upload a PDF to BbLearn. There are many free smartphone apps for doing this. I use TurboScan on my iPhone. Submitting your work prior to class allows me to see what you
accomplished outside of class. In general, late homework will not be accepted. However, you are allowed to turn in up to two late homework assignments with no questions asked. Unless you have made
arrangements in advance with me, homework turned in after class will be considered late. When doing your homework, I encourage you to consult the Elements of Style for Proofs. On each homework
assignment, please write (i) your name, (ii) name of course, and (iii) Homework number. You can find the list of assignments on the homework page. I reserve the right to modify the homework
assignments as I see necessary.
Generally, the written homework assignments will be due on Mondays, but I will always tell you when a given homework assignment is due–so there should never be any confusion. Your homework will
always be graded for completion and some subset of the problems will be graded for correctness. Problems that are graded for completeness will be worth 1 point. Problems that are graded for
correctness will either be worth 2 points or 4 points depending on the level of difficulty. Generally, computational problems will be worth 2 points while problems requiring a formal proof will be
worth 4 points. Each 4-point problem is subject to the following rubric:
Grade Criteria
4 This is correct and well-written mathematics!
3 This is a good piece of work, yet there are some mathematical errors or some writing errors that need addressing.
2 There is some good intuition here, but there is at least one serious flaw.
1 I don't understand this, but I see that you have worked on it; come see me!
0 I believe that you have not worked on this problem enough or you didn't submit any work.
To compute your score on a given homework assignment, I will divide your total points by the total possible points to obtain a percent score. Each homework assignment has the same weight. Your
overall homework grade will be worth 35% of your final grade.
I write one page of masterpiece to ninety-one pages of shit.
There will be two midterm exams and a cumulative final exam. Exam 1 will be a written exam consisting of an in-class portion, and possibly a take-home portion. Exam 1 is tentatively scheduled for
Friday, February 26 (week 7) and is worth 25% of your overall grade in the course. Exam 2 will be a 30-minute oral exam taken individually with me (via Zoom or in my office, depending on how the
semester proceeds) sometime during the last two weeks of classes. Exam 2 will be worth 15% of your overall grade. The final exam will be on Wednesday, April 28 at 7:30-9:30AM and is worth 25% of your
overall grade. The final exam may or may not have a take-home portion. If either of Exam 1 or the Final Exam have a take-home portion, you will have a few days to complete it. Make-up exams will only
be given under extreme circumstances, as judged by me. In general, it will be best to communicate conflicts ahead of time.
The impediment to action advances action. What stands in the way becomes the way.
Attendance and Participation
Regular attendance is expected and is vital to success in this course, but you will not explicitly be graded on attendance. Students can find more information about NAU’s attendance policy on the
Academic Policies page. You are also expected to respectfully participate and contribute to class discussions. This includes asking relevant and meaningful questions to both the instructor and your
peers in class and on our Discord server.
I must not fear.
Fear is the mind-killer.
Fear is the little-death that brings total obliteration.
I will face my fear.
I will permit it to pass over me and through me.
And when it has gone past I will turn the inner eye to see its path.
Where the fear has gone there will be nothing.
Only I will remain.
The only thing I will award extra credit for is finding typos on course materials (e.g., textbook, exams, syllabus, webpage). This includes broken links on the webpage. However, it does not include
the placement of commas and such. If you find a typo, I will add one percentage point to your next exam. You can earn at most two percentage points per exam and at most five percentage points over
the course of the semester. They’re is a typo right here.
Basis for Evaluation
In summary, your final grade will be determined by your scores in the following categories.
Category Weight Notes
Homework 35% See above for requirements
Exam 1 25% In-class portion on Friday, February 26, possible take-home portion
Exam 2 15% Individual oral exam taken during last 2 weeks of semester
Final Exam 25% Wednesday, April 28 at 7:30-9:30AM
It is not the critic who counts; not the man who points out how the strong man stumbles, or where the doer of deeds could have done them better. The credit belongs to the man who is actually in
the arena, whose face is marred by dust and sweat and blood; who strives valiantly; who errs, who comes short again and again, because there is no effort without error and shortcoming; but who
does actually strive to do the deeds; who knows great enthusiasms, the great devotions; who spends himself in a worthy cause; who at the best knows in the end the triumph of high achievement, and
who at the worst, if he fails, at least fails while daring greatly, so that his place shall never be with those cold and timid souls who neither know victory nor defeat.
Department and University Policies
You are responsible for knowing and following the Department of Mathematics and Statistics Policies (PDF) and other University policies listed here (PDF). More policies can be found in other
university documents, especially the NAU Student Handbook (see appendices).
As per Department Policy, cell phones, MP3 players and portable electronic communication devices, including but not limited to smart phones, cameras and recording devices, must be turned off and
inaccessible during in-class tests. Any violation of this policy will be treated as academic dishonesty.
Important Dates
Here are some important dates:
• January 18: Martin Luther King Day (no classes)
• January 20: Last day to drop a course without a “W”
• March 14: Last day to drop a course without a petition
• April 28: Final Exam (7:30-9:30AM)
Getting Help
There are many resources available to get help. First, you are allowed and encouraged to work together on homework. However, each student is expected to turn in their own work. You are strongly
encouraged to ask questions in our Discord discussion group, as I (and hopefully other members of the class) will post comments there for all to benefit from. You are also encouraged to stop by
during my office hours and you can always email me. I am always happy to help you. If my office hours don’t work for you, then we can probably find another time to meet. It is your responsibility to
be aware of how well you understand the material. Don’t wait until it is too late if you need help. Ask questions!
Tell me and I forget, teach me and I may remember, involve me and I learn.
Changes to the Syllabus
Any changes to this syllabus made during the term will be properly communicated to the class.
If you want to sharpen a sword, you have to remove a little metal.
The "Rights of the Learner" were adapted from a similar list written by Crystal Kalinec-Craig. The first paragraph of "Commitment to the Learning Community" is a modified version of a statement that Spencer Bagley has in his syllabi. Lastly, I've borrowed a few phrases here and there from Bret Benesh.
Lorentz Transformation
Lorentz transformations, within the theory of special relativity, are a set of relationships that account for how the measurements of a physical quantity obtained by two different observers are related. These relationships established the mathematical basis for Einstein's theory of special relativity, since the Lorentz transformations specify the type of space-time geometry required by Einstein's […]
About THEP division
In 1990, on the basis of several theoretical laboratories and groups at the Skobeltsyn Institute of Nuclear Physics of the Moscow State University (SINP MSU), the Department of Theoretical High
Energy Physics (DTHEP) was established. The department was organized by the decision of the SINP MSU Council in order to unite the efforts of theorists to solve the problems of modern physics of
elementary particles and to carry out work within the framework of the USSR state science and technology program "High Energy Physics". The center of "crystallization" for the creation of DTHEP
was the Laboratory for Symbolic Computing in High Energy Physics (LSCHEP), organized on the initiative of the rector of Moscow State University, Academician A.A. Logunov in 1983 and brought together
a number of young scientists – graduates of the MSU Faculty of Physics.
At present, DTHEP includes three laboratories: LSCHEP (head of Laboratory, candidate of physical and mathematical sciences A.P. Kryukov), Laboratory of the theory of fundamental interactions (LTFI),
head of Laboratory, doctor of physical and mathematical sciences, Professor E.E.Boos) and the Laboratory of Field Theory (LFT), head of the Laboratory, doctor of physical and mathematical sciences,
Professor V.E.Troitsky).
The main topics of the Department are related to elementary particle physics and high-energy physics – one of the most rapidly developing areas of research in the world of physics. The purpose of
these studies is to gain knowledge about the most fundamental properties of matter at distances of the order of 10⁻¹⁶–10⁻¹⁷ cm and less.
High energy physics and quantum field theory, in particular, face a number of unsolved problems today. The Standard Model (SM) describes the existing experimental data quite successfully, in some cases at the 0.1% accuracy level. A triumph of the SM was the discovery of the Higgs boson at the Large Hadron Collider (LHC) at CERN. However, the SM has a number of internal problems, such as the hierarchy problem. It does not answer many questions, such as the number and structure of the generations of quarks and leptons, has many free parameters, etc. The nature of the detected neutrino
oscillations cannot be explained within the framework of the Standard Model, the value of the vacuum energy density calculated in the Standard Model is about 120 orders of magnitude higher than the
value of the cosmological constant, the problem of violation of CP invariance and the asymmetry of matter-antimatter in the Universe remains a mystery. The theory of gravity stands apart from the SM,
although the problem of creating a quantum theory that includes gravity is becoming more acute, especially in the light of recent astrophysical observations on the percentage of ordinary matter, dark
matter and dark energy in the accelerating expanding Universe.
Comprehension of experimental data, prediction of new results and directions of research requires the creation of adequate theoretical approaches to the description of interactions of elementary
particles. The need arises for the development of new methods of quantum field theory, which is the foundation of the theory of physics of the microworld, as well as for the construction of various
models of the interaction of elementary particles, including phenomenological ones. Finally, it is required to create highly efficient methods for calculating (including computerized) characteristics
of the interaction of elementary particles: scattering cross sections, structure functions, spectra of bound states, spin properties, and others.
One of the most striking achievements of the department's employees in this direction was the creation of the CompHEP software package, which has received worldwide fame and is designed to automate
the calculations of the processes of collisions of elementary particles and their decays within the framework of modern theories of gauge fields. It is freely available on the website <http://
theory.sinp.msu.ru/comphep> and allows physicists (even those with little computer experience) to calculate cross sections and construct various distributions for collision processes of elementary
particles within the Standard Model and its extensions.
Since 2001, on the basis of the CompHEP computer program, the department has been developing the CalcHEP software package, which, as a result of many years and fruitful cooperation with the LAPP-TH
laboratory (France), has become the basis of the well-known micrOMEGAs computer program for calculating the characteristics of dark matter in various extensions of the Standard Model, including
supersymmetric theories.
Employees of DTHEP made a significant contribution and were pioneers in Russia in the development of the GRID computer system, which is based on the idea of regional centers for storage, processing
and analysis of experimental data distributed in different countries. GRID is one of the most important components of the LHC project, which provided the processing and analysis of experimental data,
as a result of which, in particular, the Higgs boson was discovered.
Theoretical work carried out at DTHEP has always been in the mainstream of world research, at their forefront. For many years, the department employees have been conducting joint research with such
leading research centers in the field of high energy physics as CERN (Switzerland), DESY and the Max Planck Institute (Germany), KEK (Japan), FNAL (USA), LAPP (France). DTHEP staff fruitfully
collaborated with many leading universities in the world, for example, with universities in London, Helsinki, Tokyo, Hamburg, Karlsruhe, Lisbon, Leipzig, Dublin, Seoul, Chicago and others. Among
Russian research centers, contacts with IHEP (Protvino), JINR (Dubna), INR RAS, Novosibirsk, St. Petersburg, Samara and Southern Federal Universities are especially fruitful.
The DTHEP team regularly wins grants from Russian and foreign foundations, including grants from the Russian Foundation for Basic Research, the Russian Science Foundation, grants from the program
“Universities of Russia”, INTAS, CERN-INTAS, DFG, FTP contracts, and others. Over the past 15 years, a joint team of employees from DTHEP and DEHEP has become the winner of the competition for a
grant from the President of the Russian Federation for state support of leading scientific schools in Russia.
Much attention is paid to the training of scientific and pedagogical personnel in the specialty of physics of high energies and elementary particles. For many years, DTHEP employees have been giving
lecture courses at the Physics Faculty of Moscow State University, at other universities in the country and abroad, have prepared and published textbooks recommended for many institutes and
universities. In recent years, a number of special courses have been read for senior students, for example, on the theory of dynamic equations in quantum field theory, on renormalization theory in
local quantum field theory, on elementary particle physics and the Standard Model, on group theory, and others. Every year, the staff of the department carry out scientific supervision of an average
of 5 students and 3 graduate students of the Faculty of Physics of Moscow State University. Its chair of Physics of the Atomic Nucleus and the Quantum Theory of Collisions is, in fact, the base
educational chair of DTHEP.
Since 1985, the International School-Seminar for Young Scientists on Quantum Field Theory and High Energy Physics has been held annually by the efforts of the DTHEP staff. Since 1991, this
school-seminar has been transformed into an international workshop (QFTHEP), held in different regions of the country in conjunction with other universities and received international recognition.
More than 100 participants from Russia and other republics of the former USSR, including up to 30 foreign scientists, took part in these meetings. The QFTHEP workshops demonstrate the high level and
authority of the scientists of the Skobeltsyn Institute of Nuclear Physics of the Moscow State University, conducting research in experimental and theoretical physics of elementary particles and
related fields, as well as the effectiveness of international cooperation in high energy physics.
- V.Savrin, E.Boos
thep-en.txt · Last modified: 04/04/2021 08:58 by admin
|
{"url":"https://theory.sinp.msu.ru/doku.php/thep-en","timestamp":"2024-11-04T10:57:30Z","content_type":"text/html","content_length":"30529","record_id":"<urn:uuid:f02bff7b-9c41-48c9-a55e-493826b4e038>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00359.warc.gz"}
|
A Related Rates Problem of a Triangle with Two Sides of Constant Length While the Included Angle Increases at a Given Rate
Question Video: A Related Rates Problem of a Triangle with Two Sides of Constant Length While the Included Angle Increases at a Given Rate Mathematics • Third Year of Secondary School
A triangle with sides a, b, and contained angle θ has area A = 1/2 ab sin θ. Suppose that a = 4, b = 5, and the angle θ is increasing at 0.6 rad/s. How fast is the area changing when θ = π/3?
Video Transcript
A triangle with sides a, b, and contained angle θ has area A equals a half ab sin θ. Suppose that a equals four, b equals five, and the angle θ is increasing at 0.6 radians per second. How fast is the area changing when θ is π by three?

Okay, so let's try to picture what's happening in this scenario. This scenario involves a triangle with sides of length a and b and contained angle θ. So that means that the angle between the two sides a and b has measure θ. We're told that the area of this triangle is a half ab sin θ.

And you might recognize this as the formula for the area of a triangle given the lengths of two sides and the measure of the included angle. We are additionally told that the values of the side lengths a and b are four and five, respectively. And so we can update our diagram and also substitute these values into the general formula for the area of the triangle. So the area of our triangle is a half times four times five times sin θ, which is 10 sin θ.

Continuing to read the question, we see that θ is increasing at 0.6 radians per second. What does that mean? Well, we can draw another picture of our triangle after some time has passed and the angle θ has increased. And this helps us to understand what it means for the angle θ to increase as the side lengths stay the same. But we would like to find some mathematical way of stating that the angle θ is increasing at 0.6 radians per second.

When we see words like increasing or decreasing and units of something over seconds, then we think about rates of change. The variable θ is changing at 0.6 radians per second. The derivative of the measure of the angle θ with respect to time t is 0.6.

The last sentence of the question tells us what we're looking for. We're looking for how fast the area is changing when θ is π by three. The natural way to express how the area is changing is with a derivative as before. We're looking for the rate of change of area with respect to time, dA by dt. In particular, we're looking for the instantaneous rate of change of the area with respect to time when θ is π by three.

Okay, so now that we've read through the question and translated all the statements into mathematical notation, let's recap. We were given in the question that the rate of change of the measure of one of the angles of the triangle with respect to time, dθ by dt, is 0.6. We deduced from the information in the question that the area A of the triangle is 10 sin θ. And we are required to find the rate of change of the area of the triangle with respect to time at θ equals π by three.

We're looking for dA by dt and we have the value of dθ by dt. So if we had a relation between dA by dt and dθ by dt, then we'd be pretty much done. Okay, but we don't have a relation between dA by dt and dθ by dt, which would allow us to find dA by dt in terms of dθ by dt. But we do have a relation between A and θ. Now, how does this help us find a relation between dA by dt and dθ by dt? Well, we can differentiate this relation implicitly with respect to t. So let's do that.

On the left-hand side, we get dA by dt, which is after all what we're looking for. So this is looking promising. On the right-hand side, we've got d by dt of something involving θ. So we're going to want to use the chain rule. d by dt can be replaced by dθ by dt times d by dθ. So let's make this change.

We can differentiate 10 sin θ with respect to θ quite easily. The derivative of sin is cos, and so this is 10 cos θ. How about dθ by dt? Well, if you remember, we were given its value in the question: dθ by dt is 0.6. Multiplying these together, we get that dA by dt is six cos θ. We're looking for the value of dA by dt when θ is π by three. And so we have to substitute π by three for θ. We make this substitution. And as we know that cos π by three is 0.5, π by three being a special angle, we see that dA by dt is six times 0.5, which is three.

Let's interpret this result in context then. In a triangle with sides of lengths four and five, where the included angle θ is increasing at a rate of 0.6 radians per second, then at the instant when θ is π by three, the instantaneous rate of change of the area with respect to time is three square units per second.

This question is a related rates question and we solved it using the standard methods for solving such questions. We used derivatives to express the scenario described in the question mathematically. For example, we saw that we were required to find the value of dA by dt, and we were given the value of dθ by dt. We then want to relate the rate dA by dt to the rate dθ by dt, which is what makes it a related rates problem. And we did this by first finding a relation between A and θ, before differentiating implicitly to turn this into a relation between dA by dt and dθ by dt. Having done this, we just substituted the values of dθ by dt and θ to find our answer.
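The computation in the transcript can be reproduced in a few lines of plain Python, using the chain-rule relation dA/dt = (1/2)ab cos θ · dθ/dt derived above:

```python
import math

# Triangle with sides a = 4, b = 5 and included angle theta:
# A = (1/2) * a * b * sin(theta) = 10 * sin(theta) here.
a, b = 4, 5
dtheta_dt = 0.6          # given rate of change of the angle, rad/s
theta = math.pi / 3      # the instant of interest

# Chain rule: dA/dt = (1/2) * a * b * cos(theta) * dtheta/dt
dA_dt = 0.5 * a * b * math.cos(theta) * dtheta_dt
print(dA_dt)  # 3.0 square units per second (cos(pi/3) = 0.5 exactly)
```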
|
{"url":"https://www.nagwa.com/en/videos/240198171749/","timestamp":"2024-11-03T01:21:55Z","content_type":"text/html","content_length":"260687","record_id":"<urn:uuid:72bdf1c2-6a4a-4025-8103-0f17d0065450>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00298.warc.gz"}
|
Time limit on editing messages
I replied to a post on this forum, and then wanted to change the wording slightly. I then got a messages saying that I had just 5 seconds to make an edit!
Maybe there is a reason for preventing edits after a large time lapse, but surely 5 seconds is absurdly short!
Now it is 180 seconds - which is still a bit crazy if you want to rephrase something. Would it help to discuss what type of abuse this short time scale is meant to solve, so people could agree a
reasonable time limit, or set it to infinity if that is not going to open the system to some sort of attack.
I do realise the pressure that you are all under, but a really short editing period makes for more scrappy, less thought out contributions - which is a shame.
|
{"url":"https://community.notepad-plus-plus.org/topic/7176/time-limit-on-editing-messages","timestamp":"2024-11-01T19:12:35Z","content_type":"text/html","content_length":"44247","record_id":"<urn:uuid:fead1158-bfa9-4a89-8635-81a6b88e7b80>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00278.warc.gz"}
|
Find the general solution of y′′ - y′ = x2 - Stumbling Robot
Find the general solution of y′′ – y′ = x^2
Find the general solution of the second-order differential equation
If the solution is not valid everywhere, describe the interval on which it is valid.
The general solution of the homogeneous equation y'' - y' = 0 is given by Theorem 8.7: y_h = c_1 + c_2 e^x.
To find a particular solution of y'' - y' = x^2, we try y_1 = Ax^3 + Bx^2 + Cx (a constant term would be absorbed into y_h). Then y_1'' - y_1' = -3Ax^2 + (6A - 2B)x + (2B - C).
Setting the coefficients of like powers of x equal to those of x^2, we obtain A = -1/3, B = -1, C = -2; hence y_1 = -(1/3)x^3 - x^2 - 2x.
By Theorem 8.8 (page 330 of Apostol) the general solution of the given differential equation is then y = c_1 + c_2 e^x - (1/3)x^3 - x^2 - 2x, valid for all x.
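As a quick numerical sanity check (not part of Apostol's argument), one can verify with finite differences that y = c_1 + c_2 e^x - x^3/3 - x^2 - 2x satisfies y'' - y' = x^2 for arbitrary constants:

```python
import math

def y(x, c1=2.0, c2=-1.5):
    # The general solution y = c1 + c2*e^x - x^3/3 - x^2 - 2x,
    # with the arbitrary constants fixed just for this test.
    return c1 + c2 * math.exp(x) - x**3 / 3 - x**2 - 2 * x

# Central differences for y' and y'' at a sample point
x0, h = 0.7, 1e-4
d1 = (y(x0 + h) - y(x0 - h)) / (2 * h)
d2 = (y(x0 + h) - 2 * y(x0) + y(x0 - h)) / h**2

print(abs((d2 - d1) - x0**2) < 1e-5)  # True: the residual y'' - y' - x^2 vanishes
```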
Point out an error, ask a question, offer an alternative solution (to use Latex type [latexpage] at the top of your comment):
|
{"url":"https://www.stumblingrobot.com/2016/01/31/find-the-general-solution-of-y-y-x2/","timestamp":"2024-11-10T09:19:29Z","content_type":"text/html","content_length":"60380","record_id":"<urn:uuid:dbfe0821-4896-4d68-8c5c-1c8bc2529d55>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00807.warc.gz"}
|
211 research outputs found
We introduce the extended Freudenthal-Rosenfeld-Tits magic square based on six algebras: the reals $\mathbb{R}$, complexes $\mathbb{C}$, ternions $\mathbb{T}$, quaternions $\mathbb{H}$, sextonions $\
mathbb{S}$ and octonions $\mathbb{O}$. The ternionic and sextonionic rows/columns of the magic square yield non-reductive Lie algebras, including $\mathfrak{e}_{7\scriptscriptstyle{\frac{1}{2}}}$. It
is demonstrated that the algebras of the extended magic square appear quite naturally as the symmetries of supergravity Lagrangians. The sextonionic row (for appropriate choices of real forms) gives
the non-compact global symmetries of the Lagrangian for the $D=3$ maximal $\mathcal{N}=16$, magic $\mathcal{N}=4$ and magic non-supersymmetric theories, obtained by dimensionally reducing the $D=4$
parent theories on a circle, with the graviphoton left undualised. In particular, the extremal intermediate non-reductive Lie algebra $\tilde{\mathfrak{e}}_{7(7)\scriptscriptstyle{\frac{1}{2}}}$
(which is not a subalgebra of $\mathfrak{e}_{8(8)}$) is the non-compact global symmetry algebra of $D=3$, $\mathcal{N}=16$ supergravity as obtained by dimensionally reducing $D=4$, $\mathcal{N}=8$
supergravity with $\mathfrak{e}_{7(7)}$ symmetry on a circle. The ternionic row (for appropriate choices of real forms) gives the non-compact global symmetries of the Lagrangian for the $D=4$ maximal
$\mathcal{N}=8$, magic $\mathcal{N}=2$ and magic non-supersymmetric theories obtained by dimensionally reducing the parent $D=5$ theories on a circle. In particular, the Kantor-Koecher-Tits
intermediate non-reductive Lie algebra $\mathfrak{e}_{6(6)\scriptscriptstyle{\frac{1}{4}}}$ is the non-compact global symmetry algebra of $D=4$, $\mathcal{N}=8$ supergravity as obtained by
dimensionally reducing $D=5$, $\mathcal{N}=8$ supergravity with $\mathfrak{e}_{6(6)}$ symmetry on a circle. Comment: 38 pages. Reference added and minor corrections made.
As the frontiers of physics steadily progress into the 21st century we should bear in mind that the conceptual edifice of 20th-century physics has at its foundations two mutually incompatible
theories: quantum mechanics and Einstein's general theory of relativity. While general relativity refuses to succumb to quantum rule, black holes are raising quandaries that strike at the very
heart of quantum theory. M-theory is a compelling candidate theory of quantum gravity. Living in eleven dimensions it encompasses and connects the five possible 10-dimensional superstring theories.
However, M-theory is fundamentally non-perturbative and consequently remains largely mysterious, offering up only disparate corners of its full structure. The physics of black holes has occupied
centre stage in uncovering its non-perturbative structure. The dawn of the 21st-century has also played witness to the birth of the information age and with it the world of quantum information
science. At its heart lies the phenomenon of quantum entanglement. Entanglement has applications in the emerging technologies of quantum computing and quantum cryptography, and has been used to
realize quantum teleportation experimentally. The longest standing open problem in quantum information is the proper characterisation of multipartite entanglement. It is of utmost importance from
both a foundational and a technological perspective. In 2006 the entropy formula for a particular 8-charge black hole appearing in M-theory was found to be given by the â hyperdeterminantâ , a
quantity introduced by the mathematician Cayley in 1845. Remarkably, the hyperdeterminant also measures the degree of tripartite entanglement shared by three qubits, the basic units of quantum
information. It turned out that the different possible types of three-qubit entanglement corresponded directly to the different possible subclasses of this particular black hole. This initial
observation provided a link relating various black holes and quantum information systems. Since then, we have been examining this two-way dictionary between black holes and qubits and have used our
knowledge of M-theory to discover new things about multipartite entanglement and quantum information theory and, vice-versa, to garner new insights into black holes and M-theory. There is now a
growing dictionary, which translates a variety of phenomena in one language to those in the other. Developing these fascinating relationships, exploiting them to better understand both M-theory and
quantum entanglement is the goal of this thesis. In particular, we adopt the elegant mathematics of octonions, Jordan algebras and the Freudenthal triple system as our guiding framework. In the
course of this investigation we will see how these fascinating algebraic structures can be used to quantify entanglement and define new black hole dualities
We complete the classification of half-supersymmetric branes in toroidally compactified IIA/IIB string theory in terms of representations of the T-duality group. As a by-product we derive a last
wrapping rule for the space-filling branes. We find examples of T-duality representations of branes in lower dimensions, suggested by supergravity, of which none of the component branes follow from
the reduction of any brane in ten-dimensional IIA/IIB string theory. We discuss the constraints on the charges of half-supersymmetric branes, determining the corresponding T-duality and U-duality
orbits. Comment: 34 pages, 3 figures.
Tensoring two on-shell super Yang-Mills multiplets in dimensions $D\leq 10$ yields an on-shell supergravity multiplet, possibly with additional matter multiplets. Associating a (direct sum of)
division algebra(s) $\mathbb{D}$ with each dimension $3\leq D\leq 10$ we obtain formulae for the algebras $\mathfrak{g}$ and $\mathfrak{h}$ of the U-duality group $G$ and its maximal compact subgroup
$H$, respectively, in terms of the internal global symmetry algebras of each super Yang-Mills theory. We extend our analysis to include supergravities coupled to an arbitrary number of matter
multiplets by allowing for non-supersymmetric multiplets in the tensor product. Comment: 25 pages, 2 figures, references added, minor typos corrected, further comments on sec. 2.4 included, updated to match version to appear in JHEP.
We determine explicit orbit representatives of reducible Jordan algebras and of their corresponding Freudenthal triple systems. This work has direct application to the classification of extremal
black hole solutions of N = 2, 4 locally supersymmetric theories of gravity coupled to an arbitrary number of Abelian vector multiplets in D = 4, 5 space-time dimensions. Comment: 18 pages. Updated to match published version.
|
{"url":"https://core.ac.uk/search/?q=authors%3A(Borsten)","timestamp":"2024-11-05T23:00:24Z","content_type":"text/html","content_length":"158517","record_id":"<urn:uuid:0c011dd3-bad1-434a-9a8c-a153b26a1bde>","cc-path":"CC-MAIN-2024-46/segments/1730477027895.64/warc/CC-MAIN-20241105212423-20241106002423-00819.warc.gz"}
|
A New Generalized Odd Gamma Uniform Distribution: Mathematical Properties, Application and Simulation
B. Hossieni, M. Afshari, M. Alizadeh, H. Karamikabir (Department of Statistics, Faculty of Intelligent Systems Engineering and Data Science, Persian Gulf University, Bushehr, Iran)

WSEAS Transactions on Computers, vol. 20, 2021. DOI: 10.37394/23205.2021.20.15

Abstract: In many applied areas there is a clear need for extended forms of the well-known distributions. The new distributions are more flexible to model real data that present a high degree of skewness and kurtosis, such that each one solves a particular part of the classical distribution problems. In this paper, a new two-parameter Generalized Odd Gamma Uniform distribution, called the GOGaU distribution, is introduced and the fitting capability of this model is investigated. Some structural properties of the new distribution are obtained. Different methods, including maximum likelihood estimators, Bayesian estimators (posterior mean and maximum a posteriori), least squares estimators, weighted least squares estimators, Cramér-von-Mises estimators, and Anderson-Darling and right-tailed Anderson-Darling estimators, are discussed to estimate the model parameters. To illustrate the applications, the importance and flexibility of the new model are demonstrated empirically by means of two real data sets. For the simulations, the Stan and JAGS software packages were utilized, in which we have built the GOGaU JAGS module.
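As a minimal illustration of the maximum-likelihood idea listed among the estimation methods above (using an ordinary exponential distribution as a stand-in, since the GOGaU density itself is not reproduced in this abstract):

```python
import random

random.seed(0)

# Sample from an exponential distribution with known rate. For this family the
# log-likelihood sum(log(rate) - rate * x_i) is maximised in closed form at
# rate_hat = 1 / sample_mean, so the MLE needs no numerical optimisation.
true_rate = 2.0
sample = [random.expovariate(true_rate) for _ in range(100_000)]
rate_hat = 1.0 / (sum(sample) / len(sample))
print(rate_hat)  # close to the true rate 2.0
```

For distributions without a closed-form MLE (such as the GOGaU), the same log-likelihood would instead be maximised numerically.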
|
{"url":"https://wseas.com/journals/computers/2021/10.37394_23205.2021.20.15.xml","timestamp":"2024-11-11T21:29:55Z","content_type":"application/xml","content_length":"10486","record_id":"<urn:uuid:3e02680f-9eb4-45f1-a94d-2e9a4aefd0ce>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00011.warc.gz"}
|
FCKeditor Toolbar Quick Reference
This WYSIWYG editor allows the user to build HTML-encoded pages without knowing HTML. The FCKeditor toolbar (Fig. 1) includes tools to insert, format and/or edit text, bulleted and numbered lists,
hypertext links, anchors, images, special symbols, and tables. For more detailed information about using FCKeditor, see the FCKeditor v2.x Users Guide.
Figure 1. The FCKeditor toolbar includes tools for simple HTML formatting, as well as for inserting and editing text, hyperlinks, images, and tables.
Bold, Italics, Underline, Strikeout – format the selected text.
– set the text alignment.
– create a bulleted or numbered list (close the list by hitting Enter on the keyboard twice); convert selected lines of text, separated by paragraph- or line breaks, to a bulleted or numbered list;
convert a selected list to lines separated by paragraph breaks.
– decrease or increase the indentation of selected content.
– undo/redo action.
– insert, edit, or remove hypertext links and anchors. Convert selected text to hypertext links or anchors.
– insert an image at the cursor location, or edit the selected image.
– change the color of the selected text or its background.
– superscript or subscript the selected text.
– format a block of text to identify quotations.
– display the document as raw HTML code.
– insert a horizontal rule on a new line.
– cut or copy selected content to the clipboard; paste the contents of the clipboard to the cursor location as plain text (i.e., without formatting). Note that when pasting from word processing
documents and other files, paragraph breaks may be converted into line breaks; these will have to be replaced with paragraph breaks to allow other formatting features to work properly.
– show the block elements boundaries in the text.
– remove formatting from the selected text.
– insert symbols or special character at the cursor location.
– apply standard formatting styles (Normal text, Heading 2, Heading 3) to the selected content from a drop down menu.
– insert or edit a table. Open the Table Properties window and specify numbers of rows and columns, table width and alignment, border style, cell spacing and padding, and table caption.
– find or replace a specified word or phrase in the document.
– select all of the contents of the editor window.
– maximize the editor size inside the browser window.
– check spelling of the contents of the editor window.
– open the Cell Properties window to edit properties of the selected cell (cell width and height; text wrapping and alignment; colors; column- and row spanning) in a table.
– insert a row, column, or cell after the cursor location in a table.
– delete the selected row, column, or cell in a table.
– merge the selected cells in a table into one.
– split the selected cell in a table into two columns.
|
{"url":"https://pbgworks.org/node/812","timestamp":"2024-11-15T01:15:39Z","content_type":"application/xhtml+xml","content_length":"20836","record_id":"<urn:uuid:aa9ac665-6dca-4b9f-83f9-a8f61831b28f>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00672.warc.gz"}
|
Systems Of Nonlinear Equations Worksheet Doc - Equations Worksheets
Systems Of Nonlinear Equations Worksheet Doc
Systems Of Nonlinear Equations Worksheet Doc – Expressions and Equations Worksheets are designed to help children learn faster and more efficiently. These worksheets include interactive exercises as
well as problems that are based on sequence of operations. These worksheets are designed to make it easier for children to grasp complex concepts and simple concepts quickly. It is possible to
download these free resources in PDF format to aid your child’s learning and practice maths equations. These are great for children who are in the 5th-8th grades.
Download Free Systems Of Nonlinear Equations Worksheet Doc
The worksheets listed here are designed for students from the 5th to 8th grades. These two-step word problems are constructed using fractions and decimals. Each worksheet contains ten problems. These worksheets can be found on the internet as well as in print, and they are a fantastic opportunity to practice rearranging equations. They assist students with understanding equality and inverse operations.
These worksheets are designed for fifth and eighth graders. They are great for those who struggle to calculate percentages. There are three types of problems. You can decide to tackle one-step
problems that include decimal or whole numbers or employ word-based techniques to solve problems involving decimals and fractions. Each page will have ten equations. The Equations Worksheets can be
used by students from 5th through 8th grades.
These worksheets are a great way to practice fraction calculations and other concepts in algebra. Many of these worksheets allow users to select from three different kinds of problems. It is possible
to select word-based or numerical problem. It is vital to pick the correct type of problem since each one will be different. Ten problems are on each page, so they are excellent resources for
students from 5th to 8th grades.
These worksheets help students understand the connection between numbers and variables. They provide students with practice in solving polynomial equations or solving equations, as well as getting
familiar with how to use these in their daily lives. These worksheets are a great opportunity to gain knowledge about equations and formulas. They will teach you about the different kinds of
mathematical problems as well as the various types of symbols used to represent them.
These worksheets can be extremely useful for students in the first grades. These worksheets can help students develop the ability to graph and solve equations. These worksheets are excellent for
practicing with polynomial variable. These worksheets will help you factor and simplify them. You can get a superb set of equations, expressions and worksheets that are suitable for kids of any grade
level. Making the work yourself is the best way to learn equations.
There are numerous worksheets on quadratic equations. There are different levels of equations worksheets for each stage. These worksheets can be used to solve problems to the fourth level. When
you’ve completed a particular stage, you’ll begin to work on solving other types of equations. Then, you can continue to work on similar problems. You can, for example, solve the same problem as an elongated problem.
Gallery of Systems Of Nonlinear Equations Worksheet Doc
Systems Of Nonlinear Equations Worksheet Kuta
Systems Of Nonlinear Equations Worksheet Doc
|
{"url":"https://www.equationsworksheets.net/systems-of-nonlinear-equations-worksheet-doc/","timestamp":"2024-11-13T11:17:57Z","content_type":"text/html","content_length":"61303","record_id":"<urn:uuid:f057b178-1bac-42af-9f90-1884a824d352>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00833.warc.gz"}
|
Debugging 101
Welcome back for my second post about debugging! In this post, I'll give you an overview of the basic capabilities of the debugger in Visual Studio 2015, and add a few tips and tricks to take your
debugging skills to the next level. We will not delve into the more advanced topics yet though, like debugging multi-threaded applications for example: those topics are being reserved for later posts
(either by Maarten or myself) because they are too large to handle all at once.
Now, before you think to skip this post because you've been programming for some years, I do suggest to glance over this post: you might see something you didn't know, or didn't see the benefit of
that debugger feature. If you already knew everything I've written in this post, then you are awesome! And should you know a neat trick which I didn't cover, then don't hesitate to leave a comment :)
Let's start with the basics, shall we?
Stop! Hammer Time!
When you debug code to find out where exactly things are going wrong, you'll probably have an idea where to start searching in the code base:
• You just wrote some new classes/methods and things started behaving not quite as expected.
• A unit test fails suddenly.
• An exception has been thrown and you can see where it originates using the stack trace.
So, you have an idea where to start digging. The obvious thing to do then, is to tell the debugger to stop running the code at that point in the source. This is a breakpoint, which you can set using
the Debug > Toggle Breakpoint menu item or its default shortcut key F9. Visual Studio, and many other IDE's, typically indicates a breakpoint with a red circle in front of the line of code,
optionally also changing the colorization of the code:
When we run this code, Visual Studio will stop executing the application right before executing the line of code were we added the breakpoint. You will also see a yellow arrow pointing at that line.
Think of the yellow arrow as the instruction pointer (or program counter) for your code:
At this moment, you can start inspecting your application state using the different windows provided by Visual Studio. These are the first few views which can be useful when debugging, and all of
these can be found in the menu Debug > Windows while the debugger is active:
• Watch: you can have four different watch views. These views allow you to inspect variables, fields, properties, collections, methods, and so on. You'll see the value and the data type listed next
to the watch entry.
• Autos: this is a special watch window, showing the result of the current and previous lines of code if it can.
• Locals: this is also a special watch window, displaying all variables and method parameters in the current method or lambda expression. However, it doesn't show properties or fields who are being
referenced in that method or lambda expression.
• Call Stack: this window shows the execution path up until now per executed method. Because not all of the code is managed code (written using the .NET CLR), or when you are missing debug symbols,
you'll see some entries appearing as [External Code]: this can either be a 3rd party library you're using but Visual Studio couldn't find the (correct) debug symbols or PDB files for that
library, or it could be unmanaged code - things Windows needs to do to start your .NET application, creating an AppDomain for instance. We'll cover the Call Stack window later in this post more
in depth.
In this code example, you've obviously already seen what will go wrong: as awesome a number 42 might be, dividing it by zero just doesn't work in this universe. Before evaluating the next line of
code however, you can inspect what would happen by adding value / 0 in the Watch 1 window, either by selecting the statement, right-clicking it and select the Add Watch menu item, or by pasting/
typing the statement in a row in the Watch window:
You can now stop the debugger (Debug > Stop Debugging, SHIFT+F5 or the Stop Debugging button in the toolbar) to fix the bad code, or you can even change it while the debugger is active:
Stop! When it's hammer time!
This is, however, the absolute basics of debugging. We can do better than this! One improvement lies in controlling when the debugger actually decides to stop at a breakpoint. You can disable any breakpoint, without removing it, by hovering over it and pressing the disable button. Other options are to right-click and select Disable Breakpoint or to press the default shortcut Ctrl+F9. This is an easy way of handling a method that the application passes through several times, but where you don't want to stop continuously. But it's not ideal, and we have better tools in our toolbox.
A second tool, is to enable breakpoint conditions. Breakpoint conditions allows us to only break when certain things are true:
• When value equals 42, this is a conditional expression.
• When we hit this breakpoint for the n-th time, this is a hit count expression.
• When we run this code on a certain machine, in a certain process or thread: a filter.
To get to these options, you can hover over the breakpoint and click the cog wheel icon (labeled Settings...) or right-click and select the Conditions... menu item.
Visual Studio will now only stop running our application when the value of divisor equals 0. Neat!
As you can see in the image above, we can also add Actions to a breakpoint: for example, to log information to the Output window every time we pass the breakpoint. Ironically, this is how I used to
debug when I started out coding: putting PRINT statements all over the place, displaying the values of variables to know what was going on.
Stop! HammerNotFoundException!
Another nifty trick to know, especially when having to fix a hot issue in an unknown code base, is by setting breakpoints on (un)handled exceptions. In the example I've been using, I can declare -
inside Visual Studio - that I want the debugger to break whenever the DivideByZeroException is being thrown for example, even if this exceptions has been caught in the code. This enables you to stop
executing the application when an exception arises, instead of when it might get rethrown in a main thread. You can reach these awesome bits in the Exception Settings window, through the Debug >
Windows > Exception Settings menu item or its default shortcut CTRL+ALT+E:
By default, you can choose to break on all unlisted exceptions, to break on your own exception types or those used by 3rd party libraries. That possibility is immediately followed by an extensive
list of default exception types provided in the CLR, like the well-known System.NullReferenceException. You can also add specific missing exception types, by selecting the line Common Language
Runtime Exceptions and hitting the Add button (indicated with the plus sign). When you right-click on an exception, you can choose an additional option: Continue when unhandled in user code. When you
activate this, the debugger will not stop on that exception in your codebase if the exception is handled by the consumer of your code.
The Call Stack
I did promise a bit about the Call Stack window, didn't I? Well, here goes! Whenever the debugger stops running your application, the Call Stack window will show you the list of actions/methods/etc.
that led up to the current point of execution:
Now, the [External Code] section is where things could get interesting. Basically, this is anything that is not in your code base, like native Windows API's, .NET CLR stuff, 3rd party libraries: you
name it. You can however opt in to see what's behind the covers, by right clicking in the Call Stack window and enabling the option called Show External Code:
When you double click on an entry shown in black in the Call Stack (or right click and choose Switch to Frame), you can jump back and forth through the Call Stack, inspecting the flow of your
application up to where it was interrupted by - for example - a breakpoint, including the state of each frame. This means that you can inspect all variables and fields for each step in the Call
Stack, which can give you a lot more insight when trying to pinpoint where things might've gone wrong:
Now, you can also try to do this for code outside your code base, displayed as gray entries. Try clicking on a gray line in the Call Stack, and you'll be presented a window looking somewhat like
this, depending on your settings:
Basically, Visual Studio is telling you that it can't display any source code for the frame you've selected to inspect, but it can try to load debug symbols (or PDB files) which will help in showing
some degree of information. As a last resort, you can try to click the View disassembly link, which will show you what's going on at the most basic of levels. You can, of course, add your own symbol
servers. This can be convenient when you have a custom package feed with its own debug symbol source, or when you want to build debug symbols on the fly using DotPeek for example. Check out these two
links to learn more about these possibilities, they will definitely help you when dealing with strange behavior in 3rd party libraries!
There are much more features hidden inside the Debug menu, but I'm going to save these for another post, where we will dive into more advanced concepts. Stay tuned!
|
{"url":"https://wesleycabus.be/debugging-101","timestamp":"2024-11-02T09:24:43Z","content_type":"text/html","content_length":"187370","record_id":"<urn:uuid:141baeba-4314-47fd-9df2-ef27da00bdb2>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00630.warc.gz"}
|
Coq devs & plugin devs
I'm interested in porting a project I'm working on from 8.17 to master, in part because I want to try out a fix for #17893 (primitive float fma) on it. Currently blocked by #17833, about the "Missing
notation term for variables" error regression. I don't understand @Gaëtan Gilbert 's suggestion well enough to implement the fix myself, but if it's not too complicated, I'd be interested in giving
it a shot with a bit more guidance. (Alternatively, I'd be quite happy if one of the other devs (@Gaëtan Gilbert ? @Hugo Herbelin ?) had a(n easy) fix for at least the regression part of the issue.)
I'll look at it next week
I'm on holidays this week so I'm not doing any serious work
Thank you!
Last updated: Oct 13 2024 at 01:02 UTC
|
{"url":"https://coq.gitlab.io/zulip-archive/stream/237656-Coq-devs-.26-plugin-devs/topic/Fixing.20bug.20.2317833.html","timestamp":"2024-11-06T11:54:00Z","content_type":"text/html","content_length":"3493","record_id":"<urn:uuid:c73c5efd-5329-4be0-b542-2176987e03ae>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00374.warc.gz"}
|
Petra on Programming: The Correlation Cycle Indicator
The previous article dealt with indicators based on correlation with a trend line. This time we’ll look into another correlation-based indicator by John Ehlers. The new Correlation Cycle indicator
(CCY) measures the price curve correlation with a sine wave. This works surprisingly well – not for generating trade signals, but for a different purpose.
Ehlers published the indicator together with TradeStation code in the recent S&C magazine. Since the C language supports function pointers, we can code it in a shorter and more elegant way:
var correlY(var Phase); // function pointer
var cosFunc(var Phase) { return cos(2*PI*Phase); }
var sinFunc(var Phase) { return -sin(2*PI*Phase); }
var correl(vars Data, int Length, function Func)
{
  correlY = Func;
  var Sx = 0, Sy = 0, Sxx = 0, Sxy = 0, Syy = 0;
  int count;
  for(count = 0; count < Length; count++) {
    var X = Data[count];
    var Y = correlY((var)count/Length);
    Sx += X; Sy += Y;
    Sxx += X*X; Sxy += X*Y; Syy += Y*Y;
  }
  if(Length*Sxx-Sx*Sx > 0 && Length*Syy-Sy*Sy > 0)
    return (Length*Sxy-Sx*Sy)/sqrt((Length*Sxx-Sx*Sx)*(Length*Syy-Sy*Sy));
  else return 0;
}
var CCY(vars Data, int Length) { return correl(Data,Length,cosFunc); }
var CCYROC(vars Data, int Length) { return correl(Data,Length,sinFunc); }
The correl function measures the correlation of the Data series with an arbitrary curve given by the Func function. This allows us to create all sorts of correlation indicators by just using a
different Func. For example, it reduces Ehlers’ Correlation Trend Indicator from the previous article to 2 lines:
var trendFunc(var Phase) { return -Phase; }
var CTI(vars Data,int Length) { return correl(Data,Length,trendFunc); }
The empty correlY function pointer in the code above serves as a template for Func, and is used for calling it inside the correl function. For the CTI it’s simply a rising slope (negative because
series are in reverse order), for the CCY it’s the standard cosine function.
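Since the core of correl is just a Pearson correlation between the data window and the sampled reference curve, it can be sanity-checked outside Zorro. Below is a minimal standalone Python sketch of the same computation (plain Python rather than lite-C; it ignores the reverse series order of Zorro, and the names mirror the script above only for readability):

```python
import math

def correl(data, length, func):
    # Pearson correlation between data[0..length-1] and func(count/length),
    # mirroring the sum-based formula in the lite-C version above
    sx = sy = sxx = sxy = syy = 0.0
    for count in range(length):
        x = data[count]
        y = func(count / length)
        sx += x; sy += y
        sxx += x * x; sxy += x * y; syy += y * y
    den = (length * sxx - sx * sx) * (length * syy - sy * sy)
    if den > 0:
        return (length * sxy - sx * sy) / math.sqrt(den)
    return 0.0

def ccy(data, length):
    return correl(data, length, lambda p: math.cos(2 * math.pi * p))

# A window that is itself one full cosine cycle correlates almost perfectly:
window = [math.cos(2 * math.pi * i / 20) for i in range(20)]
print(round(ccy(window, 20), 6))  # prints 1.0
```

A constant window returns 0, since its zero variance makes the denominator vanish - the same guard as in the lite-C code.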
At first let’s see how the CCY indicator behaves when applied to a sine wave. We’re using Zorro’s wave generator to produce a sine chirp with a rising cycle length from 15 up to 30 bars, which is 25%
below and 50% above the used CCY period of 20 bars. The code:
function run()
{
  MaxBars = 300;
  LookBack = 40;
  asset(""); // dummy asset
  ColorUp = ColorDn = 0; // don't plot a price curve
  vars Chirp = series(genSine(15,30));
}
And the result:
This confirms Ehlers’ stress test. A shorter period results in a phase lag, a longer period in a phase lead. We’re now going to apply the indicator to real-world price curves. This code displays the
CCY and its rate of change (CCYROC) in a SPY chart:
function run()
{
  BarPeriod = 1440;
  LookBack = 40;
  StartDate = 20190101;
  assetAdd("SPY","STOOQ:SPY.US"); // load price history from Stooq
  vars Prices = series(priceClose());
}
What’s the use of the Correlation Cycle indicator in a trading system? The chart might hint that its peaks or valleys could be used for trade signals, but you can save the time of testing it: I did
already. The CCY is no good for trade signals. But Ehlers had another idea. The phase angle of the CCY and CCYROC reflects the market state. It returns 1 for a rising trend, -1 for a falling trend,
and 0 for cycle regime. Here’s the code of Ehlers CCY market state indicator:
var CCYState(vars Data,int Length,var Threshold)
{
  vars Angles = series(0,2);
  var Real = correl(Data,Length,cosFunc);
  var Imag = correl(Data,Length,sinFunc);
// compute the angle as an arctangent function and resolve ambiguity
  if(Imag != 0) Angles[0] = 90 + 180/PI*atan(Real/Imag);
  if(Imag > 0) Angles[0] -= 180;
// do not allow the rate change of angle to go negative
  if(Angles[1]-Angles[0] < 270 && Angles[0] < Angles[1])
    Angles[0] = Angles[1];
  //return Angles[0];
// compute market state
  if(abs(Angles[0]-Angles[1]) < Threshold)
    return ifelse(Angles[0] < 0,-1,1);
  else return 0;
}
Applied to SPY:
At first glance, trends and cycles seem to be detected rather well and in a timely manner. But how useful is the indicator in a real trading system?
To find out, we'll compare the performance of a simple trend follower with and without market state detection, as in the first Zorro workshop. It uses a lowpass filter for detecting trend reversals. The only parameter is the cutoff period of the lowpass filter. This parameter is walk forward optimized, so the system does not depend on any chosen parameter value. The trend follower without market state detection:
function run()
{
  BarPeriod = 1440;
  LookBack = 40;
  NumYears = 8;
  assetAdd("SPY","STOOQ:SPY.US"); // load price history from Stooq
  NumWFOCycles = 4;
  int Cutoff = optimize(10,5,30,5);
  vars Prices = series(priceClose());
  vars Signals = series(LowPass(Prices,Cutoff));
  if(valley(Signals))
    enterLong();
  else if(peak(Signals))
    enterShort();
}
The system enters a long position on any valley of the lowpass filtered price curve, and a short position on any peak. The resulting equity curve:
We can see that the simple SPY trend follower is not very good. Yes, it’s profitable, but the main profit came from some lucky trades at the corona drop. In the years before the system had long flat
periods. Let’s see if the CCYState indicator can help. Its two parameters, period and threshold, are also walk forward optimized. The new script:
function run()
{
  BarPeriod = 1440;
  LookBack = 40;
  NumYears = 8;
  assetAdd("SPY","STOOQ:SPY.US"); // load price history from Stooq
  NumWFOCycles = 4;
  int Cutoff = optimize(10,5,30,5);
  int Period = optimize(14,10,25,1);
  var Threshold = optimize(9,5,15,1);
  vars Prices = series(priceClose());
  var State = CCYState(Prices,Period,Threshold);
  vars Signals = series(LowPass(Prices,Cutoff));
  if(State != 0) {
    if(valley(Signals))
      enterLong();
    else if(peak(Signals))
      enterShort();
  } else {
    exitLong();
    exitShort();
  }
}
The new system trades only when the market state is 1 or -1, indicating trend regime. It goes out of the market when the market state is 0. We can see that this improves the equity curve remarkably:
I think most people would prefer this system to the previous one, even though it stayed out of the market at the corona drop. Ehlers’ market state indicator did a good job.
John Ehlers, Correlation Cycle Indicator, Stocks&Commodities 6/2020
The indicators and trade systems are available in the Scripts 2020 repository.
25 thoughts on “Petra on Programming: The Correlation Cycle Indicator”
1. It fits well into certain market conditions in a sinusoidal phase. But other times it falls apart.
2. That's why you'd better not use it for trade signals, but for market state detection.
3. Hey Petra,
What is the ROC function used? It looks bounded vs the formula Tradingview uses:
ROC = [(CurrentClose – Close n periods ago) / (Close n periods ago)] X 100
4. The ROC of something is its first derivative. d/dx cos(x) = -sin(x).
5. Hi Petra,
I wonder how stable a system based on this could be when adding an additional signal-/entry- and position management system for the cyclic phases as well….
7. Any chance we could see the full code in one entry? Zorro does not have this CCYState indicator. many thanks.
8. The full code should be in the Scripts 2020 archive – Petra adds all new code in there.
9. Hi Petra, I have reproduced your experiment and there is one strange thing in the code. The script with the State filter actually does not care in which direction the trade is being opened. I mean there are short trades for State > 0 and long trades for State < 0. Doesn't this contradict the Ehlers paper? Many thanks …
10. Right, it only filters by market state. The direction is by signal peaks and valleys. I remember that I tried additional filtering by market direction, but it did not improve the system further.
Anyway you’re free to test it.
11. Yes, I changed the State detector to the „directional“ version and indeed, no positive effect. Which is bothering me a bit, because the explanation of Ehlers was strictly „directional“.
I will certainly make more tests based on your valuable work and the new indicator code. For example there is a possibility to use the Correlation Trend Indicator from your last blog also as the
State detector …
Many thanks for the code and your work!!
12. Just a few thoughts from a newbie:
You can improve the system profitability by excluding exitLong/Short when State goes back to zero and by adding the condition to Buy/Sell only when the price is over/under an optimized SMA. You
also have to optimize a TakeProfit target. I just tried it to GBP/USD and it is a beauty!
EHLERS compiling…………
Read EHLERS_1.par
WFA Test: EHLERS GBP/USD 2011..2020
Read EHLERS_2.par EHLERS_3.par
Monte Carlo Analysis… Median AR 51%
Win 124$ MI 3.22$ DD 28.55$ Capital 75.59$
Trades 14 Win 85.7% Avg +97.2p Bars 6
AR 51% PF 9.23 SR 1.30 UI 7% R2 0.00
13. HI! Thanks for the work, I am enjoying and learning a lot.
I’ve created my rudimentary 3-indicator system (trend following + mean reversion signals under certain circumstances) , and adding this CCI as a signal filter i’ve managed to get positive results
for my very first time. I would suggest an interpretation: It keeps signaling “trend mode” when a trend reversal happens. That allows the system to enter early on the beginning trend making it
profitable if another indicator creates a good entry signal. If we use directionally we lose this possibility.
Interestingly, testing it from 2007 to 2020, with SPY, EURUSD and BTC, keeps positive (not amazing, but positive), just with the default.
I would say is an easy improvement for poor skilled newbies like me!
14. Hi Petra,
thank you for your articles.
I Have downloaded your CCY.c script willing to learn some new stuff.
#define DO_SINE , works as expected
#define DO_PLOT, gives an error while compiling
#define DO_NOSTATE, works as expected
#define DO_STATE, works, but with totally different behavior compared to the equity line and statistics you mentioned in the articles.
Am I missing something?
Thank you in advance
15. I use #defines for activating different code parts in the same script. You can NOT arbitrarily combine them. Look in the code to check which #defines are used for which purpose.
16. Hi
In both your code above and the Ehlers paper the arctangent is described in terms of a the ratio of real to imaginary. Isn’t this, in fact, the reciprocal of the arctangent? If we think of the
phasor diagram with the Imaginary axis being the ‘y-axis’ and the real axis being the ‘x-axis’, then tan of the phase angle would be y/x.
In addition, this definition is at odds with Ehler’s own ‘rocket science of traders’ book where he uses the same process to derive the phase angle via the Hilbert Transform.
17. I have just converted his code without giving it much consideration. But you are right, from math in school I remember that a phase angle is atan(sin/cos), not atan(cos/sin) as in the code. But
swapping sin and cos is a 90 degrees rotation and I think that’s why Ehlers adds 90 to the result. The end result is anyway based on angle difference, not on absolute angle. Maybe Ehlers can
explain why he has calculated it in this way.
18. Why did I found this blog?!! Can’t stop reading… I should be sleeping… argh…
19. Hi, Can I have the definitive Period and Threshold/Length Optimized In your test? Thank you
20. No, because it’s a walk forward optimization. Lengths and periods change all the time.
21. What programming language is for your code above. Is it in Pine Script? Thanks.
22. C.
23. Thanks. I used ChatGPT to do the conversion of your code to Python but somehow the correlation angles and market states results do not come out right. In case if you are familiar with Python,
below is the Python’s version of your CCYState(). The variable ‘previous_angle’ is initialized to 0 on the outer loop where each bar is step through. At the end of each bar cycle,
‘previous_angle’ is updated to have the ‘current_angle’ value. Do you see anything in the code that might not be correct or have a suggestion on how to convert your code to Python? Thank you.
def CCYState(Data, Length, Threshold, previous_angle):
    Real = correl(Data, Length, cosFunc)
    Imag = correl(Data, Length, sinFunc)
    # Compute the angle as an arctangent function and resolve ambiguity
    if Imag == 0:
        current_angle = 0
    else:
        current_angle = 90 + math.degrees(np.arctan(Real/Imag))
    if Imag > 0:
        current_angle -= 180
    # Do not allow the rate change of angle to go negative
    if previous_angle - current_angle < 270 and current_angle < previous_angle:
        current_angle = previous_angle
    # Compute market state
    if abs(current_angle - previous_angle) < Threshold:
        if current_angle < 0:
            state = -1
        else:
            state = 1
    else:
        state = 0
    return state, current_angle
24. I am no Python expert, but the beginning, the Real and Imag calculation, does not look right to me. Also Python might have some subtle differences that affect the result.
25. Thank you. I will look into that.
This site uses Akismet to reduce spam. Learn how your comment data is processed.
|
{"url":"https://financial-hacker.com/petra-on-programming-the-correlation-cycle-indicator/","timestamp":"2024-11-02T03:16:01Z","content_type":"text/html","content_length":"101935","record_id":"<urn:uuid:17db0a91-a5c5-4801-bb83-702e246123c0>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00535.warc.gz"}
|
Counting unique values - based on another column!
So here's the data: column A contains a drop-down list of three text values (Apple, Pear, Orange). Column B contains names, some of which may be repeated.
What I would like to do is count the number of unique values in column B for EACH of the values in column A, leaving me with three unique counts. For example, there are three occurrences of the same name in column B where Apple has been chosen in column A. I only want to count that name once.
Does my question make sense? I found tons of formulae online about counting unique text values, but none about counting unique text values BASED ON text values in another column ...
Thanks in advance! :)
Say the data with Apple, Pear, Orange is located in A2:A22, and the data with the values to count uniques is located in B2:B22.
for Apples:
=SUM(IF(FREQUENCY(IF((LEN($B$2:$B$22)>0)*($A$2:$A$22="Apple"),MATCH($B$2:$B$22,$B$2:$B$22,0),""), IF((LEN($B$2:$B$22)>0)*($A$2:$A$22="Apple"),MATCH($B$2:$B$22,$B$2:$B$22,0),""))>0,1))
for Pears:
=SUM(IF(FREQUENCY(IF((LEN($B$2:$B$22)>0)*($A$2:$A$22="Pear"),MATCH($B$2:$B$22,$B$2:$B$22,0),""), IF((LEN($B$2:$B$22)>0)*($A$2:$A$22="Pear"),MATCH($B$2:$B$22,$B$2:$B$22,0),""))>0,1))
for Oranges:
=SUM(IF(FREQUENCY(IF((LEN($B$2:$B$22)>0)*($A$2:$A$22="Orange"),MATCH($B$2:$B$22,$B$2:$B$22,0),""), IF((LEN($B$2:$B$22)>0)*($A$2:$A$22="Orange"),MATCH($B$2:$B$22,$B$2:$B$22,0),""))>0,1))
Each of these needs to be entered with Ctrl+Shift+Enter (hold down the Control and Shift keys while you hit Enter) rather than just Enter, since these are array formulas.
Is that what you want?
The above were tested under the stated assumptions and they worked for me.
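For cross-checking the formulas, the same per-category distinct count is easy to reproduce outside Excel. A small pure-Python sketch (the sample rows are made up for illustration):

```python
from collections import defaultdict

rows = [            # (column A, column B)
    ("Apple",  "Alice"),
    ("Apple",  "Alice"),   # repeated name, counted once
    ("Apple",  "Bob"),
    ("Pear",   "Alice"),
    ("Orange", "Carol"),
    ("Orange", "Carol"),
]

unique_names = defaultdict(set)
for fruit, name in rows:
    if name:                  # skip blanks, like the LEN(...)>0 guard in the formula
        unique_names[fruit].add(name)

counts = {fruit: len(names) for fruit, names in unique_names.items()}
print(counts)   # prints {'Apple': 2, 'Pear': 1, 'Orange': 1}
```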
|
{"url":"http://www.eluminary.org/en/QnA/Counting_unique_values_-_based_on_another_column!_(Excel)","timestamp":"2024-11-13T22:24:31Z","content_type":"text/html","content_length":"10640","record_id":"<urn:uuid:419fe4db-9aaa-4f62-ad9c-8e65b7019b91>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00099.warc.gz"}
|
Flows with demands
In a normal flow network the flow of an edge is only limited by the capacity $c(e)$ from above and by 0 from below. In this article we will discuss flow networks, where we additionally require the
flow of each edge to have a certain amount, i.e. we bound the flow from below by a demand function $d(e)$:
$$ d(e) \le f(e) \le c(e)$$
So now each edge has a minimal flow value that we have to pass along the edge.
This is a generalization of the normal flow problem, since setting $d(e) = 0$ for all edges $e$ gives a normal flow network. Notice that in a normal flow network it is trivial to find a valid flow: just setting $f(e) = 0$ already gives one. However, if the flow of each edge has to satisfy a demand, then suddenly finding a valid flow is already pretty complicated.
We will consider two problems:
1. finding an arbitrary flow that satisfies all constraints
2. finding a minimal flow that satisfies all constraints
Finding an arbitrary flow
We make the following changes in the network. We add a new source $s'$ and a new sink $t'$, a new edge from the source $s'$ to every other vertex, a new edge for every vertex to the sink $t'$, and
one edge from $t$ to $s$. Additionally we define the new capacity function $c'$ as:
• $c'((s', v)) = \sum_{u \in V} d((u, v))$ for each edge $(s', v)$.
• $c'((v, t')) = \sum_{w \in V} d((v, w))$ for each edge $(v, t')$.
• $c'((u, v)) = c((u, v)) - d((u, v))$ for each edge $(u, v)$ in the old network.
• $c'((t, s)) = \infty$
If the new network has a saturating flow (a flow where each edge outgoing from $s'$ is completely filled, which is equivalent to every edge incoming to $t'$ is completely filled), then the network
with demands has a valid flow, and the actual flow can be easily reconstructed from the new network. Otherwise there doesn't exist a flow that satisfies all conditions. Since a saturating flow has to
be a maximum flow, it can be found by any maximum flow algorithm, like the Edmonds-Karp algorithm or the Push-relabel algorithm.
The correctness of these transformations is more difficult to understand. We can think of it in the following way: Each edge $e = (u, v)$ with $d(e) > 0$ is originally replaced by two edges: one with the capacity $d(e)$, and the other with $c(e) - d(e)$. We want to find a flow that saturates the first edge (i.e. the flow along this edge must be equal to its capacity). The second edge is less important - the flow along it can be anything, assuming that it doesn't exceed its capacity. Consider each edge that has to be saturated, and perform the following operation: we draw an edge from the new source $s'$ to its end $v$, draw an edge from its start $u$ to the new sink $t'$, remove the edge itself, and draw an edge of infinite capacity from the old sink $t$ to the old source $s$. By these actions we simulate the fact that this edge is saturated - $v$ will have an additional $d(e)$ units of flow outgoing (we simulate it with a new source that feeds the right amount of flow to $v$), and $u$ will also push $d(e)$ additional units of flow (but instead of along the old edge, this flow goes directly to the new sink $t'$). A flow with the value $d(e)$ that originally flowed along the path $s - \dots - u - v - \dots - t$ can now take the new path $s' - v - \dots - t - s - \dots - u - t'$. The only simplification in the definition of the new network is that if this procedure created multiple edges between the same pair of vertices, they are combined into one single edge with the summed capacity.
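The construction can be sketched directly in code. The following Python sketch (helper names and the graph representation are my own, not from any particular library; it assumes $d(e) \le c(e)$ for every edge) builds the transformed network and runs a simple Edmonds-Karp max-flow to check whether a saturating - and hence feasible - flow exists:

```python
from collections import deque, defaultdict

def max_flow(cap, s, t):
    # Edmonds-Karp: repeatedly augment along shortest (BFS) paths
    flow = 0
    while True:
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v in list(cap[u]):
                if v not in parent and cap[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow
        # bottleneck along the augmenting path, then update residuals
        bottleneck, v = float("inf"), t
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v])
            v = u
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v][u] += bottleneck
            v = u
        flow += bottleneck

def feasible(n, s, t, edges):
    # edges: list of (u, v, demand, capacity); nodes are 0..n-1
    S, T = n, n + 1                      # new source s' and new sink t'
    cap = defaultdict(lambda: defaultdict(int))
    for u, v, d, c in edges:
        cap[u][v] += c - d               # reduced capacity of the old edge
        cap[S][v] += d                   # s' feeds the demand into v
        cap[u][T] += d                   # u forwards the demand to t'
    cap[t][s] = float("inf")             # close the old s-t circulation
    need = sum(d for _, _, d, _ in edges)
    return max_flow(cap, S, T) == need   # saturating flow <=> feasible

# s=0, t=2; the edge 0->1 demands exactly 3 units (d = c = 3):
print(feasible(3, 0, 2, [(0, 1, 3, 3), (1, 2, 0, 5)]))  # prints True
print(feasible(3, 0, 2, [(0, 1, 3, 3), (1, 2, 0, 2)]))  # prints False: node 1 cannot forward 3 units
```

Duplicate edges created by the construction are simply summed with `+=`, matching the remark above about combining parallel edges.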
Minimal flow
Note that along the edge $(t, s)$ (from the old sink to the old source) with the capacity $\infty$ flows the entire flow of the corresponding old network. I.e. the capacity of this edge affects the flow value of the old network. By giving this edge a sufficiently large capacity (i.e. $\infty$), the flow of the old network is unlimited. By limiting this edge to smaller capacities, the flow value will decrease. However, if we limit this edge by a too small value, then the network will not have a saturating solution, i.e. the corresponding solution for the original network will not satisfy the demands of the edges. Obviously we can use a binary search here to find the lowest value with which all constraints are still satisfied. This gives the minimal flow of the original network.
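The binary search only needs the monotone feasibility check as a black box. A short Python sketch of the search itself, where `saturates(limit)` stands in for re-running the max-flow check with the capacity of the $(t, s)$ edge set to `limit` (here replaced by a toy monotone predicate so the snippet is self-contained):

```python
def minimal_ts_capacity(lo, hi, saturates):
    # Smallest (t, s) capacity for which a saturating flow still exists.
    # `saturates` must be monotone: False ... False True ... True.
    while lo < hi:
        mid = (lo + hi) // 2
        if saturates(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

# Toy stand-in: pretend the network saturates iff the (t, s) edge
# admits at least 7 units.
print(minimal_ts_capacity(0, 100, lambda limit: limit >= 7))  # prints 7
```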
|
{"url":"https://gh.cp-algorithms.com/main/graph/flow_with_demands.html","timestamp":"2024-11-05T19:46:25Z","content_type":"text/html","content_length":"125473","record_id":"<urn:uuid:aa0bffd6-3023-4180-ac70-a2593e30408a>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00203.warc.gz"}
|
2 Ways to See Formula in Microsoft Excel
If your Excel file feels heavy when opened, it is possible that your file contains a formula that cannot find its source.
How to See Formulas in Microsoft Excel - Excel is an application for processing numbers as well as text.
This application can be used to perform calculations such as sums, averages, logic and so forth.
By using this application we can check and correct the data we enter by looking at the formulas we have created one by one.
In addition, this way you can also find a wrong formula, or a formula that does not have a source, which makes your computer's performance sluggish when opening Excel files.
To find out how to view and display Excel formulas, you can use the methods below.
1. See the formula in the formula bar
The first way to display a formula in Excel is to click on the cell that contains it.
This can be said to be the most common way for people to see formulas in Excel.
By moving the cursor to the cell that holds the formula and clicking it, you can already see that cell's formula in the formula bar.
2. With the Show Formulas facility
In addition to the way above, you can also use another method, namely the Show Formulas menu.
By using this menu you can see all the formulas in the Excel file that you currently have open.
To use it you can follow the steps below.
• Click the Formulas tab on the ribbon
• Click the Show Formulas button in the Formula Auditing group
• All cells that contain a formula will then display their formulas
With the methods above, you can see the formulas contained in an Excel file. In addition, this way you can also find which formulas have errors and which ones make your computer's performance slower.
That is all for this newbie article, titled 2 Ways to See Formula in Microsoft Excel. I hope these methods can help.
|
{"url":"https://www.newbieadvisor.com/2021/12/how-to-see-formulas-in-msoffice.html","timestamp":"2024-11-06T20:43:14Z","content_type":"application/xhtml+xml","content_length":"191808","record_id":"<urn:uuid:05b37c2d-9478-403c-863a-8a19862bf2ab>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00733.warc.gz"}
|
Declines in math readiness underscore the urgency of math awareness by Manil Suri
When President Ronald Reagan proclaimed the first National Math Awareness Week in April 1986, one of the problems he cited was that too few students were devoted to the study of math.
“Despite the increasing importance of mathematics to the progress of our economy and society, enrollment in mathematics programs has been declining at all levels of the American educational system,”
Reagan wrote in his proclamation.
Nearly 40 years later, the problem that Reagan lamented during the first National Math Awareness Week – which has since evolved to become “Mathematics and Statistics Awareness Month” – not only
remains but has gotten worse.
Whereas 1.63%, or about 16,000, of the nearly 1 million bachelor’s degrees awarded in the U.S. in the 1985-1986 school year went to math majors, in 2020, just 1.4%, or about 27,000, of the 1.9
million bachelor’s degrees were awarded in the field of math – a small but significant decrease in the proportion.
Post-pandemic data suggests the number of students majoring in math in the U.S. is likely to decrease in the future.
A key factor is the dramatic decline in math learning that took place during the lockdown. For instance, whereas 34% of eighth graders were proficient in math in 2019, test data shows the percentage
dropped to 26% after the pandemic.
These declines will undoubtedly affect how much math U.S. students can do at the college level. For instance, in 2022, only 31% of graduating high school seniors were ready for college-level math –
down from 39% in 2019.
These declines will also affect how many U.S. students are able to take advantage of the growing number of high-paying math occupations, such as data scientists and actuaries. Employment in math
occupations is projected to increase by 29% in the period from 2021 to 2031.
About 30,600 math jobs are expected to open up per year from growth and replacement needs. That exceeds the 27,000 or so math graduates being produced each year – and not all math degree holders go
into math fields. Shortages will also arise in several other areas, since math is a gateway to many STEM fields.
For all of these reasons and more, as a mathematician who thinks deeply about the importance of math and what it means to our world – and even to our existence as human beings – I believe this year,
and probably for the foreseeable future, educators, policymakers and employers need to take Mathematics and Statistics Awareness Month more seriously than ever before.
Struggles with mastery
Subpar math achievement has been endemic in the U.S. for a long time.
Data from the National Assessment of Educational Progress shows that no more than 26% of 12th graders have been rated proficient in math since 2005.
The pandemic disproportionately affected racially and economically disadvantaged groups. During the lockdown, these groups had less access to the internet and quiet studying spaces than their peers.
So securing Wi-Fi and places to study are key parts of the battle to improve math learning.
Some people believe math teaching techniques need to be revamped, as they were through the Common Core, a new set of educational standards that stressed alternative ways to solve math problems.
Others want a return to more traditional methods. Advocates also argue there is a need for colleges to produce better-prepared teachers.
Other observers believe the problem lies with the “fixed mindset” many students have – where failure leads to the conviction that they can’t do math – and say the solution is to foster a “growth”
mindset – by which failure spurs students to try harder.
Although all these factors are relevant, none address what in my opinion is a root cause of math underachievement: our nation’s ambivalent relationship with mathematics.
Low visibility
Many observers worry about how U.S. children fare in international rankings, even though math anxiety makes many adults in the U.S. steer clear of the subject themselves.
Mathematics is not like art or music, which people regularly enjoy all over the country by visiting museums or attending concerts. It’s true that there is a National Museum of Mathematics in New
York, and some science centers in the U.S. devote exhibit space to mathematics, but these can be geographically inaccessible for many.
A 2020 study on media portrayals of math found an overall “invisibility of mathematics” in popular culture. Other findings were that math is presented as being irrelevant to the real world and of
little interest to most people, while mathematicians are stereotyped to be singular geniuses or socially inept nerds, and white and male.
Math is tough and typically takes much discipline and perseverance to succeed in. It also calls for a cumulative learning approach – you need to master lessons at each level because you’re going to
need them later.
While research in neuroscience shows almost everyone’s brain is equipped to take up the challenge, many students balk at putting in the effort when they don’t score well on tests. The myth that math
is just about procedures and memorization can make it easier for students to give up. So can negative opinions about math ability conveyed by peers and parents, such as declarations of not being “a
math person.”
A positive experience
Here’s the good news. A 2017 Pew poll found that despite the bad rap the subject gets, 58% of U.S. adults enjoyed their school math classes. It’s members of this legion who would make excellent
recruits to help promote April’s math awareness. The initial charge is simple: Think of something you liked about math – a topic, a puzzle, a fun fact – and go over it with someone. It could be a
child, a student, or just one of the many adults who have left school with a negative view of math.
Can something that sounds so simplistic make a difference? Based on my years of experience as a mathematician, I believe it can – if nothing else, for the person you talk to. The goal is to stimulate
curiosity and convey that mathematics is much more about exhilarating ideas that inform our universe than it is about the school homework-type calculations so many dread.
Raising math awareness is a first step toward making sure people possess the basic math skills required not only for employment, but also to understand math-related issues – such as gerrymandering or
climate change – well enough to be an informed and participating citizen. However, it’s not something that can be done in one month.
Given the decline in both math scores and the percentage of students studying math, it may take many years before America realizes the stronger relationship with math that President Reagan’s
proclamation called for during the first National Math Awareness Week in 1986.
Manil Suri is a Professor of Mathematics and Statistics at the University of Maryland, Baltimore County.
From cppreference.com
template< class RandomIt >
void nth_element( RandomIt first, RandomIt nth, RandomIt last ); (1)
template< class RandomIt, class Compare >
void nth_element( RandomIt first, RandomIt nth, RandomIt last, Compare comp ); (2)
nth_element is a partial sorting algorithm that rearranges elements in [first, last) such that:
• The element pointed at by nth is changed to whatever element would occur in that position if [first, last) was sorted.
• All of the elements before this new nth element are less than or equal to the elements after the new nth element.
More formally, nth_element partially sorts the range [first, last) in ascending order so that the condition !(*j < *i) (for the first version, or comp(*j, *i) == false for the second version) is met
for any i in the range [first, nth) and for any j in the range [nth, last). The element placed in the nth position is exactly the element that would occur in this position if the range was fully sorted.
If nth is the end iterator, the function has no effect.
Parameters
first, last - random access iterators defining the range to sort
nth - random access iterator defining the sort partition point
comp - comparison function object (i.e. an object that satisfies the requirements of Compare) which returns true if the first argument is less than (i.e. is ordered before) the second.
The signature of the comparison function should be equivalent to the following:
bool cmp(const Type1 &a, const Type2 &b);
The signature does not need to have const &, but the function object must not modify the objects passed to it.
The types Type1 and Type2 must be such that an object of type RandomIt can be dereferenced and then implicitly converted to both of them.
Type requirements
-RandomIt must meet the requirements of ValueSwappable and RandomAccessIterator.
-The type of dereferenced RandomIt must meet the requirements of MoveAssignable and MoveConstructible.
Return value
(none)
Complexity
Linear in std::distance(first, last) on average.
The algorithm used is typically introselect although other selection algorithms with suitable average-case complexity are allowed.
Example
#include <iostream>
#include <vector>
#include <algorithm>
#include <functional>

int main()
{
    std::vector<int> v{5, 6, 4, 3, 2, 6, 7, 9, 3};

    std::nth_element(v.begin(), v.begin() + v.size()/2, v.end());
    std::cout << "The median is " << v[v.size()/2] << '\n';

    std::nth_element(v.begin(), v.begin()+1, v.end(), std::greater<int>());
    std::cout << "The second largest element is " << v[1] << '\n';
}

Output:

The median is 5
The second largest element is 7
See also
partial_sort_copy copies and partially sorts a range of elements
(function template)
stable_sort sorts a range of elements while preserving order between equal elements
(function template)
sort sorts a range into ascending order
(function template)
Experimental Study on Wind-Wave Momentum Flux in Strongly Forced Conditions
1. Introduction
Momentum transfer from a shear flow to a wavy boundary has been of great interest throughout the past century. Solution of this problem for light wind conditions has lead to a better understanding of
air–sea interaction and its influence on ocean and atmosphere dynamics. To fully parameterize air–sea fluxes, the influence of the surface wave state must be taken into account. Therefore, it is
important to resolve both wind-wave and wind-current momentum fluxes for various wind, wave, and current conditions.
As wind speed increases, the underlying physical processes at the air–sea interface change dramatically, moving from smooth, linear cases to turbulence-dominated rough airflow regimes. This study
seeks to fill in the gap in understanding and parameterization of wind-wave momentum transfer in strongly forced wind-wave conditions.
Present theories are able to describe a regime corresponding to light wind blowing over sinusoidal waves of small steepness. A wave causes compression and decompression of airflow streamlines along
its surface, and thus pressure differences appear between windward and leeward sides of the wave. The part of the wave-induced surface pressure pattern that is in phase with the slope of the surface
acts to deposit wind momentum into the irrotational flow field of the wave. This process was described in analytical solutions by Miles (1957). He introduced a nondimensional wave growth function and suggested its dependence on wind forcing. Moreover, when waves of various frequencies are superimposed and represented as wave spectra, it is possible to parameterize wind-wave momentum transfer using a spectral wave growth function (i.e., Snyder et al. 1981), which is defined as

γ(ω) = (ρ[w]/ρ[a]) [1/(ω F(ω))] ∂F(ω)/∂t,   (1)

where ρ[a] and ρ[w] are air and water densities, F(ω) is the spectrum of surface elevation, and ω is the wave radian frequency.
Wave growth parameterization through one function that depends on wave age only is alluring, because it can be easily used by numerical spectral wave models to describe wave motion throughout the
oceans. The theory, however, provided a solution only for light wind forcing. Also, according to Miles, the controlling parameter for wind-wave forcing is the curvature of the wind profile at the
“matched layer” height. The matched layer is defined by the height at which the horizontal wind speed is equal to the wave phase speed. Such a parameter, however, was not found to be useful in
practice. First, a matched layer does not exist in wind opposing wave cases; second, even in moderate wind forcing situations, the matched layer height is near or within the viscous sublayer, which
has a linear wind speed profile. For these reasons, a lot of experimental effort was expended to reinforce the theoretical solution. Resulting empirical parameterizations use the dimensionless ratio
U[10]/C[p] as a more appropriate wind-wave forcing parameter, where U[10] is wind speed at 10-m height and C[p] is wave phase speed. A logarithmic wind speed profile assumption (Smith et al. 1992) is
typically used to extrapolate wind speed to a desired height reference.
The most obvious approach to estimate momentum transfer experimentally was to use a static air pressure probe (i.e., Elliott 1972b; Nishiyama and Bedard 1991) while simultaneously measuring surface elevation (Elliott 1972a; Hsu et al. 1982). Momentum transfer from wind to waves is then given by the pressure–slope correlation,

τ[w] = 〈p[s] ∂η/∂x〉,   (2)

where ∂η/∂x is the wave surface slope, p[s] is the static air pressure at the surface, and the averaging 〈〉 is done over one wavelength. However, it was soon realized that wave-induced static pressure fluctuations quickly decay with height.
Because the lowest position of the static pressure probe must be higher than the highest wave crest, such measurements do not adequately sample static pressure at the water surface, especially above
wave troughs.
As a solution to this problem, vertical arrays of probes were used to extrapolate the magnitude of wave-induced fluctuations to the surface (Snyder et al. 1981; Hristov et al. 2003; Donelan et al.
2005b, among others). These measurements indicated that wave-induced pressure fluctuations decay proportionally to e^−αkz, where k is wavenumber, z is height, and α ≈ 1. However, the exact form of
the pressure fluctuation decay near the surface, especially in strongly forced conditions, is still unknown. Therefore, one of the goals of the present study is to provide guidance for pressure
extrapolation to the surface for future stationary probe measurements.
Another more direct experimental solution was to mount a static pressure probe on a frame that moves with surface elevation, thereby maintaining the probe at a small constant height from the surface.
This method can potentially provide the most accurate measurements; however, it is technically challenging. In one of the first attempts (Dobson 1971), a surface buoyant platform was used to carry
the pressure probe. In other attempts, coupled surface elevation sensors and vertical motor systems were used. Shemdin and Hsu (1967) reported successful following of a predetermined monochromatic
wave in laboratory conditions. However, until recently, most of the attempts to use a wave follower for a random wave field faced technical difficulties, especially in the field. Among the most
successful were Snyder et al. (1981) (Bight of Abaco, Bahamas, experiment), Harris and DeCicco (1993) (Chesapeake Light Tower), Donelan (1999) (laboratory experiment), Jacobs et al. (2002) (Meetpost
Noordwijk research and monitoring platform), and Donelan et al. (2006) (Lake George experiment). The resulting empirical estimates of γ(U[10]/C[p]) have some agreement for calm mature seas; however,
the limited data available for strong wind forcing conditions are not sufficient to form a complete understanding of underlying processes.
Strong wind over young waves with intense wave breaking and spray generation are typical in the North Sea, in the Southern Ocean, and during storms and hurricanes. The shape of the ocean surface is
such that it can no longer be described by a linear wave theory. Moreover, wave breakers and spray add completely new physical elements to the problem. Andreas and Emanuel (2001) suggested that
reentrant sea spray is responsible for a large part of the total air–sea momentum flux in high winds. Kudryavtsev and Makin (2001) suggested that additional momentum flux due to airflow separation
behind breaking waves reaches 50% of the total momentum flux in high winds.
It is clear that, as wind forcing and surface roughness increase, wind-wave momentum flux also increases at a certain rate. A more interesting question is, however, what happens to the nondimensional
wave growth rate γ? It is generally agreed that the wind forcing and wave shape in some form (i.e., wave steepness, crest sharpness, or even wave breaking probability) are the parameters that control
the deviation of γ from the values derived from linear theory. However, it is still debated whether the net effect of nonlinear corrections should be positive or negative. Analytical solutions by Van
Duin (1996) and Belcher (1999) suggest a reduction of the wave growth rate with steepness and/or wind forcing. They assume that waves are steep enough to shelter their lee side but smooth enough (ak
< 0.2) so that the airflow does not separate. Kudryavtsev and Makin (2001), on the other hand, describe airflow separation behind steep breaking waves and suggest a mechanism through which additional
momentum is pumped into waves. Few experimental studies provide sufficient data to validate these theories. Banner (1990) observed a sharp increase in momentum transfer above a breaking wave; Donelan
et al. (2006) concluded that wave steepness has positive effect on the wave growth rate; and Touboul et al. (2009) found that, over very steep waves, the wind-wave momentum flux predicted by Miles
(1957) results in a better agreement with experiments if enhanced by a sheltering coefficient. However, a recent comprehensive review paper on the subject (Peirson and Garcia 2008) summarized all
available data (their Fig. 7) and concluded that the growth rate γ first decreases with ak up to about 0.22 and then increases as steepness increases farther into the wave breaking region. In the
present paper, we confirm the decrease of γ in the range of ak between 0.03 and 0.19.
An example of the airflow pattern in a nonseparated sheltering regime is shown in Fig. 1 (bottom). Such airflow structure with streamlines following the surface of the wave is predicted by linear
theory. A finite-amplitude correction (Belcher 1999) to the linear theory suggests a modest reduction of the wave growth rate, which only becomes significant for very high wind forcing and wave
steepness. The correction proposed by Van Duin (1996) is more significant; he argues that wave nonlinearity has a stabilizing effect on the wave growth and also provides an expression for the maximum
wave amplitude. Both papers mention that above a certain steepness (ak ~ 0.2) an airflow separation bubble forms and the theory no longer applies. Kudryavtsev and Makin (2001), on the other hand,
emphasize the role of the airflow separation (Fig. 1, top; see also Ryn et al. 2007; Reul et al. 2008) in the vertical flux of momentum. According to their model, the separation effect causes a sharp
pressure drop behind a breaking wave; therefore, the momentum flux is growing faster than expected by linear theory.
Fig. 1.
Direct numerical simulation of wind above waves by Shen et al. (2003, Fig. 5; courtesy of the Cambridge University Press). Velocities are shown in the frame of reference moving with the wave. (top)
Streamlines form sheltering bubble (wave age c/U = 0.8) and (bottom) airflow separation does not form (c/U = 1.2). Vertical and horizontal coordinates are normalized by wavelength L.
Citation: Journal of Physical Oceanography 41, 7; 10.1175/2011JPO4577.1
From the empirical point of view, it is hard to distinguish separated and nonseparated sheltering, because within a given wave field wave steepness varies but ensemble averaging is needed to
reconstruct the pattern of wave-induced airflow pressure fluctuations. The averaged pattern includes both separated and nonseparated cases, and the resulting parameterization for γ often incorporates both effects with unknown weights. Nonetheless, the sheltering effect (separated or not) can be quantified through a sheltering coefficient G,

γ = G (U[λ/2]/C[p] − 1) |U[λ/2]/C[p] − 1|,   (3)

providing an empirical solution for various practical applications such as numerical wave modeling. Coefficient G can be used in a variety of ways; in our work, its definition and purpose are discussed in detail in section 3d.
It is the main goal of the present paper to investigate the shape of function (3) by means of a laboratory experiment. The sheltering coefficient G is expected to mainly depend on wave shape and wind
forcing; therefore, they are the parameters that were artificially controlled to vary over a wide range. Other wind and wave parameters such as wind speed and gustiness, spray volume, wave breaking
probability, wave asymmetries, and nonlinear interactions are also expected to have an effect on G (Babanin and Makin 2008). Some of them are addressed in this study, but others are left for future
The experiment design and ranges of investigated parameters in this study were chosen such that the observed phenomenon would be as similar as possible to the wind-wave interaction in the ocean.
Nonetheless, it is important to note that the wind-wave momentum fluxes in the field and in the laboratory can be different for a number of reasons, and strictly speaking they cannot be directly
compared. The major sources of differences are the following: First, length scales, such as the wavelength and amplitude, are typically much smaller in the laboratory. Second, in the field, waves
typically have a three-dimensional shape, whereas they are nearly two dimensional in the laboratory. Third, a wave produced by a mechanical wave maker in the laboratory is monochromatic and strictly
periodic, whereas in the field the wind-wave spectrum is wide, leading to a large variety of possible wave shapes. According to the linearized theory, these differences should not influence the
results, presented in nondimensional space. However, because our study covers nonlinear and strongly forced conditions, we are likely to exceed the limits of applicability of linear theory.
On the other hand, laboratory work has a number of advantages over experiments conducted in the filed. Unlike the real ocean, conditions in a wave tank are repeatable; the input parameters are
controllable; and, finally, high winds pose less of a threat to equipment and researchers. Therefore, we see the purpose of our experiment in the extension of field measurements to higher winds, as
well as in quantitative clarification of underlying physical processes observed in the field (e.g., the role of wave steepness in the wave growth).
2. Experiment setup
The experimental data presented in this work were acquired in the Air-Sea Interaction Saltwater Tank (ASIST) at the University of Miami Rosenstiel School of Marine and Atmospheric Science. ASIST is a
15 m × 1 m × 1 m wind-wave flume. It was filled with freshwater up to a 42-cm level (Fig. 2) and equipped with fully programmable wind, current, and surface gravity wave generators. The tank is
capable of producing winds up to 30 m s^−1 at the centerline, current up to 0.5 m s^−1, and mechanically generated waves with up to 10-cm height. Fully transparent acrylic glass walls, bottom, and
top of the tank allow using optical nonintrusive methods for flow measurements and visualization.
An “Elliott” type of static air pressure probe (Elliott 1972b) was used to study the wind-wave momentum transfer. Because wave-induced airflow pressure fluctuations quickly decay with height, it was
important to measure pressure as close to the wave surface as possible. Therefore, the probe was continuously moved vertically and kept at a small distance (1–3 cm) from the wave surface.
To make wave following possible in strong wind conditions with occasional spray and wave breaking, a robust and fast response elevation gauge was developed. The new Digital Laser Elevation Gauge
(DLEG) essentially is a vertical laser beam crossing the water surface. Fluorescein added to the water makes the laser beam highly visible; this creates a brightness contrast as the beam crosses the
air–water interface. A digital line scan camera looks at the beam through the tank’s sidewall. The brightness threshold location on a line image signifies the water elevation. Line images, sampled at
250 Hz, were acquired through a firewire board and processed by a National Instruments Labview code in real time. The code had an advanced edge detection algorithm and thus provided a surface
elevation signal clean from whitecap and spray-related spikes. The output signal had 4-ms temporal and 0.5-mm spatial resolution. The same software acquired data from other instruments and controlled
the entire experiment. A detailed description of this system can be found in Savelyev (2009). The edge detection algorithm used to determine the water surface location is described in the appendix.
The heart of the wave follower is a linear servo motor with its programmable controller. To provide the optimal following trajectory, at every time step, in addition to a new water elevation
coordinate, the current motor position was considered to make a new motion decision. This allowed smooth reattachment of water elevation and wave-follower trajectories, limited unnecessary
vibrations, and disabled positive feedbacks. The key principle behind the smooth motion was that, instead of position, only the velocity of the motor was controlled. Every time step, it was chosen as

V = (η − z[m])/Δt,

where η is the current surface elevation, z[m] is the current motor elevation, and Δt is the sampling time step. This allowed the motor to be in constant motion and thus eliminated stop and go vibration, the so-called jerking problem, which is harmful for the motor and pressure sensors.
The follower motion lagged about 30 ms behind surface elevation. To compensate for this lag, another elevation gauge was installed 4 cm upstream from the pressure probe location.
The static pressure Elliott probe is essentially a metal disk 2 cm in diameter with an inlet in the middle of each side. Both inlets are connected to a single pressure transducer. The shape of the
probe is such that airflow, disturbed by the disk edge, restores by the time it reaches the center of the disk. This ensures that the measured pressure is static pressure, as long as the wind
direction is within ±12° of the disk plane. More detailed information on the probe design and specifications can be found in Elliott (1972b). In ASIST, for the given experimental setup, the dominant
wind speed components are in vertical and along-tank directions. Thus, the disk was mounted vertically along the tank direction on the bottom of a vertically moving shaft. In the event of flooding
due to wave breaking or spray, a backflow was initiated through the probe.
The observed static pressure difference between the windward and leeward sides of a wind wave was in the 1–30-Pa range. At such small fluctuation magnitudes, a number of factors that can contaminate
the pressure measurements need to be considered. Vertical accelerations generate additional pressure fluctuations in the air column between the pressure probe and the transducer,

Δp = ρ[a] a Δz,

where ρ[a] is air density, a is the wave-follower acceleration, and Δz is the vertical distance between the pressure probe and the pressure transducer. This is compensated for in the analysis, because the acceleration of the probe may be deduced from the time history of its position. Also, to minimize these fluctuations, the transducer was mounted, with its membrane vertical, just 10 cm above the probe. As a result, the correction Δp was within 1% of the wave-induced airflow pressure fluctuations.
There are also errors associated with the transducer membrane’s finite mass, noise cancelling electronics, and the pressure wave propagation time between the probe and the pressure transducer. These
effects were investigated using a controlled pressure chamber and are presented in the form of a phase–frequency response function (Fig. 3) and an amplitude–frequency response function (Fig. 4).
These response functions were applied to all measured pressure signals in Fourier space. More information on the pressure probe calibration can be found in Donelan et al. (2005a), where similar
procedures were followed.
Fig. 3.
Pressure measurement’s phase–frequency response function.
Fig. 4.
Pressure measurement’s amplitude–frequency response function.
Within each 30-min run, the wind speed was held constant and a monochromatic wave of a constant frequency and amplitude was generated mechanically. During the propagation from wave maker to the test
location (8.7 m), mechanically generated waves were observed to keep constant wavelength; however, their amplitude increased significantly, especially under strong wind forcing. Therefore, all wind
and wave parameters were measured at the exact location of the Elliott pressure probe (see Fig. 2) and the results represent growth rates only at that particular point.
The friction force, applied by the wind to the water surface, generated a near-surface current with the magnitude of 3–10 cm s^−1, depending on the wind speed. The current reversed its direction at ~
(10–15)-cm depth to form a compensating backflow. This effect introduces a Doppler shift to the wave dispersion relationship, effectively increasing the wavelength of a given wave frequency. However,
the effect was neglected in this study, because the drift current was typically 20–50 times slower than the wave phase speed.
After data acquisition, correction, and calibration, the water surface elevation, pressure probe elevation, and static air pressure time series were obtained at 250-Hz sampling frequency. Table 1
provides a summary of all successful runs, as well as corresponding wave parameters, obtained from these measured time series.
Table 1.
Summary of all runs. Here, U[10] is the equivalent wind speed at 10-m height, U[λ/2] is the wind speed at half the wavelength height, C[p] is the wave phase speed, a and k are the amplitude and wavenumber, τ[w] is the wind-wave momentum flux, and γ is the spectral wave growth function.
The parameters were selected to extract momentum flux from the wind to a single wave component at various wind speeds and wave amplitudes. All experiments were repeated at three wave frequencies
(0.75, 1, and 1.25 Hz) to test the scaling invariance. The minimum amplitude was chosen to be ~0.5 cm, because at lower values the pressure fluctuations were too small to be accurately sampled. The
maximum amplitude (~4.5 cm) limitation was due to the pressure fluctuations exceeding the upper limit of the pressure transducers. The minimum wind speed was chosen to be U[10] = 7 m s^−1 in order to
provide a sufficient pressure fluctuation signal to be above the noise floor of the transducer.
The largest effort was expended to increase the maximum wind speed limitation (U[10] = 26.9 m s^−1). Wave breaking and spray started to be visually noticeable around the wind speed U[10] = 22 m s^−1.
At higher wind speeds, the amount of whitecaps and spray rapidly increased, eventually causing unacceptable degradation of the wave-follower performance, as well as repeated clogging of the pressure
probe by water intrusion. For these reasons, most of our attempts at higher wind speeds did not produce satisfactory results, with the exception of two runs with U[10] = 26.9 m s^−1. The role of
spray and wave breaking is thoroughly investigated in section 3d; however, because we only sampled at the inception of such conditions, our conclusions lack statistical confidence and call for
additional experiments in higher winds.
3. Data analysis and results
a. Pressure profiles
Wind-wave momentum flux is given by the correlation of surface pressure with surface slope, τ[w] = ⟨p[0] ∂η/∂x⟩, for which the static air pressure at the surface is required. Because pressure was measured at a small but finite height, it must be extrapolated to the surface. However, the exact form of the extrapolation function is unknown. Here the pressure fluctuation is assumed to decay exponentially with height, p = p[0] exp(−αkζ), where ζ is the pressure probe's instantaneous elevation above the water surface, p is the measured pressure, p[0] is the pressure at the surface, k is the wavenumber, and α is an empirically determined constant. Potential theory, as well as previous experiments in low winds (e.g., Banner 1990; Hristov et al. 2003), suggests α = 1. In strong wind forcing, the vertical decay of the pressure fluctuation has not been previously observed.
For this purpose an averaged function p(z, x) was measured for each run, which was conducted under constant wind and mechanical wave forcing for 30 min. The vertical distance z is the elevation above
the average surface at the wave phase x.
Using a numerical bandpass filter, the mechanically generated wave frequency was extracted from the surface elevation time series. For every data point, the real and imaginary parts of a Hilbert
transform determined wave phase. Once the phase of the wave was known, phase-resolved averages of surface elevation reproduced the mean shape of the long wave. Pressure measurements, collected and
averaged at multiple levels above each wave phase, enabled pressure extrapolation to the surface (Fig. 5). For the purpose of illustration of p(z, x), pressure measurements were averaged over areas λ
/72 wide and 5 mm high, where λ is the wavelength of the long (mechanically generated) wave.
Fig. 5.
Two examples of phase-resolved airflow pressure fluctuations p(Θ, z) above the mean wave surface (solid line): (top) U[λ][/2]/C[p] = 11.75 and ak = 0.072 (run 26) and (bottom) U[λ][/2]/C[p] = 11.37
and ak = 0.115 (run 46). Vertical and horizontal length scales are normalized by the wavelength L.
Citation: Journal of Physical Oceanography 41, 7; 10.1175/2011JPO4577.1
During each run, the pressure was sampled at a range of heights (above the wavy surface) starting from the closest nonwetting height [~(1–3 cm)] up to about 1/10 of the wavelength. Although the
pressure probe can only sample one elevation z and phase x at a time, over the course of 30 min there was sufficiently dense sampling to provide bin-averaged mapping of the pressure distributions
with elevation and phase. By averaging to obtain p(z, x), the turbulent component of the pressure fluctuations as well as the effects of random realizations of wind waves were removed.
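The phase-resolved averaging described above — bandpass filtering the elevation, extracting the instantaneous phase with a Hilbert transform, and bin averaging — can be sketched as follows. This is an illustrative reconstruction, not the authors' processing code; the filter bandwidth, bin count, and variable names are assumptions (in the full analysis the pressure is additionally binned by elevation above the surface).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def phase_resolved_average(eta, p, fs, f_wave, n_phase=72):
    """Bin-average pressure against the phase of the mechanically
    generated wave, extracted from the surface elevation record."""
    nyq = fs / 2.0
    # Band-pass the elevation around the paddle frequency.
    sos = butter(4, [0.8 * f_wave / nyq, 1.2 * f_wave / nyq],
                 btype="band", output="sos")
    eta_f = sosfiltfilt(sos, eta)
    # Instantaneous phase from the analytic (Hilbert) signal, in (-pi, pi].
    phase = np.angle(hilbert(eta_f))
    edges = np.linspace(-np.pi, np.pi, n_phase + 1)
    idx = np.digitize(phase, edges) - 1
    centers = edges[:-1] + np.pi / n_phase
    # Averaging removes turbulence and random wind-wave realizations.
    mean_eta = np.array([eta_f[idx == i].mean() for i in range(n_phase)])
    mean_p = np.array([p[idx == i].mean() for i in range(n_phase)])
    return centers, mean_eta, mean_p
```

Averaging over many wave periods in this way suppresses both the turbulent pressure fluctuations and the short wind waves that are incoherent with the paddle wave.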
The vertical pressure profile was found to vary with wave phase. It sometimes has a local minimum above the leeward side of the wave crest. For example, two cases with similar wind forcing but
different wave steepness are shown in Fig. 5. Although airflow streamlines were not resolved in this experiment, in some cases a pressure minimum is visible above the lee side of a steeper wave, such
as in Fig. 5 (bottom). Therefore, the bottom and top panels in Fig. 5 possibly illustrate averaged pressure patterns with and without the airflow separation, respectively. Further analysis (sections
3e and 4), however, shows that the airflow separation is weak in these conditions and plays a minor role in wind-wave momentum flux.
Analysis of all runs revealed that for each wave phase in the given height range the pressure profile is best described by a linear fit. The pressure measurements were confined to a small height above the surface (~1/10 of the wavelength); therefore, only the linear term of the Taylor series expansion of the pressure was preserved. This assumption allows estimation of the rate of the vertical exponential decay based on the available data, α = −(∂p/∂z)/(kp[0]), where ∂p/∂z is the slope of the pressure profile near the surface and p[0] is the pressure linearly extrapolated to the surface. This provides an estimate for α at each phase Θ of the studied wave. Averaging α(Θ) over all available runs gives the phase-dependent rate of vertical exponential decay of pressure fluctuations (Fig. 6).
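A minimal sketch of this per-phase estimate follows; the function name and sample profile are illustrative, and the linear fit stands in for the first-order Taylor expansion used in the paper.

```python
import numpy as np

def pressure_decay_rate(z, p, k):
    """Estimate the exponential decay rate alpha in p(z) = p0*exp(-alpha*k*z)
    from a linear fit to near-surface samples at one wave phase:
    dp/dz at z=0 is -alpha*k*p0, so alpha = -(dp/dz)/(k*p0)."""
    slope, p0 = np.polyfit(z, p, 1)  # p ~ slope*z + p0
    return -slope / (k * p0), p0
```

Repeating this fit for each phase bin yields the α(Θ) curve of Fig. 6, and p[0] is the surface-extrapolated pressure used in the momentum flux estimates.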
Fig. 6.
The intensity of the exponential pressure vertical decay rate (7) as a function of wave phase. Error bars show 95% confidence intervals for the value of α, and the dash–dotted line represents the closest analytical function fit for α. Solid and dashed lines are the measured and fitted normalized mean surface elevation, respectively.
The wide spread of the 95% confidence intervals shows that for various wind-wave conditions the vertical decay of pressure near the surface is not necessarily exponential. Also, these results clearly suggest that the exponential decay coefficient α depends on wave phase. The minimum mean-square fit yields an analytical approximation for α(Θ) (dash–dotted line in Fig. 6); the corresponding mean surface elevation (solid line in Fig. 6) is well approximated by a fitted cosine wave [η = −cos(Θ)], shown by the dashed line.
In agreement with a priori expectations, the observed value of α fluctuates around 1. However, these fluctuations were found to be wave phase dependent. More experimental work on this topic is needed to confirm this dependence and to investigate the effect of wind forcing severity on it. It is likely that the wave phase dependence exists because of sheltering, and, if so, α bears some dependence on wind forcing and/or wave shape.
Note that, in the following analysis, there was no need for Eq. (9), because the value of p[0] was available in all cases. Therefore, the uncertainty in Eq. (9) does not undermine other results of this work.
b. Momentum flux from wind to waves
Once the static air pressure at the surface and the surface slope are obtained, the momentum flux is given by τ[w] = ⟨p[0] ∂η/∂x⟩. It was calculated for each run and is shown in Fig. 7 (top). As expected, momentum flux intensifies as wind speed increases, and the steepness of the imposed wave (shown by symbols) strongly influences the momentum input as well. The bulk air–sea momentum flux τ is often parameterized as a function of wind speed U[10] and a drag coefficient C[d], τ = ρ[a]C[d]U[10]^2 = ρ[a]u[*]^2, where u[*] is the friction velocity (e.g., Powell et al. 2003; Donelan et al. 2004). As momentum flux crosses the water surface, part of it transfers into the wave motion because of the normal pressure force, and the other part transfers directly into the near-surface current because of the friction force. Therefore, the drag coefficient C[d] can be represented by two components, C[d] = C[dw] + C[dν], where C[dw] is responsible for the form drag and C[dν] is responsible for the friction drag. This way, the wind-wave momentum flux measured in this study can be expressed as τ[w] = ρ[a]C[dw]U[10]^2 [Eq. (10)].
A drag coefficient is expected to primarily depend on surface shape. We have found that within our dataset it can be well described by C[dw] = 0.146(ak)^2 [Eq. (11)]. Equations (10) and (11) are shown in Fig. 7 (bottom) by a solid line, which is strongly correlated (correlation coefficient 0.97) with our data. The relationship appears to hold throughout the entire range of tested parameters: U[10] from 7 to 26.9 m s^−1, a from 0.6 to 4.5 cm, and k from 2.76 to 6.35 m^−1. This result leads to two conclusions: first, in these strongly forced conditions, the rate of momentum flux into the wave field does not depend on the actual size of waves but is controlled only by the surface slope. Second, unlike the bulk air–sea momentum flux drag coefficient, the wind-wave drag coefficient does not directly depend on wind speed. However, it is likely that in realistic conditions ak tends to increase with wind speed, forming an indirect relationship between C[dw] and the wind speed. More detailed discussion on the applicability of this result to the realistic ocean environment is given in section 3f.
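A minimal sketch of the two computations in this section: the wind-wave momentum flux as the mean product of surface pressure and surface slope, and a least-squares fit of a quadratic drag law of the form τ[w] = ρ[a]β(akU[10])^2. The slope-from-time-derivative conversion assumes a single wave propagating at a known phase speed; the function names and the air density value are illustrative assumptions.

```python
import numpy as np

def wave_momentum_flux(p, eta, fs, c_p):
    """tau_w = <p * d(eta)/dx>: for a wave travelling in +x with phase
    speed c_p, the surface slope is d(eta)/dx = -(1/c_p) * d(eta)/dt."""
    deta_dx = -np.gradient(eta, 1.0 / fs) / c_p
    return np.mean(p * deta_dx)

def fit_wave_drag(tau_w, u10, ak, rho_a=1.2):
    """Least-squares coefficient beta in tau_w = rho_a*beta*(ak*u10)**2,
    i.e. a wind-wave drag coefficient of the form C_dw = beta*(ak)**2."""
    x = rho_a * (ak * u10) ** 2
    return np.sum(x * tau_w) / np.sum(x * x)
```

Only the pressure component in quadrature with the elevation (i.e., correlated with the slope) contributes to the flux; a pressure field exactly in phase with the crests does no net work on the wave.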
Fig. 7.
Wind-wave momentum flux as a function (top) of wind speed and (bottom) of the wind speed multiplied by wave steepness. Solid line is defined by Eqs. (10) and (11). Symbols show (top) the range of the
wave steepness and (bottom) the value of the wavenumber. The 2 large asterisks in (bottom) show 2 runs with the highest wind speed U[10] = 26.9 m s^−1.
c. Wave growth rate as a function of wind forcing
The nondimensional wave growth function γ is the ratio of the rate of work done on a progressive surface wave by atmospheric pressure to the total energy of the wave (scaled by the ratio of water density to air density). In other words, it is a relative growth rate of the wave energy. It may be calculated from the quadrature spectrum of pressure and surface elevation [Eq. (12); Snyder et al. 1981]. However, it is impossible to obtain the entire function based on one 30-min run because of the rapidly increasing error in the tail of the wave spectrum. Therefore, each run was used to calculate only one value, corresponding to the frequency of the mechanically generated wave. This simplifies Eq. (12) to a single-frequency form [Eq. (13)], in which σ^2 is the variance of the surface elevation of the mechanically generated wave at the given frequency. All variables on the right-hand side of Eq. (13) are known, and the resulting wave growth function can be calculated by varying the frequency of interest. It is shown in Fig. 8 together with parameterizations obtained by Snyder et al. (1981), Hsiao and Shemdin (1983), Hasselmann and Bösenberg (1991), Donelan (1999), and Donelan et al. (2006).
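The quadrature spectrum that enters this calculation — the imaginary part of the pressure–elevation cross-spectrum — can be sketched at a single frequency as below. This is an illustrative single-frequency Fourier estimate (sufficient for a narrowband mechanically generated wave), not the authors' spectral code, and the normalization into γ itself [Eq. (13)] is not reproduced here; sign conventions for the quadrature part vary between references.

```python
import numpy as np

def quadrature_at(x, y, fs, f0):
    """Imaginary part of the cross-spectrum of x and y at frequency f0,
    via complex Fourier coefficients of each record."""
    t = np.arange(len(x)) / fs
    cx = np.mean(x * np.exp(-2j * np.pi * f0 * t))
    cy = np.mean(y * np.exp(-2j * np.pi * f0 * t))
    return 2.0 * np.imag(cx * np.conj(cy))
```

For two sinusoids of amplitudes A and B with relative phase φ, this estimate returns (AB/2) sin φ, which vanishes when pressure and elevation are exactly in phase — consistent with the fact that only the quadrature component does work on the wave.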
Fig. 8.
Spectral wave growth function dependence on wind forcing. Present study data points are compared to parameterizations obtained by other authors.
According to the wave growth theory (Miles 1957), we expect γ to depend primarily on wind forcing. Here, wind forcing is represented by (U[λ][/2]/C[p] − 1), where U[λ][/2] is the wind speed at the height equal to half of the wavelength. A least squares fit to the data yields the dependence γ = 0.52(U[λ][/2]/C[p] − 1)^2 [Eq. (14)], which is shown in Fig. 9. It closely matches an averaged fit to multiple datasets (Shemdin and Hsu 1967; Larson and Wright 1975; Wu et al. 1977; Wu 1979; Snyder et al. 1981) compiled by Plant (1982).
Fig. 9.
Spectral wave growth function dependence on wind forcing. Stars, circles, and inverted triangles represent steepness region of the studied wave. The solid line is the least mean-square quadratic fit
to our data, and the dashed line corresponds to Plant (1982) compilation.
d. Analysis of the wave growth rate sensitivity to secondary parameters
The close match between our results and previously observed growth rates (Fig. 9) is encouraging, but both the present data and the dataset compiled by Plant (1982) have significant scatter around
their mean values. In a weakly nonlinear setting, the main cause for this scatter is anticipated to be the variation in wave steepness, which has been shown to influence the growth rate both
experimentally (Peirson and Garcia 2008) and theoretically (e.g., Belcher 1999). As the wave field becomes strongly nonlinear, occurrences of breaking waves and spray are also expected to contribute
to the observed scatter. All of these parameters, including the wind forcing, are not completely independent; therefore, further correlation analysis and empirical function construction must be
approached with caution.
Let us assume that the final empirical wave growth function will take the form γ = 0.52(U[λ][/2]/C[p] − 1)^2 G(ζ) [Eq. (15)], where G(ζ) is an unknown function that depends on one or more of the wind-wave parameters. For further simplification and because of the lack of data, we further assume that G is a linear polynomial. Next, we examine if ζ can be one of the following parameters: wave amplitude a; wave frequency f; wave steepness ak; wave crest sharpness s, which is defined as the ratio between the crest elevation above the mean water level and the crest width at half the elevation; wind speed U[λ][/2]; or the percentage of waves breaking, P[br]. A wave was considered to be breaking if at any point its orbital velocity exceeded half of its phase velocity (Zhang 2008). The orbital velocity was calculated by means of a Hilbert transformation (Melville 1983). In the following analysis of the wave breaking probability, only runs with P[br] > 0 were considered.
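The breaking criterion above can be sketched as follows for a narrowband wave, estimating the surface orbital speed as the wave frequency times the Hilbert envelope. This is an assumed reconstruction of the procedure, not the authors' code; the segmentation into individual wave periods is a simplification.

```python
import numpy as np
from scipy.signal import hilbert

def breaking_fraction(eta, fs, f_wave, c_p):
    """Kinematic breaking criterion (after Zhang 2008): a wave is breaking
    when its surface orbital speed exceeds half the phase speed. The
    orbital speed is estimated as omega * envelope, with the envelope
    from a Hilbert transform (after Melville 1983)."""
    env = np.abs(hilbert(eta - np.mean(eta)))
    u_orb = 2.0 * np.pi * f_wave * env
    breaking = u_orb > 0.5 * c_p
    n_waves = int(len(eta) * f_wave / fs)  # number of wave periods
    segs = np.array_split(breaking, n_waves)
    return float(np.mean([seg.any() for seg in segs]))
```

P[br] is then the fraction of wave periods containing at least one breaking event.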
The least mean-square fits of the linear functions G(ζ) = Kζ + b are summarized in Table 2. The relative range value is a measure of the sensitivity of G to the given parameter ζ. It is defined as the difference in G between the largest and the smallest values of ζ, normalized by mean G and multiplied by 100%. For example, we see a strong dependence of G on ak (40%), f (35%), U[λ][/2]^3 (30%), U[λ][/2] (28%), and P[br] (26%) within the investigated range of these variables. However, this does not necessarily mean that γ depends on all of them, because some of these parameters are not
independent. To keep track of all correlations, the full correlation matrix is given in Table 3. For example, wave steepness is strongly correlated with crest sharpness and breaking probability. Less
noticeable is the correlation between ak and f: lower-frequency waves tend to be less steep, and steeper waves tend to have higher amplitude. Note that, although some of these interdependencies are
typical to ocean conditions, others are purely artificial and unique to this dataset. For example, the U[λ][/2] correlation with f(−0.344) reflects not a fundamental wave growth law but merely our
choices of wind speed and wave frequency. For these reasons, to avoid spurious correlations, function G(ζ) was investigated as a function of only one secondary parameter at a time. Also, in some
cases below, function G(ζ) retains dimensions of the secondary parameter ζ, whereas the right-hand side of Eq. (15) is expected to be nondimensional. This was done simply to show a qualitative role
of a given parameter, and such polynomials are not intended to be used as parameterizations, mainly because of large confidence intervals.
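The fits of Table 2 — a linear regression with a 95% confidence interval on the slope, plus the relative range metric — can be sketched as below. The relative range here is normalized by the mean of the fitted endpoint values, which is an assumption about the exact normalization used in the table.

```python
import numpy as np
from scipy import stats

def linear_fit_summary(zeta, g):
    """Linear fit G = K*zeta + b with a 95% confidence interval on K and
    the 'relative range': the change of the fitted G across the observed
    range of zeta, as a percent of the mean fitted G (cf. Table 2)."""
    res = stats.linregress(zeta, g)
    ci95 = stats.t.ppf(0.975, len(zeta) - 2) * res.stderr
    g_lo = res.slope * zeta.min() + res.intercept
    g_hi = res.slope * zeta.max() + res.intercept
    rel_range = 100.0 * abs(g_hi - g_lo) / (0.5 * (g_hi + g_lo))
    return res.slope, res.intercept, ci95, rel_range
```

Wide confidence intervals, as for G(P[br]) in Table 2, signal that the slope sign itself is not statistically secure.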
Table 2.
Summary of linear fits G(ζ) = Kζ + b. The first column states which parameter represents ζ; the second column gives the range of ζ within the dataset; the third column is the slope of the linear
regression K; the fourth column is the 95% confidence interval of K; the fifth column is b; and the sixth column is the difference between the values of the linear fit at limiting points (given in
the second column) shown as a percent of an average value. Only points with P[br] > 0 were used to form G(P[br]); all runs were used for the rest of parameters.
Table 3.
The correlation matrix for all considered secondary parameters.
An a priori assumption for this study was that the wave growth function, in addition to its dependence on wind forcing also strongly depends on ak; therefore, these parameters were purposely varied
during the experiment. Indeed, there was a strong sensitivity of G to ak, resulting in a 40% drop of the growth rate as ak varied from its minimum to maximum observed values. The linear regression
slope and its 95% confidence interval was −2.52 ± 1.26. This dependence on wave steepness is in accord with previous experimental observations and theoretical predictions, as discussed in more detail
in section 3e.
The wave growth also exhibited an unexpected sensitivity to the wave frequency, with G = −0.77f + 1.85. The dependence of G on f can be attributed to their indirect relationship through wave
steepness (longer waves tend to be less steep). However, the correlation between ak and f is only moderate (0.237), prompting us to look further into the G(f) function behavior. In the present study,
only three wave frequencies were studied (0.75, 1, and 1.25 Hz); therefore, no conclusive statements can be made on the matter. In future work, it will be useful to conduct a set of experiments with
constant wind forcing and wave steepness but various wave frequencies to reveal the nature of G(f).
Based on experimental work (Banner 1990; Touboul et al. 2009) and the theoretical model of Kudryavtsev and Makin (2001), we know that a sharp wave crest, particularly the crest of a breaking wave,
causes airflow separation, which in turn enhances the wind-wave momentum flux. A number of experimental (e.g., Reul et al. 2008) and numerical (e.g., Ryn et al. 2007) works have covered the topic of
airflow separation, but its detailed description in terms of wind-wave parameters is still incomplete. Although it is an important research subject, the present study did not directly measure airflow
velocity streamlines and as such could not resolve flow separation. Pressure patterns shown in Fig. 5 (top and bottom) may provide some information on the matter; for example, the local pressure
minimum above the lee side of the wave in Fig. 5 (top) may suggest the existence of the separation bubble. However, such pressure patterns are not individual events but averages over hundreds or
thousands of wave periods, which are likely to include both separated and nonseparated cases. Moreover, short wind waves that are not visible on these averaged images were likely to introduce an
unknown variability to the location of the separation bubble. Therefore, these limitations do not allow us to establish a direct relationship between the airflow separation and the wind-wave momentum
flux. Nonetheless, one might hypothesize that, in addition to wind forcing and wave steepness, the effect of the airflow separation on the wave growth is controlled by the wave crest sharpness s and/
or wave breaking probability P[br]. However, in our experiments, the parameter s was not found to have any influence on the wave growth (Table 2). Function G was found to increase with breaking probability (by 26% within the studied range; Table 2). However, most of the already limited number of runs with P[br] > 0 have low breaking probability. Therefore, the 95% confidence interval of the linear regression slope is wide, 0.66 ± 0.73. Such statistical error does not allow making definite conclusions and calls for more data from future experiments.
Additionally, runs with breaking probability P[br] > 0 on average result in values of the wave growth rate 4% below the parameterization given by Eq. (14), whereas cases with P[br] = 0 are 22%
higher. This suggests an opposite effect: that is, reduction of γ because of wave breaking. However, in the following section it will be shown that the wave growth rate is sensitive to wave
steepness. It is likely that cases with P[br] = 0 produce higher growth rates, compared to P[br] > 0, mainly because of lower wave steepness. These two effects are hard to isolate within our dataset,
because the correlation coefficient between ak and P[br] is 0.79 (Table 3).
A limited amount of spray was observed during some runs within this study. Therefore, it is interesting to investigate how spray presence affected the wind-wave momentum flux. On one hand, spray is
expected to increase the bulk air–sea momentum flux: spray droplets are accelerated by the airflow before they reenter the water column. This slows down the wind speed near the surface (Pielke and
Lee 1991), causing weaker wave-induced pressure fluctuations. Therefore, we would see a reduction in the wave growth function. On the other hand, based on Powell et al. (2003) field measurements,
Makin (2005) suggested a reduction of the net air–sea momentum flux and an increase of the near-surface wind speed due to spray. However, an alternative model by Troitskaya and Rybushkina (2008) is
able to account for the drag reduction with no regard to spray. In any case, Powell et al. (2003) observed this effect only at U[10] exceeding 33 m s^−1, which is well above the range of wind speeds
in our study.
To investigate the role of spray, we have conducted an additional experiment and obtained a rough estimate of the spray concentration in the air. As a part of the digital laser elevation gauge
technique, described in section 2, a digital line scan camera acquired images of spray droplets, crossing the path of the vertical laser beam (~2 mm in diameter). The number of spray droplets that
crossed the beam in 1 s at the height range of 15–30 cm above the mean water level is shown as a function of wind speed in Fig. 10. It can be seen that, apart from occasional droplets, the spray
amount is significantly smaller for U[10] below 22 m s^−1, which is also around the cutoff wind speed for most of our dataset (with the exception of two runs with U[10] = 26.9 m s^−1). This is not a
coincidence, because most of our attempts to conduct wave following and pressure measurements in heavy spray conditions did not produce useful data.
Fig. 10.
The number of spray droplets that crossed the vertical laser beam in 1 s at the height range 15–30 cm above the water as a function of wind speed.
Large asterisks in the bottom panel of Fig. 7 show two successful runs conducted in conditions where spray concentration is expected to be the highest (i.e., U[10] = 26.9 m s^−1). Both of them
produced τ[w] values lower than the fitted curve, thus suggesting the negative effect of spray on the wind-wave momentum flux (i.e., Pielke and Lee 1991). Clearly, more data at higher winds are
needed to confirm and quantify this effect. Andreas (1998) expects nearly an order of magnitude increase in spray volume as wind speed increases from 22 to 32 m s^−1. At that point, spray-related
effects might dominate the wind-wave momentum flux. However, because of technical limitations of our experimental approach, described in section 2, we were unable to proceed to such high winds. For
more detailed discussion regarding the effect of spray on the air–sea momentum flux in higher winds, the reader is referred to Andreas (2004).
e. Wave growth dependence on wave steepness
In accord with a priori expectations, the wave growth function was found to be particularly sensitive to the wave steepness among other secondary parameters. Next, we perform a more detailed analysis
of this dependence and compare it to previous findings.
Previously we assumed the possibility of a separation of variables within the wave growth function, γ(U[λ][/2]/C[p], ak) = 0.52(U[λ][/2]/C[p] − 1)^2 G(ak) [Eq. (15)], where the wind forcing dependence is that of Eq. (14). To test this assumption, the experiment was designed in a way that forms vertical clusters of points with nearly constant wind forcing but variable ak (Fig. 8). These clusters were created on purpose to provide cross sections of G(ak) and thus map the entire surface γ(U[λ][/2]/C[p], ak) as a true function of two variables. Each of these clusters was analyzed separately; all of them are shown as G(ak) with corresponding linear fits in Fig. 11. This figure gives a glimpse of the complex structure of G(ak), but, given the limited amount of data, we have to approximate each G(ak) dependence as a linear fit. It is evident that a decline of G with steepness takes place in almost all cases. Therefore, it is reasonable to generalize the data as having a linear decline of γ with increasing ak for any wind forcing. For simplicity, this relative decline is further assumed to be independent of wind forcing in the studied range. To generalize this dependence, each G(ak) function was normalized by its average 〈G〉 so that the linear fit through all available data gives an average G(ak) linear fit. The resulting averaged dependence of the wave growth function on wave steepness is given by G(ak) = 1.2 − 1.9ak [Eq. (16)], where the 95% confidence interval of the linear regression slope is −1.9 ± 0.58. The present data with a corresponding linear fit (solid line) are shown in Fig. 12. The dashed–dotted line is a simplified representation of the dataset compiled by Peirson and Garcia (2008, their Fig. 7), and the dashed line is the nonlinear correction to the wave growth rate proposed by Belcher [1999, Eq. (4.9), evaluated with a parameter value of ≈20]. Both lines were normalized by their mean values 〈G〉 averaged over the range of ak from 0.03 to 0.19 for the purpose of comparing their slopes ∂G/∂(ak) with Eq. (16).
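The cluster-normalization procedure just described can be sketched as follows; the cluster labels and names are illustrative, not the authors' analysis code.

```python
import numpy as np

def normalized_steepness_fit(ak, gamma, cluster_id):
    """Normalize gamma within each constant-wind-forcing cluster by the
    cluster mean, then fit a single line through all clusters to obtain
    the averaged steepness dependence G(ak)."""
    g_norm = np.empty_like(gamma, dtype=float)
    for c in np.unique(cluster_id):
        sel = cluster_id == c
        g_norm[sel] = gamma[sel] / gamma[sel].mean()
    slope, intercept = np.polyfit(ak, g_norm, 1)
    return slope, intercept, g_norm
```

Normalizing by the cluster mean removes the wind-forcing level from each cluster, so the pooled fit isolates the relative decline of the growth rate with steepness.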
Fig. 11.
Spectral wave growth function dependence on wave steepness for various wind forcing. Symbols represent groups of runs with constant wind forcing. Solid lines represent a linear fit through each group.
Fig. 12.
Normalized and averaged dependence of spectral wave growth function on wave steepness. Symbols are as in Fig. 11. The solid line represents averaged linear fit, the dashed line is Belcher (1999), and
the dashed–dotted line is Peirson and Garcia (2008).
In contrast with this study, one of the conclusions of Donelan et al. (2006) suggests a proportionality between the wave growth rate and wave steepness (i.e., γ ∝ ak). Two proportionality
coefficients are given for low and high wind forcing, but both of them are positive. We are unable to give a definitive explanation for this contradiction, in part because of the large statistical
uncertainties of both results. The difference in γ(ak) might in part be attributed to the difference between the wave spectrum in the laboratory and in the field. A broad wind-wave spectrum results in a large range of possible wave steepnesses within a given wave field, whereas steepness stays nearly constant for each wave in the laboratory. For example, if the mean steepness is ak = 0.1 both in the laboratory and in the field, waves in the field, unlike those in the laboratory, will frequently exceed the critical steepness and break. According to Banner (1990), this will enhance the wind-wave momentum flux. Laboratory waves, on the other hand, will not yet experience full airflow separation at this steepness. Hence, the growth rate measured in the field might appear to be increasing with steepness. To test this hypothesis, it would be useful to compare momentum fluxes from wind to both monochromatic waves and wind-wave spectra in future experiments.
f. Results applicability
The growth rate correction due to wave nonlinearity obtained in this study [Eq. (16)] is in close agreement with the theoretical prediction by Belcher (1999). This suggests that the theory of
nonseparated sheltering is relevant to the airflow regime within the studied range of parameters. The empirical nonlinear correction for γ, compiled by Peirson and Garcia (2008), suggests a sharp
decline while ak is below 0.09 and a nearly constant value while ak is between 0.09 and 0.23. As wave steepness approaches breaking threshold, a sharp increase is expected because of the airflow
separation. Our data were collected in the range of ak between 0.03 and 0.19 and are in agreement with Peirson and Garcia (2008) within the range of 95% confidence intervals. Although there are
indications of airflow separation in some cases (e.g., Fig. 5b), it appears that our study mostly covers the range of ak where nonseparated sheltering is the dominant mechanism for the wind-wave
momentum transfer. Full airflow separation is expected to take a more dominant role and to enhance the wave growth for wave steepness above the range covered within this study.
In an ideal environment of pure wind waves, wave steepness can be related to the wind forcing using Toba's law [Toba 1972, his Eq. (3.18) and Fig. 4; Eq. (17) here], for which the friction constant was assumed to be 0.026. Using Eqs. (15)–(17), we get the dependence of γ on wind forcing for a pure wind sea [Eq. (18)]. In Fig. 13, Eq. (18) is shown as a solid line. The other three lines illustrate the wave growth function [Eq. (15)], with G given by Eq. (16); the three curves correspond to various values of ak, and the dots correspond to the data points listed in Table 1. The resulting correlation coefficient between the two-dimensional function γ(U[λ][/2]/C[p], ak) and the data points is 0.78.
Fig. 13.
The wave growth parameterization: Eqs. (13) and (14) are illustrated by 3 examples with varying wave steepness, and the solid line represents Eq. (16).
Because the wave steepness and wind forcing are controllable in the laboratory, they can and often do violate Toba’s law [Eq. (17)], which is only applicable for pure wind waves. According to Eq.
(17), for wind forcing U[λ][/2]/C[p] > 6 the wave steepness is expected to be beyond the range covered in this study, ak > 0.2. Therefore, our results [i.e., Eqs. (15), (16), or (18)] are only
applicable for pure wind-wave conditions for wind forcing U[λ][/2]/C[p] within the range from 4 (i.e., the minimal wind forcing within this study) to 6.
Toba’s law predicts pure wind waves to be steeper as wind forcing increases. In addition, our results show that the growth rate γ decreases with steepness. Therefore, growth rate increases because of
wind forcing but also decreases because of higher steepness caused by the wind forcing. In laboratory experiments, on the other hand, the steepness remains arbitrary, independent of wind forcing.
This can possibly explain the tendency of laboratory-based parameterizations γ(U[λ][/2]/C[p]) to yield faster growing values compared to field data (e.g., Fig. 8). Based on this conclusion, we
suggest that, for practical purposes, laboratory-based parameterizations γ(U[λ][/2]/C[p]) should only be used for the wave steepness they were measured at, or ideally they should include ak as one of
the input parameters, γ(U[λ][/2]/C[p], ak) [e.g., Eqs. (15) and (16)]. On the other hand, field measurements, conducted in pure wind-wave conditions, can produce correct γ(U[λ][/2]/C[p]), because
Toba’s law is inherently satisfied. However, the application of such parameterizations is limited to pure wind waves.
For wind forcing U[λ][/2]/C[p] above 6, the parameterization given by Eqs. (15) and (16) is not applicable for a pure wind sea, because the tested range of wave steepness is too low. This
parameterization is expected to be useful in conditions that include strong winds blowing over swell or in situations where the wind sharply changes direction with respect to wave fronts, effectively
reducing the wave steepness.
Similar narrow limitations apply to our wind-wave momentum flux results. For the lowest values of wind forcing tested in this study, Toba's law is satisfied and can be combined with Eqs. (10) and (11), yielding an expression for the wind-wave momentum flux as a function of wind forcing alone [Eq. (19)]. If Eq. (19) is extrapolated far outside of the tested region, toward the state of equilibrium (i.e., wind forcing U[λ][/2]/C[p] = 1.37), the momentum flux reduces to a constant value, independent of any wind-wave conditions. In future studies, it will be interesting to observe if this is true for the state of equilibrium and, if not, what the limits are of the applicability of Eq. (19) in mature seas.
4. Conclusions
In this work, an experimental effort was undertaken to improve our understanding of the airflow pressure fluctuations near the air–sea interface. Pressure measurements were used to obtain wind-wave
momentum flux and parameterize the wave growth rate in high winds.
The momentum flux τ[w] from wind to a mechanically generated wave was found to be a function of wind speed, wavelength, and wave amplitude. It can be parameterized as τ[w] = ρ[a]C[dw]U[10]^2 with C[dw] = 0.146(ak)^2, where C[dw] is the wind-wave drag coefficient (Fig. 7, bottom). The momentum flux measurements were further used to obtain the dependence of the wave growth function γ on various wind-wave parameters. Primarily, it was found to be sensitive to the wind forcing. The empirical dependence is given by γ = 0.52(U[λ][/2]/C[p] − 1)^2 G [Eqs. (14) and (15)], where G is a nondimensional function that takes the role of a correction coefficient due to wave nonlinearity, wave breaking, spray, etc. Some of the related parameters were investigated (Table 2), and the following simplified linear relationships were established.
First and foremost, G was found to decrease with the wave steepness, G = ak(−1.9 ± 0.58) + 1.2, where ±0.58 is the 95% confidence interval on the linear regression slope. The range of ak studied was
0.03 to 0.19. This result was found to be in agreement with previous observations (Peirson and Garcia 2008) and with the nonseparated sheltering theory of Belcher (1999). Although airflow streamlines
were not resolved in this study and therefore airflow separation was not directly observed, we hypothesize that the nonseparated sheltering is the dominant mechanism controlling the wind-wave
momentum transfer in the studied range of ak (0.03–0.19). Furthermore, we suggest that the separated sheltering mechanism, described by Kudryavtsev and Makin (2001), becomes dominant and enhances the
wind-wave momentum flux for wave steepness above the range covered in this study.
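As a quick numerical illustration, the steepness fit above is easy to evaluate directly (a sketch; the explicit range check reflecting the tested ak interval is my own addition, not part of the paper's parameterization):

```python
def G_steepness(ak):
    """Central estimate of the correction coefficient G from the linear
    fit G = -1.9*ak + 1.2 (the slope's 95% confidence interval is
    +/-0.58).  Only the steepness range 0.03-0.19 was tested."""
    if not 0.03 <= ak <= 0.19:
        raise ValueError("ak outside the tested range 0.03-0.19")
    return -1.9 * ak + 1.2
```

For ak = 0.10 this gives G ≈ 1.01; the wide confidence interval on the slope is a reminder that the relation is only a first-order trend.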
Although the coefficient G potentially can be a function of many wind-wave parameters, the most robust dependence is on the wave steepness. Other parameters, considered in our analysis, were
investigated over narrow ranges; therefore, the resulting parameterizations have only qualitative and preliminary meaning, calling for additional data to build statistical confidence.
According to the hypothesis of Andreas and Emanuel (2001), after spray particles detach from the water surface they are accelerated by the airflow. Therefore, as these particles reenter the water
column, they transfer some of the wind momentum with them. This mechanism slows down the wind speed in close proximity to the water surface and hence is responsible for weaker pressure fluctuations
and slower wave growth. Our rough estimate of spray concentration in the wind tunnel suggested that only two runs (with top wind speeds within our experiment) were conducted in heavy spray
conditions. Although the results from these runs support the hypothesis, more data are needed to construct reliable quantitative parameterization.
Banner (1990) measured airflow pressure fluctuations above breaking waves and observed an enhancement of the wind-wave momentum flux. Our data suggest a similar dependence: that is, G(P_br) = P_br(0.66 ± 0.73) + 0.88. However, because of the limited number of runs with breaking probability P_br > 0, the large statistical uncertainty within the G(P_br) function does not allow definite conclusions to be made. For wave fields with larger breaking probability and wave steepness, airflow streamlines are expected to switch to the full separation regime and sharply increase the value of the function γ(U_{λ/2}/C_p, ak). However, a more detailed experiment is needed to observe and quantify this phenomenon.
The authors gratefully acknowledge the support of the National Science Foundation (Grants OCE 0526318 and AGS 0933942) and the Office of Naval Research (Grant ONR N000140610288). We thank Mike
Rebozo, Tom Snowden, and Hector Garcia for technical assistance during the experiments.
The Edge Detection Algorithm
One of the greatest technical challenges of the wave-following experiment in rough conditions is the reliable and precise detection of the water surface location. An edge detection algorithm was developed specifically for the purpose of water surface measurements in rough wind-wave conditions. It was designed both for line-scan image processing within the DLEG technique and for areal images of a laser sheet crossing the water surface.
The idea behind the edge detection algorithm was to attempt to reconstruct the logic a human would use to find the water surface on an image such as Fig. A1. There are two main challenges that a
rough surface poses for edge detection: first, spray particles are as bright as the surface and have a risk of being recognized as such; second, wave breaking foam, attached to the surface,
misrepresents the actual water elevation. Both of these features are easily identified by a human eye but pose a significant challenge in automation development. To deal with the first problem, an
image is defocused, where its resolution is decreased by a factor of 64 × 64. Because spray particles are small, their high brightness has little impact on mean brightness of 64 × 64 pixel areas. On
the defocused image a raw estimate of the brightness edge can be easily found without the risk of spray contamination.
Fig. A1.
Image of a laser sheet crossing the water surface in high wind conditions with foam and spray present in the field of view. Horizontal and vertical axes are given in pixels.
Citation: Journal of Physical Oceanography 41, 7; 10.1175/2011JPO4577.1
Although the coarse surface edge estimate effectively deals with spray, it would still have an error due to the foam. To filter that effect out, a critical surface wave curvature criterion was used.
If the curvature of the coarse surface elevation signal ∂^2η/∂x^2 exceeded a critical value, such points were considered contaminated by the foam and were replaced by a function, smoothly filling the
gap. This function was found by small smoothing iterations, starting from the unchanged signal, until the curvature criterion was satisfied.
The curvature threshold was optimized individually for each run, depending on wave conditions (i.e., expected maximum crest sharpness). In the image dimension, the typical critical curvature value
was one vertical pixel per (one horizontal pixel × one horizontal pixel). Whether the replacement in form of the smooth function actually represents the true surface is an open question, because the
air–water interface line does not exist within the foam. Nonetheless, the described method gives a good estimate on where the surface would have been in the absence of foam.
Once the coarse step of the edge detection is complete, the image resolution is increased by an arbitrary factor and the same edge detection principle is applied for the part of the image within
close proximity to the rough elevation. The second step has the highest computation cost; therefore, such reduction in edge search area improves algorithm efficiency and reduces the chance of picking
up an edge somewhere within air or water away from the surface. The third step increases the image resolution to the maximum and simply interpolates the surface elevation curve to provide elevation
data for each pixel column of the original image.
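The three steps can be sketched roughly as follows (Python/NumPy; the function names, the mid-level brightness threshold, the assumption that the water side of the image is the bright side, and the neighbour-averaging smoother are my own illustrative choices, not the paper's implementation):

```python
import numpy as np

def coarse_edge(img, block=64):
    """Step 1: block-average ("defocus") the image so small, bright
    spray specks barely shift the mean, then take, per coarse column,
    the first block row from the top whose brightness exceeds the
    column's mid-level -- a spray-proof coarse surface estimate."""
    h, w = img.shape
    small = img[:h - h % block, :w - w % block]
    small = small.reshape(h // block, block, -1, block).mean(axis=(1, 3))
    edge = np.empty(small.shape[1])
    for j in range(small.shape[1]):
        col = small[:, j]
        thresh = 0.5 * (col.min() + col.max())
        edge[j] = np.argmax(col > thresh)        # first "water" block
    return edge * block + block / 2              # back to pixel rows

def fill_foam(edge, curv_max):
    """Step 2: wherever the second difference (discrete curvature)
    exceeds curv_max, treat the point as foam-contaminated and replace
    it by the mean of its neighbours, iterating small smoothing steps
    until the curvature criterion is satisfied everywhere."""
    e = edge.copy()
    for _ in range(1000):
        bad = np.where(np.abs(np.diff(e, 2)) > curv_max)[0] + 1
        if bad.size == 0:
            break
        e[bad] = 0.5 * (e[bad - 1] + e[bad + 1])
    return e
```

Step 3 (not shown) repeats the same edge search at full resolution, but only within a narrow band around the coarse estimate.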
• Andreas, E. L., 1998: A new sea spray generation function for wind speeds up to 32 m s−1. J. Phys. Oceanogr., 28, 2175–2184.
• Andreas, E. L., 2004: Spray stress revisited. J. Phys. Oceanogr., 34, 1429–1440.
• Andreas, E. L., and K. A. Emanuel, 2001: Effects of sea spray on tropical cyclone intensity. J. Atmos. Sci., 58, 3741–3751.
• Babanin, A. V., and V. K. Makin, 2008: Effects of wind trend and gustiness on the sea drag: Lake George study. J. Geophys. Res., 113, C02015, doi:10.1029/2007JC004233.
• Banner, M. L., 1990: The influence of wave breaking on the surface pressure distribution in wind-wave interactions. J. Fluid Mech., 221, 463–495.
• Belcher, S. E., 1999: Wave growth by non-separated sheltering. Eur. J. Mech., 18B, 447–462.
• Dobson, F. W., 1971: Measurements of atmospheric pressure on wind-generated sea waves. J. Fluid Mech., 48, 91–127.
• Donelan, M. A., 1999: Wind-induced growth and attenuation of laboratory waves. Wind-over-Wave Couplings: Perspective and Prospects, S. G. Sajadi, N. H. Thomas, and J. C. R. Hunt, Eds., Clarendon
Press, 183–194.
• Donelan, M. A., B. K. Haus, N. Reul, W. J. Plant, M. Stiassnie, H. C. Graber, O. B. Brown, and E. S. Saltzman, 2004: On the limiting aerodynamic roughness of the ocean in very strong winds. Geophys. Res. Lett., 31, L18306, doi:10.1029/2004GL019460.
• Donelan, M. A., A. V. Babanin, I. R. Young, M. L. Banner, and C. McCormick, 2005a: Wave-follower field measurements of the wind input spectral function. Part I: Measurements and calibrations. J.
Atmos. Oceanic Technol., 22, 799–813.
• Donelan, M. A., F. Dobson, H. Graber, N. Madsen, and C. McCormick, 2005b: Measurements of wind waves and wave-coherent air pressures on the open sea from a moving SWATH vessel. J. Atmos. Oceanic
Technol., 22, 896–908.
• Donelan, M. A., A. V. Babanin, I. R. Young, and M. L. Banner, 2006: Wave-follower field measurements of the wind-input spectral function. Part II: Parameterization of the wind input. J. Phys.
Oceanogr., 36, 1672–1689.
• Elliott, J. A., 1972a: Microscale pressure fluctuations near waves being generated by the wind. J. Fluid Mech., 54, 427–448.
• Elliott, J. A., 1972b: Instrumentation for measuring static pressure fluctuations within the atmospheric boundary layer. Bound.-Layer Meteor., 2, 476–495.
• Harris, D. B., and D. J. DeCicco, 1993: Wave follower instrumentation platform redesign and test. Proc. OCEANS ’93: Engineering in Harmony with Ocean, Victoria, Canada, IEEE, Vol. 1, I439–I443.
• Hasselmann, D., and J. Bösenberg, 1991: Field measurements of wave-induced pressure over wind-sea and swell. J. Fluid Mech., 230, 391–428.
• Hristov, T. S., S. D. Miller, and C. A. Friehe, 2003: Dynamical coupling of wind and ocean waves through wave-induced air-flow. Nature, 422, 55–58.
• Hsiao, S. V., and O. H. Shemdin, 1983: Measurements of wind velocity and pressure with a wave follower during MARSEN. J. Geophys. Res., 88, 9841–9849.
• Hsu, C. T., H. Y. Wu, E. Y. Hsu, and R. L. Street, 1982: Momentum and energy transfer in wind generation of waves. J. Phys. Oceanogr., 12, 929–951.
• Jacobs, C. M. J., W. A. Oost, C. van Oort, and E. H. W. Worrell, 2002: Observations of wave-turbulence interaction at sea using a wave follower. Extended Abstracts, 27th General Assembly, Nice,
France, EGS, 2178.
• Kudryavtsev, V. N., and V. K. Makin, 2001: The impact of air-flow separation on the drag of the sea surface. Bound.-Layer Meteor., 98, 155–171.
• Larson, T. R., and J. W. Wright, 1975: Wind-generated gravity-capillary waves: Laboratory measurements of temporal growth rates using microwave backscatter. J. Fluid Mech., 70, 417–436.
• Makin, V. K., 2005: A note on the drag of the sea surface at hurricane winds. Bound.-Layer Meteor., 115, 169–176.
• Melville, W. K., 1983: Wave modulation and breakdown. J. Fluid Mech., 128, 489–506.
• Miles, J. W., 1957: On the generation of surface waves by shear flows. J. Fluid Mech., 3, 185–204.
• Miles, J. W., 1959: On the generation of surface waves by shear flows, Part 2. J. Fluid Mech., 6, 568–582.
• Miles, J. W., 1960: On the generation of surface waves by turbulent shear flows. J. Fluid Mech., 7, 469–478.
• Nishiyama, R. T., and A. J. Bedard Jr., 1991: A “Quad-Disc” static pressure probe for measurement in adverse atmospheres: With a comparative review of static pressure probe designs. Rev. Sci. Instrum., 62, 2193–2204.
• Peirson, W. L., and A. W. Garcia, 2008: On the wind-induced growth of slow water waves of finite steepness. J. Fluid Mech., 608, 243–274.
• Pielke, R. A., and T. J. Lee, 1991: Influence of sea spray and rainfall on the surface wind profile during conditions of strong wind. Bound.-Layer Meteor., 55, 305–308.
• Plant, W. J., 1982: A relationship between wind stress and wave slope. J. Geophys. Res., 87, 1961–1967.
• Powell, M. D., P. J. Vickery, and T. A. Reinhold, 2003: Reduced drag coefficient for high wind speeds in tropical cyclones. Nature, 422, 279–283.
• Reul, N., H. Branger, and J. P. Giovanangeli, 2008: Air flow structure over short-gravity breaking water waves. Bound.-Layer Meteor., 126, 477–505.
• Ryu, D. N., D. H. Choi, and V. C. Patel, 2007: Analysis of turbulent flow in channels roughened by two-dimensional ribs and three-dimensional blocks. Part I: Resistance. Int. J. Heat Fluid Flow, 28, 1098–1111.
• Savelyev, I. B., 2009: A laboratory study of the transfer of momentum across the air-sea interface in strong winds. Ph.D. thesis, University of Miami, 101 pp.
• Shemdin, O. H., and E. Y. Hsu, 1967: Direct measurement of aerodynamic pressure above a simple progressive gravity wave. J. Fluid Mech., 30, 403–416.
• Shen, L., X. Zhang, D. P. Yue, and M. Triantafyllou, 2003: Turbulent flow over a flexible wall undergoing a streamwise travelling wave motion. J. Fluid Mech., 484, 197–221.
• Smith, S. D., and Coauthors, 1992: Sea surface wind stress and drag coefficients: The HEXOS results. Bound.-Layer Meteor., 60, 109–142.
• Snyder, R. L., F. W. Dobson, J. A. Elliott, and R. B. Long, 1981: Array measurements of atmospheric pressure fluctuations above surface gravity waves. J. Fluid Mech., 102, 1–59.
• Toba, Y., 1972: Local balance in the air-sea boundary processes I. On the growth process of wind waves. J. Oceanogr. Soc. Japan, 28, 109–121.
• Touboul, J., C. Kharif, E. Pelinovsky, and J.-P. Giovanangeli, 2009: On the interaction of wind and steep gravity wave groups using Miles’ and Jeffreys’ mechanisms. Nonlinear Processes Geophys., 15, 1023–1031.
• Troitskaya, Yu. I, and G. V. Rybushkina, 2008: Quasi-linear model of interaction of surface waves with strong and hurricane winds. Izv. Atmos. Oceanic Phys., 44, 621–645.
• Van Duin, C. A., 1996: An asymptotic theory for the generation of nonlinear surface gravity waves by turbulent air flow. J. Fluid Mech., 320, 287–304.
• Wu, H. Y., E. Y. Hsu, and R. L. Street, 1977: The energy transfer due to air-input, non-linear wave-wave interaction and white cap dissipation associated with wind-generated waves. Stanford
University Tech. Rep. 207, 158 pp.
• Wu, J., 1979: Oceanic whitecaps and sea state. J. Phys. Oceanogr., 9, 1064–1068.
• Zhang, F., 2008: On the variability of the winds stress at the air-sea interface. Ph.D. thesis, University of Miami, 131 pp.
|
{"url":"https://journals.ametsoc.org/view/journals/phoc/41/7/2011jpo4577.1.xml","timestamp":"2024-11-11T19:17:45Z","content_type":"text/html","content_length":"1049715","record_id":"<urn:uuid:f55c1b58-4514-4312-84ba-a447b2c50b9b>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00625.warc.gz"}
|
Online Calculators
• Statistics Calculators - Compute measures of location and dispersion.
Started as a simple standard deviation calculator, it now also calculates mean (average), median, mode, variance, quartiles, inter quartile range and deviation, and mean absolute deviation. Like
other calculators on this site, the statistics calculator will be expanded over the next few months, to include more common statistics formulas.
• Simple Printing Calculator - A virtual 9 digit pocket (!) calculator.
This Virtual Calculator emulates a real pocket calculator. It features basic operators and a virtual paper roll that displays your previous calculations. Type on your keyboard or use your mouse
to press the buttons.
• Suan Pan and Soroban emulators - Practice using the Chinese and the Japanese abaci.
Suan Pan is the Chinese name for the Chinese abacus. It has two decks, the lower deck has 5 beads for each rod, the upper deck has two. The version of Suan Pan on this website has 18 rods, you
can perform calculations with numbers up to 18 digits.
Soroban is the Japanese name for the Japanese abacus. Like the Chinese abacus, it has two decks, but the lower deck has 4 beads for each rod and the upper deck has only one. The Soroban on this
site has 18 rods.
The abaci emulators require javascript to be enabled in your browser.
• Units conversion. - Convert lengths, areas, weights and temperatures.
Select 'conversions' to go to the conversion pages. You can convert between many units of measure of length, area, weight or temperature.
|
{"url":"http://www.alcula.com/","timestamp":"2024-11-13T03:11:51Z","content_type":"application/xhtml+xml","content_length":"14096","record_id":"<urn:uuid:89d174aa-eb5c-45bf-92b9-1820377b4ce0>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00390.warc.gz"}
|
Making N-notes Equal Temperament scales
Until we hopefully get alternative scales, we can make N-note Equal Temperament scales using the mod matrix Key/Seq to Global Pitch.
To do this, assign Key/Seq to Global Pitch in the mod matrix to adjust the key tracking for the oscillator pitch. The mod amount in semitones needed is:
semitones = (128 * 12) / N - 128
where N is the number of notes in an octave.
E.g. for 24TET we need to halve the control voltage range, and we do this by subtracting 64 semitones from the 128-semitone control range, which takes us from 12 to 24 semitones per octave. The
maximum for a mod matrix slot is +/- 60 semitones, and this is where "stacking" modulators comes in; we need to assign Key/Seq twice to the Global Pitch, one assignment with the max of -60, and one
with -4, to get to -64 semitones in total. For 31TET we need -78.45 semitones, and to achieve this we need to use 3 slots because for values above 12 semitones the amount is in whole semitones - so
we use slots of -60, -18 and -0.45 to get to -78.45.
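To double-check the arithmetic, both the offset formula and the slot split can be scripted (a sketch with made-up helper names; the splitting rule just encodes the ±60-semitone slot limit and the whole-semitone restriction above 12 semitones described above):

```python
def keytrack_offset(n):
    """Key/Seq -> Global Pitch mod amount (in semitones) for n-TET."""
    return (128 * 12) / n - 128

def split_slots(total, slot_max=60):
    """Split a total offset into mod-matrix slots: whole semitones up
    to +/-60 per slot, with any fractional remainder in a final slot."""
    sign = -1 if total < 0 else 1
    mag = abs(total)
    whole, frac = int(mag), round(mag - int(mag), 2)
    slots = []
    while whole > 0:
        step = min(whole, slot_max)
        slots.append(sign * step)
        whole -= step
    if frac:
        slots.append(sign * frac)
    return slots
```

keytrack_offset(24) gives -64 (split as -60, -4) and keytrack_offset(31) gives about -78.45 (split as -60, -18, -0.45), matching the slot values above.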
When the scale has been stretched or compressed the common key is the middle E.
We can also retune the synth to get a different common key. To hit e.g. middle C as the common key, -2 semitones is needed for 24TET and 3 semitones for 48TET. For N that aren't doublings of 12, we won't hit the right tuning exactly, so a tweak of the Fine tune is needed - e.g. for the in-between 36TET, -2.5 semitones is needed - so those are a little fiddly.
The attached ZIP archive contains these N-TET template patches:
19TET_E, 24TET_E, 31TET_E, 36TET_E, 48TET_E
19TET_C, 24TET_C, 31TET_C, 36TET_C, 48TET_C
For 24TET_C and 48TET_C, the absolute tuning is spot on, for 19TET_C, 31TET_C and 36TET_C a tweak of Fine Tune is needed. For 36TET it's a full semitone up, for 19TET and 31TET it's a little less -
I've set the VCO 1 and 2 tunings so that the Fine Tune must be increased to hit the standard tuning for middle C for those. We could potentially use the Mod Envelope to create the tuning offset, but
it's nice to save that envelope for sound design.
These patches use page 4 of the mod matrix to keep the tuning stuff "out of the way". The sound is the standard Init sound, ready for making new microtonal patches from scratch. Since all it consists
of is 2 or 3 mod slots and a little tuning of the VCOs, those things can also easily be copied into existing patches to make them ET microtonal.
Edit: changed to include the scales with common keys as both middle E and C
|
{"url":"https://legacy-forum.arturia.com/index.php?topic=106919.0","timestamp":"2024-11-13T02:21:30Z","content_type":"application/xhtml+xml","content_length":"22514","record_id":"<urn:uuid:2e1316ad-26c5-4e50-b869-aba12b2e96ba>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00222.warc.gz"}
|
Numerical computation of the basic reproduction number in population dynamics
The basic reproduction number, or \(R_0\), is a quantity defined in ecology and epidemiology as a means to investigate what formally are the properties of stability of the zero solution of a linear
system of equations.
In most modern models the population is structured, i.e. individuals’ fertility and mortality are differentiated by some properties, like age, sex, or dimension; in those models the basic
reproduction number is characterized as the spectral radius of an operator, called next generation operator.
Despite the importance of this quantity, and the number of works devoted to its applications in epidemiology, the only attempt to develop an algorithm for its numerical computation was carried out in [1]. We thus focus on the numerical computation, and in particular on an algorithm which is more general and more accurate than the existing one for equal computing resources.
This seminar concerns the results of Francesco’s MSc thesis.
• [1] T. Kuniya, Numerical approximation of the basic reproduction number for a class of age-structured epidemic models, Appl. Math. Lett., 73 (2017), pp. 106–112, DOI: 10.1016/j.aml.2017.04.031.
|
{"url":"http://cdlab.uniud.it/events/seminar-20180716-florian","timestamp":"2024-11-05T13:47:21Z","content_type":"text/html","content_length":"16365","record_id":"<urn:uuid:f66e9097-8bd0-46d3-a5c7-e32a840d9fdc>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00685.warc.gz"}
|
41 research outputs found
We study the problem of consistent query answering under primary key violations. In this setting, the relations in a database violate the key constraints and we are interested in maximal subsets of
the database that satisfy the constraints, which we call repairs. For a boolean query Q, the problem CERTAINTY(Q) asks whether every such repair satisfies the query or not; the problem is known to be
always in coNP for conjunctive queries. However, there are queries for which it can be solved in polynomial time. It has been conjectured that there exists a dichotomy on the complexity of CERTAINTY
(Q) for conjunctive queries: it is either in PTIME or coNP-complete. In this paper, we prove that the conjecture is indeed true for the case of conjunctive queries without self-joins, where each atom
has as a key either a single attribute (simple key) or all attributes of the atom.
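To make the definitions concrete, here is an (exponential) brute-force check of CERTAINTY(Q) for a single key-violating relation — purely illustrative, and not one of the polynomial-time algorithms the dichotomy is about:

```python
from itertools import product

def certain(db, key, query):
    """Brute-force CERTAINTY(Q): group tuples by key value, enumerate
    every repair (exactly one tuple per key group), and check that the
    boolean query holds on all of them."""
    groups = {}
    for t in db:
        groups.setdefault(key(t), []).append(t)
    return all(query(set(r)) for r in product(*groups.values()))

# R(key, value) with a key violation: key 1 maps to both 'a' and 'b'.
db = [(1, 'a'), (1, 'b'), (2, 'a')]
exists_a = lambda rep: any(v == 'a' for _, v in rep)  # query: does some R(x,'a') hold?
```

Here certain(db, lambda t: t[0], exists_a) is True, since (2, 'a') survives in every repair, while the analogous query for 'b' is not certain.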
Motivated by a growing market that involves buying and selling data over the web, we study pricing schemes that assign value to queries issued over a database. Previous work studied pricing
mechanisms that compute the price of a query by extending a data seller's explicit prices on certain queries, or investigated the properties that a pricing function should exhibit without detailing a
generic construction. In this work, we present a formal framework for pricing queries over data that allows the construction of general families of pricing functions, with the main goal of avoiding
arbitrage. We consider two types of pricing schemes: instance-independent schemes, where the price depends only on the structure of the query, and answer-dependent schemes, where the price also
depends on the query output. Our main result is a complete characterization of the structure of pricing functions in both settings, by relating it to properties of a function over a lattice. We use
our characterization, together with information-theoretic methods, to construct a variety of arbitrage-free pricing functions. Finally, we discuss various tradeoffs in the design space and present
techniques for efficient computation of the proposed pricing functions. Comment: full paper.
We consider the problem of computing a relational query $q$ on a large input database of size $n$, using a large number $p$ of servers. The computation is performed in rounds, and each server can
receive only $O(n/p^{1-\varepsilon})$ bits of data, where $\varepsilon \in [0,1]$ is a parameter that controls replication. We examine how many global communication steps are needed to compute $q$.
We establish both lower and upper bounds, in two settings. For a single round of communication, we give lower bounds in the strongest possible model, where arbitrary bits may be exchanged; we show
that any algorithm requires $\varepsilon \geq 1-1/\tau^*$, where $\tau^*$ is the fractional vertex cover of the hypergraph of $q$. We also give an algorithm that matches the lower bound for a
specific class of databases. For multiple rounds of communication, we present lower bounds in a model where routing decisions for a tuple are tuple-based. We show that for the class of tree-like
queries there exists a tradeoff between the number of rounds and the space exponent $\varepsilon$. The lower bounds for multiple rounds are the first of their kind. Our results also imply that
transitive closure cannot be computed in O(1) rounds of communication.
In this paper, we study the communication complexity for the problem of computing a conjunctive query on a large database in a parallel setting with $p$ servers. In contrast to previous work, where
upper and lower bounds on the communication were specified for particular structures of data (either data without skew, or data with specific types of skew), in this work we focus on worst-case
analysis of the communication cost. The goal is to find worst-case optimal parallel algorithms, similar to the work of [18] for sequential algorithms. We first show that for a single round we can
obtain an optimal worst-case algorithm. The optimal load for a conjunctive query $q$ when all relations have size equal to $M$ is $O(M/p^{1/\psi^*})$, where $\psi^*$ is a new query-related quantity
called the edge quasi-packing number, which is different from both the edge packing number and edge cover number of the query hypergraph. For multiple rounds, we present algorithms that are optimal
for several classes of queries. Finally, we show a surprising connection to the external memory model, which allows us to translate parallel algorithms to external memory algorithms. This technique
allows us to recover (within a polylogarithmic factor) several recent results on the I/O complexity for computing join queries, and also obtain optimal algorithms for other classes of queries.
Many problems in static program analysis can be modeled as the context-free language (CFL) reachability problem on directed labeled graphs. The CFL reachability problem can be generally solved in
time $O(n^3)$, where $n$ is the number of vertices in the graph, with some specific cases that can be solved faster. In this work, we ask the following question: given a specific CFL, what is the
exact exponent in the monomial of the running time? In other words, for which cases do we have linear, quadratic or cubic algorithms, and are there problems with intermediate runtimes? This question
is inspired by recent efforts to classify classic problems in terms of their exact polynomial complexity, known as {\em fine-grained complexity}. Although recent efforts have shown some conditional
lower bounds (mostly for the class of combinatorial algorithms), a general picture of the fine-grained complexity landscape for CFL reachability is missing. Our main contribution is lower bound
results that pinpoint the exact running time of several classes of CFLs or specific CFLs under widely believed lower bound conjectures (Boolean Matrix Multiplication and $k$-Clique). We particularly
focus on the family of Dyck-$k$ languages (which are strings with well-matched parentheses), a fundamental class of CFL reachability problems. We present new lower bounds for the case of sparse input
graphs where the number of edges $m$ is the input parameter, a common setting in the database literature. For this setting, we show a cubic lower bound for Andersen's Pointer Analysis which
significantly strengthens prior known results. Comment: Appeared in POPL 2023. Please note the erratum on the first page.
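For reference, the Dyck-$k$ languages named above are just strings over $k$ kinds of parentheses that are well matched; string membership itself is a linear-time stack check (the hardness results concern reachability over labeled graphs, not this test):

```python
OPEN, CLOSE = "([{", ")]}"

def is_dyck(s, k):
    """Stack-based membership test for Dyck-k (here k <= 3, using the
    bracket kinds above)."""
    stack = []
    for c in s:
        if c in OPEN[:k]:
            stack.append(c)
        elif c in CLOSE[:k]:
            if not stack or OPEN.index(stack.pop()) != CLOSE.index(c):
                return False
        else:
            return False        # symbol outside the alphabet
    return not stack            # everything opened must be closed
```

For example, is_dyck('([])[]', 2) is True, while is_dyck('([)]', 2) is False because the two kinds interleave improperly.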
|
{"url":"https://core.ac.uk/search/?q=author%3A(Koutris%2C%20Paraschos)","timestamp":"2024-11-03T00:38:15Z","content_type":"text/html","content_length":"111920","record_id":"<urn:uuid:b0775c57-91f8-462a-a133-bec99efe3773>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00891.warc.gz"}
|
FI:MB201 Linear models B - Course Information
FI:MB201 Linear models B - Course Information
MB201 Linear models B
Faculty of Informatics
Autumn 2018
Extent and Intensity
4/2. 6 credit(s) (plus extra credits for completion). Type of Completion: zk (examination).
doc. Mgr. Ondřej Klíma, Ph.D. (lecturer)
doc. Mgr. Josef Šilhan, Ph.D. (lecturer)
doc. Ilja Kossovskij, Ph.D. (seminar tutor)
RNDr. Jiří Pecl, Ph.D. (seminar tutor)
Guaranteed by
prof. RNDr. Jan Slovák, DrSc.
Faculty of Informatics
Contact Person: prof. RNDr. Jan Slovák, DrSc.
Supplier department: Faculty of Science
Wed 16:00–17:50 D2, Fri 8:00–9:50 D1, Fri 8:00–9:50 D3, Fri 8:00–9:50 D2
□ Timetable of Seminar Groups:
MB201/T01: Tue 18. 9. to Thu 13. 12. Tue 8:00–9:50 106, J. Pecl. No self-enrolment; intended for students with disabilities.
MB201/01: Wed 10:00–11:50 A320, O. Klíma
MB201/02: Wed 12:00–13:50 A320, O. Klíma
MB201/03: Wed 14:00–15:50 A320, J. Šilhan
MB201/04: Wed 14:00–15:50 B204, I. Kossovskij
Prerequisites (in Czech)
! MB005 Foundations of mathematics && !NOW( MB101 Linear models ) && ! MB101 Linear models
Course Enrolment Limitations
The course is also offered to the students of the fields other than those the course is directly associated with.
fields of study / plans the course is directly associated with
there are 15 fields of study the course is directly associated with, display
Course objectives
Introduction to linear algebra and analytical geometry.
Learning outcomes
At the end of this course, students should be able to: understand basic concepts of linear algebra and probability; apply these concepts to iterated linear processes; solve basic problems in
analytical geometry.
□ The course is the first part of the four semester block of Mathematics. In the entire course, the fundamentals of general algebra and number theory, linear algebra, mathematical analysis,
numerical methods, combinatorics, as well as probability and statistics are presented. The extended version MB201 adds more demanding mathematical tools and relations to the content of MB101.
□ Additionally to the content of MB101, we shall cover: 1. Warm up -- axiomatics of scalars, formal proofs, inclusion and exclusion principle, matrix calculus in the plane, formal constructions
of numbers
□ 2. Vectors and matrices -- Laplace development of determinants, abstract vector spaces, linear mappings, unitary and adjoint mappings
□ 3. Linear models -- Perron (-Frobenius) theory of positive matrices, canonical matrix forms and decompositions, pseudoinverses
□ 4. Analytical geometry -- projective extension, affine, Euclidean and projective classification of quadrics.
recommended literature
□ MOTL, Luboš and Miloš ZAHRADNÍK. Pěstujeme lineární algebru. 3. vyd. Praha: Univerzita Karlova v Praze, nakladatelství Karolinum, 2002, 348 s. ISBN 8024604213.
□ J. Slovák, M. Panák et al., Matematika drsně a svižně, textbook in preparation
not specified
□ FUCHS, Eduard. Logika a teorie množin (Úvod do oboru). 1. vyd. Brno: Rektorát UJEP, 1978, 175 s.
□ RILEY, K.F., M.P. HOBSON and S.J. BENCE. Mathematical Methods for Physics and Engineering. second edition. Cambridge: Cambridge University Press, 2004, 1232 pp. ISBN 0 521 89067 5.
□ HORÁK, Pavel. Algebra a teoretická aritmetika. 2. vyd. Brno: Masarykova univerzita, 1993, 145 s.
Teaching methods
Lecture combining theory with problem solving. Seminar groups devoted to solving problems.
Assessment methods
During the semester, two obligatory mid-term exams are evaluated (each for a maximum of 10 points). In the seminar groups there are 5 tests during the semester; the seminars are evaluated by a maximum of 5 points in total. Students who collect fewer than 8 points during the semester (i.e., from tests and mid-term exams) are graded X and do not proceed to the final examination. The final written test (max 20 points) is followed by an oral examination (max 10 points). For a successful examination (grade at least E), the student needs 27 points or more in total.
Language of instruction
Follow-Up Courses
Further comments (probably available only in Czech)
Study Materials
The course is taught annually.
Listed among pre-requisites of other courses
The course is also listed under the following terms Autumn 2012, Autumn 2013, Autumn 2014, Autumn 2015, Autumn 2016, Autumn 2017.
• Enrolment Statistics (recent)
• Permalink: https://is.muni.cz/course/fi/autumn2018/MB201
|
{"url":"https://is.muni.cz/course/fi/MB201","timestamp":"2024-11-07T23:15:28Z","content_type":"text/html","content_length":"29510","record_id":"<urn:uuid:369b6362-5e27-434a-8a51-ea2822ce1d4c>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00885.warc.gz"}
|
Least Action (Gravity/Free)
written by Todd Timberlake
The EJS Least Action (Gravity/Free) model illustrates the principle of least action for the one dimensional motion of a free particle or a particle subject to a constant gravitational force. The
simulation displays height versus time, with the path broken into equally spaced time intervals. The user can set the initial and final heights, as well as the number of time intervals to be used.
The user can then adjust the intermediate heights in order to minimize the action along the path, or else allow the computer to implement an algorithm for finding the path of least action. Both the
action of the current path and the least action so far observed with the current parameters are displayed.
The simulation can also display average values for velocity, change in velocity, acceleration, kinetic energy, potential energy, the Lagrangian function, and total mechanical energy for each segment
of the path. This helps to illustrate that the path of least action is also a path of constant acceleration (zero if there is no gravity) and constant total energy.
The algorithm minimizes the action by examining three consecutive points on the path. The outer two points are held fixed and basic calculus can be used to determine the value for the middle height
that minimizes the action for this segment of the path. This procedure is repeated for each segment of three points on the path, moving left to right. If this entire process is repeated over and over
the path will gradually approach the path of global least action. This approach, and the entire EJS Least Action (Gravity/Free) model, was inspired by the
Principle of Least Action Interactive page
by Edwin Taylor and Slavomir Tuleja.
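The relaxation sweep described above can be sketched in a few lines. The following Python fragment is an illustrative reimplementation, not the model's actual source; the mass is taken as 1, and the values of g, dt, and the number of sweeps are arbitrary choices.

```python
# Illustrative relaxation sweep (not the model's source code): each interior
# height is replaced by the value that minimizes the discretized action of
# its two neighboring segments, and the sweep is repeated many times.
def relax_path(y, g=9.8, dt=0.1, sweeps=500):
    y = list(y)
    for _ in range(sweeps):
        for i in range(1, len(y) - 1):
            # Stationary point of the local action for unit mass:
            # the midpoint of the two neighbors plus g*dt^2/2.
            y[i] = 0.5 * (y[i - 1] + y[i + 1]) + 0.5 * g * dt * dt
    return y

# Endpoints fixed at height 0 over a 1-second interval: the interior points
# converge to the parabolic arc y(t) = (g/2) t (1 - t).
path = relax_path([0.0] * 11)
print(round(path[5], 3))  # 1.225, the height at t = 0.5
```

Because the kinetic term is convex in each interior height, every local update is an exact one-variable minimization, and repeated sweeps converge to the path of global least action (here the discrete free-fall parabola).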
Please note that this resource requires at least version 1.5 of Java (JRE).
Principle of Least Action Tutorial This is a tutorial exercise that helps students understand the Principle of Least Action as applied to a particle moving vertically under a constant gravitational
force. Students use basic calculus to derive the condition for minimizing the action along a segment of … download 148kb .pdf download 9kb .tex
Published: September 5, 2012
Rights: This activity handout is released under the Creative Commons BY-NC-SA 3.0 license.
previous versions
Least Action (Gravity/Free) Source Code
The source code zip archive contains an XML representation of the EJS Least Action (Gravity/Free) Model. Unzip this archive in your EJS workspace to compile and run this model using EJS.
download 39kb .zip
Published: September 3, 2012
Rights: This material is released under the
GNU General Public License Version 3
previous versions
Subjects: Classical Mechanics - General; Gravity (Gravitational Acceleration); Motion in One Dimension; Work and Energy (Conservation of Energy)
Levels: Lower Undergraduate; Upper Undergraduate
Resource Types: Instructional Material - Interactive Simulation
Intended Users: Learners; Educators; General Publics
Formats: application/java
Access Rights:
Free access
This material is released under a GNU General Public License Version 3 license.
Rights Holder:
Todd Timberlake
EJS, Easy Java Simulations, Lagrangian, OSP, Open Source Physics, free particle, gravity, least action
Record Cloner:
Metadata instance created September 3, 2012 by Todd Timberlake
Record Updated:
June 10, 2014 by Andreu Glasmann
Last Update
when Cataloged:
September 3, 2012
Citation (AIP style): T. Timberlake, Computer Program LEAST ACTION (GRAVITY/FREE), Version 1.0 (2012), WWW Document, (https://www.compadre.org/Repository/document/ServeFile.cfm?ID=12400&DocID=3102).
Least Action (Gravity/Free):
Is Based On Easy Java Simulations Modeling and Authoring Tool
The Easy Java Simulations Modeling and Authoring Tool is needed to explore the computational model used in the Least Action (Gravity/Free).
relation by Wolfgang Christian
See details...
|
{"url":"https://www.compadre.org/OSP/items/detail.cfm?ID=12400&Attached=1","timestamp":"2024-11-02T04:36:29Z","content_type":"application/xhtml+xml","content_length":"42282","record_id":"<urn:uuid:73f7cf93-0980-4a9c-9ea2-10e3029b4dbb>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00610.warc.gz"}
|
49.1 Significance of the Overall Model (.NET, C#, CSharp, VB, Visual Basic, F#)
Class GoodnessOfFit tests the overall model significance for least squares model-fitting classes, such as LinearRegression, PolynomialLeastSquares, and OneVariableFunctionFitter.
GoodnessOfFit instances can be constructed from:
● A LinearRegression object.
● A PolynomialLeastSquares object, plus the vectors of x and y data.
● A OneVariableFunctionFitter object, plus the vectors of x and y data and the solution found by the fitter.
For example:
Code Example – C# goodness of fit
var x = new DoubleVector(0.3330, 0.1670, 0.0833, 0.0416,
0.0208, 0.0104, 0.0052);
var y = new DoubleVector(3.636, 3.636, 3.236, 2.660,
2.114, 1.466, 0.866);
int degree = 2;
var pls =
new PolynomialLeastSquares(degree, x, y);
var gof = new GoodnessOfFit(pls, x, y);
Code Example – VB goodness of fit
Dim X As New DoubleVector(0.333, 0.167, 0.0833, 0.0416, 0.0208,
0.0104, 0.0052)
Dim Y As New DoubleVector(3.636, 3.636, 3.236, 2.66, 2.114, 1.466,
0.866)
Dim Degree As Integer = 2
Dim PLS As New PolynomialLeastSquares(Degree, X, Y)
Dim GoF As New GoodnessOfFit(PLS, X, Y)
A variety of properties are provided for assessing the significance of the overall model:
● RegressionSumOfSquares gets the regression sum of squares. This quantity indicates the amount of variability explained by the model. It is the sum of the squares of the difference between the
values predicted by the model and the mean.
● ResidualSumOfSquares gets the residual sum of squares. This is the sum of the squares of the differences between the predicted and actual observations.
● ModelDegreesOfFreedom gets the number of degrees of freedom for the model, which is equal to the number of predictors in the model.
● ErrorDegreesOfFreedom gets the number of degrees of freedom for the model error, which is equal to the number of observations minus the number of model parameters.
● RSquared gets the coefficient of determination.
● AdjustedRsquared gets the adjusted coefficient of determination.
● MeanSquaredResidual gets the mean squared residual. This quantity is equal to ResidualSumOfSquares / ErrorDegreesOfFreedom (the number of observations minus the number of model parameters).
● MeanSquaredRegression gets the mean square for the regression. This is equal to RegressionSumOfSquares / ModelDegreesOfFreedom (the number of predictors in the model).
● FStatistic gets the overall F statistic for the model. This is equal to the ratio MeanSquaredRegression / MeanSquaredResidual. It is the statistic for the hypothesis test whose null hypothesis is that all model parameters are 0 and whose alternative hypothesis is that at least one parameter is nonzero.
● FStatisticPValue gets the p-value for the F statistic.
For example, if lr is a LinearRegression object:
Code Example – C# goodness of fit
var gof = new GoodnessOfFit( lr );
double sse = gof.ResidualSumOfSquares;
double r2 = gof.RSquared;
double fstat = gof.FStatistic;
double fstatPval = gof.FStatisticPValue;
Code Example – VB goodness of fit
Dim GoF As New GoodnessOfFit(LR)
Dim SSE As Double = GoF.ResidualSumOfSquares
Dim R2 As Double = GoF.RSquared
Dim FStat As Double = GoF.FStatistic
Dim FStatPval As Double = GoF.FStatisticPValue
Lastly, the FStatisticCriticalValue() function computes the critical value for the F statistic at a given significance level:
Code Example – C# goodness of fit
double critVal = gof.FStatisticCriticalValue(.05);
Code Example – VB goodness of fit
Dim CritVal As Double = GoF.FStatisticCriticalValue(0.05)
|
{"url":"https://www.centerspace.net/doc/NMath/user/goodness-of-fit-84031.htm","timestamp":"2024-11-03T13:19:30Z","content_type":"text/html","content_length":"20403","record_id":"<urn:uuid:3adf36e4-fc8d-4753-9383-48ee8a550a60>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00430.warc.gz"}
|
Interactive optimization (coffee tasting problem)
We observe that there are many problems with different properties. In practice, most can be categorized for which optimal algorithms exist. The categorization exists as follows:
• Discrete vs Continuous
□ Discrete: integer programming, Combinatorial problems
□ Continuous: linear, quadratic, smooth/non-smooth, black-box
□ Both discrete and continuous variables: mixed integer problems
□ Categorical variables (no order)
• Unconstrained vs Constrained
• Deterministic vs Stochastic outcome of objective functions
• Single or multiple objective functions
Here we discuss notions of continuous optimization.
1st order derivability or differentiability
Assuming n=1, let \(f: \mathbb{R} \mapsto \mathbb{R}\)
We say that \(f\) is differentiable in \(x\) if, \[ \lim_{h \to 0} \frac{f(x+h) - f(x)}h \textrm{ exists} \] This limit is denoted by \(f'(x)\) and is called derivative of \(f\) in \(x\).
Taylor's 1st order approximation
If \(f\) is differentiable in \(x\), then the first order Taylor expansion is given by \[f(x+h)=f(x)+f'(x)h+o(||h||)\] For \(h\) small enough, \(h \mapsto f(x+h)\) is approximated by \[h \mapsto f(x) + f'(x)h\] giving the first order approximation of \(f\).
We can generalize the notion of derivative to higher dimensions. Given \(f: \mathbb{R}^n \mapsto \mathbb{R}\), \[\nabla f(x) = \begin{pmatrix} \frac{\partial f(x)}{\partial x_1} \\ \vdots \\ \frac{\partial f(x)}{\partial x_n} \end{pmatrix}\] The gradient of a differentiable function is orthogonal to the level sets.
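As a concrete check of the definition, the gradient can be approximated componentwise with central finite differences; the step size below is an illustrative choice:

```python
# Central-difference approximation of the gradient of f at x.
def num_grad(f, x, h=1e-6):
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2 * h))
    return g

f = lambda x: x[0] ** 2 + 3 * x[1]   # exact gradient: (2*x0, 3)
print(num_grad(f, [1.0, 2.0]))       # approximately [2.0, 3.0]
```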
2nd order derivability or differentiability
Let \(f: \mathbb{R} \mapsto \mathbb{R}\) be differentiable on \(\mathbb{R}\) and let \[f': x \mapsto f'(x)\] be its derivative function. Now if \(f'\) is itself derivable, then we denote by \(f''(x)\) the second order derivative of \(f\).
Taylor's 2nd order approximation
If the second order derivative of \(f\) exists then the second order Taylor expansion is given by, \[f(x+h)=f(x) + f'(x)h + \frac{1}{2}f''(x)h^2 + o(||h||^2)\] For \(h\) small enough, we get the
quadratic approximation of \(f\), \[h \mapsto f(x) + f'(x)h + \frac{1}{2}f''(x)h^2 \]
Again, we can generalize the second order derivative to functions \(f: \mathbb{R}^n \mapsto \mathbb{R}\) with the notion of the Hessian matrix, \[ \mathbb{H}(x) = \nabla^2 f(x) = \begin{bmatrix} \frac{\partial^2 f(x)}{\partial x_1^2} & \cdots & \frac{\partial^2 f(x)}{\partial x_1 \partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 f(x)}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 f(x)}{\partial x_n^2} \end{bmatrix}\] If \(f(x) = \frac{1}{2}x^TAx\) with \(A\) symmetric, then \[ \mathbb{H}(x) = \nabla^2 f(x) = A \]
Local minima
\(x^*\) is a local minimum if there exists a neighborhood \(V\) of \(x^*\) such that \[\forall x \in V: f(x) \ge f(x^*)\] If \(f: \mathbb{R} \rightarrow \mathbb{R}\) is differentiable, then \(f'(x^*) = 0\) at such points.
Global minima
\(x^*\) is a global minimum if \[\forall x \in \Omega: f(x) \ge f(x^*)\] Again, if \(f: \mathbb{R} \rightarrow \mathbb{R}\) is differentiable, then \(f'(x^*) = 0\).
Optimality conditions
Assume that \(f\) is twice continuously differentiable.
Necessary conditions:
If \(x^*\) is a local minimum, then \(\nabla f(x^*) = 0\) and \(\nabla^2 f(x^*)\) is positive semi-definite.
Sufficient conditions:
If \(x^*\) satisfies \(\nabla f(x^*) = 0\) and \(\nabla^2 f(x^*)\) is positive-definite, then \(x^*\) is a strict local minimum.
Convex functions
Let \(f: U \subset \mathbb{R}^n \mapsto \mathbb{R}\). Then \(f\) is convex if \(\forall x, y \in U\) and \(\forall t \in [0,1]\), \[ f(tx + (1-t)y) \le tf(x) + (1-t)f(y) \] Theorem:
If \(f\) is differentiable, then it is convex iff \(\forall x,y\), \[ f(y) - f(x) \ge \nabla f(x) \cdot (y-x) \] Theorem:
If \(f\) is twice continuously differentiable, then it is convex iff \(\nabla^2 f(x)\) is positive semi-definite \(\forall x\).
For differentiable and convex functions, critical points are global minima of the function.
Functions can be difficult to optimize due to:
Gradient direction vs Newton direction
The gradient direction \(\nabla f(x)\) is the direction of maximum ascent. The negative of the gradient therefore gives a maximum descent direction, and it points directly towards the optimum iff \(\nabla^2 f(x) = \mathbb{H} = I\), i.e., iff the condition number of the Hessian equals 1. In cases with poor conditioning, gradient descent becomes slow to converge.
For convex quadratic functions, the Newton direction points towards the optimum independent of the condition number of the Hessian matrix. For other functions, Newton direction will not always point
towards the optimum but is still a good direction to follow. Recall the second order approximation, \[f(x+h) = f(x) + f'(x)h + \frac{1}{2}f''(x)h^2 \] We seek a step such that \(\nabla f(x+h) = 0\).
This implies that, \[f'(x) + f''(x)h = 0 \] \[\implies h = - [f''(x)]^{-1} f'(x) \] gives the Newton direction.
In some settings, we can compute the Newton direction analytically, in which case we should. Otherwise, we need to numerically approximate \(\nabla^2 f(x)\) and invert it, which can be very expensive.
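To illustrate the contrast, consider the convex quadratic \(f(x) = \frac{1}{2}(a x_0^2 + b x_1^2)\) with \(b/a \gg 1\): the Newton step reaches the optimum in one iteration, while gradient descent is held back by the ill-conditioned curvature. The constants and step size below are illustrative choices:

```python
A, B = 1.0, 100.0   # Hessian is diag(A, B); condition number B/A = 100

def grad(x):
    return [A * x[0], B * x[1]]

def gradient_step(x, lr):
    g = grad(x)
    return [x[0] - lr * g[0], x[1] - lr * g[1]]

def newton_step(x):
    # H^{-1} = diag(1/A, 1/B): one Newton step lands exactly on the optimum.
    g = grad(x)
    return [x[0] - g[0] / A, x[1] - g[1] / B]

print(newton_step([1.0, 1.0]))   # [0.0, 0.0]

x = [1.0, 1.0]
for _ in range(100):
    x = gradient_step(x, lr=0.005)   # lr must satisfy lr < 2/B to be stable
print(x)   # x0 has only decayed to about 0.61, still far from the optimum
```

The stable step size is dictated by the large curvature B, so progress along the flat x0 direction is slow; the Newton step rescales each direction by its own curvature and does not suffer from this.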
Quasi-Newton methods
These methods aim to retain the benefits of Newton's method while avoiding its expensive computation: a matrix \(\mathbb{H}_t\) approximating the inverse of \(\nabla^2 f(x_t)\) is updated iteratively using \(\nabla f(x_t)\), giving the update \[ x_{t+1} = x_t - \sigma_t \mathbb{H}_t \nabla f(x_t) \] Examples include BFGS and L-BFGS.
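The one-dimensional analogue makes the idea concrete: instead of computing \(f''\), estimate it from two successive gradients (a secant) and take a Newton-like step. This toy sketch omits the safeguards that real implementations such as BFGS use:

```python
# 1-D quasi-Newton sketch: the curvature f''(x) is never computed; it is
# approximated by the secant (g1 - g0) / (x1 - x0) through two gradients.
def quasi_newton_1d(fprime, x0, x1, steps=30):
    g0, g1 = fprime(x0), fprime(x1)
    for _ in range(steps):
        h = (g1 - g0) / (x1 - x0)   # secant estimate of f''
        x0, g0 = x1, g1
        x1 = x1 - g1 / h            # Newton-like step using the estimate
        g1 = fprime(x1)
    return x1

# Minimize f(x) = (x - 3)^4, whose derivative is f'(x) = 4 (x - 3)^3.
print(quasi_newton_1d(lambda x: 4 * (x - 3) ** 3, 0.0, 1.0))  # close to 3
```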
Stochastic gradient descent
With gradient descent, \[ \nabla Q(w) = \frac{1}{N} \sum_{i=1}^N \nabla Q_i (w) \] and \[ w_{t+1} = w_t - \sigma_t \nabla Q(w_t) \] Typically \(N\) can be very large, and computing all \(\nabla Q_i(w)\) would be expensive. So instead, we can use an approximation of \(\nabla Q(w)\), \[ \nabla Q(w) \approx \nabla Q_i(w) \textrm{, the gradient of a single example,} \] or a mini-batch, \[ \nabla Q(w) \approx \frac{1}{n_{batch}} \sum_{i=1}^{n_{batch}} \nabla Q_i (w) \] This is the idea of stochastic gradient descent.
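A minimal sketch of minibatch SGD on a one-parameter least-squares objective \(Q(w) = \frac{1}{N}\sum_i (w x_i - y_i)^2\); the learning rate, batch size, and data below are illustrative choices:

```python
import random

def sgd(data, w=0.0, lr=0.005, batch=4, epochs=200, seed=0):
    rng = random.Random(seed)
    for _ in range(epochs):
        # minibatch approximation of the full gradient of Q(w)
        sample = rng.sample(data, batch)
        g = sum(2 * (w * x - y) * x for x, y in sample) / batch
        w -= lr * g
    return w

data = [(float(x), 2.0 * x) for x in range(1, 11)]   # exact solution: w = 2
print(sgd(data))   # approximately 2.0
```

Each step only touches a small random subset of the data, which is why SGD scales to large \(N\); the price is a noisy gradient estimate.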
Constrained optimization
Given a constrained optimization problem, \[\textrm{min } f(x),f:\mathbb{R}^n \mapsto \mathbb{R}\] subject to, \[g_i(x) \le 0\] then the minima can either lie in the feasible region, or in the
non-feasible region. In the former case, the constraint \(g_i(x)\) plays no role and is then referred to as an inactive constraint. In the latter case, the best minima we can get would be at the
boundary of the constraint, which is then referred to as an active constraint. In this case, \(\nabla f(x^*)\) and \(\nabla g(x^*)\) are collinear, i.e., \(\exists \lambda \in \mathbb{R}\) s.t. \[ \nabla f(x^*) = - \lambda \nabla g(x^*) \] \[ \implies \nabla f(x^*) + \lambda \nabla g(x^*) = 0 \] where \(\lambda\) is known as the Lagrange multiplier and
if the constraint is inactive, \(\lambda = 0\)
if the constraint is active, \(\lambda > 0\)
In general, the following (KKT conditions) must hold, \[\lambda \ge 0\] \[g(x) \le 0\] \[\lambda g(x) = 0\]
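A small worked example (chosen here purely for illustration) makes the conditions concrete. Minimize \(f(x) = (x-2)^2\) subject to \(g(x) = x - 1 \le 0\). The unconstrained minimizer \(x = 2\) violates the constraint, so the constraint is active and complementary slackness forces \(g(x^*) = 0\), i.e., \(x^* = 1\). Stationarity then gives \[ \nabla f(x^*) + \lambda \nabla g(x^*) = 2(x^* - 2) + \lambda = 0 \implies \lambda = 2 \] and all three KKT conditions hold: \(\lambda = 2 \ge 0\), \(g(x^*) = 0 \le 0\), and \(\lambda g(x^*) = 0\).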
|
{"url":"https://theboxtroll.com/notes/optimization.html","timestamp":"2024-11-02T08:25:52Z","content_type":"text/html","content_length":"10792","record_id":"<urn:uuid:ba6b15c6-0939-4d8e-a16e-cba242fc6323>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00533.warc.gz"}
|
SCIP 6.0.2
• The abspower constraint handler now passes more accurate convexity information to the NLP relaxation.
Examples and applications
• added parsing functionality for optcumulative constraints in CIP format
Interface changes
Interfaces to external software
• Updated the Mosek LP solver interface to support Mosek 9.0.
Build system
• new target to 'doc' to build documentation
• ctests now fail if parameter file not found
• add flag STATIC_GMP and improve GMP find module
• remove non-API methods from library (API methods use new macro SCIP_EXPORT)
• increase minimal required CMake version to 3.3
• correct paths and dependency information when installing SCIP
Fixed bugs
• fixed SCIP-Jack presolving bug that could lead to wrong results for Steiner arborescence problems
• fixed wrong unboundedness result in case not all constraints were already in the LP and enforcement was skipped because an optimal solution was found
• fixed wrong enforcement of constraints in the disjunction constraint handler
• fixed wrong behavior of concurrent solve ignoring initial solutions
• fixed bug in concurrent solve when problem was already solved in presolving
• aggregate non-artificial integer variable for XOR constraints with two binary variables and delete constraint
• copy the objective offset when copying the original problem
• fixed bug in SCIPlpiGetBInvARow in lpi_cpx using wrong size of resulting vector
• fixed quadratic runtime behavior in sepa_aggregation
• fixed statistics of separators
• improve numerical stability in varbound constraint handler by using double-double arithmetic
• fixed bug in propagation of dual proofs
• fixed bugs that arise for multiaggregated indicator variables by disallowing multiaggregation for them
• improve numerical stability in SCIPcomputeBilinEnvelope* by using double-double arithmetic
• fixed bug related to releasing pending bound changes in tree.c
• set STD FENV_ACCESS pragma to on in code that changes floating-point rounding mode
• disable GCC optimizations in main interval arithmetic code to prevent wrong optimizations
• fixed wrong assert in cons_xor concerning the variable type
• fixed different behavior of SCIPisLbBetter and SCIPisUbBetter between having NDEBUG defined or not
• correctly handle bound disjunctions in symmetry detection
• fixed issue in reliability branching related to the LP error flag not being reset
• fixed treatment of near-infinite bounds in shiftandpropagate's problem transformation
• fixed handling of infinite values in SCIPcomputeHyperplaneThreePoints()
• fixed comparisons of infinite values in heur_intshifting.c and heur_shifting.c
• fixed bug related to updating unprocessed cuts in the cutpool
• fixed bug related to enabling quadratic constraints during CONSINITLP
• add missing SCIP_EXPORT for functions used by GCG
• fixed memory leak and wrong initialization for trivial cases in cons_symresack.c
• fixed bug with upgrading to packing/partitioning orbitopes
• fixed bug with the status while upgrading in presol_symbreak.c
• fixed wrong stage while clearing the conflict store
• fixed behavior of SCIPfixVar() by setting infeasible pointer to TRUE if fixval lies outside variable domain
• allow tightenVar() in SCIP_STAGE_PROBLEM stage
• fixed bug in cumulative constraint handler when separating the LP solution
• fixed issues with integer overflow in cumulative constraint handler
• fixed bug where the convexity of Benders' decomposition subproblems was checked even when users defined subproblem solving methods. Now, as per the documentation, the user must explicitly state
whether the subproblem is convex
• fixed wrong indexing in heur_dualval
• fixed issue with basis status in SoPlex LPi
• statistics now output primal/dual bounds if objective limit is reached
• memory check in debug mode is now disabled by default
• message is now provided to the user to inform that automatic Benders' auxiliary variable lower bound computations are not activated when user defined subproblem solving methods are present
• corrected documentation of the primalgap in SCIP; describe when it will be infinite
SCIP 6.0.1
• when using a debug solution every (multi-)aggregation will be checked w.r.t. this solution
Performance improvements
• try greedy solution first before solving knapsack exactly using dynamic programming in SCIPsolveKnapsackExactly, compute greedy solution by weighted median selection.
• don't consider implied redcost by default in the reduced cost propagator
Interface changes
Deleted and changed API methods and macros
• The preprocessor macro NO_CONFIG_HEADER now needs to be defined when including SCIP header files from a SCIP build or installation that has been build via the Makefile-only build system.
• The following preprocessor macros have been renamed: WITH_ZLIB to SCIP_WITH_ZLIB, WITH_GMP to SCIP_WITH_GMP, WITH_READLINE to SCIP_WITH_READLINE, NO_SIGACTION to SCIP_NO_SIGACTION, NO_STRTOK_R to
SCIP_NO_STRTOK_R, ROUNDING_FE to SCIP_ROUNDING_FE, ROUNDING_FP to SCIP_ROUNDING_FP, ROUNDING_MS to SCIP_ROUNDING_MS. Note, however, that the names of macros NO_RAND_R and NO_STRERROR_R have not
been changed so far.
New API functions
Command line interface
• warn about coefficients in MPS files with absolute value larger than SCIP's value for infinity
Changed parameters
• default clock type for timing is now wallclock
Unit tests
• added unit tests for exact knapsack solving and (weighted) median selection algorithms
Build system
• add missing GMP dependency when compiling with SYM=bliss
• add DL library when linking to CPLEX to avoid linker errors
• new config.h header defining the current build configuration, e.g. SCIP_WITH_GMP
Fixed bugs
• fixed handling of weights in cons_sos1 and cons_sos2 (NULL pointer to weights)
• fixed handling of unbounded LPs in SCIP and in several LPIs; added heuristic method to guess solution
• the STO reader is capable of handling scenarios defined using lower case "rhs"
• fixed OPB reader for instances without explicit plus signs
• correct dual solution values for bound constraints
• fixed recognition of variable with only one lock in cons_bivariate, cons_quadratic, and cons_nonlinear
• fixed update of constraint violations in solution repair in cons_bivariate, cons_quadratic, and cons_nonlinear
• print error message and terminate if matrix entries of a column are not consecutive in mps format
• fixed incorrect handling of fixed variables when transfer of cuts from LNS heuristic for Benders' decomposition
• fix returning local infeasible status by Ipopt interface if Ipopt finds problem locally infeasible
• skip attempt to apply fixings in linear constraint handler during solving stage as LP rows cannot change anymore
• fixed bug when reading >= indicator constraints in MPS format
• fix issue with nodes without domain changes if we ran into solution limit in prop_orbitalfixing
• fixed unresolved reference to CppAD's microsoft_timer() function on builds with MS/Intel compilers on Windows
• ignore implications added through SCIPaddVarImplication() that are redundant to global bounds also in the special case of an implication between two binary variables; also, use implications
instead of cliques in the case of a binary implied variable with nonbinary active representative
• fixed bug with aggregated variables that are aggregated in propagation of cons_sos1
• fixed some special cases in SCIPselect/SCIPselectWeighted methods
• relaxed too strict assertion in Zirounding heuristic
• fixed the upgrade routine to XOR constraints: aggregate integer variable if its coefficient has the wrong sign
• fixed handling of nonartificial parity variables when deleting redundant XOR constraints
• earlier deletion of trivial XOR constraints (at most 1 operator left)
• fixed wrong hashmap accesses and added sanity check for the correct hashmap type
• avoid copying of unbounded solutions from sub-SCIPs as those cannot be checked completely
• corrected the output of the first LP value in case of branch-and-price
• fixed possible integer overflow, which led to wrong conclusion of infeasibility, in energetic reasoning of cons_cumulative.c
• do not scale linear constraints to integral coefficients
SCIP 6.0.0
• new diving heuristic farkasdiving that dives into the direction of the pseudosolution and tries to construct Farkas-proofs
• new diving heuristic conflictdiving that considers locks from conflict constraints
• restructuring of timing of symmetry computation that allows to add symmetry handling components within presolving
• lp/checkstability is properly implemented for SoPlex LPI (spx2)
• new branching rule lookahead that evaluates potential child and grandchild nodes to determine a branching decision
• limits on the number of presolving rounds a presolver (maxrounds) or propagator/constraint handler (maxprerounds) participates in are now compared to the number of calls of the particular presolving method, rather than to the number of presolving rounds in general
• new miscellaneous methods for constraints that have a one-row linear representation in pub_misc_linear.h
• a Benders' decomposition framework has been added. This framework provides the functionality for a user to solve a decomposed problem using Benders' decomposition. The framework includes
classical optimality and feasibility cuts, integer optimality cuts and no-good cuts.
• add statistic that presents the number of resolves for instable LPs
• new readers for stochastic programming problems in SMPS format (reader_sto.h, reader_smps.h)
Performance improvements
• cuts generated from certain quadratic constraints with convex feasible region are now global
• performance improvements for Adaptive Large Neighborhood Search heur_alns.c
□ all neighborhoods now start conservatively from maximum fixing rate
□ new default parameter settings for bandit selection parameters
□ no adjustment of minimum improvement by default
• improved bound tightening for some quadratic equations
• constraint handler checking order for original solutions has been modified to check those with negative check priority that don't need constraints after all other constraint handlers and
constraints have been checked
• deactivate gauge cuts
Examples and applications
• new example brachistochrone in CallableLibrary examples collection; this example implements a discretized model to obtain the trajectory associated with the shortest time to go from point A to B
for a particle under gravity only
• new example circlepacking in CallableLibrary examples collection; this example models two problems about packing circles of given radii into a rectangle
• new price-and-branch application for the ringpacking problem
• new stochastic capacitated facility location example demonstrating the use of the Benders' decomposition framework
Interface changes
New and changed callbacks
• added parameter locktype to SCIP_DECL_CONSLOCK callback to indicate the type of variable locks
Deleted and changed API methods
• Symmetry:
□ removed function SCIPgetTimingSymmetry() in presol_symmetry.h since this presolver does not compute symmetries independent of other components anymore
□ additional argument recompute to SCIPgetGeneratorsSymmetry() to allow recomputation of symmetries
• Random generators:
□ the seed of SCIPinitializeRandomSeed() is now an unsigned int
□ the seed of SCIPsetInitializeRandomSeed() is now an unsigned int and it returns an unsigned int
□ new parameter for SCIPcreateRandom() to specify whether the global random seed shift should be used in the creation of the random number generator
• Miscellaneous:
□ additional arguments preferrecent, decayfactor and avglim to SCIPcreateBanditEpsgreedy() to choose between weights that are simple averages or higher weights for more recent observations (the
previous default). The last two parameters are used for a finer control of the exponential decay.
□ functions SCIPintervalSolveUnivariateQuadExpression(), SCIPintervalSolveUnivariateQuadExpressionPositive(), and SCIPintervalSolveUnivariateQuadExpressionPositiveAllScalar() now take an
additional argument to specify already existing bounds on x, providing an entire interval ([-infinity,infinity]) gives previous behavior
New API functions
Changed parameters
• Removed parameters:
□ heuristics/alns/stallnodefactor as the stall nodes are now controlled directly by the target node limit within the heuristic
□ presolving/symmetry/computepresolved since this presolver does not compute symmetries independent of other components anymore
□ separating/maxincrounds
New parameters
• lp/checkfarkas that enables the check of infeasibility proofs from the LP
• heuristics/alns/unfixtol to specify tolerance to exceed the target fixing rate before unfixing variables, (default: 0.1)
• propagating/orbitalfixing/symcomptiming to change the timing of symmetry computation for orbital fixing
• lp/alwaysgetduals ensure that the dual solutions are always computed from the recent LP solve
• display/relevantstats indicates whether the small relevant statistics are displayed at the end of solving
• propagating/orbitalfixing/performpresolving that enables orbital fixing in presolving
• presolving/symbreak/addconsstiming to change the timing of symmetry computation for symmetry handling inequalities
• propagating/orbitalfixing/enabledafterrestarts to control whether orbital fixing is enabled after restarts
• benders/∗ new submenu for Benders' decomposition related settings. This includes the settings related to the included Benders' decompositions and the general Benders' decomposition settings.
• benders/<decompname>/benderscuts/∗ submenu within each included Benders' decomposition to control the Benders' decomposition cuts. The cuts are added to each decomposition separately, so the
setting are unique to each decomposition.
Data structures
• new enum SCIP_LOCKTYPE to distinguish between variable locks implied by model (check) constraints (SCIP_LOCKYPE_MODEL) and variable locks implied by conflict constraints (SCIP_LOCKYPE_CONFLICT)
• expression interpreter objects are now stored in the block memory
Deleted files
• removed presolving plugin presol_implfree
• separated scip.c into several smaller implementation files scip_*.c for better code overview; scip.c was removed, but the central user header scip.h remains, which contains includes of the
separated headers
Fixed bugs
• fixed bug in gcd reductions of cons_linear regarding an outdated flag for variable types
• fixed bug in heur_dualval regarding fixing routine for integer variables
• suppress debug solution warnings during problem creation stage
• fixed check for activated debugging solution in components constraint handler
• fixed potential bug concerning solution linking to LP in SCIPperformGenericDivingAlgorithm()
• fixed reward computation in ALNS on continuous, especially nonlinear, problems
• fixed bug in freeing reoptimization data if problem was solved during presolving
• fixed check of timing in heur_completesol
• fixed wrong propagation in optcumulative constraint handler
• fixed non-deterministic behavior in OBBT propagator
• don't disable LP presolving when using Xpress as LP solver
• fixed possible NULL pointer usage in cons_pseudoboolean
• ensured that SCIPgetDualbound() returns global dual bound instead of the dual bound of the remaining search tree
• fixed rare division-by-zero when solving bivariate quadratic interval equation
• use total memory for triggering memory saving mode
• fix parsing of version number in the CMake module for Ipopt
• fixed handling of implicit integer variables when attempting to solve sub-MIP in nlpdiving heuristic
• added workaround for bug when solving certain bivariate quadratic interval equations with unbounded second variable
• fixed bug with releasing slack variable and linear constraint in cons_indicator
• fixed problem when writing MPS file with indicator constraints with corresponding empty linear constraints
• fixed bug in heur_vbound triggered when new variables were added while constructing the LP
• fixed bug with unlinked columns in SCIProwGetLPSolCutoffDistance()
• updated CppAD to version 20180000.0
• remove LEGACY mode, compiler needs to be C++11-compliant
|
{"url":"https://www.scipopt.org/doc/html/RN6.php","timestamp":"2024-11-05T03:02:34Z","content_type":"text/html","content_length":"31344","record_id":"<urn:uuid:1bdc2720-5e53-48f6-a2a6-821b41bc0b50>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00535.warc.gz"}
|
Publications of Motohico Mulase
Selected Publications of Motohico Mulase
Quantum Curves, Hitchin Moduli Spaces, Opers, Gromov-Witten Invariants, TQFT, Topological Recursion (Eynard-Orantin Theory), Hurwitz Numbers, and Witten-Kontsevich Theory:
● Interplay between opers, quantum curves, WKB analysis, and Higgs bundles, with Olivia Dumitrescu, SIGMA 17, 036, 53 pages (2021)
● From the Hitchin section to opers through nonabelian Hodge, with Olivia Dumitrescu, Laura Fredrickson, Georgios Kydonakis, Rafe Mazzeo, and Andrew Neitzke, Journal of Differential Geometry 117
(2), 223--253 (2021)
● Mirror curve of orbifold Hurwitz numbers, with Olivia Dumitrescu, Revue Roumaine de Mathématiques Pures et Appliquées, 66 (2), 307--328 (2021)
● Topological Recursion and its Influence in Analysis, Geometry, and Topology, Book edited with Chiu-Chu Melissa Liu, Proceedings of Symposia in Pure Mathematics 100, 549 pps, American Mathematical
Society (2018) [ISBN: 978-1-4704-3541-7]
● An invitation to 2D TQFT and quantization of Hitchin spectral curves (Revised Version), with Olivia Dumitrescu, Banach Center Publications 114, 85--144 (2018) [DOI: 10.4064/bc114-3]
● Edge contraction on dual ribbon graphs and 2D TQFT, with Olivia Dumitrescu, Journal of Algebra 494, 1--27 (2018)
● Quantization of spectral curves for meromorphic Higgs bundles through topological recursion (the 2017 revised version for publication), with Olivia Dumitrescu, Proceedings of Symposia in Pure
Mathematics 100, 179--230, American Mathematical Society (2018)
● Lectures on the topological recursion for Higgs bundles and quantum curves, with Olivia Dumitrescu, in The Geometry, Topology and Physics of Moduli Spaces of Higgs Bundles, Richard Wentworth and
Graeme Wilkin, Editors, Lecture Notes Series, Institute for Mathematical Sciences, National University of Singapore Vol 36, 103--198 (2018) [ISBN: 978-981-3229-08-2]
● Quantum curves for simple Hurwitz numbers of an arbitrary base curve, with Xiaojun Liu and Adam Sorkin, Proceedings of Symposia in Pure Mathematics 100, 533--549, American Mathematical Society
● An invitation to 2D TQFT and quantization of Hitchin spectral curves (Enlarged Japanese Edition), with Olivia Dumitrescu, in Proceedings of the 15th Oka Symposium (2017)
● Quantum spectral curve for the Gromov-Witten theory of the complex projective line, with Petr Dunin-Barkowski, Paul Norbury, Alexandr Popolitov, and Sergey Shadrin, Journal für die reine und
angewandte Mathematik 2017-726, 267--289 (2017)
● Recursions and asymptotics of intersection numbers, with Kefeng Liu and Hao Xu, International Journal of Mathematics 27, (2016) [ preliminary preprint ]
● Edge-contraction on dual ribbon graphs, 2D TQFT, and the mirror of orbifold Hurwitz numbers, with Olivia Dumitrescu, arXiv:1508.05922 (2015)
● Spectral curves and the Schrödinger equations for the Eynard-Orantin recursion, with Piotr Sułkowski, Advances in Theoretical and Mathematical Physics 19, No. 5, 955--1015 (2015)
● Quantum curves for Hitchin fibrations and the Eynard-Orantin theory, with Olivia Dumitrescu, Letters in Mathematical Physics 104, 635--671 (2014)
● Mirror symmetry for orbifold Hurwitz numbers, with Vincent Bouchard, Daniel Hernández Serrano, and Xiaojun Liu, Journal of Differential Geometry 98, 375--423 (2014)
● The spectral curve and the Schrödinger equation of double Hurwitz numbers and higher spin structures, with Sergey Shadrin and Loek Spitz, Communications in Number Theory and Physics 7, no. 1,
125--143 (2013)
● The Laplace transform, mirror symmetry, and the topological recursion of Eynard-Orantin, arXiv:1210.2106 math.QA (2012), in Geometric Methods in Physics, Trends in Mathematics. Kielanowski,
Odesskii, Odzijewicz, Schlichenmaier, and Voronov, Eds., 127--142, Birkhäuser Basel, 2013.
● The spectral curve of the Eynard-Orantin recursion via the Laplace transform, with Olivia Dumitrescu, Brad Safnuk and Adam Sorkin, in Algebraic and Geometric Aspects of Integrable Systems and
Random Matrices, Dzhamay, Maruno and Pierce, Eds. Contemporary Mathematics 593, 263--315 (2013)
● Topological recursion for the Poincaré polynomial of the combinatorial moduli space of curves, with Michael Penkava, Advances in Mathematics 230, 1322--1339 (2012)
● The Kontsevich constants for the volume of the moduli of curves and topological recursion, with Kevin Chapman and Brad Safnuk, Communications in Number Theory and Physics 5, 643--698 (2011)
● The Laplace transform of the cut-and-join equation and the Bouchard-Marino conjecture on Hurwitz numbers, with Bertrand Eynard and Brad Safnuk, Publications of the Research Institute for
Mathematical Sciences 47, 629--670 (2011)
● A matrix model for simple Hurwitz numbers, and topological recursion, with Gaëtan Borot, Bertrand Eynard, and Brad Safnuk, Journal of Geometry and Physics 61, 522--540 (2011)
● Polynomial recursion formula for linear Hodge integrals, with Naizhen Zhang, Communications in Number Theory and Physics 4, 267--294 (2010)
● Mirzakhani's Recursion Relations, Virasoro Constraints and the KdV Hierarchy with Brad Safnuk, Indian Journal of Mathematics 50, 189--228 (2008)
● A Child's Point of View (in Japanese, Kodomo no me), an expository book chapter on the Witten-Kontsevich theory, mirror symmetry, and the Gromov-Witten theory. Invited contribution. In "Kono
Suugagusha ni Deaete Yokatta (The Great Moments of Meeting Mathematicians)," By Kazuhiko Aomoto, Takashi Ono, Mitsuyoshi Kato, Yasuyuki Kawahigashi, Shoshichi Kobayashi, Koichiro Harada, Motohico
Mulase, et al. Sugagu-Shobo, 2011.
● New developments in Witten-Kontsevich theory (in Japanese), Surikagaku (Mathematical Sciences) 543, 8--14 (September 2008)
Hitchin's Integrable Systems, Geometric Langlands Duality, and Characterization of Prym Varieties:
● Hitchin integrable systems, deformations of spectral curves, and KP-type equations, with Andrew R. Hodge, Advanced Studies in Pure Mathematics 59, 31--77 (2010)
● Geometry of character varieties of surface groups, Research Institute for Mathematical Sciences Kokyuroku 1605, 1--21 (2008)
● Prym varieties and integrable systems, with Yingchen Li, Communications in Analysis and Geometry 5, 279--332 (1997)
● The Hitchin systems and the KP equations, with Yingchen Li, International Journal of Mathematics 7, 227--244 (1996)
Ribbon Graphs, Grothendieck's Dessins d'Enfants, Belyi Morphisms, Strebel Differentials, and Orbifold Structure of the Moduli Space of Riemann Surfaces:
Matrix Models, Matrix Duality, Topological Expansion of Matrix Integrals, and their Generalizations with Applications to Geometry of Moduli Spaces
Surveys on the KP Theory, Sato Grassmannians, and their Applications to the Schottky Problem and Matrix Integrals:
● Algebraic theory of the KP equations, in Perspectives in Mathematical Physics, R. Penner and S.-T. Yau, Editors, International Press Company, 157--223 (1994)
● Matrix integrals and integrable systems, in Topology, geometry and field theory, K. Fukaya et. al., Editors, World Scientific, 111--127 (1994)
● KP equations, strings and the Schottky problem, in Algebraic Analysis, M. Kashiwara et. al., Editors, vol.II, Academic Press, 473--492 (1988)
A Solution to the Schottky Problem in terms of KP Equations, and its Supersymmetric Generalizations:
Solvability and Complete Integrability of KP equations, Birkhoff Decomposition of the Group of Pseudo-Differential Operators, and Supersymmetric Generalizations:
Sato Grassmannian, Commutative Rings of Differential Operators, Moduli of Vector Bundles on Algebraic Curves, and their Supersymmetric Generalizations:
● Category of vector bundles on algebraic curves and infinite dimensional Grassmannians, International Journal of Mathematics 1, 293--342 (1990)
● Geometric classification of Z2-commutative algebras of super differential operators, Marcel Dekker Lecture Notes in Pure and Applied Mathematics 145, Einstein Metrics and Yang-Mills Connections,
161--180 (1993)
● Normalization of the Krichever Data, Contemporary Mathematics 136, Curves, Jacobians, and Abelian Varieties, 297--304 (1992)
● Geometric classification of commutative algebras of ordinary differential operators, Differential Geometric Methods in Theoretical Physics, NATO Advanced Sciences Institutes Series 245 B, 13--27
● A correspondence between an infinite Grassmannian and arbitrary vector bundles on algebraic curves, Proceedings of Symposia in Pure Mathematics 49, 39--50 (1989)
Unpublished Lecture Notes:
Combinatorial Structure of the Moduli Space of Riemann Surfaces and the KP Equations, (1997)
Lectures on the Combinatorial Structure of the Moduli Spaces of Riemann Surfaces and Feynman Diagram Expansion of Matrix Integrals, (2001)
|
{"url":"https://www.math.ucdavis.edu/~mulase/publication.html","timestamp":"2024-11-08T14:33:40Z","content_type":"text/html","content_length":"23600","record_id":"<urn:uuid:f9e0ab6c-a39b-45bf-92ca-5e1d1d5b3082>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00544.warc.gz"}
|
STP Binomial Program
written by Harvey Gould, Jan Tobochnik, Wolfgang Christian, and Anne Cox
The STP Binomial program displays the probability that n spins are up out of a total N noninteracting spin 1/2 magnetic moments. The default number of spins is 60 and the probability that a single
spin points up is 0.5.
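The quantity the program plots is the binomial distribution itself. As a hedged illustration (written in Python rather than the program's Java, with our own function name), the probability of exactly n up-spins out of N noninteracting spin-1/2 moments is:

```python
from math import comb

def spin_up_probability(n, N=60, p=0.5):
    """Probability that exactly n of N independent spin-1/2 moments point up."""
    return comb(N, n) * p**n * (1 - p)**(N - n)

# For the defaults (N = 60, p = 0.5) the distribution peaks at n = 30,
# and the probabilities over all n sum to 1.
peak = spin_up_probability(30)
total = sum(spin_up_probability(n) for n in range(61))
```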
STP Binomial is part of a suite of Open Source Physics programs that model aspects of Statistical and Thermal Physics (STP). The program is distributed as a ready-to-run (compiled) Java archive.
Double clicking the stp_Binomial.jar file will run the program if Java is installed on your computer. Additional programs can be found by searching ComPADRE for Open Source Physics, STP, or
Statistical and Thermal Physics.
Please note that this resource requires at least version 1.5 of Java.
Subjects: Mathematical Tools - Probability; Thermo & Stat Mech - Binomial Distribution
Levels: Lower Undergraduate; Upper Undergraduate
Resource Types: Instructional Material - Interactive Simulation; Lecture/Presentation
Intended Users: Learners; Educators
Formats: application/java
Access Rights:
Free access
© 2008 Wolfgang Christian
Additional information is available.
coin toss, osp
Record Creator:
Metadata instance created May 27, 2008 by Anne Cox
Record Updated:
November 7, 2013 by Bruce Mason
Last Update
when Cataloged:
May 27, 2008
Other Collections:
ComPADRE is beta testing Citation Styles!
Gould, H, J. Tobochnik, W. Christian, and A. Cox. "STP Binomial Program."
H. Gould, J. Tobochnik, W. Christian, and A. Cox, Computer Program STP BINOMIAL PROGRAM (2008), WWW Document, (https://www.compadre.org/Repository/document/ServeFile.cfm?ID=7260&DocID=414).
H. Gould, J. Tobochnik, W. Christian, and A. Cox, Computer Program STP BINOMIAL PROGRAM (2008), <https://www.compadre.org/Repository/document/ServeFile.cfm?ID=7260&DocID=414>.
Gould, H., Tobochnik, J., Christian, W., & Cox, A. (2008). STP Binomial Program [Computer software]. Retrieved November 12, 2024, from https://www.compadre.org/Repository/document/ServeFile.cfm?ID=
Gould, H, J. Tobochnik, W. Christian, and A. Cox. "STP Binomial Program." https://www.compadre.org/Repository/document/ServeFile.cfm?ID=7260&DocID=414 (accessed 12 November 2024).
Gould, Harvey, Jan Tobochnik, Wolfgang Christian, and Anne Cox. STP Binomial Program. Computer software. 2008. Java 1.5. 12 Nov. 2024 <https://www.compadre.org/Repository/document/ServeFile.cfm?ID=
@misc{ Author = "Harvey Gould and Jan Tobochnik and Wolfgang Christian and Anne Cox", Title = {STP Binomial Program}, Month = {May}, Year = {2008} }
%A Harvey Gould %A Jan Tobochnik %A Wolfgang Christian %A Anne Cox %T STP Binomial Program %D May 27, 2008 %U https://www.compadre.org/Repository/document/ServeFile.cfm?ID=7260&DocID=414 %O
%0 Computer Program %A Gould, Harvey %A Tobochnik, Jan %A Christian, Wolfgang %A Cox, Anne %D May 27, 2008 %T STP Binomial Program %8 May 27, 2008 %U https://www.compadre.org/Repository/document/
ComPADRE offers citation styles as a guide only. We cannot offer interpretations about citations as this is an automated procedure. Please refer to the style manuals in the Citation Source Information area for clarifications.
Citation Source Information
The AIP Style presented is based on information from the AIP Style Manual.
The APA Style presented is based on information from APA Style.org: Electronic References.
The Chicago Style presented is based on information from Examples of Chicago-Style Documentation.
The MLA Style presented is based on information from the MLA FAQ.
STP Binomial Program:
Is Part Of Statistical and Thermal Physics 2nd Ed. Programs
STP Binomial Program is a part of STP Application package that contains curricular materials for the teaching of Statistical and Thermal Physics.
relation by Anne Cox
Covers the Same Topic As Binomial Distribution Model
The EJS version of this program allows for editing of the model using Easy Java Simulations.
relation by Anne Cox
See details...
Know of another related resource? Login to relate this resource to it.
|
{"url":"https://www.compadre.org/OSP/items/detail.cfm?ID=7260","timestamp":"2024-11-12T10:00:40Z","content_type":"application/xhtml+xml","content_length":"41703","record_id":"<urn:uuid:1e56cf77-08e9-41b0-bb5f-cdd4a6646584>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00308.warc.gz"}
|
Papers with Code - From Symmetry to Asymmetry: Generalizing TSP Approximations by Parametrization
From Symmetry to Asymmetry: Generalizing TSP Approximations by Parametrization
We generalize the tree doubling and Christofides algorithm, the two most common approximations for TSP, to parameterized approximations for ATSP. The parameters we consider for the respective
parameterizations are upper bounded by the number of asymmetric distances in the given instance, which yields algorithms to efficiently compute constant factor approximations also for moderately
asymmetric TSP instances. As generalization of the Christofides algorithm, we derive a parameterized 2.5-approximation, where the parameter is the size of a vertex cover for the subgraph induced by
the asymmetric edges. Our generalization of the tree doubling algorithm gives a parameterized 3-approximation, where the parameter is the number of asymmetric edges in a given minimum spanning
arborescence. Both algorithms are also stated in the form of additive lossy kernelizations, which allows to combine them with known polynomial time approximations for ATSP. Further, we combine them
with a notion of symmetry relaxation which allows to trade approximation guarantee for runtime. We complement our results by experimental evaluations, which show that both algorithms give a ratio
well below 2 and that the parameterized 3-approximation frequently outperforms the parameterized 2.5-approximation with respect to parameter size.
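For readers unfamiliar with the starting point, here is a sketch of the classic tree doubling 2-approximation for the symmetric metric case that the paper generalizes (the parameterized variant in the abstract works from a minimum spanning arborescence instead); the Python function names are our own illustrations, not from the paper:

```python
import math
from itertools import permutations

def tree_doubling_tour(points):
    """Classic tree doubling for symmetric metric TSP: build a minimum
    spanning tree, walk it in DFS preorder, and shortcut repeated vertices."""
    n = len(points)
    dist = lambda i, j: math.dist(points[i], points[j])

    # Prim's algorithm for the MST rooted at vertex 0.
    in_tree, parent = {0}, {}
    best = {v: (dist(0, v), 0) for v in range(1, n)}
    while len(in_tree) < n:
        v = min(best, key=lambda u: best[u][0])
        parent[v] = best.pop(v)[1]
        in_tree.add(v)
        for u in best:
            if dist(v, u) < best[u][0]:
                best[u] = (dist(v, u), v)
    children = {v: [] for v in range(n)}
    for v, p in parent.items():
        children[p].append(v)

    # DFS preorder of the doubled tree = tour after shortcutting.
    tour, stack = [], [0]
    while stack:
        v = stack.pop()
        tour.append(v)
        stack.extend(reversed(children[v]))
    return tour

def tour_length(points, tour):
    return sum(math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))
```

By the triangle inequality the shortcut walk costs at most twice the MST weight, hence at most twice the optimal tour — the guarantee whose asymmetric generalization the paper parameterizes.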
PDF Abstract
Data Structures and Algorithms
|
{"url":"https://cs.paperswithcode.com/paper/from-symmetry-to-asymmetry-generalizing-tsp","timestamp":"2024-11-08T20:56:42Z","content_type":"text/html","content_length":"94111","record_id":"<urn:uuid:b851fdb6-0fe2-4ade-bedd-2bb288002341>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00086.warc.gz"}
|
TechTalk: Are we on the verge of a quantum leap in computing? - Cisco UK & Ireland Blog
Cisco UKI
TechTalk: Are we on the verge of a quantum leap in computing?
4 min read
Imagine a mathematical problem so complex it would take the computers of today 10,000 years to solve.
Now imagine a machine with the processing power to complete the same calculation in just a second. How is that even possible? Quantum computing.
We’re inching closer to a new era of the possible, and so for my latest TechTalk blog I wanted to explore this mind-bending area of technology and engineering and what it could mean for society in
the future…
Let’s start from the top: What is quantum computing?
The ‘classical’ computers of today (it feels strange writing that!) share information in a sequence of 1s and 0s (or bits), created by transistors being switched either on or off.
Quantum computers work by using the ability of subatomic particles to exist in more than one state at a time. It feels like an alien concept at times, so bear with me…
Rather than bits, quantum computers operate on qubits. We still have 1s and 0s, but qubits can also exist in every possible combination of 1s and 0s (on and off) at the same time.
Qubits are processed simultaneously as opposed to sequentially in a traditional computer. This opens up a whole new realm of possibilities in terms of processing power.
Why is quantum such a step forward?
Quantum is more than just a leap. The number of possible combinations increases exponentially with the number of qubits. As a result, this translates into exponential speed increases, versus today’s
It will give us the ability to solve very complex problems that we can’t right now, owing to being able to analyse different options in a problem simultaneously.
In terms of how big a jump we're talking, in 2015, researchers from Nasa and Google found their D-Wave quantum computer outperformed a traditional desktop machine by a factor of 10^8 (100 million).
Sounds impressive, right? According to Google what this machine does in a second would take a conventional computer 10,000 years to complete.
What will quantum computing allow us to do differently?
With such a significant leap forward in processing power, the main use case is in tackling highly complicated mathematical problems.
I can see this being applied in both data analytics (at a scale difficult to imagine at the moment), artificial intelligence, as well as in cryptography and cyber security.
Many experts in this field say we are around five years away from creating a commercially useful quantum computer. There have been a number of small successes so far (such as being able to send a qubit
through a logic gate) but there is still a way to go before businesses will be able to use them.
How does quantum impact Moore’s Law?
Moore’s Law predicts computer processing power doubling every two years, as hardware gets smaller and smaller.
People are now starting to challenge Moore’s Law, purely because we have reached a point where it’s becoming impractical to shrink transistors any further.
In classical computing if you double the power, you need to double the hardware.
But with quantum, to increase power all you need to do is add one more qubit.
So, 2 qubits can perform 4 simultaneous calculations, 3 qubits can perform 8 calculations, 4 qubits can perform 16 and so on.
Growth in qubits therefore puts us on an exponential curve – beating the linear prediction of Moore’s Law.
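The progression described above is simply powers of two; a short sketch makes the exponential scaling explicit:

```python
def simultaneous_values(qubits):
    """Number of basis states an n-qubit register can hold in superposition."""
    return 2 ** qubits

# Matches the progression in the text: 2 -> 4, 3 -> 8, 4 -> 16, ...
growth = [simultaneous_values(n) for n in range(1, 6)]
```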
What are the barriers for making this happen?
The (simple) answer: The complexity of the technology itself. Quantum theory at times completely defies ‘traditional’ logic.
Quantum is being in multiple matters of state at once, rather than just on or off. It’s alien to the way humans naturally think, and as a result it makes it harder to explain to those who might
invest in developing the technology further.
To further add to this complexity, while the first true applications may be five years away, it is unlikely we will see that in the form of a personal computer.
However, that could be one of those classic statements: “One computer is enough for everybody.”
From what we can see at the moment, these machines will be primarily used for solving those massively complex mathematical problems – your average person in the street simply doesn't need that level of processing power, for now.
So what will quantum do for us?
Advances in quantum are closely tied to the advances being made in physics and chemistry. These subjects are running in parallel, pushing along the different streams of research in their respective
An interesting example of this is nanotechnology (read my previous TechTalk on nanotech here), and the types of new materials that we’re able to create.
Using quantum computing we could run a vast number of tests and models on molecular interactions at an atomic level.
Once you start creating those types of modelling at an atomic level, then you can start to produce personalised medicines for people.
Not just that, but better virus fighting drugs, stronger building materials, and even more energy efficient storage systems.
This sort of modelling is simply something we can't do using classical computing methods at the moment.
Quantum will allow us to tackle the limit around any information related task, and I’m fascinated to see how this technology will develop over the coming years.
There is even an experiment looking at the quantum teleportation of data – the realms of science fiction may just become reality in a new quantum era.
|
{"url":"https://gblogs.cisco.com/uki/techtalk-are-we-on-the-verge-of-a-quantum-leap-in-computing/","timestamp":"2024-11-08T15:25:05Z","content_type":"text/html","content_length":"45403","record_id":"<urn:uuid:a9e4d430-d7c3-407e-a8f4-49415a4272a9>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00386.warc.gz"}
|
Various Forms of Tangents in Hyperbola | Learn and Solve Questions
Equation of Tangent to Hyperbola: Preface
A hyperbola is a set of all points (x, y) such that the difference of the distances between (x, y) and two different points is constant. A hyperbola has two vertices that lie on the axis of symmetry
known as the transverse axis. The transverse axis of the hyperbola can either be horizontal or vertical. In this article, we will get to know about the different types of equations of the tangent to
hyperbola like the equation of tangent of hyperbola in slope form, equation of tangent of hyperbola in parametric form, the chord of contact of hyperbola, and point of contact of the tangent to a
Equation of Hyperbola
The general equation of hyperbola can be represented as $\dfrac{x^{2}}{a^{2}}-\dfrac{y^{2}}{b^{2}}=1$.
Point Form Equation of a Tangent to Hyperbola
In this form, the tangent is drawn from the point of contact of the tangent to the hyperbola. Let’s say the point of contact of the hyperbola to tangent is $(x_{1},y_{1})$, then the equation of the
tangent to hyperbola will be $\dfrac{xx_{1}}{a^{2}}-\dfrac{yy_{1}}{b^{2}}=1$.
Equation of Tangent to Hyperbola in Slope Form
This type of equation gives us the equation of the tangent of hyperbola in terms of the slope of the line “m”. The equation is $y=mx\pm \sqrt{a^{2}m^{2}-b^{2}}$
This equation is also called “The condition of tangency”.
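The condition can be checked numerically: substituting $y=mx+c$ into the hyperbola yields a quadratic in $x$ whose discriminant vanishes exactly when $c^{2}=a^{2}m^{2}-b^{2}$. A short Python sketch (the function names are our own, for illustration only):

```python
def is_tangent(a, b, m, c, tol=1e-9):
    """Condition of tangency c^2 = a^2 m^2 - b^2 for the line y = m x + c
    and the hyperbola x^2/a^2 - y^2/b^2 = 1."""
    return abs(c * c - (a * a * m * m - b * b)) < tol

def discriminant(a, b, m, c):
    """Discriminant of the quadratic obtained by substituting y = m x + c
    into the hyperbola; it vanishes exactly when the line is tangent."""
    A = b * b - a * a * m * m
    B = -2 * a * a * m * c
    C = -a * a * (c * c + b * b)
    return B * B - 4 * A * C
```

For instance, for the hyperbola $\dfrac{x^{2}}{100}-\dfrac{y^{2}}{49}=1$ from the practice questions below, the line $y=mx+6$ with $m=\sqrt{\dfrac{17}{20}}$ satisfies both checks.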
Equation of Pair of Tangents in Hyperbola
When the equation of the hyperbola is $\dfrac{x^{2}}{a^{2}}-\dfrac{y^{2}}{b^{2}}=1$, then the pair of tangents drawn from an external point $(x_{1},y_{1})$ can be represented using $SS_{1}=T^{2}$, i.e. $\left(\dfrac{x^{2}}{a^{2}}-\dfrac{y^{2}}{b^{2}}-1\right)\left(\dfrac{x_{1}^{2}}{a^{2}}-\dfrac{y_{1}^{2}}{b^{2}}-1\right)=\left(\dfrac{xx_{1}}{a^{2}}-\dfrac{yy_{1}}{b^{2}}-1\right)^{2}$
A chord of contact is a chord passing through endpoints of tangents drawn from a point
$(x_{1},y_{1})$ to the hyperbola. The equation of chord of contact of hyperbola will be $\dfrac{xx_{1}}{a^{2}}-\dfrac{yy_{1}}{b^{2}}=1$.
Equation of Tangent to Hyperbola in Parametric Form
The parametric coordinates of any point on the hyperbola can be represented as $(a\sec\theta, b\tan\theta)$, and the equation of the tangent to the hyperbola at that point will be $\dfrac{x\sec\theta}{a}-\dfrac{y\tan\theta}{b}=1$.
• When an object, let's say a jet, moves faster than the speed of sound, it creates a cone-shaped wave in space; where that wave intersects the ground, the curve we get from that intersection is a hyperbola.
• The cooling towers are generally made of hyperbolic shape to achieve 2 things: first, the least amount of material used to make it and second, the structure should be strong enough to withstand
strong winds.
Solved Examples
Example 1. Find the equation of a tangent to the hyperbola $x^{2}-4y^{2}=4$ which is parallel to the line $x+2y=0$.
Solution: Equation of hyperbola: $\dfrac{x^{2}}{4}-\dfrac{y^{2}}{1}=1$,
So, $a^{2}=4\Rightarrow a=2$
$b^{2}=1\Rightarrow b=1$
The slope of the given line is $-\dfrac{1}{2}$, so any parallel tangent must have $m=-\dfrac{1}{2}$. Now, using the condition of tangency, we calculate the value of c:
$c^{2}=a^{2}m^{2}-b^{2}=4\cdot \dfrac{1}{4}-1=0$, so $c=0$ and the candidate line is $y=-\dfrac{x}{2}$. But $y=\pm \dfrac{x}{2}$ are the asymptotes of this hyperbola, so no tangent parallel to the given line exists.
Example 2. Find the equation of the chord of contact of the hyperbola $\dfrac{x^{2}}{6}-\dfrac{y^{2}}{2}=1$ if the tangents are drawn from (3,2).
Solution: We know that the equation of the chord of contact of a hyperbola is $\dfrac{xx_{1}}{a^{2}}-\dfrac{yy_{1}}{b^{2}}=1$. So,
$\dfrac{3x}{6}-\dfrac{2y}{2}=1\Rightarrow \dfrac{x}{2}-y=1$
$x-2y=2$ is the required equation of the chord of contact.
Practice Questions
Question 1. What is the value of m for which $y=mx+6$ is tangent to the hyperbola $\dfrac{x^{2}}{100}-\dfrac{y^{2}}{49}=1$?
Ans: $\sqrt{\dfrac{17}{20}}$,$- \sqrt{\dfrac{17}{20}}$
Question 2. A common tangent to $9x^{2}-16y^{2}=144$ and $x^{2}+y^{2}=9$ is____.
Ans: $y=3\sqrt{\dfrac{2}{7}}x\pm \dfrac{15}{\sqrt{7}}$
The article summarises the concept of the chord of contact and tangents as a hyperbola. We learnt about different types of forms of tangents and how to find the equation of these tangents, then we
did some examples to brush up on our concepts and get a better understanding of the topic. We hope to have helped you clear your doubts on this topic and learn something new. Do try out the solved
examples and practise questions to evaluate your understanding of the concepts discussed here.
FAQs on Various Forms of Tangents in Hyperbola
1. How do you know if a line is a tangent to a hyperbola?
We can know if a line is a tangent to a hyperbola by using the condition of tangency which is $c^{2}=a^{2}m^{2}-b^{2}$.
2. What are the parametric coordinates of a hyperbola?
$(asec\Theta,btan\Theta)$ are the parametric coordinates of a hyperbola with the parameter $\Theta$.
3. What is $T$ and $S_{1}$ in a hyperbola?
In a hyperbola, T is $\dfrac{xx_{1}}{a^{2}}-\dfrac{yy_{1}}{b^{2}}-1$ and $S_{1}$ is $\dfrac{x_{1}^{2}}{a^{2}}-\dfrac{y_{1}^{2}}{b^{2}}-1$
|
{"url":"https://www.vedantu.com/maths/various-forms-of-tangents-in-hyperbola","timestamp":"2024-11-06T05:26:15Z","content_type":"text/html","content_length":"225790","record_id":"<urn:uuid:b77e99cd-9b08-4938-a6f4-17ad8c6e1d04>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00067.warc.gz"}
|
How to Make an Impression at Your Next Kid's Party with Balloon Arches
Balloon arches turn your usual party decorations at home into an impressive experience for all your guests. At first glance, they may seem complicated to make, but they are actually quite easy to
make. If you want to make a lasting impression on your party setting, then go for balloon arches.
A standard balloon arch can be as simple as arranging monochromatic colors into a patterned arch. The most common patterns for balloon arches are spirals and flowers.
How do you make a balloon arch?
Some balloon arches are made from helium-filled balloons so that the floating balloons form an arch. However, ordinary inflatable balloons can easily form arches even without a stand or without
leaning against a wall.
How many balloons are needed for a balloon arch?
First, you need to know the length of the arch from one end to the other. We recommend using one of these two methods.
Use some PVC pipe to get an idea of how big you want the balloon arch to be. Bend the pipe to form the arch at the same location where you want the balloon arch to be. See how it looks, and if you
are happy with the size, measure the entire length of the PVC pipe used. (Later, you can use the same PVC pipe as the support for the arch. This is very desirable if you are setting up the balloon
arch in an area where there may be wind or where you will have children or adults running around.)
Or, you can get a rough idea of the height and width of the arch you need by measuring the following
1. If you want a wide balloon arch, simply add up the height and width of the area where the arch will stand to get a rough total length of the balloon arch.
2. If you need a round balloon arch, multiply the height by 1.5 and add it to the width.
3. To create a balloon arch much larger than the width, add the height to twice the width to accurately estimate the length of the balloon arch needed.
Balloon arches can be made up of clusters, with each cluster consisting of four interconnected balloons.
So, next, to determine how many balloons you will need for your arch, divide the length (in inches or centimeters) by the diameter (also in inches or centimeters) of the balloons you have chosen.
From this, you have calculated the number of clusters needed to complete the balloon arch. Then, multiply the result by 4 - the number of balloons in each cluster. We recommend increasing this result
by 20% to ensure you have a compact balloon arch.
For example, if you need a 25-foot arch, using small 5-inch balloons:
1. length of arch in inches = 25' x 12" = 300"
2. Total length divided by balloon diameter = 300" ÷ 5" = 60 clusters
3. multiply clusters by 4 balloons = 60 x 4 = 240 balloons
4. Increase the total number by 20% = 240 x 120% = 288 balloons
As another example, suppose an arch of 12 meters is needed, using 11-inch (28 cm) balloons:
1. length of the arch cm = 12m x 100 = 1200cm
2. Total length divided by balloon diameter = 1200cm ÷ 28cm = 42 clusters
3. multiply the clusters by 4 balloons = 42 x 4 = 168 balloons
4. Increase the total number by 20% = 168 x 120% = 200 balloons (approximately)
The standard sizes for balloon diameters are 5", 9", 11", 14" or 16". Having said that, you may decide to blow up an 11" balloon to 10". Needless to say, you will need to calculate the above formula
If you are still unsure how to calculate the number of balloons you need, we have calculated the total number of balloon arch lengths for a handful of balloons.
Total Balloon Arch Length Total Balloons Needed For Clustered Arch
Feet Meters 5" Balloons 9" Balloons 11" Balloons 14" Balloons
10' 3m 116 64 52 40
15' 4.6m 172 96 80 60
20' 6.1m 228 128 104 80
25' 7.6m 288 160 132 100
30' 9.1m 344 192 156 120
35' 10.7m 400 224 184 140
40' 12.2m 460 256 208 160
45' 13.7m 516 288 236 180
50' 15.2m 576 320 264 200
|
{"url":"https://www.aloballoons.com/How-to-Make-an-Impression-at-Your-Next-Kid-s-Party-with-Balloon-Arches-id45410727.html","timestamp":"2024-11-14T10:02:00Z","content_type":"text/html","content_length":"130086","record_id":"<urn:uuid:21d42b38-6ac7-478d-a4a4-1ff4283b72ad>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00743.warc.gz"}
|
Arbitrarily Long Factorizations in Mapping Class Groups
On a compact oriented surface of genus g with n ≥ 1 boundary components d1, d2, ..., dn, we consider positive factorizations of the boundary multitwist t_{d1} t_{d2} ... t_{dn}, where t_{di} is the positive Dehn twist about the boundary di. We prove that for g ≥ 3, the boundary multitwist t_{d1} t_{d2} can be written as a product of an arbitrarily large number of positive Dehn twists about nonseparating simple closed curves, extending a recent result of Baykur and Van Horn-Morris, who proved this result for g ≥ 8. This fact has immediate corollaries on the Euler characteristics of the Stein fillings of contact three-manifolds.
E. DALYAN, M. Korkmaz, and M. Pamuk, “Arbitrarily Long Factorizations in Mapping Class Groups,” INTERNATIONAL MATHEMATICS RESEARCH NOTICES, pp. 9400–9414, 2015, Accessed: 00, 2020. [Online].
Available: https://hdl.handle.net/11511/42182.
|
{"url":"https://open.metu.edu.tr/handle/11511/42182","timestamp":"2024-11-02T06:17:32Z","content_type":"application/xhtml+xml","content_length":"54202","record_id":"<urn:uuid:1f95b0f4-7a84-45ee-b3ac-e93555e09c0c>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00224.warc.gz"}
|
Modeling heterogeneous materials via two-point correlation functions: Basic principles
Heterogeneous materials abound in nature and man-made situations. Examples include porous media, biological materials, and composite materials. Diverse and interesting properties exhibited by these
materials result from their complex microstructures, which also make it difficult to model the materials. Yeong and Torquato introduced a stochastic optimization technique that enables one to
generate realizations of heterogeneous materials from a prescribed set of correlation functions. In this first part of a series of two papers, we collect the known necessary conditions on the
standard two-point correlation function S2(r) and formulate a conjecture. In particular, we argue that given a complete two-point correlation function space, S2(r) of any statistically homogeneous material can be expressed through a map on a selected set of bases of the function space. We provide examples of realizable two-point correlation functions and suggest a set of analytical basis functions. We also discuss an exact mathematical formulation of the (re)construction problem and prove that S2(r) cannot completely specify a two-phase heterogeneous material alone. Moreover, we
devise an efficient and isotropy-preserving construction algorithm, namely, the lattice-point algorithm to generate realizations of materials from their two-point correlation functions based on the
Yeong-Torquato technique. Subsequent analysis can be performed on the generated images to obtain desired macroscopic properties. These developments are integrated here into a general scheme that
enables one to model and categorize heterogeneous materials via two-point correlation functions. We will mainly focus on basic principles in this paper. The algorithmic details and applications of
the general scheme are given in the second part of this series of two papers.
All Science Journal Classification (ASJC) codes
• Statistical and Nonlinear Physics
• Statistics and Probability
• Condensed Matter Physics
|
{"url":"https://collaborate.princeton.edu/en/publications/modeling-heterogeneous-materials-via-two-point-correlation-functi","timestamp":"2024-11-15T04:57:01Z","content_type":"text/html","content_length":"55368","record_id":"<urn:uuid:8c771cb9-2b6d-4a41-a019-2ed1ffdc76dc>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00827.warc.gz"}
|
Bodhayan Roy
Post Doctoral Fellow
Department of Computer Science and Engineering
Indian Institute of Technology Bombay
Powai, Mumbai 400076, India
E-mail: broy [at] cse [dot] iitb [dot] ac [dot] in
Main Research Interests
Discrete and Computational Geometry
Parameterized Complexity
Graph Drawing
Selected Publications
• Bodhayan Roy, "Point visibility graph recognition is NP-hard" , International Journal of Computational Geometry and Applications, vol. 26(1), pp. 1-32, 2016.
• Ajit Arvind Diwan, Subir Kumar Ghosh and Bodhayan Roy, "Four-connected triangulations of planar point sets", Discrete & Computational Geometry vol. 53(4), pp. 713-746, 2015.
• Pritam Bhattacharya, Subir Kumar Ghosh and Bodhayan Roy, "Vertex guarding in weak visibility polygons", Proceedings of the First Conference on Algorithms and Discrete Applied Mathematics, IIT
Kanpur, 2015, Lecture Notes in Computer Science, vol. 8959, pp. 45-57, Springer, 2015.
|
{"url":"https://www.cse.iitb.ac.in/~broy/","timestamp":"2024-11-07T18:44:10Z","content_type":"text/html","content_length":"3235","record_id":"<urn:uuid:ed10d793-a9bc-4ca9-8cf2-6a8a62740ad8>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00261.warc.gz"}
|
Rubik's Cube: Your Basic 15 Steps To Solve It
Not only for engineers (cliché!): learn to solve the Rubik's Cube in less than 2 hours. Rarely has any puzzle lasted so long. Almost fifty years after its invention, the Rubik's Cube© still holds the interest of the next generation, spawning new variants as well as technical innovations. With cuber communities, hundreds of dedicated online channels, literature, and speedcubing competitions, the puzzle's popularity is unrivaled.
Initially named the magic cube, the Rubik’s Cube© was invented by the Hungarian architect Erno Rubik in 1974, was first produced in 1977, and became a worldwide success in the 80s.
Thirty years later, at least 350 million cubes had been produced by the time the original patent expired, and many millions more have been produced since.
We firmly believe that the huge success of the Rubik's Cube is mainly due to its seemingly daunting number of combinations ("It must take a Nobel prize to solve that!") paired with a simple design and a simple color principle … yet in fact it is solvable in less time than many 2D puzzles.
Grab a coffee, scroll the page together with the video,
and within 2h the cube is yours!
Solving The Rubik’s Cube
Over the decades, many solving strategies have been developed (see Wikipedia's link below), each with its own sequences to solve the cube bit by bit. Various methods involve more or less complicated sequences to learn. We propose here to learn the simplest method, which solves the cube layer by layer with non-optimized sequences while preserving what was positioned before. FYI, in many record-breaking methods, or the computerized one indicated at the bottom of this post, you do not see correctly positioned pieces until the last moves, making it very hard to gauge progress.
Following the steps and video below, it takes about 2 hours to learn how to solve the Rubik's Cube© in under 5 minutes
The best video tutorial to learn the 3 types of pieces, basic moves, and first combinations (the method is well-known and popularized in many books and sites, but definitely made extremely clear and
comprehensive here by TheCubicle):
Two basic sequences will be used repeatedly throughout the method. We recommend practicing these two first, so as not to mess up the cube mid-solve and have to start from scratch again (as we all did!). Note that if you repeat either of them 6 times consecutively, the cube returns to its original state:
the “Righty ALG” = R U R’ U’
the “Lefty ALG” = L’ U’ L U
Note: the main hand turns & fingers move but if the initial position is correct, it does not need to be repositioned between steps, making these two sequences extremely fast.
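Once you know the notation, a tiny helper makes it easy to write out repetitions of these sequences (e.g. for pasting into an online simulator) and to compute the inverse of a sequence, i.e. the moves that undo it: play the moves in reverse order with each turn inverted (R ↔ R'). This is a hypothetical utility, not part of the method itself, and for simplicity it ignores double moves like U2:

```python
def invert(sequence: str) -> str:
    """Inverse of a move sequence: reverse the order and invert each turn."""
    flipped = [m[:-1] if m.endswith("'") else m + "'" for m in sequence.split()]
    return " ".join(reversed(flipped))

def repeat(sequence: str, n: int) -> str:
    """The sequence written out n times, e.g. for the 3x and 6x repetitions."""
    return " ".join([sequence] * n)

print(invert("R U R' U'"))     # U R U' R'  -- undoes the Righty ALG
print(repeat("R U R' U'", 3))  # the Righty ALG three times in a row
```

Playing a sequence followed by its inverse leaves the cube unchanged, which is a handy way to check you executed a practice run correctly.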
Help: you can also enter any combination into this nice online 3D Rubik's cube simulator to visualize a sequence, such as the two above
Side rotation to invert the cross
Solving the Bottom Layer
• Make the white cross
□ First, position the 4 petals around the yellow center: find yourself the basic 1 to 3 moves to place each of the 4 white petals 🙂
□ Then invert the cross, one side at a time: align each top-layer petal's side color with the matching second-layer centerpiece, and only then turn that full side 180°. Once done on all 4 sides, you should have a full white cross on the bottom face with two aligned same-color pieces on each side.
Top corner to bring down
• Solve the 4 bottom white corners: position the cube so the white cross is on the bottom. For each corner that is not yet solved, turn the top layer to place the required corner piece directly above its final position. Then repeat the Righty ALG until the piece drops into place with white facing down. Repeat for each remaining corner.
Solving the Middle layer
After the previous step, all 4 side centerpieces are already in their final positions. Only the 4 middle-layer edges may still be incorrect.
Solve the 4 middle-layer edges: for each top-layer edge that needs to join the middle layer (i.e., any edge without a yellow side):
Top edge to move right
• turn the top layer and position the desired top edge to move above the adjacent centerpiece of the same color (important as that color will stay on the same face after the move hereafter)
• then it needs to join its final position side position on the middle layer by either moving it to the left or to the right of the middle layer:
□ if to the right edge: U, Righty ALG, rotate the cube to the left, Lefty ALG
□ if to the left edge: U’, Lefty ALG, rotate the cube to the right, Righty ALG
With this second layer, you should now have the 2 bottom layers completely finished
Solving the Top layer, The Yellow Face
Make the yellow cross on the top face (forget about the corners for the moment). The 3 possible situations:
□ if only a line of the cross is already formed: orientate the cube so that the line runs horizontally (left-right) on the top face, and then:
F, Righty ALG, F’ (“F” is a clockwise rotation of the 1 face facing you. F’ being counterclockwise)
□ if only one corner of the cross is already formed: orientate the cube so that this partial cross is on the bottom right of the top surface, then apply:
f, Righty ALG, f’ (“f” is a clockwise rotation of the 2 front layers facing you. f’ being counterclockwise)
□ if just the center of the yellow cross is already formed: apply 1st step above and you’ll get the corner of the cross, and then apply the 2nd step to get the full yellow cross
Position the 4 corners (but the wrong orientation/twist is OK)
Rotate the top layer so that as many corners as possible are in their final position (for example, the red/green/yellow corner must sit between the red and green faces; it may still be twisted). Then swap the remaining corners, if needed, in one of 2 possible scenarios:
□ swap 2 same-side corners: position the cube so that the 2 corners to exchange are placed on the top right, then: 3x Righty ALG, turn the cube to the left, 3x Lefty ALG
Once done the corners are swapped, but you may need to rotate the top layer to reposition all faces together
□ swap 2 opposite corners: first swap 1 corner, then a 2nd time, by doing the previous move twice
Twist the 4 corners: position the cube with the yellow cross with corners to orientate facing down.
For each corner:
□ Turn the bottom layer (supporting the yellow cross), so that the corner to twist is on the bottom right
□ perform the Righty ALG as many times as needed until the yellow is facing down
Note: do finish ultimately the Righty ALG each time, even if the corner is already correctly positioned. Do not reorientate the cube in between each yellow corner, but turn only the bottom layer.
This is the simplest but maybe the most deceptive move: in this step, until you have correctly orientated all 4 yellow corners, the cube looks scrambled.
Now either the cube is completely solved, or most likely, still, some work on the edges
Swap the top edges. 2 scenarios:
3 middle edges to swap
□ 3 edges need to be swapped: spin the cube so that the already correctly-positioned edge is facing you, then perform the following sequence of 4 moves to swap the other edges:
○ Righty ALG once,
○ Lefty ALG once,
○ Righty ALG five times,
○ Lefty ALG five times
Note: depending on the initial configuration of the 3 pieces to be swapped, you may need to repeat this sequence twice
□ 4 edges need to be swapped: apply the same sequence as above, but without starting with an already solved edge in front of you as you have none yet. Then once you have it, you are exactly in
the previous case, apply it again with the solved edge in front of you
Note: this is where the initial advice to practice the Lefty ALG and Righty ALG pays off: if you make a mistake in any of these repetitive sequences, or miscount, you'll likely have to restart everything from scratch, as you probably won't be able to undo the last moves 🙁
Voila! You’re finished solving your first magic cube. Congratulations!
Improve Your Cube Skills
Once you are confident with the method above, a world of cubes opens to you: many other algorithms, tricks, improvements, and methods exist. For this, check the dedicated cube sites of cube clubs.
Additionally to the basic move describes in the video (R, U, L, F, M …) You will need to be familiar with the following Cube acronyms:
□ CFOP: Cross-F2L-OLL-PLL, or the Fridrich method, a faster refinement of the layer-by-layer approach used above
□ F2L: First Two Layers
□ OLL: Orientation of the last layer
□ PLL: Permute Last Layer
If you feel confident with memorizing a couple more move combinations, we recommend you then learn and practice the Roux method to solve the Rubik's cube faster and go well under 60 seconds.
The Math of the Rubik’s Cube
For the math, we refer you to the math section of the dedicated Wikipedia article. Yet with only six colors on a mere 3×3 cube, as has been said, "if one had one standard-sized Rubik's Cube for each permutation, one could cover the Earth's surface 275 times, or stack them in a tower 261 light-years high".
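The total number of reachable positions behind that quote, about 4.3 × 10^19, follows from the piece counts and the cube's orientation and parity constraints; a quick check with Python's arbitrary-precision integers:

```python
from math import factorial

# 8 corners: 8! positions x 3^7 orientations (the last twist is forced)
# 12 edges: 12! positions x 2^11 flips (the last flip is forced)
# and only half of the piece permutations are reachable (parity), hence // 2
positions = factorial(8) * 3**7 * factorial(12) * 2**11 // 2
print(positions)  # 43252003274489856000, about 4.3 x 10^19
```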
2×2 Rubik’s Cube
One could think that, like Go or Chess, the cube's design does not evolve. While the original still exists and is produced, many new versions have been issued: some are clearly direct knock-offs of the original invention, while others bring a fresh look and technical innovations to the rotating mechanism, anti-friction, maintenance, or manufacturing process … or connectivity (the most mechanical of puzzles becomes electronic!):
Cube Variants
• designed for speed (and validated by the world cube association)
• magnetic version
• motorized cube
• connected version, to have an app either giving you advice or scrambling it, or connecting to play remotely
• robot to solve them automatically
• other shapes and sizes: pyramid, spherical, octagons (…) from 2×2, 4×4 up to 21×21!
Robot Speed Cube
Have your DIY $40 robot solve the cube for you!
The full project is brought to you by Andrea with his CUBOtino robot project. He made efficient design choices for ease of assembly (2 servos, an ESP32 programmed via USB, and some simple parts,
printable without support) and provides videos and full detailed PDF instructions.
FYI: This Rubik’s cube algorithm uses the Kociemba algorithm, the proven shortest-optimized algorithm that solves the cube in a maximum of 20 moves whatever the initial state may be. Details on this
algorithm can be found here.
Rubik’s Cube World Record
No cube article would be complete without a video of the fastest-solving man in the world … and because it’s so fast, we can even include 5 performances within some seconds:
Or solve it blindly, or while juggling 3 cubes at a time (…).
Dismounted Cube
No need to dismount your Rubik’s Cube or reposition the stickers anymore to “solve” it!
You’re now ready to buy your next Rubik’s cube timer and join the online Rubik’s cube or Speed Cube community and participate in the World Cube Association.
|
{"url":"https://innovation.world/rubiks-cube-15-steps-to-solve/","timestamp":"2024-11-15T00:09:35Z","content_type":"text/html","content_length":"301972","record_id":"<urn:uuid:90d35fd8-1bf4-4d8f-b969-676cb264b2fc>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00406.warc.gz"}
|
Celestial holography primer 3/5
Andrea Puhm
CPhT, École Polytechnique
Fri, Oct. 08th 2021, 10:00-12:30
Amphi Claude Bloch, Bât. 774, Orme des Merisiers
One of the most powerful tools for understanding quantum aspects of gravity is the holographic principle, which asserts a duality between a theory of quantum gravity on a given manifold and a field
theory living on its boundary. A concrete realization in spacetimes with negative curvature is the AdS/CFT correspondence, but it remains an important open question if and how the holographic
principle is realized for general spacetimes. Recently, the holographic dual of quantum gravity in asymptotically flat spacetimes has been conjectured to be a codimension-two conformal field theory
which lives on the celestial sphere at null infinity, aptly referred to as celestial CFT. A first hint at such a duality is the equivalence between the action of the Lorentz group and global
conformal transformations on the celestial sphere. Moreover, when recast in a basis of boost eigenstates, scattering amplitudes transform as conformal correlators of primary operators in the dual
celestial CFT. These celestial correlators appear to have some, but not all, of the properties of standard CFT correlators. The goal of this course will be to give an introductory guide to recent
advances in celestial holography. From the CFT perspective 2D is special: the global conformal group gets enhanced to local conformal symmetries. Remarkably, this infinite dimensional enhancement
also appears in the 4D S-matrix which will be a main protagonist in this course. Even more surprisingly, the symmetry structure is much larger: every soft factorization theorem gives a dual
``current'' thus yielding a rich celestial symmetry algebra. Clearly, the exploration of celestial holography has just begun!
Plan of the course:
1. Symmetries of asymptotically flat spacetimes: BMS supertranslations and Virasoro/Diff(S2) superrotations. Connection to soft theorems of the S-matrix and memory effects.
2. Conformal primary wavefunctions, celestial amplitudes and their conformally soft and collinear (OPE) limits.
3. Shadows, light rays and conformal block expansion of celestial correlators.
4. Conformally soft sector of celestial CFT: current algebras, soft charges and
|
{"url":"https://www.ipht.fr/en/Phocea/Vie_des_labos/Seminaires/index.php?id_type=6&type=6&id=994150","timestamp":"2024-11-08T08:04:55Z","content_type":"text/html","content_length":"27443","record_id":"<urn:uuid:a806305b-76ac-47ac-ad8d-d62895fdde24>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00424.warc.gz"}
|
Corresponding Angles How To Find - Angleworksheets.com
Find The Missing Corresponding Angles Worksheet – There are many resources that can help you find angles if you’ve been having trouble understanding the concept. These worksheets will help you
understand the different concepts and build your understanding of these angles. Students will be able to identify unknown angles using the vertex, arms and arcs … Read more
|
{"url":"https://www.angleworksheets.com/tag/corresponding-angles-how-to-find/","timestamp":"2024-11-10T22:01:47Z","content_type":"text/html","content_length":"47325","record_id":"<urn:uuid:f893fd61-e923-4e78-b006-4e78b8f4f8dc>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00064.warc.gz"}
|
I.I. Drobysh Advanced methods of calculating Value at Risk in market risk estimation
I.I. Drobysh Advanced methods of calculating Value at Risk in market risk estimation
Based on a systematization of scientific papers by Russian and foreign authors, the article summarizes the accumulated experience with methods of calculating Value at Risk, taking contemporary trends into account. A classification of the methods and an analysis of their comparative accuracy are carried out. On the whole, the traditional methods (the delta-normal method, historical simulation, the Monte Carlo method) give less accurate estimates of VaR than the methods developed later. Among the advanced methods, the following stand out as more accurate: parametric methods based on asymmetric models of generalized autoregressive conditional heteroskedasticity, or on applying distributions other than normal to the errors in GARCH models; the Hull–White method; filtered historical simulation; the extreme value method; and some specifications of the CAViaR method. Moreover, in the largest number of the analyzed articles, the GARCH-EVT method, which combines the generalized autoregressive conditional heteroskedasticity model with extreme value theory, is noted as the most accurate.
Keywords: quantile of the distribution function, Value at Risk, calculation methods, methods for verifying estimates.
PP. 51-62.
DOI: 10.14357/20790279180305
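As a concrete illustration of the simplest method compared in the article, plain historical simulation estimates VaR as an empirical quantile of a window of past returns. A minimal sketch (the sample data and the quantile indexing convention are our own assumptions; real implementations differ in interpolation and windowing):

```python
def var_historical(returns, confidence=0.95):
    """One-period Value at Risk by historical simulation:
    the loss exceeded in only (1 - confidence) of past periods,
    taken as the empirical (1 - confidence)-quantile of returns."""
    ordered = sorted(returns)                  # worst returns first
    k = int((1 - confidence) * len(ordered))   # index of the lower quantile
    return -ordered[k]                         # VaR reported as a positive loss

# toy sample: 100 daily returns evenly spaced from -5.0% to +4.9%
sample = [i / 1000 - 0.05 for i in range(100)]
print(var_historical(sample, 0.95))  # ~0.045, i.e. a 4.5% loss at 95% confidence
```

The advanced methods surveyed above (filtered historical simulation, GARCH-EVT, etc.) improve on this by rescaling or modeling the tail of the return distribution instead of using the raw empirical quantile.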
{"url":"http://www.isa.ru/proceedings/index.php?option=com_content&view=article&id=1028","timestamp":"2024-11-08T14:33:57Z","content_type":"application/xhtml+xml","content_length":"26738","record_id":"<urn:uuid:4a32aa99-c5f0-470c-b722-cf7077e71f4d>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00790.warc.gz"}
|
What are Precoding and Combining Weights / Matrices in a MIMO Beamforming System
Figure: configuration of single-user digital precoder for millimeter Wave massive MIMO system
Precoding and combining are the standard ways to transmit and receive signals, respectively, in a multi-antenna (MIMO) communication system. Both the precoding and combining matrices are derived from the channel matrix. Precoding matrices are applied on the transmitter side and combining matrices on the receiver side. Together, the two matrices allow us to send multiple simultaneous data streams between the transmitter and receiver. The data streams are also mutually orthogonal, which decreases or (theoretically) cancels the interference between any two data streams.
The channel matrix is first properly diagonalized. Diagonalization is the process of transforming a matrix into an equivalent diagonal matrix, where the diagonal elements are non-zero and all other elements are zeros. Let me explain with an example of a 3×3 channel matrix:
H =
h11 h12 h13
h21 h22 h23
h31 h32 h33
(H = channel matrix)
For a typical wireless communication system,
y = h*x + n
We can now calculate the corresponding diagonal matrix for H and also assume the diagonal matrix is D.
D =
h11 0 0
0 h22 0
0 0 h33
y = D*x + n
You will now find it much simpler to retrieve all independent data streams sent from TX.
| y1 |   | h11  0    0   |   | x1 |
| y2 | = | 0    h22  0   | * | x2 | + n
| y3 |   | 0    0    h33 |   | x3 |
y1 = h11*x1 + n
y2 = h22*x2 + n
y3 = h33*x3 + n
If the combining matrix is W and the precoding matrix is F.
W*H*F = D
when the precoding matrix F is applied on the TX side and the combining matrix W on the RX side: the receiver computes W*y = W*(H*F*x + n) = D*x + W*n.
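The diagonalization described above can be obtained from the singular value decomposition: writing H = U·S·V^H and choosing the precoder F = V with the combiner W = U^H yields an effective diagonal channel. A sketch with NumPy (the random 3×3 channel is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))  # channel matrix

U, s, Vh = np.linalg.svd(H)   # H = U @ diag(s) @ Vh
F = Vh.conj().T               # precoder applied at the transmitter (F = V)
W = U.conj().T                # combiner applied at the receiver (W = U^H)

D = W @ H @ F                 # effective channel after precoding and combining
print(np.round(D.real, 6))    # diagonal matrix of the singular values s
# each received stream i is now y_i = s_i * x_i + noise, with no cross-stream term
```

Because U and V are unitary, the transformation preserves the noise statistics while splitting the MIMO channel into independent scalar subchannels.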
In an environment with many scatterers, modern wireless communication systems use spatial multiplexing to increase data flow within the system. To transmit multiple data streams over the channel, a
set of precoding and combining weights is derived from the channel matrix. Then, each data stream can be independently retrieved. Magnitude and phase terms are included in these weights, which are
frequently utilized in digital communication.
Also read about
BER vs. SNR denotes how many bits in error are received in a communication process for a particular Signal-to-noise (SNR) ratio. In most cases, SNR is measured in decibel (dB). For a typical
communication system, a signal is often affected by two types of noises 1. Additive White Gaussian Noise (AWGN) 2. Rayleigh Fading In the case of additive white Gaussian noise (AWGN), random
magnitude is added to the transmitted signal. On the other hand, Rayleigh fading (due to multipath) attenuates the different frequency components of a signal differently. A good signal-to-noise ratio
tries to mitigate the effect of noise. Calculate BER for Binary ASK Modulation The theoretical BER for binary ASK (BASK) in an AWGN channel is given by: BER = (1/2) * erfc(0.5 * sqrt(SNR_ask)); Enter
SNR (dB): Calculate BER BER vs. SNR curves for ASK, FSK, and PSK Calculate BER for Binary FSK Modulation The theoretical BER for binary FSK (BFSK) in an AWGN channel is g
Modulation Constellation Diagrams BER vs. SNR BER vs SNR for M-QAM, M-PSK, QPSk, BPSK, ... 1. What is Bit Error Rate (BER)? The abbreviation BER stands for bit error rate, which indicates how many
corrupted bits are received (after the demodulation process) compared to the total number of bits sent in a communication process. It is defined as, In mathematics, BER = (number of bits received in
error / total number of transmitted bits) On the other hand, SNR refers to the signal-to-noise power ratio. For ease of calculation, we commonly convert it to dB or decibels. 2. What is Signal the
signal-to-noise ratio (SNR)? SNR = signal power/noise power (SNR is a ratio of signal power to noise power) SNR (in dB) = 10*log(signal power / noise power) [base 10] For instance, the SNR for a
given communication system is 3dB. So, SNR (in ratio) = 10^{SNR (in dB) / 10} = 2 Therefore, in this instance, the signal power i
Signal Processing RMS Delay Spread, Excess Delay Spread, and Multipath... RMS Delay Spread, Excess Delay Spread, and Multipath (MPCs) The fundamental distinction between wireless and wired
connections is that in wireless connections signal reaches at receiver thru multipath signal propagation rather than directed transmission like co-axial cable. Wireless Communication has no set
communication path between the transmitter and the receiver. The line of sight path, also known as the LOS path, is the shortest and most direct communication link between TX and RX. The other
communication pathways are called non-line of sight (NLOS) paths. Reflection and refraction of transmitted signals with building walls, foliage, and other objects create NLOS paths. [ Read More about
LOS and NLOS Paths] Multipath Components or MPCs: The linear nature of the multipath component signals is evident. This signifies that one multipath component signal is a scalar multiple of
Wireless Signal Processing Gaussian and Rayleigh Distribution Difference between AWGN and Rayleigh Fading 1. Introduction Rayleigh fading coefficients and AWGN, or additive white gaussian noise [↗] ,
are two distinct factors that affect a wireless communication channel. In mathematics, we can express it in that way. Let's explore wireless communication under two common noise scenarios: AWGN
(Additive White Gaussian Noise) and Rayleigh fading. y = hx + n ... (i) The transmitted signal x is multiplied by the channel coefficient or channel impulse response (h) in the equation above, and
the symbol "n" stands for the white Gaussian noise that is added to the signal through any type of channel (here, it is a wireless channel or wireless medium). Due to multi-paths the channel impulse
response (h) changes. And multi-paths cause Rayleigh fading. 2. Additive White Gaussian Noise (AWGN) The mathematical effect involves adding Gauss
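A minimal Python sketch of the model y = hx + n described in equation (i) above (an illustration of mine, not from the post; the function and parameter names are assumptions): h is drawn as a Rayleigh-distributed magnitude built from two zero-mean Gaussians, and n is additive white Gaussian noise.

```python
import math
import random

def rayleigh_awgn_channel(x, noise_std=0.1, rng=random):
    """Pass one real symbol x through y = h*x + n.

    h: Rayleigh-distributed channel gain, the magnitude of a complex tap
       whose real/imaginary parts are independent N(0, 1/2) Gaussians.
    n: additive white Gaussian noise with standard deviation noise_std.
    """
    h = math.hypot(rng.gauss(0, 1 / math.sqrt(2)),
                   rng.gauss(0, 1 / math.sqrt(2)))
    n = rng.gauss(0, noise_std)
    return h * x + n

random.seed(1)
print(rayleigh_awgn_channel(1.0))
```

With `noise_std=0` this reduces to pure Rayleigh fading; with a fixed h = 1 it would reduce to a pure AWGN channel.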
MATLAB Code

% Developed by SalimWireless.Com
clc;
clear;
close all;

% Configuration parameters
fs = 10000;         % Sampling rate (Hz)
t = 0:1/fs:1-1/fs;  % Time vector creation

% Signal definition
x = sin(2 * pi * 100 * t) + cos(2 * pi * 1000 * t);

% Calculate the Fourier Transform
y = fft(x);
z = fftshift(y);

% Create frequency vector
ly = length(y);
f = (-ly/2:ly/2-1) / ly * fs;

% Calculate phase while avoiding numerical precision issues
tol = 1e-6;  % Tolerance threshold for zeroing small values
z(abs(z) < tol) = 0;
phase = angle(z);

% Plot the original signal
figure;
subplot(3, 1, 1);
plot(t, x, 'b');
xlabel('Time (s)');
ylabel('|y|');
title('Original Message Signal');
grid on;

% Plot the magnitude of the Fourier Transform
subplot(3, 1, 2);
stem(f, abs(z), 'b');
xlabel('Frequency (Hz)');
ylabel('|y|');
title('Magnitude of the Fourier Transform');
grid on;

% Plot the phase of the Fourier Transform
subplot(3, 1, 3);
stem(f,
Modulation ASK, FSK & PSK Constellation BASK (Binary ASK) Modulation: Transmits one of two signals: 0 or √Eb, where Eb is the energy per bit. These signals represent binary 0 and 1. BFSK (Binary
FSK) Modulation: Transmits one of two signals: +√Eb ( On the y-axis, the phase shift of 90 degrees with respect to the x-axis, which is also termed phase offset ) or √Eb (on x-axis), where Eb is
the energy per bit. These signals represent binary 0 and 1. BPSK (Binary PSK) Modulation: Transmits one of two signals: +√Eb or -√Eb (they differ by 180 degree phase shift), where Eb is the energy
per bit. These signals represent binary 0 and 1. This article will primarily discuss constellation diagrams, as well as what constellation diagrams tell us and the significance of constellation
diagrams. Constellation diagrams can often demonstrate how the amplitude and phase of signals or symbols differ. These two characteristics lessen the interference between t
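To make the constellation idea concrete, here is a small Python sketch (my own illustration, not from the post) mapping bits to the one-dimensional BPSK constellation points ±√Eb described above:

```python
import math

def bpsk_map(bits, Eb=1.0):
    """Map bits {0, 1} to the BPSK constellation points {-sqrt(Eb), +sqrt(Eb)}."""
    amplitude = math.sqrt(Eb)
    return [amplitude if b == 1 else -amplitude for b in bits]

print(bpsk_map([1, 0, 1]))  # [1.0, -1.0, 1.0]
```

The two symbols differ only in phase (a 180-degree shift), which is exactly what a BPSK constellation diagram shows.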
Compare the BER performance of QPSK with other modulation schemes (e.g., BPSK, 4-QAM, 16-QAM, 64-QAM, 256-QAM, etc.) under similar conditions.

MATLAB Code

clear all;
close all;

% Set parameters for QAM
snr_dB = -20:2:20;              % SNR values in dB
qam_orders = [4, 16, 64, 256];  % QAM modulation orders

% Loop through each QAM order and calculate theoretical BER
figure;
for qam_order = qam_orders
    % Calculate theoretical BER using berawgn for QAM
    ber_qam = berawgn(snr_dB, 'qam', qam_order);
    % Plot the results for QAM
    semilogy(snr_dB, ber_qam, 'o-', 'DisplayName', sprintf('%d-QAM', qam_order));
    hold on;
end

% Set parameters for QPSK
EbNoVec_qpsk = (-20:20)';             % Eb/No range for QPSK
SNRlin_qpsk = 10.^(EbNoVec_qpsk/10);  % SNR linear values for QPSK

% Calculate the theoretical BER for QPSK using the provided formula
ber_qpsk_theo = 2*qfunc(sqrt(2*SNRlin_qpsk));

% Plot the results for QPSK
semilogy(EbNoVec_qpsk, ber_qpsk_theo, 's-',
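The theoretical QPSK curve above relies on the Gaussian Q-function; a Python equivalent (an illustrative sketch, names mine) is:

```python
import math

def qfunc(x):
    """Gaussian tail probability: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def qpsk_ber_theoretical(ebno_db):
    """Same formula as the MATLAB line: 2 * Q(sqrt(2 * SNR_linear))."""
    snr_lin = 10 ** (ebno_db / 10)
    return 2 * qfunc(math.sqrt(2 * snr_lin))

print(qfunc(0))  # 0.5
print(qpsk_ber_theoretical(10) < qpsk_ber_theoretical(0))  # True
```

As expected, the predicted bit error rate falls as Eb/No increases.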
Channel Impulse Response (CIR) Wireless Signal Processing CIR, Doppler Shift & Gaussian Random Variable The Channel Impulse Response (CIR) is a concept primarily used in the field of
telecommunications and signal processing. It provides information about how a communication channel responds to an impulse signal. What is the Channel Impulse Response (CIR) ? It describes the
behavior of a communication channel in response to an impulse signal. In signal processing, an impulse signal has amplitude ∞ at time 0 and zero amplitude at all other times. Using a Dirac Delta function, we can approximate this. ...(i) δ(t) has a very intriguing characteristic: when the Fourier Transform of δ(t) is calculated, the answer is 1. As a result, δ(t) excites all frequencies equally. This is crucial since we never know which frequencies a system will affect when examining an unidentified one. Since it can test the system for all freq
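The defining property of the channel impulse response — feed the channel a unit impulse and you get the CIR back — can be sketched with a discrete convolution in Python (an illustration of mine, not from the post):

```python
def convolve(x, h):
    """Discrete convolution: y[n] = sum over k of h[k] * x[n - k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += xn * hk
    return y

# A unit impulse through a 3-tap channel returns the taps themselves:
print(convolve([1], [0.8, 0.5, 0.2]))  # [0.8, 0.5, 0.2]
```

The tap values here are arbitrary illustrative numbers; in a real multipath channel each tap would correspond to one delayed propagation path.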
personal webpage of Daniel Skodlerack - Pure Undergraduate seminar 1 Spring 24
In the Spring Undergraduate seminar the participants work through recommended books. Topics (references for questions):
1. Algebraic Topology: A. Hatcher, Algebraic Topology, Chapter 2 and 3. (Prof. Shi Wang)
2. Hodge Theory: Claire Voisin, Hodge Theory and Complex Algebraic Geometry I, Chapter 5 to 8 (Prof. Ziyuan Ding)
3. Algebraic number theory and local class field theory: J. Neukirch, Algebraic number theory, Chapter 2 to 5, J.-P. Serre, Local fields, as a related reference, (Prof. Daniel Skodlerack)
4. Smooth representation theory and local Langlands: Bushnell-Henniart, Local Langlands for GL(2) (Prof. Daniel Skodlerack)
5. Non-linear dispersive equations: J. Duoandikoetxea, Fourier Analysis, Chapter 1 to 5, Chapter 8; and T. Tao, Nonlinear dispersive equations, Chapter 1 to 3 (Prof. Haitian Yue)
The students give a regular report on their reading.
Tuesdays 6pm-7:40pm, IMS S506, starting March 5th.
1. March 5th: Report on reading on p-adic numbers, Topic 3. "Non-Archimedian valuations and completions" (Yulun Wu)
2. March 12th on Topic 1: Section 3.1.1 "Cohomology groups and the Universal Coeffiecient Theorem" (Yiyang Gong)
3. March 19th on Topic 2: Chapter 5 "Harmonic forms and Cohomology" (Yutong Li)
4. March 26th on Topic 3 (related): "Mordell's Theorem" (Jiande Zhang)
5. April 2nd on Topic 5: "Fourier transform and Sobolev spaces" (Liang Shang)
6. April 9th on Topic 3: "Extending p-adic absolute value on finite extensions of Q_p and normed vector spaces". (Yulun Wu)
7. April 16th on Topic 1: Section 3.1.2: "Cohomology of spaces" (Yiyang Gong)
8. April 23rd on Topic 2: Chapter 6 "Hodge theory in Case of Kähler manifolds" (Yutong Li)
9. April 30th on Topic 3: "Conics and p-adic numbers" (Jiande Zhang)
10. May 7th on Topic 5: "Hardy-Littlewood maximal operator and singular integrals" (Liang Shang)
11. May 14th on Topic 3 (Yulun Wu)
12. May 21st on Topic 1 "The cup product" (Yiyang Gong)
13. May 28th on Topic 2 "Singular integrals" (Liang Shang)
14. June 4th on Topic 3 (Jiande Zhang)
15. June 11th on Topic 5 (Yutong Li)
C# Program to Check Odd Number | CodeToFun
C# Program to Check Odd Number
Updated on Oct 06, 2024
By Mari Selvan
Introduction
In the realm of programming, dealing with numbers is a fundamental task. One common requirement is checking whether a given number is odd or not.
An odd number is an integer that is not exactly divisible by 2 and leaves a remainder of 1 when divided by 2.
In this tutorial, we will explore a C# program designed to check whether a given number is odd. The program involves using the modulo operator to determine if the number is not divisible by 2.
Example
Let's delve into the C# code that accomplishes this task.
using System;

class OddNumberChecker {
    // Function to check if a number is odd
    static bool IsOdd(int number) {
        // If the remainder is 1, the number is odd
        return number % 2 != 0;
    }

    // Driver program
    static void Main() {
        // Replace this value with the number you want to check
        int number = 15;

        // Call the function to check if the number is odd
        if (IsOdd(number))
            Console.WriteLine($"{number} is an odd number.");
        else
            Console.WriteLine($"{number} is not an odd number.");
    }
}
Testing the Program
To test the program with different numbers, modify the value of number in the Main method.
How the Program Works
1. The program defines a class OddNumberChecker containing a static method IsOdd that takes an integer number as input and returns true if the number is odd, and false otherwise.
2. Inside the Main method, replace the value of number with the desired number you want to check.
3. The program calls the IsOdd method and prints the result using Console.WriteLine.
Between the Given Range
Let's take a look at the C# code that checks and displays odd numbers in the specified range.
using System;

class Program {
    static void Main() {
        // Define the range
        int start = 1;
        int end = 10;

        Console.WriteLine($"Odd Numbers in the range {start} to {end}:");

        // Loop through the range and check for odd numbers
        for (int i = start; i <= end; i++) {
            if (IsOdd(i)) {
                Console.Write($"{i} ");
            }
        }
    }

    // Function to check if a number is odd
    static bool IsOdd(int number) {
        return number % 2 != 0;
    }
}
Testing the Program

Odd Numbers in the range 1 to 10:
1 3 5 7 9
The range for this program is fixed from 1 to 10. To test the program, simply compile and run it.
How the Program Works
1. The program defines a function IsOdd that checks whether a given number is odd or not.
2. In the Main method, it specifies the range from 1 to 10 and prints the header indicating the range.
3. It then iterates through the range and calls the IsOdd function to check for odd numbers.
4. If a number is odd, it is printed.
Understanding the Concept of Odd Number
Before delving into the code, let's understand the concept of odd numbers. An odd number is an integer that is not exactly divisible by 2. In other words, when an odd number is divided by 2, the
remainder is 1.
Optimizing the Program
Feel free to incorporate and modify this code as needed for your specific use case. Happy coding!
Wednesday Search Challenge (7/23/14): How are nuclear blast zones like choosing a good place to live?
California State Capital dome (2/2007)
EVERY so often I get asked really interesting questions that I'm not sure how to answer. I have to pause and think about the question, and then come up with an approach that will answer it.
Here's a question like that.
It involves trying to figure out the best place to live between different locations. You can think about it as trying to find the place that's nice and quiet--and maximally far from big city centers
(the "blast zones"). Or you can think of this question as trying to find a place that minimizes the distance you have to travel to the city centers.
Imagine it this way: Suppose you have 4 cities that you want to visit regularly. Say, the capitals of California, Washington, Montana, and Nevada. (If you want, imagine that you have franchise
businesses--or family members--in each of those locations and need to visit each place once a month. How can you minimize travel time between the locations? To simplify the problem, assume you'll
drive from place to place.)
This gives us our question for the week:
1. Can you find the place that's "in the middle" of all four cities? (The capitals given above.) That is, find the place you'd like to live that's equidistant from all four city centers. (Feel
free to ignore curvature of the Earth effects.)
Obviously, there's not a simple query that's going to solve this for you.
You'll have to step back and ask, "How CAN I solve this problem?"
Hint: You're probably going to use a map... but how?
I'll give another hint tomorrow (along with some thoughts about collections).
When you post your solution, be sure to say HOW you figured it out. (Just sticking a pin in the map roughly in the middle isn't the kind of solution I'm looking for...)
For today, enjoy thinking about this research Challenge!
Search on.
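One programmatic way to attack the Challenge (a sketch of my own, not part of the original post, and the coordinates are only approximate): treat the four capitals as points on a flat plane, per the "ignore curvature" hint, and run Weiszfeld's algorithm to find the point minimizing total straight-line distance to all four — the "center of minimum distance" idea.

```python
import math

# Approximate (lat, lon) for the four state capitals -- illustrative values
capitals = [
    (38.58, -121.49),  # Sacramento, CA
    (47.04, -122.90),  # Olympia, WA
    (46.59, -112.04),  # Helena, MT
    (39.16, -119.77),  # Carson City, NV
]

def geometric_median(points, iterations=200):
    """Weiszfeld's algorithm: the point minimizing total Euclidean distance."""
    x = sum(p[0] for p in points) / len(points)  # start from the centroid
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iterations):
        num_x = num_y = denom = 0.0
        for px, py in points:
            d = math.hypot(x - px, y - py) or 1e-12  # guard against d == 0
            num_x += px / d
            num_y += py / d
            denom += 1 / d
        x, y = num_x / denom, num_y / denom
    return x, y

lat, lon = geometric_median(capitals)
print(round(lat, 2), round(lon, 2))
```

Note this minimizes the *sum* of distances; finding the point *equidistant* from all four cities is a different criterion, since four points generally have no common equidistant point unless they lie on a circle.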
30 comments:
1. I know that there are several apps on iOS and Android that help people choose a meeting location that is in the middle so that no one has to drive longer than the other person. Began by searching
for an online tool [ travel equidistant tool ] to
Let's Meet in the Middle looks good as a nice small town with outdoor activities and equidistant from all the relatives.
I inputted the 4 capital cities. Let's Meet In The Middle focuses on finding restaurants and I thought this would be a good gauge of finding a place to move to. LMITM returned 3 results. Ed's
Fast Break Grill in Hines, OR also close to Burns, OR (pop. 2,729) and Malheur Lake.
2. Good day, Dr. Russell, fellow SearchResearchers
I was thinking about this SearchResearch Challenge and how to solve it.
My first thought was trying some of the Industrial Engineering techniques. Then I realized that a much simpler answer could be found.
This is what I did:
Searched for the capitals of California, Washington, Montana, and Nevada.
[California Capital] A. Sacramento
[Washington capital ] A. Olympia
[Montana capital] A. Helena
[Nevada capital] A. Carson City
[Find middle points between locations]
Let's Meet in the Middle
Site mentions: Finds the exact point that lies halfway between two or more places. Find your personal center of gravity--the geographical average location for all of the places you have lived in.
See the results on a Google Map.
< Google Map with location
Don't know if this is the answer Dr. Russell is looking for. It was so easy so I think it is not.
1. Can you find the place that's "in the middle" of all four cities? (The capitals given above.) That is, find the place you'd like to live that's equidistant from all four city centers. (Feel
free to ignore curvature of the Earth effects.)
A: Latitude:42.84476
If we search for Center of minimum distance:This method uses a mathematical algorithm to find the exact point that minimizes the total travel distance from all locations in 'Your Places'.
Calculation Methods
Center of minimum distance: Latitude:39.98705 Longitude:-120.05832
1. nice find Ramón on the Fortune article… thought you would find this interesting too -
UPS Orion The Traveling Salesman Problem - Pogue "browning up"
this clears it up ;-⦘
Reducibility Among Combinatorial Problems, Karp
TSP, LS
3. I think the answer to this question depends on whether or not you are a crow. If you are, you would probably want to choose Murphy, Idaho. If not, Boise would be the city to choose.
Arriving at the answer to this question was tricky. I already knew all the state capitals, so I didn't need to look those up. I first plotted the four cities on a Google map and tried drawing
intersecting lines, but that didn't help me. Then I thought about the headline you gave and thought that drawing intersecting circles would be more helpful. You can't draw circles on a Google map
with the incorporated tools, so I searched for [nuclear blast zone calculator], which brought me to http://nuclearsecrecy.com/nukemap/. Unfortunately (although fortunate for the real world), I
wasn't able to "blast" the cities in a big enough radius to see where they would overlap.
I searched [draw circles Google map] and found http://www.freemaptools.com/radius-around-point.htm, a tool that lets you draw, color, and resize multiple circles on a map and save the output.
After playing around with various distances, I found that a radius of 675 km gave me a nice little zone where all four circles met. Zooming in on the map showed that the only marked town in the
area was Murphy, Idaho, although the Idaho state capital, Boise, was just outside. The KML file I created can be downloaded here if you are interested: http://www.freemaptools.com/download/
Going back to my original map, I added Murphy and Boise to it. I used the distance calculator and the Get Directions tool, and I found that the total "crow flight" distance is slightly less from
Murphy than from Boise (about 1468 total one-way miles from Murphy as compared to 1492 from Boise), as is the driving distance (total 1881 from Murphy, 2014 from Boise). However, the total
one-way travel time from Boise to each of the cities is less than from Murphy (31 hours 13 minutes from Boise, 31 hours 31 minutes from Murphy). I assume this is because Boise is a major city and
there is easier access to the highways while Murphy is a tiny county seat with a population of 97, according to Wikipedia. Personally, I prefer a larger city, so I would probably choose Boise
anyway, but that's not what the question asked.
I look forward to seeing what other people find.
4. I remembered that I had presented on a tool that could tell you how far you could drive in a certain amount of time from a location. I was wondering if it could help me check on my previous
Searched [ map how far can I drive ] and there on the list was the tool "How Far Can I Drive" on the same site Nancy used.
I went to Google Maps and got driving directions from Sacramento to Burns, Oregon. The shortest route was 495 miles.
Back to How Far Can I Drive and input Option (1) Sacramento, CA (2b) 500 miles. Proceeded with each of the other capital cities and each overlay was added to the map.
The intersection of all four was too far off from Burns, OR, but the answer I would give with this tool is Nampa, ID.
Looking at the options made me think. Is your question about distance in miles or distance in time?
1. I have no answer yet & came to see what ideas people had. Well I came up with the same capitals as Ramón so that's a start. To add to Fred's comments I am assuming we are driving to each
location. I am assuming driving distance based on the shortest distance provided by Google Maps. I see it usually highlights the shortest distance plus route 2 other options. These appear to
be based on time & distance which has me wondering how Google maps actually calculates these numbers. I did a quick trial & error method of picking a location from the map but I suspect that
a process will do the work for me & much more accurately. I've thought how I would use triangulation/resection in terms of navigation but that doesn't deal with using existing interstates.
I've also started thinking in terms of how Google Maps gathers data to create the maps in the first place. Everything is located in relationship to other locations. The maps displays this
data visually. So how can we make the data work for us. I am sharing in hopes others may use their ideas as well to come up with an answer.
5. EDIT - The intersection of all four was NOT too far off from Burns, OR,...
6. I also looked at Nampa, but I decided I'd rather live in Boise. They are only 21 miles apart.
7. Looks like Doyle CA according to geomidpoint.com.
Hardest bit for a Limey was finding the capital cities, but Wikipedia helped out.
Tried doing directions between the diagonally opposites, which actually comes close to the suggested Doyle.
But then, would I want to live 4,275 feet above sea level in a tiny village with a mean high of 30 degrees centigrade plus in Summer, no thanks! Let's add some sense into our decision.
Richard Law
Flying Shavings
Rose Cottage
West Yorkshire
BD20 9BW
phone: 01535632182
8. Those using Meet in the Middle. Are you able to select "Route Halfway Point" and unselect "Midpoint" since we want the Route for driving purposes? I haven't been able to switch to "Route Halfway
Point" on Chrome OS.
1. Hope this isn't a double posting. First time I lost my comments using the PC which I rarely use.
I tried on a PC & discovered by testing that if I put my address & Google's address I could switch between the two modes, but adding a 3rd & 4th location only allows Midpoint Mode. When its
in Route mode you see R= Route & the M = Midpoint which can be quite different. So back to the drawing board. I like Fred's method using "How Far Can I Travel" & perhaps I'm not doing it
right. The image overlaid you provided certainly makes Nampa Id look ideal. My mileages on Google Maps are telling me another story. I'll give it another look & see what's up.
2. Riding on the shirt tails of Nancy & Fred I used Murphy Idaho as the centre marker (could have used Nampa ID as well). Then in Google Maps I used directions from Murphy to Helena Mt (mileage
500miles and then Murphy to Olympia (mileage 546 miles, Murphy to Sacramento (mileage 538) and Murphy to Carson City (mileage 423).I hadn't considered using Google Maps in this way. Have a
look http://goo.gl/0B0sKA
So we have a pretty solid case for Murphy Id but the Carson City leg about 123 miles shorter than the longest route which is significant.
3. …it's a shirttail train — for grins, modded your map for my preferred location - subjective choice, but I'd prefer the long leg to be Helena — more time to enjoy the Big Sky — I hadn't used
the click & drag feature in maps either - handy and gives good info & alts.. How does one quantify zen miles vs regular road miles?
fwiw; Boise is a nice place, but with an AFB - Mountain Home right there, it would be a strong candidate to be ground zero for a nBlastZone?? FIFI - was in Boise, 7/10-13
another factor for the data center locations in Prineville? Hard to be a green center if it is a smoking hole…
at any rate, I'm sure DrD will "like" the ID location. qualified like
could Dan be leaving Goo to start a In-N-Out Burger or Slaters
with family members in ID.?
"…imagine that you have franchise businesses--or family members-"
…weird, now I'm too hungry to continue to search… if only a drone or driverless car or autonomous robot would bring me a burger…
9. took a more self contained, intuitive approach… looked at the lat/lons for the capitols and then just estimated a mid-point…
landed me here not exactly teeming with suitable communities to move to so zoomed
out and picked a town… given that you wanted to avoid larger settlements, nixed Boise & environs and picked Baker City, OR…
then went to the wiki mini atlas and looked around the area —
considered Bend and Mount Vernon, but decided on Prineville as a suitable compromise… used Wolfram|Alpha to check the distances -
this favors Olympia and makes Helena a stretch, but still inside the fuzzy parameters you set out and the spongy preferences I have - besides, like Apple says, "it just works" and facebook seemed
"like" it too… know this is all probably too subjective & arbitrary, but this approach appealed to the inner Lewis & Clark need to roam an area - maps/tools not withstanding.
The Dalles was too close to Portland and Olympia, the river must have held some appeal for the GoogleGang?
Was encouraged to see that my guess landed me in the same general vicinity as Fred initially selected… found the ⌘F works on the mini atlas map and that helps navigate locales…
fwiw - given current events, seeing DrD headline "nuclear blast zones" or "Nu-cu-lar" anyway, it makes me fret that the Goo algorithm
that crunches such topics is waving a red ⚑ (as opposed to those on the Brooklyn Bridge)… "Don't be evil, uh, cryptic"
10. OK used PUBLISH and it Vanished my comment. So will try PREVIEW.
A couple of things noticed so far. Dan says we are driving. LMITM says using more than 2 addresses picks the midpoint.
Remember you will be driving over high mountain passes in winter and you'll be stuck behind a Winnybaggo in the summer.
Is this the Chinese Postman puzzle obfuscated by Dan ?
I hope so.
jon tU
1. Route Inspection Problem
Eulerian Cycle
glad this is simple…
2. I wasn't THAT clever, Jon. I was really just trying to find the location that's closest to each of the cities--no traveling involved! (Chinese, Postal, or otherwise.)
11. I started my search with [calculate equidistant points map]. Borrowing "equidistant" from the challenge seemed like the best phrasing/keyword for the search. The first result looked promising -
Confession, I had to look up the capitals. Just to be sure. I entered in the points and right away noticed that two options were given: calculate a halfway point or find a midpoint. Midpoint was
the only option for more than 2 locations. I entered the four cities and the map calculated a midpoint.
Helena, Mt 12:40
Sacramento, CA 8:34
Olympia, WA 8:43
Carson City, NV 6:57
STDEV 2:25
TOTAL TIME 36 h 54 m
AVG 9:13
What? How can that be the best midpoint if one route is over 12 hours and another only 7? So, I manually adjusted the point studying the suggested routes to the midpoint to see if I could find
something closer to Helena. Just by studying the map I was able to pinpoint a restaurant with: STDEV 0:45 TOTAL TIME 31 h 24 m AVG 7:51. So, each route is within an hour of the other. You are in
the car for 5 hours less overall and your average time is over an hour less.
I believe I read on the site that the midpoint is treated as a center of gravity, as if you lifted the four corners of a blanket and placed a ball in the middle and it rolled to absolute center.
From what I'm understanding of the map, not the best way to go about picking a central location. For a quick solution, I opted to study the routes between cities and then varied my "midpoints" to
calculate those formulas: standard deviation, total time traveled (adding together total time traveled for one visit to each location), and average time spent in the car.
I zeroed in on Marsing, ID and used the 'search nearby' feature to locate schools, and some commerce (a Red Box). I chose this address/area to move to: 28 1st St S, Marsing, ID 83639
1. Nicely reasoned. I like it.
2. I'm still stuck to Nampa. :)
A 1029 square foot single family home with 3 bedrooms and 1 bathroom on 1121 N Midland Blvd, Nampa, ID 83651 dists only
Olympia, WA 7:31 (516 mi)
Sacramento, CA 7:56 (532 mi)
Carson City, NV 6:26 (417 mi)
Helena, MT 7:09 (504 mi)
STDEV 0:38 (51.5 mi)
TOTAL 29 h 02 m (1969 mi)
AVG 7:15:30 (492.25 mi)
3. I've been writing "a html" instead of "a href", which is why my links end up not showing up, although they're there when I hit the Preview button.
Here's the missing link: 1121 N Midland Blvd, Nampa, ID 83651.
12. This was an interesting approach to solving the same type problem: http://datascopeanalytics.com/what-we-think/2014/03/14/finding-the-ideal-spot-in-southeast-michigan
Unfortunately, I couldn't get their map to work. Particularly of interest, the section on geometric medians. "No such formula is known."
13. First PUBLISHED was vanished again
Dan said ignore earth curvature. Wolfram Mathematica says this makes the problem Euclidean meaning just straight lines on a flat map. This is also known as TSP. Travelling Salesman Problem. I
have tried several ways of figuring this but my brain cell is not up to the task.
However I found RouteXL in which one can use a real Google map and with the 4 Capital and a mythical place in the middle it drew a shortest path route
So that's my 2 cents. Sorry we don't use pennies anymore do we.
1. …interesting tool/map/route - surprisingly, almost identical distance/time to my pick of Prineville (see above 12:15 post - "modded map" -) yours: optimal travel time 40:56 hours distance
4128.8 km (2566 miles) to 2587 miles on mine & virtually the same travel times… shows there is more than one way to skin a postal route.
…that's my 2¢… or wooden nickel (too bad it wasn't Mt. Vernon, OR) - worth about the same ≅
14. I am reexamining the elements of the problem because other than searching for another map app I haven’t figured out how I could get a result within 100 miles of each other for the four cities.
--More than two locations- Four that must be reached by car
--Not looking for the geographical average center but a physical center reached by four different routes. Map projection formulas and mapping applications don’t seem to exist for routes. We can
measure routes by distance, time or both.
--The equidistant center may be some distance from a town.Having Sacramento & Carson City very near each other & sharing interstates makes it difficult to get within 100 miles of equal distances
to the center of all four cities. I have tried rerouting Google Maps manually from Sacramento on to different highways eg. 395 but tools not cooperating & I would think Google Maps being up to
date knows the shortest routes. Perhaps it is one of those we can’t get there from here
15. I realized the area we have is a triangle more or less so roughly I found the center of the triangle. Without taking into account routes however. http://goo.gl/fGyiCV
16. On Google Earth, finding the directions [ from:Olympia, WA to:Sacramento, CA to:Carson City,NV to:Helena, MT to: Olympia, WA ] places the map center very close to their centroid (geographical
center). This is as the crow flies, though, so I guess of no real use here. (This doesn't work on Google Maps because the search box shifts the center -- and everything -- to the right.)
Just looking at the map's roads and trying to guess, it's easy to realize Nampa, Idaho is probably the more equidistant you can find. (In geometry, there's no real equidistant point between more
than three points unless they form a "cyclic polygon". In maps, this will be just as rare, just more difficult to find.) Nampa lies between 418 miles / 6h56 and 533 miles / 8h30 from those
capitals. All other apparent midpoints I've checked have larger differences between the shortest and the longest one, so in that sense are "less equidistant".
The "Center of mimimum distance" on Geo Midpoint makes you rethink what should be the ideal spot. In fact, if you visit those four points with the same average frequency, and only one of them per
travel (go tere and come back home), you might consider living in a place that minimizes total travelling radial distance, even if that means living very close from one of the locations and very
far from another one. The sum of distances from Nampa to those four cities is 1977 miles / 31h27. The sum of distances from the "Center of minimum distance" (39.9864466,-120.0727352) to the same
cities is 1767 miles / 26h52. Of course, if you live only 1h06 from a place and 13h02 from another one, I bet you wouldn't visit them equally often.
17. So I was thinking about the blast zone part of your question and how that might play out in the selection. Would a place so close to Boise, Idaho (State Capital, pop. 212,303) be a wise decision?
Looking up [ population helena, mt ] shows it only has a population 29,134. Wouldn't Boise be a more dangerous choice? What would the fallout look like?
[ nuclear blast zone map ] to Nukemap by Alex Wellerstein
1.Set the location to Boise, ID
2. Ivy King 500 kilotons
3. Airburst and radioactive fallout
Advanced options - Burst height 200 m
other effects wind speed 10 mph (after searching [ average wind speed boise idaho ]
to this Blast Zone Boise, Idaho
Maybe Nampa is OK or I'll just move in next door to remmij in Prineville.
1. I don't know Fred, Luís might be on the rational/logical train with Nampa - just have to hope the targeting is to the south and the wind is blowing that way too… more preppers in Nampa too.
comparison tool
(although, the highlights box seems backwards; the Nukemap should be part of the comparison categories.)
Driverless cars for everybody… who wants to be driverless…
neighborhood - who's going to blow up Mr. R.?
18. As Fred points out, there can be many different criteria for evaluating a particular place. I proposed just pure distance as the key metric. Others (Nancy, Luis, Rosemary, etc.) brought up "drive
time" as one criterion. Protection from a nuclear blast could be another, flying time could be another metric, region with the lowest probable exposure to fallout might be the final metric.
In all these cases, to find the "best midpoint" you have to create a "metric" to evaluate each point on the ground, and then find the location(s) that minimize that metric. In my case, the metric
was "Euclidean distance from the other cities." This gets more interesting if you have more cities to deal with--say 12 or 20. But this becomes more of an interesting math question than a
SearchResearch question. (Although as you can see, math creeps in everywhere, even in the simplest of questions. In this case when you have larger number of cities or complex metrics,
optimization models such as dynamic or linear programming would be the best way to solve the challenge. But we won't go there. Yet.)
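The "center of minimum distance" criterion mentioned earlier is the geometric median, and Weiszfeld's iteration finds it numerically. A minimal planar sketch in Python (it ignores Earth curvature, so treat it as an approximation; the function name is my own):

```python
def geometric_median(points, iters=100, eps=1e-9):
    """Weiszfeld's iteration: find the point minimizing the sum of
    Euclidean distances to the given points (planar approximation)."""
    # start from the centroid
    x = sum(p[0] for p in points) / len(points)
    y = sum(p[1] for p in points) / len(points)
    for _ in range(iters):
        num_x = num_y = denom = 0.0
        for px, py in points:
            d = ((x - px) ** 2 + (y - py) ** 2) ** 0.5
            if d < eps:          # current estimate sits on a data point
                return (px, py)
            num_x += px / d
            num_y += py / d
            denom += 1.0 / d
        x, y = num_x / denom, num_y / denom
    return (x, y)
```

Feeding in projected coordinates for the four capitals would reproduce this kind of minimum-total-distance point; for real latitude/longitude you would minimize great-circle distance instead.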
|
{"url":"https://searchresearch1.blogspot.com/2014/07/wednesday-search-challenge-72314-how.html","timestamp":"2024-11-08T20:20:19Z","content_type":"text/html","content_length":"190994","record_id":"<urn:uuid:d76e140d-8d16-4ded-b679-02e40b4a2697>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00653.warc.gz"}
|
CCEA Physics unit 2 solutions Archives - Fridge Physics
Wave speed is measured in meters per second. Wavelength is measured in meters, and frequency (the number of waves that pass per second) is measured in hertz (Hz).
In this tutorial you will learn how to calculate the speed of a wave.
The equation for this calculation is written like this:
$v = f \times \lambda$
Chilled practice question
Calculate the velocity of a wave with a wavelength of 6 m and a frequency of 50 Hz
Frozen practice question
Find the velocity of a wave which has a time period of 10 s and a wavelength of 24 m; you will need to calculate the frequency from the wave period equation first.
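Both practice questions reduce to one-line calculations; a minimal sketch in Python (the function names are my own, not from the tutorial):

```python
def wave_speed(frequency_hz, wavelength_m):
    """v = f x lambda: wave speed in metres per second."""
    return frequency_hz * wavelength_m

def frequency_from_period(period_s):
    """f = 1 / T: frequency in hertz from the wave period."""
    return 1.0 / period_s

# Chilled question: wavelength 6 m, frequency 50 Hz
print(wave_speed(50, 6))                          # 300 m/s

# Frozen question: period 10 s (so f = 0.1 Hz), wavelength 24 m
print(wave_speed(frequency_from_period(10), 24))  # about 2.4 m/s
```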
Science in context
Wave speed is measured in meters per second. Wave Speed = Frequency x Wavelength. Wavelength is measured in meters, and frequency (the number of waves that pass per second) is measured in hertz (Hz).
Millie’s Master Methods
The ability to rearrange equations is the first step to successfully solve Physics calculations. Millie’s…
Performing and mastering this routine will guarantee you maximum marks when solving Physics calculations. Calculation…
The Fridge Physics Store
Feedback to students in seconds – Voice to label thermal bluetooth technology…
Why not buy a Fridge Physics baseball cap, woollen beanie, hoodie or polo shirt, all colours and sizes available. Free delivery to anywhere in the UK!…
The size of the current is the rate of flow of charge. Electrons are negatively charged particles which transfer energy through wires as electricity.
What is Charge?
The size of the current is the rate of flow of charge. Electrons are negatively charged particles which transfer energy through wires as electricity. Charge is measured in coulombs (C). Electrons are really small, and the effect of one electron would be very difficult to measure; it is easier to measure the effect of a large number of electrons. One coulomb of charge contains about 6 × 10^18 electrons.
Charge equation
To calculate Charge we use this equation.
$Q = I \times t$
Charge demo
In this tutorial you will learn how to calculate the the charge flowing in an electrical circuit.
Chilled practice question
Calculate the charge when a current of 16 A flows for 2 minutes.
Frozen practice question
How long must a current of 26 A flow to transfer 936 kC?
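Both charge questions follow directly from Q = I t and its rearrangement; a minimal sketch in Python (the function names are my own):

```python
def charge_coulombs(current_a, time_s):
    """Q = I x t: charge in coulombs from current (A) and time (s)."""
    return current_a * time_s

def time_for_charge(charge_c, current_a):
    """Rearranged: t = Q / I, time in seconds."""
    return charge_c / current_a

# Chilled question: 16 A for 2 minutes (120 s)
print(charge_coulombs(16, 120))      # 1920 C

# Frozen question: 936 kC at 26 A
print(time_for_charge(936_000, 26))  # 36000.0 s (10 hours)
```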
Science in context
The size of the current is the rate of flow of charge.
|
{"url":"https://fridgephysics.com/tag/ccea-physics-unit-2-solutions/","timestamp":"2024-11-08T11:18:32Z","content_type":"text/html","content_length":"262132","record_id":"<urn:uuid:0169b55a-e701-49a6-899b-63a411bef046>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00557.warc.gz"}
|
Latest word problems - page 2 of 23
We will send a solution to your e-mail address. Solved problems are also published. Please enter the e-mail correctly and check that your mailbox isn't full.
Please do not submit problems from currently active math competitions, such as Mathematical Olympiads, correspondence seminars, etc.
|
{"url":"https://www.hackmath.net/en/word-math-problems?list=1&page=2","timestamp":"2024-11-06T10:46:03Z","content_type":"text/html","content_length":"36047","record_id":"<urn:uuid:b4f87197-f365-434c-b4d8-6c9e16f9e0a8>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00157.warc.gz"}
|
Millimeters Per Square Second to Centimeters Per Square Second Converter
1 millimeter per square second = 0.1 centimeters per square second
Both millimeters per square second and centimeters per square second are units of acceleration. You can convert millimeters per square second to centimeters per square second by multiplying by 0.1. Check other acceleration converters.
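The conversion is a single multiplication; a minimal sketch (the function name is illustrative):

```python
def mm_s2_to_cm_s2(value):
    """1 mm = 0.1 cm, so an acceleration in mm/s^2 is multiplied by 0.1."""
    return value * 0.1

print(mm_s2_to_cm_s2(250))  # 25.0 cm/s^2
```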
|
{"url":"https://unitconverter.io/millimiters-per-square-second/centimeters-per-square-second","timestamp":"2024-11-14T20:53:58Z","content_type":"text/html","content_length":"29980","record_id":"<urn:uuid:0c460ced-b856-414a-b5af-f9dd0e5c09a1>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00702.warc.gz"}
|
Next: MINIMUM STRONG CONNECTIVITY AUGMENTATION Up: Cuts and Connectivity Previous: MINIMUM K-EDGE CONNECTED SUBGRAPH   Index
• INSTANCE: Graph G = ⟨V, E⟩, weight function w : V × V → N.
• SOLUTION: A connectivity augmenting set E' for G, i.e., a set E' of unordered pairs of vertices from V such that G' = ⟨V, E ∪ E'⟩ is biconnected.
• MEASURE: The weight of the augmenting set, i.e., the sum of w(u, v) over all (u, v) in E'.
• Good News: Approximable within 2 [172] and [319].
• Comment: The same bound is valid also when G' must be bridge connected (edge connected) [319]. Minimum k-Connectivity Augmentation, the problem in which G' has to be k-connected (vertex or edge connected), is also approximable within 2 [310]. If the weight function satisfies the triangle inequality, the problem is approximable within 3/2 [173]. The variation in which G is planar and the augmentation must be planar is approximable within 5/3 in the unweighted case (where w(u,v)=1) [166]. See also MINIMUM K-VERTEX CONNECTED SUBGRAPH.
• Garey and Johnson: ND18
Viggo Kann
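As a small illustration of the bridge-connected variant mentioned in the comment: a graph is bridge connected exactly when it has no bridges, so any feasible augmenting set must eliminate every bridge. A Tarjan-style depth-first search finds them; the sketch below (in Python, names my own) is only this feasibility check, not the cited approximation algorithms.

```python
def find_bridges(n, edges):
    """Tarjan-style DFS bridge finding: the graph on vertices 0..n-1 is
    2-edge-connected (bridge connected) iff this returns an empty list."""
    adj = [[] for _ in range(n)]
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    disc = [-1] * n        # discovery times (-1 = unvisited)
    low = [0] * n          # low-link values
    bridges = []
    timer = 0

    def dfs(u, parent_edge):
        nonlocal timer
        disc[u] = low[u] = timer
        timer += 1
        for v, eid in adj[u]:
            if eid == parent_edge:
                continue
            if disc[v] == -1:
                dfs(v, eid)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:   # no back edge bypasses (u, v)
                    bridges.append((u, v))
            else:
                low[u] = min(low[u], disc[v])

    for s in range(n):
        if disc[s] == -1:
            dfs(s, -1)
    return bridges
```

For example, the path 0-1-2 has two bridges; closing it into a triangle removes both.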
|
{"url":"https://www.csc.kth.se/~viggo/wwwcompendium/node100.html","timestamp":"2024-11-04T10:12:38Z","content_type":"text/html","content_length":"4921","record_id":"<urn:uuid:bdf4966e-4921-47f8-9d09-cac2f0d7a204>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00090.warc.gz"}
|