"By the Usual Compactness Argument"
It's a sad truth, but the mathematics research literature is very tough going for beginners. By "beginners" I mean bright high-school students, or university students, or beginning graduate students,
or even professional mathematicians who are trained in an area different from that of the article they are trying to read.
As a high-school student, I used to go to the mathematics library at the University of Pennsylvania to look up and try to read articles in number theory. Usually I couldn't understand them
at a first reading, so I'd photocopy them and take them home to puzzle over. I remember being completely flummoxed by a paper on Bell numbers that used the "umbral calculus"; I just didn't understand
that you were supposed to move the exponents down as indices. That is, in an equation like
B_4 = (B + 1)^3
you were supposed to expand the right-hand side, getting
B^3 + 3B^2 + 3B^1 + 1
and then magically change this to
B_3 + 3B_2 + 3B_1 + 1 .
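(The trick really does work here: the Bell numbers begin 1, 1, 2, 5, 15, so with B_0 = 1, B_1 = 1, B_2 = 2, B_3 = 5 the right-hand side becomes 5 + 3·2 + 3·1 + 1 = 15, which is indeed B_4.)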
I had nobody to ask about stuff like that. Although my high-school teachers were great, they didn't know about the umbral calculus.
Things like this permeate the mathematical literature. Take compactness, for example. Compactness is a marvelous tool that lets you deduce -- usually in a non-constructive fashion -- the existence of
objects (particularly infinite ones) from the existence of finite "approximations". Formally, compactness is the property that a collection of closed sets has a nonempty intersection whenever every finite subcollection has a nonempty intersection; equivalently, every open cover has a finite subcover.
Now compactness is a topological property, so to use it, you really should say explicitly what the topological space is, and what the open and closed sets are. But mathematicians rarely, if ever, do
that. In fact, they usually don't specify anything at all about the setting; they just say "by the usual compactness argument" and move on. That's great for experts, but not so great for beginners.
I really wonder who was the very first to take this particular lazy approach to mathematical exposition. So far, the earliest reference I found was in a 1953 article by John W. Green in the Pacific
Journal of Mathematics 3 (2), 393-402. On page 400 he writes
By the usual compactness argument ([2, p.62]), there does exist a minimizing curve K.
Can anybody find an earlier occurrence of this exact phrase?
12 comments:
Just a remark: often, the REFEREE makes you take out details and say things like "by the usual compactness argument".
A colleague and I are tempted to start a "Journal of Omitted Details".
I find myself hoping that journals and papers will be superseded in the near future as the standard way of disseminating mathematics.
My feeling is that the world would be a better place if maths was treated like open-source code, and farmed using some distributed version control tool like Git.
I can think of lots of reasons not to do this immediately, but they mostly seem stupid:
Ideas won't be attributable to individual mathematicians any more.
They aren't at the moment. A paper typically arises as a collaboration between several named mathematicians, following conversations with many unnamed mathematicians. Each paper may contain zero,
one, or more ideas, not necessarily all of the same provenance. A system that allows more fine-grained contributions can only improve the situation.
Peer review will become impossible.
No, it could be exactly the same as it is at the moment. A mathematician looks at a version of a paper and offers their opinion anonymously.
Mathematicians will stop doing maths if they can't see published papers with their names on them.
I don't believe this for a minute: this isn't why people do maths. Mathematicians already do lots of things which don't result in published papers with equal (or greater) pride. Programmers
didn't stop writing programs when this happened either, and they're just as proud.
Committees won't be able to evaluate mathematicians by counting papers. That's a good thing: we never wanted that to happen anyway.
Might be worth looking for earlier papers that cite Green's ref 2 (Eine Minimumaufgabe über Eilinien, Christiaan Huygens).
I guess 1953 was a good year for the usual compactness argument; see F A Valentine, Minimal sets of visibility, Proc Amer Math Soc 4 (1953) 917-921. On page 918 we read,
Hence, by the usual compactness argument, we have $\prod_{x\in S}V(x)\ne 0$, if $S$ is not convex.
The earliest appearance in Math Reviews is in MR0155131 (27 #5071), the review by H R Pitt of William Feller, On the classical Tauberian theorems, Arch Math 14 (1963) 317-322.
It's in quotes in the review, which may mean the reviewer is quoting the author.
I submit math be dropped as a subject except for the few who might be paid for it.
Math is the modern Latin. It's useless to anything of discovery or invention and interferes with people applying themselves to other subjects that could progress the intelligence of mankind.
Math is just a language of reality, and one need not be bilingual to do cool things in science and humanity.
Down with math and up with intellectual insight and imagination.
The depths of Byers' ignorance have still not been plumbed. Who knows what gems of inane stupidity we might find?
The fact that he's writing this on a computer system that probably uses error-correcting codes, principles of information transmission due to Shannon, cryptography based on number theory, and a
variety of other mathematical techniques, is completely beyond him.
Mr Shallit.
I understand that, and many things have math as an important issue. I said it's okay for those few who get paid to use math.
Yet it's unrelated to almost everything ever done in human intellectual progress and so science.
It's been a tool, like in making a house, but just a help.
Math is very overrated as relevant to scientific innovation or revolution.
I think it should only be studied by professionals who truly need it.
Dividing things up forever is not needed for everyone else.
Mr. Byers, up to what level should math be taught to students in school?
I posted Jeff's question to MathOverflow: http://mathoverflow.net/questions/143569/first-occurrence-of-by-the-usual-compactness-argument
An answer by Benjamin Dickman cites three occurrences in 1947, the earliest being W. Ambrose, Direct sum theorem for Haar measures, Transactions of the American Mathematical Society, 61(1) (1947)
Moritz Firsching found it in German ("was aus der Kompaktheit von R in der üblichen Weise folgt", i.e. "which follows from the compactness of R in the usual way") in a paper by Urysohn and Alexandroff, Ueber Räume mit verschwindender erster Brouwerscher Zahl, from 1928.
Just enough math for life. Enough to avoid needing to upgrade for regular jobs.
Math is entirely a thing of memory, save for the few who advance some discovery.
So computers should do the memory work.
Math is the modern LATIN, a subject taught in the belief that it somehow helps one to do unrelated subjects.
It's in the way, as old Latin was in previous centuries.
It's not a thinking man's subject.
It's just memorized operations, and so mere attentiveness brings results.
Sharp-minded people do math, but it's just a coincidence.
The great modern rule of thumb is: IF a computer can do it, then it's not a thing of intellectual striving.
It's just memory application.
You really have absolutely no idea what mathematicians do, do you?
Adding excess stress terms into the momentum equations
APPROACH 2)
Inclusion of second-order velocity gradient terms by defining new UDSs. (Answer to the query title)
While Approach 1 has given results, I can confirm that the velocity profile lacked the viscous characteristics at the corners. To avoid this, the second-order terms can be solved alongside the momentum equations and removed through source terms, as in the original form shared here.
So I have defined the following User Defined Scalars to do this.
#include "udf.h"

enum { DUUDXX, DUUDYY, DVVDXX, DVVDYY }; /* indices of the new UDSs */

DEFINE_EXECUTE_ON_LOADING(define_variables, libname)
{
    Set_User_Scalar_Name(DUUDXX, "DUUDXX");
    Set_User_Scalar_Name(DUUDYY, "DUUDYY");
    Set_User_Scalar_Name(DVVDXX, "DVVDXX");
    Set_User_Scalar_Name(DVVDYY, "DVVDYY");
}
Solve those UDSs with zero flux and zero diffusivity, using the following source terms:
DEFINE_SOURCE(DUUDYY_source, c, t, dS, eqn)
{
    real source;

    /* the source vanishes when the scalar equals mu * dU/dy */
    source = C_UDSI(c, t, DUUDYY) - C_MU_L(c, t) * C_DUDY(c, t);
    dS[eqn] = 1; /* derivative of the source term with respect to the scalar */
    return source;
}
DEFINE_SOURCE(DVVDXX_source, c, t, dS, eqn)
{
    real source;

    /* the source vanishes when the scalar equals mu * dV/dx */
    source = C_UDSI(c, t, DVVDXX) - C_MU_L(c, t) * C_DVDX(c, t);
    dS[eqn] = 1; /* derivative of the source term with respect to the scalar */
    return source;
}
DEFINE_SOURCE(DVVDYY_source, c, t, dS, eqn)
{
    real source;

    /* the source vanishes when the scalar equals mu * dV/dy */
    source = C_UDSI(c, t, DVVDYY) - C_MU_L(c, t) * C_DVDY(c, t);
    dS[eqn] = 1; /* derivative of the source term with respect to the scalar */
    return source;
}
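Note that the x-momentum source below also uses the gradient of DUUDXX; its scalar source is not listed above but would presumably follow the same pattern. An illustrative sketch only (not part of the validated UDF shown here):

DEFINE_SOURCE(DUUDXX_source, c, t, dS, eqn)
{
    real source;

    /* assumed analogue of the sources above: the source vanishes when the scalar equals mu * dU/dx */
    source = C_UDSI(c, t, DUUDXX) - C_MU_L(c, t) * C_DUDX(c, t);
    dS[eqn] = 1;
    return source;
}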
Add the calculated second-order velocity gradients into the momentum equation:
DEFINE_SOURCE(momentum_x_source, c, t, dS, eqn)
{
    real source;

    /* source term for the x-momentum equation:
       remove the second-order derivative terms from the x momentum */
    source = -C_UDSI_G(c, t, DUUDXX)[0] - C_UDSI_G(c, t, DUUDYY)[1];
    dS[eqn] = 0.0; /* derivative of the source term, if needed */
    return source;
}
DEFINE_SOURCE(momentum_y_source, c, t, dS, eqn)
{
    real source;

    /* source term for the y-momentum equation:
       remove the second-order derivative terms from the y momentum */
    source = -C_UDSI_G(c, t, DVVDXX)[0] - C_UDSI_G(c, t, DVVDYY)[1];
    dS[eqn] = 0.0; /* derivative of the source term, if needed */
    return source;
}
This way I could validate my CFD solution against published results for viscoelastic fluid flow between parallel plates. Hopefully, this might help someone who has been trying to solve viscoelastic flow equations using Fluent software.
Puzzle ZWBR
At my local farmer's merchant, you can buy chicken feed for £4 per tonne, pig feed for £3 per tonne, and cattle feed for 40p per tonne. The feed can only be purchased by the tonne, and part tonnes
aren't sold.
Last week I bought some animal feed, and luckily I managed to buy exactly 100 tonnes for exactly £100. How much of each feed did I buy?
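One way to work it out: let c, p and k be the tonnes of chicken, pig and cattle feed, so that
c + p + k = 100 and 4c + 3p + 0.4k = 100.
Multiplying the cost equation by 5 and subtracting twice the tonnage equation eliminates k, leaving 18c + 13p = 300. The only whole-tonne solution with nothing negative is c = 8 and p = 12, giving k = 80: 8 tonnes of chicken feed, 12 tonnes of pig feed and 80 tonnes of cattle feed (8 + 12 + 80 = 100 tonnes, costing £32 + £36 + £32 = £100).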
Department Colloquium | Mathematics
Past Events
Pierrick Bousseau (University of Georgia)
Modular forms are complex analytic functions with striking symmetries, which play a fundamental role in number theory. In the last few decades there has been a series of astonishing predictions from theoretical physics that various basic mathematical numbers when put in a generating…
Note alternate location: 380Y
Kevin Buzzard (Imperial College)
Computer theorem provers (which know the axioms of mathematics and can check proofs) have existed for decades, but it's only recently that they have been noticed by mainstream mathematicians. Modern
work of Tao, Scholze and others has now been taught to Lean (one of these systems), and (…
Ziquan Zhuang (Johns Hopkins, Clay Math. Inst.)
Around 10 years ago, Donaldson and Sun discovered that metric limits of Ricci positive Kähler–Einstein manifolds are algebraic varieties, and their metric tangent cones also underlie some algebraic
structure. I will talk about a general algebraic geometry theory behind this phenomenon. In…
Amol Aggarwal (Columbia)
Random surfaces are central paradigms in equilibrium statistical mechanics. As these surfaces become larger, their statistical behaviors become strongly dependent on how their boundaries are pinned
down. This can lead to phase transitions, such as facet edges separating a flat region of the…
Huy Pham (Stanford)
A major goal of additive combinatorics is to understand the structures of subsets A of an abelian group G which has a small doubling K = |A+A|/|A|. Freiman's celebrated theorem first provided a
structural characterization of sets with small doubling over the integers, and subsequently Ruzsa in…
Hannah Larson (UC Berkeley)
The moduli space M_g of genus g curves (or Riemann surfaces) is a central object of study in algebraic geometry. Its cohomology is important in many fields. For example, the cohomology of M_g is the
same as the cohomology of the mapping class group, and is also related to spaces of modular forms…
Gavril Farkas (Humboldt-Universität zu Berlin)
Determining the structure of the equations of an algebraic curve in its canonical embedding (given by its holomorphic forms) has been a central question in algebraic geometry from the beginning of
the subject. In 1984 Mark Green put forward a very elegant conjecture linking the complexity of the…
Cole Graham (Brown)
The world teems with examples of invasion, in which one steady state spatially invades another. Invasion can even display a universal character: fine details recur in seemingly unrelated systems.
Reaction-diffusion equations provide a mathematical framework for these phenomena. In this talk…
Svitlana Mayboroda (ETH Zurich and University of Minnesota)
Harmonic measure is the probability that a Brownian traveler starting from the center of the domain exists through a particular portion of the boundary. It is a fundamental concept at the
intersection of PDEs, probability, harmonic analysis, and geometric measure theory,…
In the study of fluid dynamics, turbulence poses a significant challenge in predicting fluid behavior, and it remains a mystery for mathematicians and physicists alike. Recently, there has been some
exciting progress in our understanding of ideal turbulence: starting from Onsager’s theorem…
Conditional Probability from a Two-Way Table
Question Video: Conditional Probability from a Two-Way Table Mathematics • Third Year of Secondary School
The table below contains data from a survey of core gamers who were asked whether their preferred gaming platform was the smartphone, the console, or the PC. The gamers are split by gender. Find the
probability that a core gamer chosen at random prefers using a console. Give your answer to three decimal places. Given that a core gamer prefers to play using a console, find the probability that
they are male. Give your answer to three decimal places.
Video Transcript
The table below contains data from a survey of core gamers who were asked whether their preferred gaming platform was the smartphone, the console, or the PC. The gamers are split by gender. Find the
probability that a core gamer chosen at random prefers using a console. Give your answer to three decimal places. Given that a core gamer prefers to play using a console, find the probability that
they are male. Give your answer to three decimal places.
Now, firstly, we recall that if we’re trying to find the probability of an event occurring, we divide the number of ways that event can occur by the total number of outcomes. And the first part of
this question asks us to find the probability that a gamer chosen at random prefers to use a console. Now, they don’t specify whether we’re interested in male or female gamers. So, in fact, we’re
going to calculate the totals.
We begin by calculating the total number of gamers who prefer to use a smartphone. That’s 52 plus 48, which is 100. Similarly, to calculate the total number of gamers who preferred the console, we
add 37 and 23, to give us 60. Finally, the total number of gamers who prefer to use the PC is 48 plus 35, which is 83. The total number of gamers questioned is found by adding all of the values in
this column. That’s 100 plus 60 plus 83, which is 243.
Now, remember, we’re looking to find the probability that the gamer chosen at random prefers to use a console. So that’s this second row. The total number of outcomes or the total number of gamers
here we calculated to be 243. So the probability that a core gamer chosen at random prefers to use a console is 60 divided by 243, which is 0.2469 and so on. That’s 0.247.
The second part of this question states that given that a core gamer prefers to play using a console, find the probability that they are male. This phrase “given that” is an indication that we’re
going to use conditional probability. If we let event A be the event that the gamer chosen is male and event B be the event that they prefer to use a console, we use the bar notation to show that
we’re trying to find the probability of A occurring given that B has occurred. And what this does is narrow down the data somewhat.
We’re told that the gamer prefers to play using a console. So we can narrow our data down into just those people who prefer to play using a console. And we want to find the probability that they are
male. So we’re going to divide the number of male gamers who said they preferred using a console by the total number of gamers who said they preferred using a console. That’s 37 divided by 60. That’s
0.61666 and so on, which correct to three decimal places is 0.617. So the probability that a core gamer is male given that they prefer to play using a console is 0.617.
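Summarised as ratios, using the totals worked out above: P(console) = 60/243 ≈ 0.247, and P(male | console) = P(male and console) / P(console) = (37/243) / (60/243) = 37/60 ≈ 0.617.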
Intercomparison and improvement of two-stream shortwave radiative transfer schemes in Earth system models for a unified treatment of cryospheric surfaces
© Author(s) 2019. This work is distributed under the Creative Commons Attribution 4.0 License.
Snow is an important climate regulator because it greatly increases the surface albedo of middle and high latitudes of the Earth. Earth system models (ESMs) often adopt two-stream approximations with
different radiative transfer techniques, so the same snow has different solar radiative properties depending on whether it is on land or on sea ice. Here we intercompare three two-stream
algorithms widely used in snow models, improve their predictions at large zenith angles, and introduce a hybrid model suitable for all cryospheric surfaces in ESMs. The algorithms are those employed
by the SNow ICe and Aerosol Radiative (SNICAR) module used in land models, dEdd–AD used in Icepack, the column physics used in the Los Alamos sea ice model CICE and MPAS-Seaice, and a two-stream
discrete-ordinate (2SD) model. Compared with a 16-stream benchmark model, the errors in snow visible albedo for a direct-incident beam from all three two-stream models are small ($<\pm 0.005$) and increase as snow shallows, especially for aged snow. The errors in direct near-infrared (near-IR) albedo are small ($<\pm 0.005$) for solar zenith angles θ < 75^∘, and increase as θ increases. For diffuse incidence under cloudy skies, dEdd–AD produces the most accurate snow albedo for both visible and near-IR ($<\pm 0.0002$) with the lowest underestimate (−0.01) for
melting thin snow. SNICAR performs similarly to dEdd–AD for visible albedos, with a slightly larger underestimate (−0.02), while it overestimates the near-IR albedo by an order of magnitude more (up
to 0.04). 2SD overestimates both visible and near-IR albedo by up to 0.03. We develop a new parameterization that adjusts the underestimated direct near-IR albedo and overestimated direct near-IR
heating persistent across all two-stream models for θ>75^∘. These results are incorporated in a hybrid model SNICAR-AD, which can now serve as a unified solar radiative transfer model for snow in ESM
land, land ice, and sea ice components.
Received: 25 Jan 2019 – Discussion started: 20 Feb 2019 – Revised: 15 Jul 2019 – Accepted: 17 Jul 2019 – Published: 06 Sep 2019
Snow cover on land, land ice, and sea ice, modulates the surface energy balance of middle and high latitudes of the Earth, principally because even a thin layer of snow can greatly increase the
surface albedo. Integrated over the solar spectrum, the broadband albedo of opaque snow ranges from 0.7 to 0.9 (e.g., Wiscombe and Warren, 1980; Dang et al., 2015). In contrast, the albedo of other
natural surfaces is smaller: 0.2, 0.25, and 0.5–0.7 for damp soil, grassland, and bare multi-year sea ice, respectively (Perovich, 1996; Liang et al., 2002; Brandt et al., 2005; Bøggild et
al., 2010). The accumulation, evolution, and depletion of snow cover thus modify the seasonal cycle of surface albedo globally. In particular, snow over sea ice absorbs more solar energy and begins
to melt in the spring, which forms melt ponds that bring the sea ice albedo to as low as 0.15 to further accelerate ice melt (Light et al., 2008, 2015). An accurate simulation of the shortwave
radiative properties of snowpack is therefore crucial for spectrally partitioning solar energy and representing snow–albedo feedbacks across the Earth system. Unfortunately, computational demands and
coupling architectures often constrain representation of snowpack radiative processes in Earth system models (ESMs; please refer to Table 1 for all abbreviations used in this work) to relatively
crude approximations such as two-stream methods (Wiscombe and Warren, 1980; Toon et al., 1989). In this work, we intercompare two-stream methods widely used in snow models and then introduce a new
parameterization that significantly reduces their snowpack reflectance and heating biases at large zenith angles, to produce more realistic behavior in polar regions.
Snow albedo is determined by many factors including the snow grain radius, the solar zenith angle, cloud transmittance, light-absorbing particles, and the albedo of underlying ground if snow is
optically thin (Wiscombe and Warren, 1980; Warren and Wiscombe, 1980); it also varies strongly with wavelength since the ice absorption coefficient varies by 7 orders of magnitude across the solar
spectrum (Warren and Brandt, 2008). At visible wavelengths (0.2–0.7µm), ice is almost nonabsorptive such that the absorption of visible energy by snowpack is mostly due to the light-absorbing
particles (e.g., black carbon, organic carbon, mineral dust) that were incorporated during ice nucleation in clouds, scavenged during precipitation, or slowly sedimented from the atmosphere by
gravity (Warren and Wiscombe, 1980, 1985; Doherty et al., 2010, 2014, 2016; Wang et al., 2013; Dang and Hegg, 2014). As snow becomes shallower, visible photons are more likely to penetrate through
snowpack and get absorbed by darker underlying ground. At near-infrared (near-IR) wavelengths (0.7–5µm), ice is much more absorptive, so that the snow near-IR albedo is lower than the visible
albedo. Larger ice crystals form a lower albedo surface than smaller ice crystals; hence aged snowpacks absorb more solar energy. Photons incident at smaller solar zenith angles are more likely to
penetrate deeper vertically and be scattered in the snowpack until being absorbed by the ice, the underlying ground, or absorbing impurities, which also leads to a smaller snow albedo. To compute the
reflected solar flux, spectrally resolved albedo must be weighted by the incident solar flux, which is mostly determined by solar zenith angle, cloud cover and transmittance, and column water vapor.
Modeling the solar properties of snowpacks must consider the spectral signatures of these atmospheric properties.
Several parameterizations have been developed to compute the snow solar properties without solving the radiative transfer equations and some are incorporated into ESMs or regional models. Marshall
and Warren (1987) and Marshall (1989) parameterized snow albedo in both visible and near-IR bands as functions of snow grain size, solar zenith angle, cloud transmittance, snow depth, underlying
surface albedo, and black carbon content. Marshall and Oglesby (1994) used this in an ESM. Gardner and Sharp (2010) computed the all-wave snow albedo with similar inputs. This was incorporated into
the regional climate model RACMO (https://www.projects.science.uu.nl/iceclimate/models/racmo.php, last access: 22 July 2019) to simulate snow albedo in glaciered regions like Antarctica and Greenland
(Kuipers Munneke et al., 2011). Dang et al. (2015) parameterized snow albedo as a function of snow grain radius, black carbon content, and dust content for visible and near-IR bands and 14 narrower
bands used in the Rapid Radiative Transfer Model (RRTM; Mlawer and Clough, 1997). Their algorithm can also be expanded to different solar zenith angles using the zenith angle parameterization
developed by Marshall and Warren (1987). Aoki et al. (2011) developed a more complex model based on the offline snow albedo and a transmittance look-up table. This can be applied to multilayer
snowpack to compute the snow albedo and the solar heating profiles as functions of snow grain size, black carbon and dust content, snow temperature, and snowmelt water equivalent. These
parameterizations are often in the form of simplified polynomial equations, which are especially suitable to long-term ESM simulations that require less time-consuming snow representations.
More complex models that explicitly solve the multiple-scattering radiative transfer equations have also been developed to compute snow solar properties. Flanner and Zender (2005) developed the SNow
Ice and Aerosol Radiation model (SNICAR) that utilizes two-stream approximations (Wiscombe and Warren, 1980; Toon et al., 1989) to predict heating and reflectance for a multilayer snowpack. They
implemented SNICAR in the Community Land Model (CLM) to predict snow albedo and vertically resolved solar absorption for snow-covered surfaces. Before SNICAR, CLM prescribed snow albedo and confined
all solar absorption to the top snow layer (Flanner and Zender, 2005). Over the past decades, updates and new features have been added to SNICAR to consider more processes such as black carbon–ice
mixing states (Flanner et al., 2012) and snow grain shape (He et al., 2018b). Concurrent with the development of SNICAR, Briegleb and Light (2007) improved the treatment of sea ice solar radiative
calculations in the Community Climate System Model (CCSM). They implemented a different two-stream scheme with delta-Eddington approximation and the adding–doubling technique (hereafter, dEdd–AD)
that allows CCSM to compute bare, ponded, and snow-covered sea ice albedo and solar absorption profiles of multilayer sea ice. Before these improvements, the sea ice albedo was computed based on
surface temperature, snow thickness, and sea ice thickness using averaged sea ice and snow albedo. dEdd–AD has been adopted by the sea ice physics library Icepack (https://github.com/CICE-Consortium/
Icepack/wiki, last access: 22 July 2019), which is used by the Los Alamos sea ice model CICE (Hunke et al., 2010) and Model for Prediction Across Scales Sea Ice (MPAS-Seaice; Turner et al., 2019).
CICE itself is used in numerous global and regional models.
SNICAR and dEdd–AD solve the multiple-scattering radiative transfer equations and provide much improved solar radiative representations for the cryosphere, though their separate development and
implementation created an artificial divide for snow simulation. In ESMs that utilize both SNICAR and dEdd–AD, such as the Community Earth System Model (CESM, http://www.cesm.ucar.edu/, last access:
22 July 2019) and the Energy Exascale Earth System Model (E3SM, previously known as ACME, https://e3sm.org/, last access: 22 July 2019), the solar radiative properties of snow on land and snow on sea
ice are computed separately via SNICAR and dEdd–AD. As a result, the same snow in nature has different solar radiative properties such as reflectance depending on which model represents it. These
differences are model artifacts that should be eliminated so that snow has consistent properties across the Earth system.
In this paper, we evaluate the accuracy and biases of three two-stream models listed in Table 2, including the algorithms used in SNICAR and dEdd–AD, for representing reflectance and heating. In
Sects. 2–4, we describe the radiative transfer algorithms and calculations performed in this work. The results and model intercomparisons are discussed in Sect. 5. In Sect. 6, we introduce a
parameterization to reduce the simulated albedo and heating bias for solar zenith angles larger than 75^∘. In Sect. 7, we summarize the major differences of algorithm implementations between SNICAR
and dEdd–AD in ESMs. We use these results to develop and justify a unified surface shortwave radiative transfer method for all Earth system model components in the cryosphere, presented in Sect. 8.
2 Radiative transfer model
In this section, we summarize the three two-stream models and the benchmark DISORT model with 16 streams. These algorithms are well documented in papers by Toon et al. (1989), Briegleb and Light
(2007), Jin and Stamnes (1994), and Stamnes et al. (1988). Readers interested in detailed mathematical derivations should refer to those papers. We only include their key equations to illustrate the
difference among two-stream models for discussion purposes.
2.1 SNICAR in land models CLM and ELM
SNICAR is implemented as the default snow shortwave radiative transfer scheme in CLM and the E3SM land model (ELM). It adopts the two-stream algorithms and the rapid solver developed by Toon et al.
(1989) to compute the solar properties of multilayer snowpacks. These two-stream algorithms are derived from the general equation of radiative transfer in a plane-parallel media:
$\mu \frac{\partial I}{\partial \tau}(\tau, \mu, \Phi) = I(\tau, \mu, \Phi) - \frac{\varpi}{4\pi}\int_{0}^{2\pi}\int_{-1}^{1} P(\mu, \mu', \varphi, \varphi')\, I(\tau, \mu', \Phi')\, \mathrm{d}\mu'\, \mathrm{d}\varphi' - S(\tau, \mu, \Phi)$   (1)
where Φ is azimuth angle, μ is the cosine of the zenith angle, and ϖ is single-scattering albedo. On the right-hand side, the three terms are intensity at optical depth τ, internal source term due to
multiple scattering, and external source term S. For a purely external source at solar wavelengths S is
$S = \frac{\varpi}{4} F_{\mathrm{s}}\, P(\mu, -\mu_0, \varphi, \varphi_0)\, \exp\left(\frac{-\tau}{\mu_0}\right)$   (2)
where πF[s] is incident solar flux, and μ[0] is the incident direction of the solar beam. Integrating Eq. (1) over azimuth and zenith angles yields the general solution of two-stream approximations
(Meador and Weaver, 1980). The upward and downward fluxes at optical depth τ of layer n can be represented as
$F_n^{+} = k_{1n}\exp(\Lambda_n \tau) + \Gamma_n k_{2n}\exp(-\Lambda_n \tau) + C_n^{+}(\tau)$   (3a)
$F_n^{-} = \Gamma_n k_{1n}\exp(\Lambda_n \tau) + k_{2n}\exp(-\Lambda_n \tau) + C_n^{-}(\tau)$   (3b)
where Λ_n, Γ_n, and C_n are known coefficients determined by the two-stream method, incident solar flux, and solar zenith angle; whereas k_{1n} and k_{2n} are unknown coefficients determined by the boundary conditions. For an N-layer snowpack, the solutions for upward and downward fluxes are coupled at layer interfaces to generate 2N equations with 2N unknown coefficients k_{1n} and k_{2n}.
Combining these equations linearly generates a new set of equations with terms in tri-diagonal form that enables the application of a fast tri-diagonal matrix solver. With the solved coefficients,
the upward and downward fluxes are computed at different optical depths (Eqs. 3a and 3b) and eventually the reflectance, transmittance, and absorption profiles of solar flux for any multilayer snowpack.
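For orientation, the tri-diagonal solve itself is the standard forward-elimination and back-substitution recurrence (written here with generic symbols rather than the variable names of Toon et al.): for a system $a_i y_{i-1} + b_i y_i + c_i y_{i+1} = d_i$, $i = 1, \dots, 2N$, one computes $c'_1 = c_1/b_1$, $d'_1 = d_1/b_1$, $c'_i = \frac{c_i}{b_i - a_i c'_{i-1}}$, $d'_i = \frac{d_i - a_i d'_{i-1}}{b_i - a_i c'_{i-1}}$, and then $y_{2N} = d'_{2N}$, $y_i = d'_i - c'_i y_{i+1}$, which costs only O(N) operations per spectral band.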
SNICAR itself implements all three two-stream algorithms in Toon et al. (1989): Eddington, quadrature, and hemispheric mean. In practical simulations, it utilizes the Eddington and hemispheric-mean
approximations to compute the visible and near-IR snow properties, respectively (Flanner et al., 2007). In addition to its algorithms, SNICAR implements the delta transform of the fundamental input
variable asymmetry factor (g), single-scattering albedo (ϖ), and optical depth (τ) to account for the strong forward scattering in snow (Eqs. 2a–2c, Wiscombe and Warren, 1980).
2.2 dEdd–AD in sea ice models Icepack, CICE, and MPAS-Seaice
Icepack, CICE, and MPAS-Seaice use the same shortwave radiative scheme dEdd–AD developed and documented by Briegleb and Light (2007). Sea ice is divided into multiple layers to first compute the
single-layer reflectance and transmittance using two-stream delta-Eddington solutions to account for the multiple scattering of light within each layer (Equation set 50, Briegleb and Light, 2007),
where the name “delta” implies dEdd–AD implements the delta transform to account for the strong forward scattering of snow and sea ice (Eqs. 2a–2c, Wiscombe and Warren, 1980). The single-layer direct
albedo and transmittance are computed by equations
$R(\mu_{0,n}) = A_n \exp\left(\frac{-\tau}{\mu_{0,n}}\right) + B_n\left(\exp(\epsilon_n \tau) - \exp(-\epsilon_n \tau)\right) - K_n$   (4a)
$T(\mu_{0,n}) = E_n + H_n\left(\exp(\epsilon_n \tau) - \exp(-\epsilon_n \tau)\right)\exp\left(\frac{-\tau}{\mu_{0,n}}\right)$   (4b)
where coefficients A_n, B_n, K_n, E_n, H_n, and ε_n are determined by the single-scattering albedo (ϖ), asymmetry factor (g), optical depth (τ), and angle of the incident beam at layer n (μ_{0,n}). Following the delta-Eddington assumption, simple formulas are available for the single-layer reflectance and transmittance under both clear sky (direct flux, Eqs. 4a and 4b) and overcast sky
(diffuse flux) conditions. However, the formula derived by applying diffuse-flux upper boundary conditions sometimes yields negative albedos (Wiscombe, 1977). To avoid the unphysical values, diffuse
reflectance $\overline{R}$ and transmittance $\overline{T}$ of a single layer are computed by integrating the direct reflectance R(μ) and transmittance T(μ) over the incident
hemisphere assuming isotropic incidence:
$\overline{R} = 2\int_{0}^{1} \mu\, R(\mu)\, \mathrm{d}\mu$   (5a)
$\overline{T} = 2\int_{0}^{1} \mu\, T(\mu)\, \mathrm{d}\mu$   (5b)
This is the same as the method proposed by Wiscombe and Warren (1980, their Eq. 5). In practice, eight Gaussian angles are implemented to perform the integration for every layer.
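Written as the quadrature actually carried out (the eight-node count follows the text; $\mu_j$ and $w_j$ denote Gauss points and weights on the interval (0, 1)), the hemispheric averages become $\overline{R} \approx 2\sum_{j=1}^{8} w_j\, \mu_j\, R(\mu_j)$ and $\overline{T} \approx 2\sum_{j=1}^{8} w_j\, \mu_j\, T(\mu_j)$.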
The computed single-layer reflectance and transmittance of direct and diffuse components are then combined to account for the interlayer scattering of light to compute the reflectance and
transmission at every interface (Equation set 51, Briegleb and Light, 2007), and eventually the upward and downward fluxes (Equation set 52, Briegleb and Light, 2007). These upward and downward
fluxes at each optical depth are then used to compute the column reflectance and transmittance, and the absorption profiles for any multilayered media, such as snowpacks on land and sea ice.
In nature, a large fraction of sea ice is covered by snow during winter. As snow melts away in late spring and summer, it exposes bare ice, and melt ponds form on the ice surface. Such variation in
sea ice surface types requires the shortwave radiative transfer model to be flexible and capable of capturing the light refraction and reflection. Refractive boundaries exist where air (refractive
index m_re = 1.0), snow (assuming snow as a medium of air containing a collection of ice particles, m_re = 1.0), pond (assuming pure water, m_re = 1.33), and ice (assuming pure ice, m_re = 1.31) are
present in the same sea ice column. The general solution of delta-Eddington and the two-stream algorithms used in SNICAR are not applicable to such nonuniformly refractive layered media. To include
the effects of refraction, Briegleb and Light (2007) modified the adding formula at the refractive boundaries (i.e., interfaces between air and ice, snow and ice, and air and pond). The reflectance
and transmittance of the adjacent layers above and below the refractive boundary are combined with modifications to include the Fresnel reflection and refraction of direct and diffuse fluxes
(Sect. 4.1, Briegleb and Light, 2007). dEdd–AD can thus be applied to any layered media with either uniform (e.g., snow on land) or nonuniform (e.g., snow on sea ice) refractive indexes.
In this paper, we apply dEdd–AD to snowpacks that can be treated as uniform refractive media such as the land snow columns assumed in SNICAR for model evaluation. An ideal radiative treatment for
snow should, however, keep the potential to include refraction for further applications to snow on sea ice or ice sheets. Therefore, in addition to these two widely used algorithms in Icepack and
SNICAR, we evaluate a third algorithm (Sect. 2.3) that can be applied to layered media with either uniform or nonuniform refractive indexes.
2.3 Two-stream discrete-ordinate algorithm (2SD)
A refractive boundary also exists between the atmosphere and the ocean, and models have been developed to solve the radiative transfer problems in the atmosphere–ocean system using the
discrete-ordinate technique (e.g., Jin and Stamnes, 1994; Lee and Liou, 2007). Similar to the two-stream algorithms of Toon et al. (1989) used in SNICAR, Jin and Stamnes (1994) also developed their
algorithm from the general equation
$\mu \frac{\partial I}{\partial \tau}(\tau, \mu) = I(\tau, \mu) - \frac{\varpi}{4\pi}\int_{-1}^{1} P(\tau, \mu, \mu')\, I(\tau, \mu')\, \mathrm{d}\mu' - S(\tau, \mu)$   (6)
Equation (6) is the azimuthally integrated version of Eq. (1). However, for vertically inhomogeneous media like the atmosphere–ocean or sea ice, the external source term S(τ,μ) is different.
Specifically, for the medium of total optical depth τ^a above the refractive interface, one must consider the contribution from the upward beam reflected at the refractive boundary (second term on
the right-hand side):
$S^{\mathrm{a}}(\tau, \mu) = \frac{\varpi}{4\pi} F_{\mathrm{s}}\, P(\tau, -\mu_0, \mu)\exp\left(\frac{-\tau}{\mu_0}\right) + \frac{\varpi}{4\pi} F_{\mathrm{s}}\, R(-\mu_0, m)\, P(\tau, +\mu_0, \mu)\exp\left(\frac{-(2\tau^{\mathrm{a}} - \tau)}{\mu_0}\right)$   (7)
where $R(-\mu_0, m)$ is the Fresnel reflectance of radiation and m is the ratio of the refractive indices of the lower to the upper medium. For the medium below the refractive interface, one must account for the Fresnel transmittance $T(-\mu_0, m)$ and modify the angle of beam travel in medium b:
$S^{\mathrm{b}}(\tau, \mu) = \frac{\varpi}{4\pi}\frac{\mu_0}{\mu_{0n}} F_{\mathrm{s}}\, T(-\mu_0, m)\, P(\tau, -\mu_0, \mu)\exp\left(\frac{-\tau^{\mathrm{a}}}{\mu_0}\right)\exp\left(\frac{-(\tau - \tau^{\mathrm{a}})}{\mu_{0n}}\right)$   (8)
where μ_{0n} is the cosine zenith angle of the refracted beam incident at angle μ_0 above the refractive boundary, given by Snell's law:
$\mu_{0n} = \sqrt{1 - (1 - \mu_0^2)/m^2}$   (9)
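As a numerical illustration (values chosen here for concreteness, not quoted from the text): for pure ice with m = 1.31 and a beam incident at 60^∘ (μ_0 = 0.5), Eq. (9) gives $\mu_{0n} = \sqrt{1 - 0.75/1.31^2} \approx 0.75$, i.e., the refracted beam travels at roughly 41^∘ from the vertical.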
For uniformly refractive media like snow on land, one can just set the refractive index m_re equal to 1 for every layer. In this case, the Fresnel reflectance $R(-\mu_0, m)$ is 0 in Eq. (7), the Fresnel transmittance $T(-\mu_0, m)$ is 1 in Eq. (8), and μ_{0n} equals μ_0: the two source terms S^a(τ, μ) and S^b(τ, μ) become the same and equal the source term of homogeneous media given in Eq. (2).
For two-stream approximations of this method, analytical solutions of upward and downward fluxes are coupled at each layer interface to generate 2N equations with 2N unknown coefficients for any N
-layer stratified column. The solutions of two-stream algorithms and boundary conditions for homogenous media are well documented (Sect. 8.4 and 8.10 of Thomas and Stamnes, 1999). Despite the extra
source terms, these 2N equations can also be organized into a tri-diagonal matrix similar to the method of Toon et al. (1989) used in SNICAR. Flexibility and speed therefore make this two-stream
discrete-ordinate algorithm (hereafter, 2SD) a potentially good candidate for long-term Earth system modeling. In this work, we only apply 2SD to the snowpack and note that it can be applied to any
uniformly or nonuniformly refractive media like snow on land or sea ice, with the delta transform implemented for fundamental optical variables (Eqs. 2a–2c, Wiscombe and Warren, 1980).
2.4 16-stream DISORT
In addition to the mathematical technique, the accuracy and speed of radiative transfer algorithms depend on the number of angles used for flux estimation in the upward and downward hemispheres.
SNICAR, dEdd–AD, and 2SD use one angle to represent upward flux and one angle to represent downward flux; hence they are named the two-stream algorithm. Lee and Liou (2007) use two upward and two
downward streams. Jin and Stamnes (1994) documented the solutions for any even number of streams. The computational efficiency of these models is lower than that of two-stream models while their
accuracy is better. To quantify the accuracy of the three two-stream algorithms for snow shortwave simulations, we use the 16-stream DIScrete-Ordinate Radiative Transfer model (DISORT) as the
benchmark model (http://lllab.phy.stevens.edu/disort/, last access: ) (Stamnes et al., 1988).
3 Input for radiative transfer models
In this work, we focus on the performance of two-stream algorithms for pure snow simulations. The inputs for these three models are the same: single-scattering properties (SSPs, i.e.,
single-scattering albedo ϖ, asymmetry factor g, extinction coefficient σ[ext]) of snow determined by snow grain radius r, snow depth, solar zenith angle θ, solar incident flux, and the albedo of
underlying ground (assuming Lambertian reflectance of 0.25 for all wavelengths). A delta transform is applied to fundamental input optical variables for all simulations (Eqs. 2a–2c, Wiscombe and
Warren, 1980).
In snow, photon scattering occurs at the air–ice interface, and the absorption of photons occurs within the ice crystal. The most important factor that determines snow shortwave properties is the
ratio of total surface area to total mass of snow grains, also known as “the specific surface area” (e.g., Matzl and Schneebeli, 2006, 2010). The specific surface area (β) can be converted to a
radiatively effective snow grain radius r:
$\beta = 3/(r\, \rho_{\mathrm{ice}})$   (10)
where ρ_ice is the density of pure ice, 917 kg m^−3. Assuming the grains are spherical, the SSPs of snow can thus be computed using Mie theory (Wiscombe, 1980) and ice optical constants (Warren and
Brandt, 2008). In nature, snow grains are not spherical, and many studies have been carried out to quantify the accuracy of such spherical representations (Grenfell and Warren, 1999; Neshyba et
al., 2003; Grenfell et al., 2005). In recent years, more research has been done to evaluate the impact of grain shape on snow shortwave properties (Dang et al., 2016; He et al., 2017, 2018a, b), and
they show that nonspherical snow grain shapes mainly alter the asymmetry factor. Dang et al. (2016) also point out that the solar properties of a snowpack consisting of nonspherical ice grains can be
mimicked by a snowpack consisting of spherical grains with a smaller grain size by factors up to 2.4. In this work, we still assume the snow grains are spherical, and this assumption does not
qualitatively alter our evaluation of the radiative transfer algorithms.
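For orientation (a back-of-the-envelope check, not values quoted in the text), Eq. (10) gives $\beta = 3/\left[(100\times10^{-6}\ \mathrm{m})(917\ \mathrm{kg\,m^{-3}})\right] \approx 33\ \mathrm{m^2\,kg^{-1}}$ for a fresh-snow radius of 100 µm, and about 3.3 m^2 kg^−1 for an aged-snow radius of 1000 µm.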
The input SSPs of snow grains are computed using Mie theory at a fine spectral resolution for a wide range of ice effective radius r from 10 to 3000µm that covers the possible range of grain radius
for snow on Earth (Flanner et al., 2007). The same spectral SSPs were also used to derive the band-averaged SSPs of snow used in SNICAR. Note Briegleb and Light (2007) refer to SSPs as inherent
optical properties.
4 Solar spectra used for the spectral integrations
In climate modeling, snow albedo computation at a fine spectral resolution is expensive and unnecessary. Instead of computing spectrally resolved snow albedo, wider-band solar properties are more
practical. For example, CESM and E3SM aggregate the narrow RRTMG bands used for the atmospheric radiative transfer simulation into visible (0.2–0.7µm) and near-IR (0.7–5µm) bands. The land model
and sea ice model thus receive visible and near-IR fluxes as the upper boundary condition, and return the corresponding visible and near-IR albedos to the atmosphere model. In practice, these bands
are also partitioned into direct and diffuse components. Therefore, a practical two-stream algorithm should be able to simulate the direct visible, diffuse visible, direct near-IR, and diffuse
near-IR albedos and absorptions of snow accurately.
The band albedo α is an irradiance-weighted average of the spectral albedo α(λ):
$\alpha = \frac{\int_{\lambda_1}^{\lambda_2}\alpha(\lambda)\, F(\lambda)\, \mathrm{d}\lambda}{\int_{\lambda_1}^{\lambda_2} F(\lambda)\, \mathrm{d}\lambda}$   (11)
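On a discretized spectral grid this integral is evaluated as a weighted sum (a standard implementation detail assumed here rather than stated in the text), e.g. for the visible band $\alpha_{\mathrm{vis}} \approx \frac{\sum_i \alpha(\lambda_i)\, F(\lambda_i)\, \Delta\lambda_i}{\sum_i F(\lambda_i)\, \Delta\lambda_i}$ with 0.2 µm ≤ λ_i ≤ 0.7 µm.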
In this work, we use the spectral irradiance F(λ) generated by the atmospheric DISORT-based Shortwave Narrowband Model (SWNB2) (Zender et al., 1997; Zender, 1999) for typical clear-sky and cloudy-sky
conditions of midlatitude winter as shown in Fig. 1a. The total clear-sky down-welling surface flux at different solar zenith angles are also given in Fig. 1b.
5.1 Spectral albedo and reflected solar flux
The spectral reflectance of pure deep snow computed using two-stream models and 16-stream DISORT is shown in Fig. 2. The snow grain radius is 100µm – a typical grain size for fresh new snow. For
clear sky with a direct beam source (left column), all three two-stream models show good accuracy at visible wavelengths (0.3–0.7µm), and within this band, the snow albedo is large and close to 1.
As wavelength increases, the albedo diminishes in the near-IR band. Two-stream models overestimate snow albedo at these wavelengths, with maximum biases of 0.013 (SNICAR and dEdd–AD) and 0.023 (2SD)
within wavelength 1–1.7µm. For cloudy-sky cases with diffuse upper boundary conditions, dEdd–AD reproduces the snow albedo at all wavelengths with the smallest absolute error (<0.005), and SNICAR
and 2SD both overestimate the snow albedo with maximum biases >0.04 between 1.1 and 1.4µm.
In both sky conditions, the errors of snow albedo are larger at near-IR wavelengths ranging from 1.0 to 1.7µm, while the solar incident flux peaks at 0.5µm then decrease as wavelength increases.
The largest error in reflected flux is within the 0.7–1.5µm band for SNICAR and 2SD, as shown in the third row of Fig. 2. dEdd–AD overestimates the direct snow albedo mostly at wavelengths larger
than 1.5µm where the error in reflected flux is almost negligible.
5.2 Broadband albedo and reflected solar flux
Integrated over the visible and near-IR wavelengths, the error in band albedos computed using two-stream models for different cases is shown in Figs. 3–6.
Figure 3 shows the error in direct band albedo for fixed snow grain radius of 100µm with different snow depth and solar zenith angles. As introduced in Sect. 2, SNICAR and dEdd–AD both use the
delta-Eddington method to compute the visible albedo. They overestimate the visible albedo for solar zenith angles smaller than 50^∘ by up to 0.005, and underestimate it for solar zenith angles
larger than 50^∘ by up to −0.01. 2SD produces similar results for the visible band but at a larger solar zenith angle threshold of 75^∘. In the near-IR band, SNICAR and 2SD overestimate the snow
albedo for solar zenith angles smaller than 70^∘, beyond this, the error in albedo increases by up to −0.1 as solar zenith angle increases. dEdd–AD produces a similar error pattern with a smaller
solar zenith angle threshold at 60^∘. As snow ages, its average grain size increases. For typical old melting snow of grain radius 1000µm (Fig. 4), two-stream models produce similar errors of direct
albedo in all bands. Integrating over the entire solar band, the three two-stream models evaluated show similar error patterns for direct albedo.
For a fixed solar zenith angle of 60^∘, the error of direct albedo for different snow depth and snow grain radii is shown in Fig. 5. SNICAR and dEdd–AD underestimate the visible albedo in most
scenarios, while 2SD overestimates the visible albedo for a larger range of grain radius and snow depth. All three two-stream models tend to overestimate the near-IR albedo except for shallow snow
with large grain radius; the error of 2SD is 1 order of magnitude larger than that of SNICAR and dEdd–AD.
Figure 6 is similar to Fig. 5, but shows the diffuse snow albedo. In the visible band, SNICAR and dEdd–AD generate similar errors in that they both underestimate the albedo as snow grain size
increases and snow depth decreases. 2SD overestimates the albedo with a maximum error of around 0.015. In the near-IR, two-stream models tend to overestimate snow albedo, while the magnitude of
biases produced by SNICAR and 2SD is 1 order larger than that of dEdd–AD with the maximum error of 0.035 generated by SNICAR. As a result, the all-wave diffuse albedos computed using dEdd–AD are more
accurate than those computed using SNICAR and 2SD.
Figures 7, 8, and 9 show the errors in reflected shortwave flux caused by snow albedo errors seen in Figs. 3, 4, and 6. In general, two-stream models produce larger errors in reflected direct near-IR
flux (Figs. 7 and 8), especially with the 2SD model: the maximum overestimate of reflected near-IR flux is 6–8Wm^−2 for deep melting snow with a solar zenith angle <30^∘. Errors in reflected direct
visible flux are smaller (mostly within ±1Wm^−2) for all models in most scenarios, and become larger (mostly within ±3Wm^−2) as snow grain size increases to 1000µm if computed using 2SD. As
shown in Fig. 9, for diffuse flux with a solar zenith angle of 60^∘ at the top of the atmosphere (TOA), SNICAR and dEdd–AD generate small errors in reflected visible flux (mostly within ±1Wm^−2),
while 2SD always overestimates reflected visible flux by up to 5Wm^−2. In the near-IR, SNICAR and 2SD overestimate reflected flux by as much as 10–12Wm^−2; the error in reflected near-IR flux
produced by dEdd–AD is much smaller, mostly within ±1Wm^−2.
In general, dEdd–AD produces the most accurate albedo and thus reflected flux for both direct and diffuse components. SNICAR is similar to dEdd–AD for its accuracy of direct albedo and flux, yet
generates large error for the diffuse component. 2SD tends to overestimate snow albedo and reflected flux in both direct and diffuse components and shows the largest errors among three two-stream
models. Although the differences between algorithms are small, they can have a notable impact on snowpack melt. For example, compared to dEdd–AD, SNICAR and 2SD overestimate the diffuse albedo by
∼0.015 for melting snow (Fig. 6). In Greenland, the daily averaged downward diffuse solar flux from May to September is 200Wm^−2, and the averaged cloud cover fraction is 80% (Fig. 6, Dang et
al., 2017). In this case, SNICAR and 2SD overestimate the reflected solar flux by 2.4Wm^−2d^−1 – the amount of energy is otherwise enough to melt 10cm of snow water equivalent from May to
September. dEdd–AD also remediates compensating spectral biases (where visible and near-IR biases are of opposite signs) present in the other schemes. Those spectral biases do not affect the
broadband fluxes like the diffuse biases, but they nevertheless degrade proper feedbacks between snow–ice reflectance and heating.
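As a rough check on that Greenland estimate, the arithmetic can be reproduced in a few lines (an illustrative back-of-the-envelope sketch; the 153-day season length, latent heat of fusion, and water density are assumed values rather than numbers quoted in the text):

# Rough check of the Greenland melt estimate quoted above.
diffuse_flux = 200.0       # W m-2, daily averaged downward diffuse solar flux, May-September
cloud_fraction = 0.8       # averaged cloud cover fraction
albedo_bias = 0.015        # diffuse albedo overestimate of SNICAR and 2SD relative to dEdd-AD

flux_bias = diffuse_flux * cloud_fraction * albedo_bias    # ~2.4 W m-2 of extra reflected flux
season_seconds = 153 * 86400.0                             # assumed length of May-September
energy_j_per_m2 = flux_bias * season_seconds               # J m-2 accumulated over the season

latent_heat_fusion = 3.34e5    # J kg-1, assumed
density_water = 1000.0         # kg m-3
swe_melted_m = energy_j_per_m2 / (latent_heat_fusion * density_water)
print(flux_bias, swe_melted_m)  # ~2.4 W m-2 and ~0.095 m, i.e., roughly 10 cm of SWE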
5.3Band absorption of solar flux
Figure 10 shows absorption profiles of shortwave flux computed using the 16-stream DISORT model, with errors in absorbed fractional solar flux computed using two-stream models. The snowpack is 10cm
deep and is divided into five layers, each 2cm thick. The snow grain radii are set to 100µm and 1000µm. The figure shows fractional absorption for snow layers 1–4 and the underlying ground with an
albedo of 0.25.
As shown in the first column of Fig. 10, for new snow with a radius of 100µm, most solar absorption occurs in the top 2cm snow layer, where roughly 10% and 15% of the diffuse and direct near-IR flux, respectively, is absorbed; this dominates the solar absorption within the snowpack. In the second layer (2–4cm), the absorption of solar flux is less than 1% and it decreases gradually through the interior layers. The underlying ground absorbs roughly 2% of the solar flux, mostly visible flux, which penetrates the snowpack more efficiently. As snow ages and snow grains grow, photons penetrate deeper into the snowpack.
For typical old melting snow with a radius of 1000µm, most solar absorption still occurs in the top 2cm snow layer, where roughly 20% and 14% of diffuse and direct near-IR flux is absorbed. The
second snow layer (2–4cm) absorbs roughly 2% more near-IR solar flux. More photons penetrate through the snowpack, resulting in high fractional absorption by the underlying ground,
especially for the visible band. As snow depth increases, the ground absorption will decrease for both snow radii.
Compared to 16-stream DISORT, the two-stream models underestimate the column solar absorption for new snow and overestimate it for old snow, especially for the surface snow layer and the underlying ground. Overall, dEdd–AD gives the most accurate absorption profiles among the three two-stream models, especially for new snow.
6Correction for direct albedo for large solar zenith angles
It has been pointed out in previous studies that the two-stream approximations become poor as solar zenith angle approaches 90^∘ (e.g., Wiscombe, 1977; Warren, 1982). As shown in Figs. 3 and 4, all
three two-stream models underestimate the direct snow albedo for large solar zenith angles. In the visible band, when the snow grain size is small, the error in direct albedo is almost negligible
(Fig. 3); while as snow ages and snow grains become larger, the error increases yet remains low if the snow is deep (Fig. 4). In the near-IR range, the biases of albedo are also larger for larger
snow grain radii. For a given snow size, the magnitudes of such biases are almost independent of snow depth and mainly determined by the solar zenith angle. In general, the errors of all-wave direct
albedo are mostly contributed by the errors of near-IR albedo, especially for optically thick snowpacks (i.e., semi-infinite), because the errors of direct albedo in the visible range are negligible
compared with those in the near-IR range. To improve the performance of two-stream algorithms, we develop a parameterization that corrects the underestimated near-IR snow albedo at large zenith angles.
Figure 11 shows the direct near-IR albedo and fractional absorption of 2m thick snowpacks consisting of grains with radii of 100 and 1000µm, computed using two-stream algorithms and 16-stream
DISORT. For solar zenith angles>75^∘, two-stream models underestimate snow albedo and overestimate solar absorption within the snowpack, mostly in the top 2cm of snow, and the differences among the
three two-stream models are small. In Sect. 5, we have shown that dEdd–AD produces the most accurate snow albedo in general. With anticipated wide application of dEdd–AD, we develop the following
parameterization to adjust its low biases in computed near-IR direct albedo.
We define and compute R[75+] as the ratio of the direct semi-infinite near-IR albedo computed using 16-stream DISORT (α[16-DISORT]) to that computed using dEdd–AD (α[dEdd-AD]) for solar zenith angles >75^∘. This ratio is shown in Fig. 11c and can be parameterized as a function of snow grain radius (r, in meters) and the cosine of the incident solar zenith angle (μ[0]):
(12) $R_{75+} = \dfrac{\alpha_{\text{16-DISORT}}}{\alpha_{\text{dEdd-AD}}} = c_1(\mu_0)\,\log_{10}(r) + c_0(\mu_0), \quad \text{for } \mu_0 < 0.26, \text{ i.e., } \theta_0 > 75^{\circ},$
where coefficients c[1] and c[0] are polynomial functions of μ[0], as shown in Fig. 11d:
(13a) $c_1(\mu_0) = 1.304\,\mu_0^{2} - 0.631\,\mu_0 + 0.086,$
(13b) $c_0(\mu_0) = 6.807\,\mu_0^{2} - 3.338\,\mu_0 + 1.467.$
Since two-stream models always underestimate snow albedo, R[75+] always exceeds 1 (Fig. 11c). We can then adjust the direct near-IR snow albedo (α[dEdd-AD]) and the direct near-IR solar absorption by snow (Fabs[dEdd-AD]) computed using dEdd–AD with the ratio R[75+]:
(14a) $\alpha_{\text{dEdd-AD}}^{\text{adjust}} = R_{75+}\,\alpha_{\text{dEdd-AD}},$
(14b) $\text{Fabs}_{\text{dEdd-AD}}^{\text{adjust}} = \text{Fabs}_{\text{dEdd-AD}} - \left(R_{75+} - 1\right)\alpha_{\text{dEdd-AD}}\,F_{\text{nir}},$
where F[nir] is the direct near-IR flux. This adjustment reduces the error of near-IR albedo from −2% to −10% down to within ±0.5% for solar zenith angles larger than 75^∘, and for grain radii
ranging from 30 to 1500µm (Fig. 12). Errors in broadband direct albedo are therefore also reduced to <0.01. The direct near-IR flux absorbed by the snowpack decreases after applying this adjustment.
When the solar zenith angle exceeds 75^∘, our model adjusts the computed direct near-IR albedo α[dEdd-AD] by the ratio R[75+] following Eqs. (12)–(14a) and reduces direct near-IR
absorption following Eq. (14b). If snow is divided into multiple layers, we assume all decreased near-IR absorption (second term on the right-hand side, Eq. 14b) is confined within the top layer.
This assumption is fairly accurate for the near-IR band since most absorption occurs at the surface of the snowpack (Figs. 10 and 11). As discussed previously, this parameterization is developed
based on albedo computed using dEdd–AD. For models that do not use dEdd–AD but SNICAR and 2SD, the same adjustment still applies given the small differences of near-IR direct albedo computed using
two-stream models (Fig. 11). For models that adopt other radiative transfer algorithms it is best for the developers to examine their model against a benchmark model such as 16-stream DISORT or
two-stream models discussed in this work before applying this correction.
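To make the procedure concrete, the correction can be sketched as follows (an illustrative sketch rather than the SNICAR-AD source code; function and variable names are ours, the grain radius is assumed to be in meters, and the per-layer absorbed fluxes are assumed to be in Wm^−2 with the top layer first):

import math

def adjust_direct_nir(alpha_nir, fabs_layers, f_nir, mu0, r_grain):
    # Sketch of the large-solar-zenith-angle correction, Eqs. (12)-(14).
    # alpha_nir   : direct near-IR albedo from dEdd-AD
    # fabs_layers : direct near-IR flux absorbed in each snow layer (top layer first)
    # f_nir       : incident direct near-IR flux
    # mu0         : cosine of the solar zenith angle
    # r_grain     : snow grain radius in meters
    if mu0 >= 0.26:                                  # theta_0 <= 75 deg: no adjustment
        return alpha_nir, list(fabs_layers)
    c1 = 1.304 * mu0**2 - 0.631 * mu0 + 0.086        # Eq. (13a)
    c0 = 6.807 * mu0**2 - 3.338 * mu0 + 1.467        # Eq. (13b)
    r75 = c1 * math.log10(r_grain) + c0              # Eq. (12), R[75+] > 1
    alpha_adj = r75 * alpha_nir                      # Eq. (14a)
    fabs_adj = list(fabs_layers)
    # Eq. (14b): the extra reflected energy is removed from the absorbed flux,
    # confined to the top snow layer as described above
    fabs_adj[0] -= (r75 - 1.0) * alpha_nir * f_nir
    return alpha_adj, fabs_adj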
Although the errors of direct near-IR albedos are large for large solar zenith angles, the absolute error in reflected shortwave flux is small (Figs. 7 and 8) because the down-welling solar flux reaching the snowpack decreases as the solar zenith angle increases (Fig. 1b). However, such small biases in flux can be important for high latitudes where the solar zenith angle is large for many days in late
winter and early spring.
7Implementation of snow radiative transfer model in Earth system models
ESMs often use band-averaged SSPs of snow and aerosols for computational efficiency, rather than using brute-force integration of spectral solar properties across each band (per Eq. 11). In addition
to using different radiative transfer approximations, SNICAR and dEdd–AD also adopt different methods to derive the band-averaged SSPs of snow for different band schemes.
In SNICAR, snow solar properties are computed for five bands: one visible band (0.3–0.7µm) and four near-IR bands (0.7–1, 1–1.2, 1.2–1.5, and 1.5–5µm). The solar properties of four subdivided
near-IR bands are combined by fixed ratios to compute the direct and diffuse near-IR snow properties. These two sets of ratios are derived offline based on the incident solar spectra typical of
midlatitude winter for clear- and cloudy-sky conditions (Fig. 1a).
The band-averaged SSPs of snow grains are computed following the Chandrasekhar mean approach (Thomas and Stamnes, 1999, their Eq. 9.27; Flanner et al., 2007). Specifically, spectral SSPs of snow
grains are weighted into bands according to surface incident solar flux typical of midlatitude winter for clear- and cloudy-sky conditions. In addition, the single-scattering albedo ϖ(λ) of ice
grains is also weighted by the hemispheric albedo α(λ) of an optically thick snowpack:
(15a) $\varpi(\bar{\lambda}) = \dfrac{\int_{\lambda_1}^{\lambda_2} \varpi(\lambda)\,F(\lambda)\,\alpha(\lambda)\,\mathrm{d}\lambda}{\int_{\lambda_1}^{\lambda_2} F(\lambda)\,\alpha(\lambda)\,\mathrm{d}\lambda},$
(15b) $g(\bar{\lambda}) = \dfrac{\int_{\lambda_1}^{\lambda_2} g(\lambda)\,F(\lambda)\,\mathrm{d}\lambda}{\int_{\lambda_1}^{\lambda_2} F(\lambda)\,\alpha(\lambda)\,\mathrm{d}\lambda},$
(15c) $\sigma_{\text{ext}}(\bar{\lambda}) = \dfrac{\int_{\lambda_1}^{\lambda_2} \sigma_{\text{ext}}(\lambda)\,F(\lambda)\,\mathrm{d}\lambda}{\int_{\lambda_1}^{\lambda_2} F(\lambda)\,\alpha(\lambda)\,\mathrm{d}\lambda}.$
Two sets of snow band-averaged SSPs are generated for all grain radii, suitable for direct and diffuse light. For each modeling step and band, SNICAR is called twice to compute the direct and diffuse
snow solar properties.
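As an illustration of this weighting, a small sketch is given below (ours, not the SNICAR source; it follows Eq. 15 exactly as written above, and the spectral arrays, band limits, and names are placeholders):

import numpy as np

def band_average_ssps(wavelength, ssa, g, sigma_ext, flux, albedo, band):
    # Band-average spectral single-scattering properties following Eq. (15):
    # the single-scattering albedo is weighted by F(lambda)*alpha(lambda); the
    # asymmetry factor and extinction cross section are weighted by F(lambda)
    # in the numerator, with the same F*alpha normalization as written above.
    lo, hi = band
    m = (wavelength >= lo) & (wavelength < hi)
    w = wavelength[m]
    fa = flux[m] * albedo[m]
    norm = np.trapz(fa, w)
    ssa_bar = np.trapz(ssa[m] * fa, w) / norm
    g_bar = np.trapz(g[m] * flux[m], w) / norm
    sigma_bar = np.trapz(sigma_ext[m] * flux[m], w) / norm
    return ssa_bar, g_bar, sigma_bar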
In dEdd–AD, the snow-covered sea ice properties are computed for three bands: one visible band (0.3–0.7µm) and two near-IR bands (0.7–1.19 and 1.19–5µm). The solar properties of these two near-IR
bands are combined using ratios w[nir1] and w[nir2] for 0.7–1.19 and 1.19–5µm, depending on the fraction of direct near-IR flux f[nidr]:
(16a) $w_{\text{nir1}} = 0.67 + 0.11\,(1 - f_{\text{nidr}}),$
(16b) $w_{\text{nir2}} = 1 - w_{\text{nir1}}.$
The band SSPs of snow are derived by integrating the spectral SSPs and the spectral surface solar irradiance measured in the Arctic under mostly clear sky.
(17a) $\varpi(\bar{\lambda}) = \int_{\lambda_1}^{\lambda_2} \varpi(\lambda)\,F(\lambda)\,\mathrm{d}\lambda,$
(17b) $g(\bar{\lambda}) = \int_{\lambda_1}^{\lambda_2} g(\lambda)\,F(\lambda)\,\mathrm{d}\lambda,$
(17c) $\sigma_{\text{ext}}(\bar{\lambda}) = \int_{\lambda_1}^{\lambda_2} \sigma_{\text{ext}}(\lambda)\,F(\lambda)\,\mathrm{d}\lambda.$
In addition, the band-averaged single-scattering albedo $\varpi(\bar{\lambda})$ is also increased to $\varpi(\bar{\lambda})^{\prime}$ until the band albedo computed using the averaged SSPs matches the band albedo $\bar{\alpha}$ within 0.0001, where $\bar{\alpha}$ is
(18) $\bar{\alpha} = \int_{\lambda_1}^{\lambda_2} \alpha(\lambda)\,F(\lambda)\,\mathrm{d}\lambda.$
dEdd–AD adopts this single set of band SSPs for both direct and diffuse computations. In practice, the physical snow grain radius r is adjusted to a radiatively equivalent radius r[eqv] based on the
fraction of direct flux in the near-IR band (f[nidr]):
(19) $r_{\text{eqv}} = \left(f_{\text{nidr}} + 0.8\,(1 - f_{\text{nidr}})\right) r.$
This r[eqv] and the corresponding snow SSPs are then used in the radiative transfer calculation. The computed direct and diffuse solar properties alone are less accurate, while the combined all-sky
broadband solar properties agree with SNICAR (Briegleb and Light, 2007). As a result, for each modeling step and band, the dEdd–AD radiative transfer subroutine is called only once to compute both
the direct and diffuse snow solar properties simultaneously.
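These two adjustments are simple enough to show directly (an illustrative sketch; the function names are ours):

def nir_band_weights(f_nidr):
    # Eq. (16): weights for combining the 0.7-1.19 and 1.19-5 um near-IR bands
    w_nir1 = 0.67 + 0.11 * (1.0 - f_nidr)
    return w_nir1, 1.0 - w_nir1

def equivalent_radius(r, f_nidr):
    # Eq. (19): radiatively equivalent grain radius used in place of the physical radius
    return (f_nidr + 0.8 * (1.0 - f_nidr)) * r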
SNICAR and dEdd–AD also use different approaches to avoid numerical singularities. In SNICAR, singularities occur when the denominator of the term $C_n^{\pm}$ in Eq. (3) equals zero (i.e., $\gamma^2 - 1/\mu_0^2 = 0$), where γ is determined by the approximation method and SSPs of snow, and μ[0] is the cosine of the solar zenith
angle (Eqs. 23 and 24, Toon et al., 1989). When such a singularity is detected, SNICAR will shift μ[0] by +0.02 or −0.02 to obtain physically realistic radiative properties. In the dEdd–AD algorithm,
singularities arise only when μ[0]=0 (Eq. 4). Therefore, in practice, for μ[0]<0.01, dEdd–AD computes the sea ice solar properties for μ[0]=0.01 to avoid unphysical results.
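Schematically, the two guards amount to something like the following (a sketch; the tolerance used to detect the SNICAR singularity is our assumption, and the function names are illustrative):

def snicar_mu0_guard(mu0, gamma, tol=1e-6):
    # SNICAR-style guard: if gamma^2 - 1/mu0^2 is (near) zero, shift mu0 by +/-0.02
    if abs(gamma**2 - 1.0 / mu0**2) < tol:
        return mu0 + 0.02 if mu0 + 0.02 <= 1.0 else mu0 - 0.02
    return mu0

def dedd_ad_mu0_guard(mu0):
    # dEdd-AD-style guard: singular only at mu0 = 0, so clamp small mu0 to 0.01
    return max(mu0, 0.01)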
8Towards a unified radiative transfer model for snow, sea ice, and land ice
Based on the intercomparison of three two-stream algorithms and their implementations in ESMs, we formulated the following surface shortwave radiative transfer recommendations for an accurate, fast,
and consistent treatment for snow on land, land ice, and sea ice in ESMs.
First, the two-stream delta-Eddington adding–doubling algorithm by Briegleb and Light (2007) is unsurpassed as a radiative transfer core. The evaluation in Sect. 5 shows that this algorithm produces
the least error for snow albedo and solar absorption within snowpack, especially under overcast skies. This algorithm applies well to both uniformly refractive media such as snow on land, and to
nonuniformly refractive media, such as bare, snow-covered, and ponded sea ice and bare and snow-covered land ice. Numerical singularities occur only rarely (when μ[0]=0) and are easily avoided in
model implementations. Among the three two-stream algorithms discussed here, dEdd–AD is also the most efficient one as it takes only two-thirds of the time of SNICAR and 2SD to compute solar
properties of multilayer snowpacks.
Second, any two-stream cryospheric radiative transfer model can incorporate the parameterization described in Sect. 6 to adjust the low bias of direct near-IR snow albedo and high bias of direct
near-IR solar absorption in snow, for solar zenith angles larger than 75^∘. These biases are persistent across all two-stream algorithms discussed in this work, and should be corrected for
snow-covered surfaces. Alternatively, adopting a four-stream approximation would reduce or eliminate such biases, though at considerable expense in computational efficiency.
Third, in a cryospheric radiative transfer model, one should prefer physically based parameterizations that are extensible and convergent (e.g., with increasing spectral resolution) for the
band-averaged SSPs and size distribution of snow. Although the treatments used in SNICAR and dEdd–AD are both practical since they both reproduce the narrowband solar properties with carefully
derived band-averaged inputs as discussed in Sect. 7, the snow treatment used in SNICAR is more physically based and reproducible since it does not rely on subjective adjustment and empirical
coefficients as used in dEdd–AD. Specifically, the empirical adjustment to snow grain radius implemented in dEdd–AD may not always produce compensating errors. For example, in snow containing
light-absorbing impurities such adjustment may also lead to biases in aerosol absorption since the albedo reduction caused by light-absorbing particles does not linearly depend on snow grain radius
(Dang et al., 2015). For further model development incorporating nonspherical snow grain shapes (Dang et al., 2016; He et al., 2018a, b), such adjustment on grain radius may fail as well. Moreover,
SNICAR computes the snow properties for four near-IR bands, which helps capture the spectral variation in albedo (Fig. 2) and therefore better represents near-IR solar properties. It is also worth
noting that unlike the radiative core of dEdd–AD, SNICAR is actively maintained, with numerous modifications and updates in the past decade (e.g., Flanner et al., 2012; He et al., 2018b). Snow
radiative treatments that follow SNICAR conventions for SSPs may take advantage of these updates. Note that any radiative core that follows SNICAR SSP conventions must be called twice to compute
diffuse and direct solar properties.
Fourth, a surface cryospheric radiative transfer model should flexibly accommodate coupled simulations with distinct atmospheric and surface spectral grids. Both the five-band scheme used in SNICAR
and the three-band scheme used in dEdd–AD separate the visible from near-IR spectrum at 0.7µm. This boundary aligns with the Community Atmospheric Model's original radiation bands (CAM; Neale et
al., 2010), though not with the widely used Rapid Radiative Transfer Model (RRTMG; Iacono et al., 2008), which places 0.7µm squarely in the middle of a spectral band. A mismatch in spectral
boundaries between atmospheric and surface radiative transfer schemes can require an ESM to unphysically apportion energy from the straddled spectral bin when coupling fluxes between surface and
atmosphere. The spectral grids of surface and atmosphere radiation need not be identical so long as the coarser grid shares spectral boundaries with the finer grid. In practice maintaining a portable
cryospheric radiative module such as SNICAR requires a complex offline toolchain (Mie solver, spectral refractive indices for air, water, ice, and aerosols, spectral solar insolation for clear and
cloudy skies) to compute, integrate, and rebin SSPs. Aligned spectral boundaries between surface and atmosphere would simplify the development of efficient and accurate radiative transfer for the
coupled Earth system.
Last, it is important to note that, although we only examine the performance of the dEdd–AD for pure snow in this work, this algorithm can be applied to the surface solar calculation of all
cryospheric components with or without light-absorbing particles present. First, Briegleb and Light (2007) proved its accuracy for simulating ponded and bare sea ice solar properties against
observations and a Monte Carlo radiation model. Second, in CESM and E3SM, the radiative transfer simulation of snow on land ice is carried out by SNICAR with prescribed land ice albedo. Adopting the
dEdd–AD radiative core in SNICAR will permit these ESMs to couple the snow and land ice as a nonuniformly refractive column for more accurate solar computations since bare, snow-covered, and ponded
land ice is physically similar to bare, snow-covered, and ponded sea ice, and the latter is already treated well by the dEdd–AD radiative transfer core. Third, adding light-absorbing particles in
snow will not change our results qualitatively. Both dEdd–AD and SNICAR simulate the impact of light-absorbing particles (black carbon and dust) on snow and/or sea ice using self-consistent particle
SSPs that follow the SNICAR convention (e.g., Flanner et al., 2007; Holland et al., 2012). These particles are assumed to be either internally or externally mixed with snow crystals; the combined
SSPs of mixtures (e.g., Appendix A of Dang et al., 2015) are then used as the inputs for radiative transfer calculation. The adoption of the dEdd–AD radiative transfer algorithm in SNICAR, and the
implementation of SNICAR snow SSPs in dEdd–AD enables a consistent simulation of the radiative effects of light-absorbing particles in the cryosphere across ESM components.
In summary, this intercomparison and evaluation has shown multiple ways that the solar properties of cryospheric surfaces can be improved in the current generation of ESMs. We have merged these
findings into a hybrid model SNICAR-AD, which is primarily composed of the radiative transfer scheme of dEdd–AD, five-band snow–aerosol SSPs of SNICAR, and the parameterization to correct for snow
albedo biases when solar zenith angle exceeds 75^∘. This hybrid model can be applied to snow on land, land ice, and sea ice to produce consistent shortwave radiative properties for snow-covered
surfaces across the Earth system. With the evolution and further understanding of snow and aerosol physics and chemistry, the adoption of this hybrid model will obviate the effort to modify and
maintain separate optical variable input files used for different model components.
SNICAR-AD is now implemented in both the sea ice (MPAS-Seaice) and land (ELM) components of E3SM. More simulations and analyses are underway to examine its impact on E3SM model performance and
simulated climate. The results are however beyond the scope of this work and will be thoroughly discussed in a future paper.
In this work, we aim to improve and unify the solar radiative transfer calculations for snow on land and snow on sea ice in ESMs by evaluating the following two-stream radiative transfer algorithms:
the two-stream delta-Eddington adding–doubling algorithm dEdd–AD implemented in sea ice models Icepack, CICE, and MPAS-Seaice, the two-stream delta-Eddington and two-stream delta-Hemispheric-Mean
algorithms implemented in snow model SNICAR, and a two-stream delta-discrete-ordinate algorithm. Among these three models, dEdd–AD produces the most accurate snow albedo and solar absorption
(Sect. 5). All two-stream models underestimate near-IR snow albedo and overestimate near-IR absorption when solar zenith angles are larger than 75^∘, which can be adjusted by a parameterization we
developed (Sect. 6). We compared the implementations of radiative transfer cores in SNICAR and dEdd–AD (Sect. 7) and recommended a consistent and hybrid shortwave radiative model SNICAR-AD for
snow-covered surfaces across ESMs (Sect. 8). Improved treatment of surface cryospheric radiative properties in the thermal infrared has recently been shown to remediate significant climate simulation
biases in polar regions (Huang et al., 2018). It is hoped that adoption of improved and consistent treatments of solar radiative properties for snow-covered surfaces as described in this study will
further remediate simulation biases in snow-covered regions.
The data and models are available upon request to Cheng Dang (cdang5@uci.edu). The SNICAR and dEdd–AD radiative transfer cores can be found at https://github.com/E3SM-Project/E3SM (last access: 22 July
CD and CZ designed the study. CD coded the offline dEdd-AD and 2SD schemes, performed two-stream and 16-stream model simulations, and wrote the paper with input from CZ and MF. CZ performed the SWNB2
simulations. MF provided the base SNICAR code and snow optical inputs.
The authors declare that they have no conflict of interest.
The authors thank Stephen G. Warren and Qiang Fu for insightful discussions on radiative transfer algorithms. The authors thank Adrian Turner for instructions on installing and running MPAS-Seaice.
The authors thank David Bailey and the one anonymous reviewer for their constructive comments that improved our paper. This research is supported as part of the Energy Exascale Earth System Model
(E3SM) project, funded by the U.S. Department of Energy, Office of Science, Office of Biological and Environmental Research.
This research has been supported by the U.S. Department of Energy (grant no. DE-SC0012998).
This paper was edited by Dirk Notz and reviewed by David Bailey and one anonymous referee.
Aoki, T., Kuchiki, K., Niwano, M., Kodama, Y., Hosaka, M., and Tanaka, T.: Physically based snow albedo model for calculating broadband albedos and the solar heating profile in snowpack for general
circulation models, J. Geophys. Res., 116, D11114, https://doi.org/10.1029/2010JD015507, 2011.
Bøggild, C. E., Brandt, R. E., Brown, K. J., and Warren, S. G.: The ablation zone in northeast Greenland: ice types, albedos and impurities, J. Glaciol., 56, 101–113, https://doi.org/10.3189/
002214310791190776, 2010.
Brandt, R. E., Warren, S. G., Worby, A. P., and Grenfell, T. C.: Surface albedo of the Antarctic sea ice zone, J. Climate, 18, 3606–3622, https://doi.org/10.1175/JCLI3489.1, 2005.
Briegleb, P. and Light, B.: A Delta-Eddington mutiple scattering parameterization for solar radiation in the sea ice component of the community climate system model, NCAR Technical Note NCAR/
TN-472+STR, https://doi.org/10.5065/D6B27S71, 2007.
Dang, C. and Hegg, D. A.: Quantifying light absorption by organic carbon in Western North American snow by serial chemical extractions, J. Geophys. Res.-Atmos., 119, 10247–10261, https://doi.org/
10.1002/2014JD022156, 2014.
Dang, C., Brandt, R. E., and Warren, S. G.: Parameterizations for narrowband and broadband albedo of pure snow and snow containing mineral dust and black carbon, J. Geophys. Res.-Atmos., 120,
5446–5468, https://doi.org/10.1002/2014JD022646, 2015.
Dang, C., Fu, Q., and Warren, S. G.: Effect of snow grain shape on snow albedo, J. Atmos. Sci., 73, 3573–3583, https://doi.org/10.1175/JAS-D-15-0276.1, 2016.
Dang, C., Warren, S. G., Fu, Q., Doherty, S. J., Sturm, M., and Su, J.: Measurements of light-absorbing particles in snow across the Arctic, North America, and China: Effects on surface albedo, J.
Geophys. Res.-Atmos., 122, 10149–10168, https://doi.org/10.1002/2017JD027070, 2017.
Doherty, S. J., Warren, S. G., Grenfell, T. C., Clarke, A. D., and Brandt, R. E.: Light-absorbing impurities in Arctic snow, Atmos. Chem. Phys., 10, 11647–11680, https://doi.org/10.5194/
acp-10-11647-2010, 2010.
Doherty, S. J., Dang, C., Hegg, D. A., Zhang, R., and Warren, S. G.: Black carbon and other light-absorbing particles in snow of central North America, J. Geophys. Res.-Atmos., 119, 12807–12831,
https://doi.org/10.1002/2014JD022350, 2014.
Doherty, S. J., Hegg, D. A., Johnson, J. E., Quinn, P. K., Schwarz, J. P., Dang, C., and Warren, S. G.: Causes of variability in light absorption by particles in snow at sites in Idaho and Utah, J.
Geophys. Res.-Atmos., 121, 4751–4768, https://doi.org/10.1002/2015JD024375, 2016.
Flanner, M. G. and Zender, C. S.: Snowpack radiative heating: Influence on Tibetan Plateau climate, Geophys. Res. Lett., 32, L06501, https://doi.org/10.1029/2004GL022076, 2005.
Flanner, M. G., Zender, C. S., Randerson, J. T., and Rasch, P. J.: Present-day climate forcing and response from black carbon in snow, J. Geophys. Res., 112, D11202, https://doi.org/10.1029/
2006JD008003, 2007.
Flanner, M. G., Liu, X., Zhou, C., Penner, J. E., and Jiao, C.: Enhanced solar energy absorption by internally-mixed black carbon in snow grains, Atmos. Chem. Phys., 12, 4699–4721, https://doi.org/
10.5194/acp-12-4699-2012, 2012.
Gardner, A. S. and Sharp, M. J.: A review of snow and ice albedo and the development of a new physically based broadband albedo parameterization, J. Geophys. Res., 115, F01009, https://doi.org/
10.1029/2009JF001444, 2010.
Grenfell, T. C. and Warren, S. G.: Representation of a nonspherical ice particle by a collection of independent spheres for scattering and absorption of radiation, J. Geophys. Res., 104, 31697–31709,
https://doi.org/10.1029/1999JD900496, 1999.
Grenfell, T. C., Neshyba, S. P., and Warren, S. G.: Representation of a nonspherical ice particle by a collection of independent spheres for scattering and absorption of radiation: 3. Hollow columns
and plates, J. Geophys. Res., 110, D17203, https://doi.org/10.1029/2005JD005811, 2005.
He, C., Takano, Y., Liou, K. N., Yang, P., Li, Q., and Chen, F.: Impact of Snow Grain Shape and Black Carbon–Snow Internal Mixing on Snow Optical Properties: Parameterizations for Climate Models, J.
Climate, 30, 10019–10036, https://doi.org/10.1175/JCLI-D-17-0300.1, 2017.
He, C., Liou, K. N., Takano, Y., Yang, P., Qi, L., and Chen, F.: Impact of grain shape and multiple black carbon internal mixing on snow albedo: Parameterization and radiative effect analysis, J.
Geophys. Res.-Atmos., 123, 1253–1268, https://doi.org/10.1002/2017JD027752, 2018a.
He, C., Flanner, M. G., Chen, F., Barlage, M., Liou, K.-N., Kang, S., Ming, J., and Qian, Y.: Black carbon-induced snow albedo reduction over the Tibetan Plateau: uncertainties from snow grain shape
and aerosol–snow mixing state based on an updated SNICAR model, Atmos. Chem. Phys., 18, 11507–11527, https://doi.org/10.5194/acp-18-11507-2018, 2018b.
Holland, M. M., Bailey, D. A., Briegleb, B. P., Light, B., and Hunke, E.: Improved sea ice shortwave radiation physics in CCSM4: The impact of melt ponds and aerosols on Arctic sea ice, J. Climate,
25, 1413–1430, 2012.
Huang, X., Chen, X., Flanner, M., Yang, P., Feldman, D., and Kuo, C.: Improved representation of surface spectral emissivity in a global climate model and its impact on simulated climate, J. Climate,
31, 3711–3727, 2018.
Hunke, E. C., Lipscomb, W. H., Turner, A. K., Jeffery, N., and Elliott, S.: CICE: the Los Alamos Sea Ice Model, Documentation and Software User's Manual, Version 4.1, LA-CC-06-012, T-3 Fluid Dynamics
Group, Los Alamos National Laboratory, Los Alamos NM, USA, 2010.
Iacono, M. J., Delamere, J. S., Mlawer, E. J., Shephard, M. W., Clough, S. A., and Collins, W. D.: Radiative forcing by long-lived greenhouse gases: Calculations with the AER radiative transfer
models, J. Geophys. Res., 113, D13103, https://doi.org/10.1029/2008JD009944, 2008.
Jin, Z. and Stamnes, K.: Radiative transfer in nonuniformly refracting layered media: atmosphere–ocean system, Appl. Optics, 33, 431–442, https://doi.org/10.1364/AO.33.000431, 1994.
Kuipers Munneke, P., Van den Broeke, M. R., Lenaerts, J. T. M., Flanner, M. G., Gardner, A. S., and Van de Berg, W. J.: A new albedo parameterization for use in climate models over the Antarctic ice
sheet, J. Geophys. Res., 116, D05114, https://doi.org/10.1029/2010JD015113, 2011.
Lee, W. L. and Liou, K. N.: A coupled atmosphere–ocean radiative transfer system using the analytic four-stream approximation, J. Atmos. Sci., 64, 3681–3694, https://doi.org/10.1175/JAS4004.1, 2007.
Liang, S., Fang, H., Chen, M., Shuey, C. J., Walthall, C., Daughtry, C., Morisette, J., Schaaf, C., and Strahler, A.: Validating MODIS land surface reflectance and albedo products: Methods and
preliminary results, Remote Sens. Environ., 83, 149–162, https://doi.org/10.1016/S0034-4257(02)00092-5, 2002.
Light, B., Grenfell, T. C., and Perovich, D. K.: Transmission and absorption of solar radiation by Arctic sea ice during the melt season, J. Geophys. Res., 113, C03023, https://doi.org/10.1029/
2006JC003977, 2008.
Light, B., Perovich, D. K., Webster M. A., Polashenski, C., and Dadic, R.: Optical properties of melting first-year Arctic sea ice, J. Geophys. Res.-Oceans, 120, 7657–7675, https://doi.org/10.1002/
2015JC011163, 2015.
Marshall, S. and Oglesby, R. J.: An improved snow hydrology for GCMs. Part 1: Snow cover fraction, albedo, grain size, and age, Clim. Dynam., 10, 21–37, https://doi.org/10.1007/BF00210334, 1994.
Marshall, S. E.: A Physical Parameterization of Snow Albedo for Use in Climate Models, NCAR Cooperative thesis 123, National Center for Atmospheric Research, Boulder, CO, 175 pp. 1989.
Marshall, S. E. and Warren, S. G.: Parameterization of snow albedo for climate models, in: Large Scale Effects of Seasonal Snow Cover, edited by: Goodison, B. E., Barry, R. G., and Dozier, J.,
International Association of Hydrological Sciences, Washington, D. C., IAHS Publ., vol. 166, 43–50, 1987.
Matzl, M. and Schneebeli, M.: Measuring specific surface area of snow by near-infrared photography, J. Glaciol., 52, 558–564, https://doi.org/10.3189/172756506781828412, 2006.
Matzl, M. and Schneebeli, M.: Stereological measurement of the specific surface area of seasonal snow types: Comparison to other methods, and implications for mm-scale vertical profiling, Cold Reg.
Sci. Tech., 64, 1–8, https://doi.org/10.1016/j.coldregions.2010.06.006, 2010.
Meador, W. E. and Weaver, W. R.: Two-stream approximations to radiative transfer in planetary atmospheres: A unified description of existing methods and a new improvement, J. Atmos. Sci., 37,
630–643, https://doi.org/10.1175/1520-0469(1980)037<0630:TSATRT>2.0.CO;2, 1980.
Mlawer, E. J. and Clough, S. A.: On the extension of rapid radiative transfer model to the shortwave region: in: Proceedings of the 6th Atmospheric Radiation Measurement (ARM) Science Team Meeting,
US Department of Energy, CONF-9603149, 223–226, 1997.
Neale, R. B., Chen, C.-C., Gettelman, A., Lauritzen, P. H., Park, S., Williamson, D. L., Conley, A. J., Garcia, R., Kinnison, D., Lamarque, J. F., and Marsh, D.: Description of the NCAR community
atmosphere model (CAM 5.0), NCAR Technical Note, NCAR/TN-486+STR, 1, 1–12, 2010.
Neshyba, S. P., Grenfell, T. C., and Warren, S. G.: Representation of a nonspherical ice particle by a collection of independent spheres for scattering and absorption of radiation: 2. Hexagonal
columns and plates, J. Geophys. Res., 108, 4448, https://doi.org/10.1029/2002JD003302, 2003.
Perovich, D. K.: The optical properties of sea ice, Monograph 96-1, Cold Regions Research & Engineering Laboratory, US Army Corps of Engineers, Hanover, NH, USA, 1996.
Stamnes, K., Tsay, S. C., Wiscombe, W., and Jayaweera, K.: Numerically stable algorithm for discrete-ordinate-method radiative transfer in multiple scattering and emitting layered media, Appl.
Optics, 27, 2502–2509, https://doi.org/10.1364/AO.27.002502, 1988.
Thomas, G. and Stamnes, K.: Radiative Transfer in the Atmosphere and Ocean (Cambridge Atmospheric and Space Science Series), Cambridge University Press Cambridge, https://doi.org/10.1017/
CBO9780511613470, 1999.
Toon, O. B., McKay, C. P., Ackerman, T. P., and Santhanam, K.: Rapid calculation of radiative heating rates and photodissociation rates in inhomogeneous multiple scattering atmospheres, J. Geophys.
Res., 94, 16287–16301, https://doi.org/10.1029/JD094iD13p16287, 1989.
Turner, A. K., Lipscomb, W. H., Hunke, E. C., Jacobsen, D. W., Jeffery, N., Ringler, T. D., and Wolfe, J. D.: MPAS-Seaice: a new variable resolution sea-ice model, J. Adv. Model Earth Sy., in
preparation, 2019.
Wang, X., Doherty, S. J., and Huang, J.: Black carbon and other light-absorbing impurities in snow across Northern China, J. Geophys. Res.-Atmos., 118, 1471–1492, https://doi.org/10.1029/2012JD018291
, 2013.
Warren, S. G.: Optical properties of snow, Rev. Geophys., 20, 67–89, https://doi.org/10.1029/RG020i001p00067, 1982.
Warren, S. G. and Brandt, R. E.: Optical constants of ice from the ultraviolet to the microwave: A revised compilation, J. Geophys. Res., 113, D14220, https://doi.org/10.1029/2007JD009744, 2008.
Warren, S. G. and Wiscombe, W. J.: A model for the spectral albedo of snow. II: Snow containing atmospheric aerosols, J. Atmos. Sci., 37, 2734–2745, https://doi.org/10.1175/1520-0469(1980)037
<2734:AMFTSA>2.0.CO;2, 1980.
Warren, S. G. and Wiscombe, W. J.: Dirty snow after nuclear war, Nature, 313, 467–470, https://doi.org/10.1038/313467a0, 1985.
Wiscombe, W. J.: The delta-Eddington approximation for a vertically inhomogeneous atmosphere, NCAR Technical Note, NCAR/TN-121+STR, https://doi.org/10.5065/D65H7D6Z, 1977.
Wiscombe, W. J.: Improved Mie scattering algorithms, Appl. Optics, 19, 1505–1509, https://doi.org/10.1364/AO.19.001505, 1980.
Wiscombe, W. J. and Warren, S. G.: A model for the spectral albedo of snow. I: Pure snow, J. Atmos. Sci., 37, 2712–2733, https://doi.org/10.1175/1520-0469(1980)037<2712:AMFTSA>2.0.CO;2, 1980.
Zender, C. S.: Global climatology of abundance and solar absorption of oxygen collision complexes, J. Geophys. Res., 104, 24471–24484, https://doi.org/10.1029/1999JD900797, 1999.
Zender, C. S., Bush, B., Pope, S. K., Bucholtz, A., Collins, W. D., Kiehl, J. T., Valero, F. P., and Vitko Jr., J.: Atmospheric absorption during the atmospheric radiation measurement (ARM) enhanced
shortwave experiment (ARESE), J. Geophys. Res., 102, 29901–29915, https://doi.org/10.1029/97JD01781, 1997. | {"url":"https://tc.copernicus.org/articles/13/2325/2019/","timestamp":"2024-11-08T07:54:08Z","content_type":"text/html","content_length":"346378","record_id":"<urn:uuid:220290fa-9577-4dfb-9cde-7189dd384026>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00170.warc.gz"} |
How to Multiply Column by a Constant in Google Sheets
To multiply a column by a constant in Google Sheets, you'll need to use an ARRAYFORMULA function combined with the multiplication operator. Here's a step-by-step guide on how to do it:
1. Open your Google Sheet containing the data you want to multiply.
2. Click on an empty cell where you'd like to display the result.
3. Enter the following formula in the cell:
=ARRAYFORMULA(A1:A * constant)
Replace A1:A with the range of the column you want to multiply, and replace constant with the constant value you want to multiply the column by.
4. Press Enter to apply the formula.
Let's say you have a list of numbers in column A (from A1 to A5) that you want to multiply by the constant value 10. Here's how you can do it:
1. Click on an empty cell where you'd like to display the result, for example, B1.
2. Enter the following formula in the cell:
=ARRAYFORMULA(A1:A5 * 10)
3. Press Enter to apply the formula.
The result will be displayed in column B, with each value in column A multiplied by 10.
Did you find this useful? | {"url":"https://sheetscheat.com/google-sheets/how-to-multiply-column-by-a-constant-in-google-sheets","timestamp":"2024-11-11T15:10:10Z","content_type":"text/html","content_length":"10346","record_id":"<urn:uuid:8aa84c84-55df-46ee-aee1-383e59901487>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00406.warc.gz"} |
Glass Weight Calculator - Savvy Calculator
Glass Weight Calculator
About Glass Weight Calculator (Formula)
The Glass Weight Calculator is a useful tool for estimating the weight of glass based on its dimensions and characteristics. This calculator is commonly used in construction, architecture, and
manufacturing industries to determine the weight of glass panels for transportation, installation, and structural design purposes.
The formula to calculate the weight of glass depends on the type of glass being used and its dimensions:
Weight = Area × Thickness × Density
• Weight: The weight of the glass panel.
• Area: The surface area of the glass panel.
• Thickness: The thickness of the glass panel.
• Density: The density of the specific type of glass being used.
It’s important to note that the density of glass can vary depending on its composition, such as float glass, tempered glass, or laminated glass.
For example, if you have a glass panel with dimensions of 1 meter by 2 meters and a thickness of 6 millimeters, and the glass density is 2.5 grams per cubic centimeter, the weight can be calculated as:
Weight = (1 m × 2 m) × (6 mm) × (2.5 g/cm³) = 30 kg
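The same calculation with the unit conversions made explicit (a small illustrative sketch; variable names are ours):

length_m = 1.0
width_m = 2.0
thickness_m = 6.0 / 1000.0        # 6 mm converted to meters
density_kg_m3 = 2.5 * 1000.0      # 2.5 g/cm3 is 2500 kg/m3

weight_kg = (length_m * width_m) * thickness_m * density_kg_m3
print(weight_kg)                  # 30.0 kg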
The Glass Weight Calculator is valuable for architects, engineers, and manufacturers to assess the load-bearing capacity of structures, plan transportation, and ensure safety during installation. By
accurately estimating the weight of glass panels, professionals can make informed decisions about design, logistics, and safety measures.
Leave a Comment | {"url":"https://savvycalculator.com/glass-weight-calculator","timestamp":"2024-11-08T08:21:05Z","content_type":"text/html","content_length":"141535","record_id":"<urn:uuid:2885b066-20f9-4725-be11-49dfc2e4599b>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00043.warc.gz"} |
Cost-Sensitive Learning with Noisy Labels
Nagarajan Natarajan, Inderjit S. Dhillon, Pradeep Ravikumar, Ambuj Tewari.
Year: 2018, Volume: 18, Issue: 155, Pages: 1−33
We study binary classification in the presence of class-conditional random noise, where the learner gets to see labels that are flipped independently with some probability, and where the flip
probability depends on the class. Our goal is to devise learning algorithms that are efficient and statistically consistent with respect to commonly used utility measures. In particular, we look at a
family of measures motivated by their application in domains where cost-sensitive learning is necessary (for example, when there is class imbalance). In contrast to most of the existing literature on
consistent classification that are limited to the classical 0-1 loss, our analysis includes more general utility measures such as the AM measure (arithmetic mean of True Positive Rate and True
Negative Rate). For this problem of cost-sensitive learning under class- conditional random noise, we develop two approaches that are based on suitably modifying surrogate losses. First, we provide a
simple unbiased estimator of any loss, and obtain performance bounds for empirical utility maximization in the presence of i.i.d. data with noisy labels. If the loss function satisfies a simple
symmetry condition, we show that using unbiased estimator leads to an efficient algorithm for empirical maximization. Second, by leveraging a reduction of risk minimization under noisy labels to
classification with weighted 0-1 loss, we suggest the use of a simple weighted surrogate loss, for which we are able to obtain strong utility bounds. This approach implies that methods already used
in practice, such as biased SVM and weighted logistic regression, are provably noise-tolerant. For two practically important measures in our family, we show that the proposed methods are competitive
with respect to recently proposed methods for dealing with label noise in several benchmark data sets.
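As a concrete illustration of the unbiased-estimator idea described above (our sketch, not the authors' code), a surrogate such as the logistic loss can be de-biased when the class-conditional flip rates are known; the expectation of the modified loss over the noisy label then equals the loss on the clean label:

import numpy as np

def logistic_loss(t, y):
    # l(t, y) = log(1 + exp(-y * t)), with t the classifier score and y in {-1, +1}
    return np.log1p(np.exp(-y * t))

def unbiased_logistic_loss(t, y_noisy, rho_pos, rho_neg):
    # rho_pos = P(noisy label = -1 | true label = +1)
    # rho_neg = P(noisy label = +1 | true label = -1), with rho_pos + rho_neg < 1
    rho_y = np.where(y_noisy == 1, rho_pos, rho_neg)        # flip rate indexed by the observed label
    rho_not_y = np.where(y_noisy == 1, rho_neg, rho_pos)    # flip rate indexed by the opposite label
    return ((1.0 - rho_not_y) * logistic_loss(t, y_noisy)
            - rho_y * logistic_loss(t, -y_noisy)) / (1.0 - rho_pos - rho_neg)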
PDF BibTeX | {"url":"https://jmlr.org/beta/papers/v18/15-226.html","timestamp":"2024-11-06T12:05:35Z","content_type":"text/html","content_length":"8405","record_id":"<urn:uuid:2dc1de49-8b37-40c9-a0ca-aa4278d25c8c>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00169.warc.gz"} |
Nanonewtons to Micronewtons Conversion (nN to μN)
Nanonewtons to Micronewtons Converter
Enter the force in nanonewtons below to convert it to micronewtons.
Do you want to convert micronewtons to nanonewtons?
How to Convert Nanonewtons to Micronewtons
To convert a measurement in nanonewtons to a measurement in micronewtons, divide the force by the following conversion ratio: 1,000 nanonewtons/micronewton.
Since one micronewton is equal to 1,000 nanonewtons, you can use this simple formula to convert:
micronewtons = nanonewtons ÷ 1,000
The force in micronewtons is equal to the force in nanonewtons divided by 1,000.
For example,
here's how to convert 5,000 nanonewtons to micronewtons using the formula above.
micronewtons = (5,000 nN ÷ 1,000) = 5 μN
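In code, the conversion is a one-liner (a trivial sketch added for illustration):

def nanonewtons_to_micronewtons(force_nn):
    return force_nn / 1000.0

print(nanonewtons_to_micronewtons(5000))   # 5.0 micronewtons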
Nanonewtons and micronewtons are both units used to measure force. Keep reading to learn more about each unit of measure.
What Is a Nanonewton?
One nanonewton is equal to 1/1,000,000,000 of a newton, which is equal to the force needed to move one kilogram of mass at a rate of one meter per second squared.
The nanonewton is a multiple of the newton, which is the SI derived unit for force. In the metric system, "nano" is the prefix for billionths, or 10^-9. Nanonewtons can be abbreviated as nN; for
example, 1 nanonewton can be written as 1 nN.
Learn more about nanonewtons.
What Is a Micronewton?
One micronewton is equal to 1/1,000,000 of a newton, which is equal to the force needed to move one kilogram of mass at a rate of one meter per second squared.
The micronewton is a multiple of the newton, which is the SI derived unit for force. In the metric system, "micro" is the prefix for millionths, or 10^-6. Micronewtons can be abbreviated as μN; for
example, 1 micronewton can be written as 1 μN.
Learn more about micronewtons.
Nanonewton to Micronewton Conversion Table
Table showing various
nanonewton measurements
converted to micronewtons.
Nanonewtons Micronewtons
1 nN 0.001 μN
2 nN 0.002 μN
3 nN 0.003 μN
4 nN 0.004 μN
5 nN 0.005 μN
6 nN 0.006 μN
7 nN 0.007 μN
8 nN 0.008 μN
9 nN 0.009 μN
10 nN 0.01 μN
20 nN 0.02 μN
30 nN 0.03 μN
40 nN 0.04 μN
50 nN 0.05 μN
60 nN 0.06 μN
70 nN 0.07 μN
80 nN 0.08 μN
90 nN 0.09 μN
100 nN 0.1 μN
200 nN 0.2 μN
300 nN 0.3 μN
400 nN 0.4 μN
500 nN 0.5 μN
600 nN 0.6 μN
700 nN 0.7 μN
800 nN 0.8 μN
900 nN 0.9 μN
1,000 nN 1 μN
More Nanonewton & Micronewton Conversions | {"url":"https://www.inchcalculator.com/convert/nanonewton-to-micronewton/","timestamp":"2024-11-07T03:33:40Z","content_type":"text/html","content_length":"68450","record_id":"<urn:uuid:de68f89f-95e1-4aef-a6a5-f0c15ebb152b>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00685.warc.gz"} |
Timing Your Passage
C & D Canal - Timing Your Passage
Since most of us can appreciate a fair current push to help us along our way, how do we determine the best times to make the Chesapeake and Delaware Canal passage?
For those of you that are just looking for the numbers, we will get them out of the way now:
1. An eastbound vessel entering the canal at Back Creek, MD - should do so 3 minutes before "Slack Water Flood Begins" at the Chesapeake City, MD (Reference Station) in order to catch the very
beginning of a fair current push.
2. A westbound vessel entering the canal at Reedy Point, DE - should do so about 7 minutes before "Slack Water Ebb Begins" at the Reedy Point Tower, DE, (Sub-Station) to catch the very beginning of
a westbound fair current.
For those who would like a little more detail - Read On!
Determining When a Fair Current Begins
We will be using the NOAA Tidal Current Tables; you must correct for DST when necessary. For those that prefer you can use Eldridge. Simply look up the C & D canal in "Table #1" and extract the time
of "Slack Water Flood" (eastbound vessels) or "Slack Water Ebb" (westbound vessels) at the Chesapeake and Delaware Canal reference station, in this case Chesapeake City, MD. From "Table #2", extract
the time corrections for the proper substation, and apply this correction to determine the correct time for slack water.
For our example we will use the NOAA Current Tables and compute the times for the first favorable current of the day on January 1st.
C & D Canal Fair Current Timing
Eastbound Passage Corrections Slack Water Flood Begins
Chesapeake City - Reference Station (Table #1) 0052
"Back Creek" (Table #2 corrections) -0003
Slack Water Flood Begins @ "Back Creek" 0049
In the table above we are transiting the C & D Canal eastbound from the Upper Chesapeake to the Delaware River. So we need to find out what time we should be at the western entrance of the canal to
catch a fair current. We know that the current floods eastbound in the C & D so we will be looking for the first flood current of the day.
1. Enter Table #1 of the NOAA current tables for the C & D Canal and for the selected day extract the time of the first "Slack Water Flood Begins". In this case it would be (0052).
2. Now Enter Table #2 of the NOAA current tables for the C & D Canal and locate the sub-station at the western end of the canal, in this case "Back Creek," and extract the time correction for
"Minimum Before Flood" (-0003).
3. Apply the correction to the 0052 obtained from table #1 and you will have the time that slack water flood begins (0049) and a fair current push at the western entrance to the canal.
For a westbound passage we simply repeat the process using "Slack Water Ebb," since the current ebbs to the west, and use the sub-station at the eastern entrance to the canal, "Reedy Point". (See
C & D Canal Fair Current Timing
Westbound Passage Corrections Slack Water Ebb Begins
Chesapeake City - Reference Station ( Table #1) 0647
"Reedy Point Tower" (Table #2 corrections) -0007
Slack Water Ebb Begins @ "Reedy Point Tower" 0640
On average, a 7 knot vessel will have about a 3½ hour window from the time of slack water to make the passage and still have a fair current the length of the canal. A higher speed vessel will have
an even larger window.
How long will it take me to get through the canal?
There are 2 ways to approach this problem the "Right Way" and the "Other Way."
Let's take a look at the "Right Way"
Beginning at the entrance to the canal at a specific time, you must first compute the actual velocity of the current for that time.
1. Using the NOAA Tidal Current Tables, enter "Table 2" and locate your 1st station. Using the differences and ratios shown, calculate the times of Slack Water, Maximum Current Time (Flood or
Ebb),and Current Velocity for the times immediately preceding and following your ETA at that station
2. Determine the time interval between slack water and the maximum current for the time bracketing the time of your arrival.
3. Determine the time interval between slack water and the time of your arrival at the location above.
4. Enter "Table B" at the intersection of the closest times to those that you have determined in steps b. and c. and extract the current factor listed.
5. Apply this factor to the maximum current speed to arrive at the current velocity for the time you specified.
6. Apply this current velocity to your cruising speed and using T=D/S compute your ETA to the next station.
7. Using this ETA, repeat this process for the next station, and that ETA for the next until you have done the calculations for all 5 stations in the canal.
What about the "Other Way" you ask?
1. Determine the overall average current velocity for the canal.
2. Apply this current to your cruising speed.
3. Use T=D/S to compute the time required to transit the canal and your ETA.
So, What’s the Difference?
The "right way" will provide you with improved accuracy, but the 2 things to consider here are:
1. The tidal current predictions are only 90% accurate to within 30 minutes to begin with.
2. What happens if you miss your timing to the first waypoint and you have to start the whole process over again?
Hell, you might very well be through the canal by the time you redo all the calculations.
It may be just me, but I am seriously into "the other way!"
In an effort to keep it simple we have done some of the calculations for you (to 1 decimal place). Using the NOAA Tidal Current Tables #2 we extract the average maximum current velocity for the
Reference Station as well as all of the Sub-Stations located in the C & D Canal and then take the average.
C&D Canal Average Maximum Current Velocities
Current Station Ebb Current Avg. Maximum Flood Current Avg. Maximum
Back Creek 1.4 knots 1.2 knots
Chesapeake City (Ref. Sta.) 1.9 knots 2.1 knots
Chesapeake City Bridge 1.4 knots 2.0 knots
Conrail Bridge 1.3 knots 1.9 knots
St. Georges Bridge 1.3 knots 1.7 knots
Reedy Point Bridge* 2.1 knots 2.6 knots
Reedy Point Tower 1.4 knots 1.2 knots
Canal Average Ebb Current 1.5 knots Flood Current 1.9 knots
*Reported by Coast Pilot #3 Chapter 7
Keep in mind that this is the maximum average current speed, if you are of a more conservative nature you may want to modify this number and use that result for your calculations.
With the known distance of the C & D Canal (15.3 NM), the average current velocity, and our intended cruising speed, we can now calculate the time required to transit.
Your 10 knot cruising speed (± the average current velocity) gives you your estimated SOG. Then dividing the distance by your SOG will tell you how long it will take to make the passage through
the C & D.
So if we are westbound facing a foul current (floods east) for instance, simply take your cruising speed (10 knots) and subtract the average current speed (1.9 knots) giving you a SOG of 8.1 knots.
Dividing the distance (15.3 NM) by this SOG will result in the time required to pass through the C & D Canal; about 1 hour and 53 minutes. If you waited for a fair current the same passage would only
require 1 hour 19 minutes saving you 34 minutes of traveling time. A 7 knot boat would save almost 1¼ hours.
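The same arithmetic, spelled out (a rough sketch, not gospel; speeds are in knots and the distance in nautical miles, so distance divided by speed over ground gives hours directly):

distance_nm = 15.3      # length of the C & D Canal
cruise_kt = 10.0        # boat speed through the water
flood_kt = 1.9          # average maximum flood current (foul for a westbound boat)
ebb_kt = 1.5            # average maximum ebb current (fair for a westbound boat)

def transit_hours(current_kt):
    # current_kt is positive for a fair current, negative for a foul one
    return distance_nm / (cruise_kt + current_kt)

foul_h = transit_hours(-flood_kt)     # about 1.9 h bucking the flood
fair_h = transit_hours(+ebb_kt)       # about 1.3 h riding the ebb
print(round((foul_h - fair_h) * 60))  # roughly 34 minutes saved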
You’ve Made It This Far
For those that have stuck around this far and have read this whole thing, you deserve something for your perseverance. So here is an extra we hope you will find useful.
We have talked about the timing to get through the canal with a fair current. The problem is that very few people transit the C & D Canal just to get to the other end. Most seem to have some other
destination in mind. Don't get me wrong, I do not mean to imply that oil refineries, nuclear power plants, and spoil areas aren’t visually appealing, it just seems to me that most cruisers have
something else in mind.
Eastbound Vessels
For an eastbound 7 knot vessel bound north on the Delaware River you will want to enter the C & D canal at Back Creek about 1 hour and 16 minutes (±10 minutes) after slack water flood begins at
Chesapeake City. This will provide you with a fair current thru the canal and put you in the shipping channel of the Delaware River just as Slack water flood begins off of Reedy point. This should
give you about 6 hours of fair current inbound on the Delaware River.
Regretfully, there is no grand solution if you are southbound for the Delaware Capes or Cape May with a low powered vessel. You will have to decide whether to battle a foul current in the C & D or
fight the fight when you are outbound the Delaware. The obvious choice, at least to me, would be to take advantage of the 1 to 2 knot push in the Upper Delaware for the trip to the Capes. Even so, a
7 knot boat will likely run out of the fair current in the vicinity of Miah Maull Light. At 15 knots however, you should be able to make the entire transit to the capes with a fair current. To make
best use of this, a 7 knot vessel will want to enter the C & D eastbound approximately 37 minutes after slack water ebb begins at Chesapeake City or 2 hours and 56 minutes before slack water ebb
begins in the shipping channel off of Reedy Point.
Westbound Vessels
Vessels westbound for Baltimore should enter the C & D Canal approximately 7 Minutes before slack water ebb begins at Chesapeake City. A 10 knot boat will typically carry a fair current all the way
to Baltimore while a 7 knot vessel will likely lose the fair current in the vicinity of Pooles Island. | {"url":"https://www.offshoreblue.com/cruise/cd-timing.php","timestamp":"2024-11-04T05:27:10Z","content_type":"text/html","content_length":"62447","record_id":"<urn:uuid:d9359893-8061-4739-b23c-c92044ca39ea>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00479.warc.gz"} |
Don't Read if You are a PC Type Person
An old, blind cowboy wanders into an all-girl biker bar by mistake.
He finds his way to a bar stool and orders a shot of Jack Daniels.
After sitting there for a while, he yells to the bartender, 'Hey, you wanna hear a blonde joke?'
The bar immediately falls absolutely silent.
In a very deep, husky voice, the woman next to him says, 'Before you tell that joke, Cowboy, I think it is only fair, given that you are blind, that you should know five things: 1. The bartender is a blonde girl with a baseball bat. 2. The bouncer is a blonde girl. 3. I'm a 6-foot tall, 175-pound blonde woman with a black belt in karate. 4. The woman sitting next to me is blonde and a professional
weight lifter. 5. The lady to your right is blonde and a professional wrestler. Now, think about it seriously, Cowboy. Do you still wanna tell that blonde joke?' The blind cowboy thinks for a second,
shakes his head and mutters, 'No...not if I'm gonna have to explain it five times.'
Oh dear I must be having a particularly blonde day - I read through all that thinking "well, why should someone into computers not read this then?"!!
Diane - you said what I was thinking but wasn't game to say....although I have now.... have been furiously looking up what PC means on google....
theres no such thing as PC in oz. every just speaks their mind
Maybe not in Oz but on the forum............
Guest ReadyPenny
Diane - you said what I was thinking but wasn't game to say....although I have now.... have been furiously looking up what PC means on google....
That made my day!! There is always someone else thinking exactly the same thing as you, but usually only one person actually says it and it's usually me!!! Good one Diane
Aye Tyke:smile: I thought SA was all about working hard, being fun to be around and politically incorrect
"Politically Correct" - in the UK.
Thanks Tyke.....I didn't think "Police Clearance" quite fitted the content of what was said.
Aye Tyke:smile: I thought SA was all about working hard, being fun to be around and politically incorrect
A lot freer here than in the UK. Unfortunately some folk do bring this attitude and other bad personality traits with them (cynicism,snobbery and such).
We soon get em out of it.!!!!
I'm originally from the north of England and have the attitude typical of the area................. Upfront, hate formality and call it what it is..........
Good to know mate. I really need to loosen up. Part of choosing Australia over Canada was the sense of humor and easy-going lifestyle everyone kept telling me about.
lol Ali, I'll take u to the Rhino Room once u get here. I was there last night and some of the things the comedians said were very Un-PC! They are very easy going out in the streets, but in a comedy
club it is literally no holds barred!
Incidentally I was taking the piss out of my friend last purely cos he was Canadian.
Looks like you come from the wrong side of the Pennines........
"There is only one good thing in Lancashire............................... that's the road to Yorkshire"
You can always tell a Yorkshireman................. but not much.
Rich I hold you to your offer and believe me I can laugh
Looks like you come from the wrong side of the Pennines........
"There is only one good thing in Lancashire............................... that's the road to Yorkshire"
You can always tell a Yorkshireman................. but not much.
Sheffield born and bred
Oh, I'll let you off then. Your location says Manchester.......doing some missionary work there?? Says he from the colony of New Yorkshire ...........
• 1 month later...
Two Irish Men, A Blonde & a Flag Pole
Two men were standing at the base of a flagpole, looking up.
A blonde walks by and asked them what they were doing.
Patrick replied, 'We're supposed to be finding the height of this flagpole, but we don't have a ladder.'
The blonde took out an adjustable spanner from her bag, loosened a few bolts and laid the flagpole down.
She got a tape measure out of her pocket, took a few measurements, and announced that it was 18 feet 6 inches.
Then, she walked off.
Michael said to Patrick, 'Isn't that just like a blonde! We need the height, and she gives us the bloody length.'
This photo was taken in a Senior Center in Plymouth, Michigan.
The course was "How to Prevent Alzheimer's."
The Project of the Day was, "To keep your mind working, try to create something from memory."
It kinda brings a tear to your eye, doesn't it?
Guest Helchops | {"url":"https://www.pomsinadelaide.com/topic/28539-dont-read-if-you-are-a-pc-type-person/","timestamp":"2024-11-13T16:01:54Z","content_type":"text/html","content_length":"342371","record_id":"<urn:uuid:e77df1cd-7c08-45a3-92e9-cfb8da4c31fa>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00793.warc.gz"} |
d02haf (bvp_shoot_bval)
NAG FL Interface
d02haf (bvp_shoot_bval)
1 Purpose
d02haf solves a two-point boundary value problem for a system of ordinary differential equations, using a Runge–Kutta–Merson method and a Newton iteration in a shooting and matching technique.
2 Specification
Fortran Interface
Subroutine d02haf ( u, v, n, a, b, tol, fcn, soln, m1, w, sdw, ifail)
Integer, Intent (In) :: n, m1, sdw
Integer, Intent (Inout) :: ifail
Real (Kind=nag_wp), Intent (In) :: v(n,2), a, b, tol
Real (Kind=nag_wp), Intent (Inout) :: u(n,2)
Real (Kind=nag_wp), Intent (Out) :: soln(n,m1), w(n,sdw)
External :: fcn
C Header Interface
#include <nag.h>
void d02haf_ (double u[], const double v[], const Integer *n, const double *a, const double *b, const double *tol,
              void (NAG_CALL *fcn)(const double *x, const double y[], double f[]),
              double soln[], const Integer *m1, double w[], const Integer *sdw, Integer *ifail)
The routine may be called by the names d02haf or nagf_ode_bvp_shoot_bval.
3 Description
d02haf solves a two-point boundary value problem for a system of $n$ ordinary differential equations in the range $a\le x\le b$. The system is written in the form:
$y_i' = f_i(x, y_1, y_2, \dots, y_n), \quad i = 1, 2, \dots, n$   (1)
and the derivatives $f_i$ are evaluated by fcn. Initially, $n$ boundary values of the variables $y_i$ must be specified, some at $x=a$ and some at $x=b$. You must supply estimates of the remaining $n$
boundary values (called parameters below); the subroutine corrects these by a form of Newton iteration. It also calculates the complete solution on an equispaced mesh if required.
Starting from the known and estimated values of $y_i$ at $a$, the subroutine integrates the equations from $a$ to $b$ (using a Runge–Kutta–Merson method). The differences between the values of $y_i$ at $b$
from integration and those specified initially should be zero for the true solution. (These differences are called residuals below.) The subroutine uses a generalized Newton method to reduce the
residuals to zero, by calculating corrections to the estimated boundary values. This process is repeated iteratively until convergence is obtained, or until the routine can no longer reduce the
residuals. See Hall and Watt (1976) for a simple discussion of shooting and matching techniques.
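As an informal illustration of the shooting-and-matching idea described above (a generic Python/SciPy sketch of the technique, not the NAG algorithm: the toy problem, tolerances and starting guess are invented for the example, and SciPy's default integrator is used rather than a Runge–Kutta–Merson method):

    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import fsolve

    # Toy BVP: y'' = -y on [0, pi/2], with y(0) = 0 known and y(pi/2) = 1 required.
    # The unknown "parameter" is the initial slope y'(0); it is estimated and then
    # corrected by a Newton-type root finder until the residual at b vanishes.

    def rhs(x, z):                        # z[0] = y, z[1] = y'
        return [z[1], -z[0]]

    def residual(p):                      # shoot from a to b with guessed slope p
        sol = solve_ivp(rhs, (0.0, np.pi / 2), [0.0, p[0]], rtol=1e-8, atol=1e-10)
        return [sol.y[0, -1] - 1.0]       # mismatch against the required value at b

    slope = fsolve(residual, x0=[0.5])    # corrects the estimated boundary value
    print(slope)                          # ~1.0, since the exact solution is y = sin(x)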
4 References
Hall G and Watt J M (ed.) (1976) Modern Numerical Methods for Ordinary Differential Equations Clarendon Press, Oxford
5 Arguments
1: $\mathbf{u}\left({\mathbf{n}},2\right)$ – Real (Kind=nag_wp) array Input/Output
On entry: ${\mathbf{u}}\left(\mathit{i},1\right)$ must be set to the known or estimated value of ${y}_{\mathit{i}}$ at $a$ and ${\mathbf{u}}\left(\mathit{i},2\right)$ must be set to the known or
estimated value of ${y}_{\mathit{i}}$ at $b$, for $\mathit{i}=1,2,\dots ,\mathit{n}$.
On exit: the known values unaltered, and corrected values of the estimates, unless an error has occurred. If an error has occurred, u contains the known values and the latest values of the estimates.
2: $\mathbf{v}\left({\mathbf{n}},2\right)$ – Real (Kind=nag_wp) array Input
On entry: ${\mathbf{v}}\left(\mathit{i},\mathit{j}\right)$ must be set to $0.0$ if ${\mathbf{u}}\left(\mathit{i},\mathit{j}\right)$ is a known value and to $1.0$ if ${\mathbf{u}}\left(\mathit{i},
\mathit{j}\right)$ is an estimated value, for $\mathit{i}=1,2,\dots ,\mathit{n}$ and $\mathit{j}=1,2$.
Constraint: precisely $\mathit{n}$ of the ${\mathbf{v}}\left(i,j\right)$ must be set to $0.0$, i.e., precisely $\mathit{n}$ of the ${\mathbf{u}}\left(i,j\right)$ must be known values, and these
must not be all at $a$ or all at $b$.
3: $\mathbf{n}$ – Integer Input
On entry: $\mathit{n}$, the number of equations.
Constraint: ${\mathbf{n}}\ge 1$.
4: $\mathbf{a}$ – Real (Kind=nag_wp) Input
On entry: $a$, the initial point of the interval of integration.
5: $\mathbf{b}$ – Real (Kind=nag_wp) Input
On entry: $b$, the final point of the interval of integration.
6: $\mathbf{tol}$ – Real (Kind=nag_wp) Input
On entry: tol must be set to a small quantity suitable for:
(a) testing the local error in ${y}_{i}$ during integration,
(b) testing for the convergence of ${y}_{i}$ at $b$,
(c) calculating the perturbation in estimated boundary values for ${y}_{i}$, which are used to obtain the approximate derivatives of the residuals for use in the Newton iteration.
You are advised to check your results by varying tol.
Constraint: ${\mathbf{tol}}>0.0$.
7: $\mathbf{fcn}$ – Subroutine, supplied by the user. External Procedure
fcn must evaluate the functions $f_i$ (i.e., the derivatives ${y}_{i}^{\prime}$), for $i=1,2,\dots,n$, at a general point $x$.
The specification of fcn is:
Fortran Interface
Subroutine fcn ( x, y, f)
Real (Kind=nag_wp), Intent (In) :: x, y(*)
Real (Kind=nag_wp), Intent (Out) :: f(*)
C Header Interface
void fcn (const double *x, const double y[], double f[])
In the description of the arguments of fcn below, n denotes the actual value of n in the call of d02haf.
1: $\mathbf{x}$ – Real (Kind=nag_wp) Input
On entry: $x$, the value of the argument.
2: $\mathbf{y}\left(*\right)$ – Real (Kind=nag_wp) array Input
On entry: ${y}_{\mathit{i}}$, for $\mathit{i}=1,2,\dots ,\mathit{n}$, the value of the argument.
3: $\mathbf{f}\left(*\right)$ – Real (Kind=nag_wp) array Output
On exit: the values of ${f}_{\mathit{i}}\left(x\right)$, for $\mathit{i}=1,2,\dots ,\mathit{n}$.
fcn must either be a module subprogram USEd by, or declared as EXTERNAL in, the (sub)program from which d02haf is called. Arguments denoted as Input must not be changed by this procedure.
Note: fcn should not return floating-point NaN (Not a Number) or infinity values, since these are not handled by d02haf. If your code inadvertently does return any NaNs or infinities, d02haf is likely to produce unexpected results.
8: $\mathbf{soln}\left({\mathbf{n}},{\mathbf{m1}}\right)$ – Real (Kind=nag_wp) array Output
On exit: the solution when ${\mathbf{m1}}>1$.
9: $\mathbf{m1}$ – Integer Input
On entry: a value which controls output.
${\mathbf{m1}}=1$: the final solution is not evaluated.
${\mathbf{m1}}>1$: the final values of ${y}_{i}$ at intervals of $\left(b-a\right)/\left({\mathbf{m1}}-1\right)$ are calculated and stored in the array soln by columns, starting with the values of ${y}_{i}$ at $a$ stored in ${\mathbf{soln}}\left(i,1\right)$, for $i=1,2,\dots,n$.
Constraint: ${\mathbf{m1}}\ge 1$.
10: $\mathbf{w}\left({\mathbf{n}},{\mathbf{sdw}}\right)$ – Real (Kind=nag_wp) array Output
On exit: if ${\mathbf{ifail}}={\mathbf{2}}$, ${\mathbf{3}}$, ${\mathbf{4}}$ or ${\mathbf{5}}$, ${\mathbf{w}}\left(\mathit{i},1\right)$, for $\mathit{i}=1,2,\dots ,\mathit{n}$, contains the
solution at the point where the integration fails and the point of failure is returned in ${\mathbf{w}}\left(1,2\right)$.
11: $\mathbf{sdw}$ – Integer Input
On entry: the second dimension of the array w as declared in the (sub)program from which d02haf is called.
Constraint: ${\mathbf{sdw}}\ge 3{\mathbf{n}}+17+\mathrm{max}\left(11,{\mathbf{n}}\right)$.
12: $\mathbf{ifail}$ – Integer Input/Output
This routine uses an ifail input value codification that differs from the normal case to distinguish between errors and warnings (see Section 4 in the Introduction to the NAG Library FL Interface).
On entry: ifail must be set to one of the values below to set behaviour on detection of an error; these values have no effect when no error is detected. The behaviour relates to whether or not program execution
is halted and whether or not messages are printed when an error or warning is detected.
ifail   Execution   Error Printing   Warning Printed
  0     halted      No               No
  1     continue    No               No
 10     halted      Yes              No
 11     continue    Yes              No
100     halted      No               Yes
101     continue    No               Yes
110     halted      Yes              Yes
111     continue    Yes              Yes
For environments where it might be inappropriate to halt program execution when an error is detected, a value that allows execution to continue is recommended. If the printing of messages is undesirable, a value that suppresses printing is recommended. Otherwise, a value that prints both error and warning messages is recommended.
When the value $\mathbf{1}$, $\mathbf{11}$, $\mathbf{101}$ or $\mathbf{111}$ is used it is essential to test the value of ifail on exit.
On exit: ${\mathbf{ifail}}=0$ unless the routine detects an error or a warning has been flagged (see Section 6).
6 Error Indicators and Warnings
If on entry ifail is set so that error messages are printed, explanatory error messages are output on the current error message unit (as defined by x04aaf).
Errors or warnings detected by the routine:
On entry, incorrect number of boundary values were flagged as known.
Number flagged as known: $⟨\mathit{\text{value}}⟩$, but number should be $⟨\mathit{\text{value}}⟩$.
On entry, ${\mathbf{m1}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{m1}}\ge 1$.
On entry, ${\mathbf{n}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{n}}\ge 1$.
On entry, ${\mathbf{sdw}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{sdw}}\ge 3×{\mathbf{n}}+\mathrm{max}\left(28,{\mathbf{n}}+17\right)$; that is, $⟨\mathit{\text{value}}⟩$.
On entry, ${\mathbf{tol}}=⟨\mathit{\text{value}}⟩$.
Constraint: ${\mathbf{tol}}>0.0$.
On entry all left-hand boundary values were flagged as known.
On entry no left-hand boundary values were flagged as known.
In the integration with initial or final parameters, the step size was reduced too far for the integration to proceed. Either this routine is not a suitable method for solving the problem, or the
initial choice of parameters is very poor.
In the integration with initial or final parameters, a suitable initial step could not be found. Either this routine is not suitable for solving the problem, or the initial choice of parameters
is very poor.
An initial step-length could not be found for integration to proceed with the current parameters.
The step-length required to calculate the Jacobian to sufficient accuracy is too small.
The Jacobian has an insignificant column. Make sure that the solution vector depends on all the parameters.
An internal singular value decomposition has failed.
This error can be avoided by changing the initial parameter estimates.
The Newton iteration has failed to converge.
This can indicate a poor initial choice of parameters or a very difficult problem.
Consider varying elements of the parameter convergence control if the residuals are small; otherwise vary initial parameter estimates.
Internal error in calculating residual. Please contact NAG.
Internal error in calculating Jacobian. Please contact NAG.
Internal error in Newton method. Please contact NAG.
An unexpected error has been triggered by this routine. Please contact NAG. See Section 7 in the Introduction to the NAG Library FL Interface for further information.
Your licence key may have expired or may not have been installed correctly. See Section 8 in the Introduction to the NAG Library FL Interface for further information.
Dynamic memory allocation failed. See Section 9 in the Introduction to the NAG Library FL Interface for further information.
7 Accuracy
If the process converges, the accuracy to which the unknown parameters are determined is usually close to that specified by you; the solution, if requested, may be determined to a required accuracy
by varying tol.
8 Parallelism and Performance
Background information to multithreading can be found in the Multithreading documentation.
d02haf is not thread safe and should not be called from a multithreaded user program. Please see Section 1 in FL Interface Multithreading for more information on thread safety.
d02haf makes calls to BLAS and/or LAPACK routines, which may be threaded within the vendor library used by this implementation. Consult the documentation for the vendor library for further information.
Please consult the X06 Chapter Introduction for information on how to control and interrogate the OpenMP environment used within this routine. Please also consult the Users' Note for your implementation for any additional implementation-specific information.
9 Further Comments
The time taken by d02haf depends on the complexity of the system, and on the number of iterations required. In practice, integration of the differential equations is by far the most costly process.
Wherever it occurs in the routine, the error argument tol is used in ‘mixed’ form; that is, tol always occurs in expressions that combine an absolute and a relative test. Though not ideal for every application, it is expected that this mixture of absolute and relative error testing will be adequate for most purposes.
You are strongly recommended to set ifail so that messages are printed, to obtain self-explanatory error messages, and also monitoring information about the course of the computation. You may select the unit numbers on which this output is to appear by calls of x04aaf (for error messages) or x04abf (for monitoring information) – see Section 10 for an example. Otherwise the default unit numbers will be used, as specified in the Users' Note. The monitoring information produced at each iteration includes the current parameter values, the residuals and two norms: a basic norm and a current norm. At each iteration the aim is to find parameter values which make the current norm less than the basic norm. Both these norms should tend to zero as should the residuals. (They would all be zero if the exact parameters were used as input.) For more details, you may consult the specification of the more general shooting routines in Chapter D02, and especially the description of their monitoring arguments.
The computing time for integrating the differential equations can sometimes depend critically on the quality of the initial estimates. If it seems that too much computing time is required and, in
particular, if the values of the residuals printed by the monitoring routine are much larger than the expected values of the solution at b, then the coding of fcn should be checked for errors. If no errors can be found, an independent attempt should be made to improve the initial estimates. In practical problems it is not uncommon for the differential equation
to have a singular point at one or both ends of the range. Suppose a is a singular point; then the derivatives ${y}_{i}^{\prime }$ in (1) (see Section 3) cannot be evaluated at a, usually because one or more of the expressions for the $f_i$ give overflow. In such a case it is necessary for you to take the end point a a short distance away from the singularity, and to find values for the ${y}_{i}$ at the new value of a (e.g., use the first one or two terms of an analytical (power series) solution). You should experiment with the new position of a; if it is taken too close to the singular point, the derivatives will be inaccurate, and the routine may sometimes fail with a nonzero value of ifail or, in extreme cases, with an overflow condition. A more general treatment of singular solutions is provided by other routines in Chapter D02.
Another difficulty which often arises in practice is the case when one end of the range, b say, is at infinity. You must approximate the end point by taking a finite value for b, which is obtained by estimating where the solution will reach its asymptotic state. The estimate can be checked by repeating the calculation with a larger value of b. If b is very large, and if the matching point is also at b, the numerical solution may suffer a considerable loss of accuracy in integrating across the range, and the program may fail with a nonzero value of ifail. (In this situation, solutions from all initial values at a are tending to the same curve at infinity.) The simplest remedy is to try to solve the equations with a smaller value of b, and then to increase b in stages, using each solution to give boundary value estimates for the next calculation. For problems where some terms in the asymptotic form of the solution are known, a more general routine of this chapter will be more successful.
If the unknown quantities are not boundary values, but are eigenvalues or the length of the range or some other parameters occurring in the differential equations, d02hbf may be used.
10 Example
This example finds the angle at which a projectile must be fired for a given range.
The differential equations are:
$y' = \tan\varphi$,  $v' = -\dfrac{0.032\tan\varphi}{v} - \dfrac{0.02\,v}{\cos\varphi}$,  $\varphi' = -\dfrac{0.032}{v^{2}}$,
with the following boundary conditions:
$y=0$, $v=0.5$ at $x=0$;  $y=0$ at $x=5$.
The remaining boundary conditions are estimated as:
$\varphi=1.15$ at $x=0$;  $\varphi=1.2$, $v=0.46$ at $x=5$.
We write $y=\mathrm{Z}\left(1\right)$, $v=\mathrm{Z}\left(2\right)$ and $\varphi =\mathrm{Z}\left(3\right)$. To check the accuracy of the results the problem is solved twice, with two different values of tol. Note the call to x04abf (which selects the unit for monitoring output) before the call to d02haf.
10.1 Program Text
10.2 Program Data
10.3 Program Results | {"url":"https://support.nag.com/numeric/nl/nagdoc_latest/flhtml/d02/d02haf.html","timestamp":"2024-11-10T18:11:58Z","content_type":"text/html","content_length":"84593","record_id":"<urn:uuid:c5dc52aa-56b3-4a7a-8370-4922f325276a>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00792.warc.gz"} |
Science | Deep Forest Sciences
Medicine Discovery.
Inductive Priors
Scientific Foundation Model
Scientific foundation models distill information from freely available unlabeled data into rich priors that can be used in downstream scientific discovery tasks. These models are trained to
recapitulate known scientific information, much as a human scientist starts by learning basic facts and doing basic experiments. These trained representations are then leveraged to accelerate
progress on real world challenges where much data may not be available.
Differentiable Physics.
Symmetries & Conservation Laws
Inductive Priors
Differentiable Physical Theory
Differentiable physics applies the techniques of differentiable programming to model physical, chemical, biological, and engineering systems. Differentiable physics methods span high dimensional
partial and ordinary differential equation solution, deep representation learning, rich priors from scientific foundation models, and more. Deep Forest Sciences is a leader in the emerging field of
differentiable physics. | {"url":"https://deepforestsci.com/science","timestamp":"2024-11-14T02:26:49Z","content_type":"text/html","content_length":"109421","record_id":"<urn:uuid:a92a9c98-a093-4419-9ef5-7b3613ed4846>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00632.warc.gz"} |
A smooth transition from powerlessness to absolute power
We study the phase transition of the coalitional manipulation problem for generalized scoring rules. Previously it has been shown that, under some conditions on the distribution of votes, if the
number of manipulators is o(√n), where n is the number of voters, then the probability that a random profile is manipulable by the coalition goes to zero as the number of voters goes to infinity,
whereas if the number of manipulators is ω(√n), then the probability that a random profile is manipulable goes to one. Here we consider the critical window, where a coalition has size c√n, and we show
that as c goes from zero to infinity, the limiting probability that a random profile is manipulable goes from zero to one in a smooth fashion, i.e., there is a smooth phase transition between the two
regimes. This result analytically validates recent empirical results, and suggests that deciding the coalitional manipulation problem may be of limited computational hardness in practice.
Dive into the research topics of 'A smooth transition from powerlessness to absolute power'. Together they form a unique fingerprint. | {"url":"https://collaborate.princeton.edu/en/publications/a-smooth-transition-from-powerlessness-to-absolute-power","timestamp":"2024-11-14T19:07:10Z","content_type":"text/html","content_length":"49501","record_id":"<urn:uuid:5c753ef1-d7e7-41d6-9a6e-293f5a127d8c>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00374.warc.gz"} |
What would Borges tweet?
I was first exposed to Jorge Luis Borges in college, where I read most of the stories in Labyrinths instead of spending time on any of the other classwork I had that quarter. It was the first time I
realized you could write fiction about ideas. I was spellbound.
Among my favorite stories was the “The Library of Babel,” which conjures a fictional library that contains every possible permutation of a book that fits certain constraints (each and every book in
the library has 410 pages, and each page has a fixed number of characters, including spaces).
Because every possible permutation of the book exists in the library, some magical things result. As is the case when an infinity of monkeys bang away on typewriters for all eternity, most of the
books are pure gibberish. But (by definition) there also exist copies of every story ever written in every language in history; and likewise copies of this blog (come to think of it, if I could comb
the library easily, it might improve my posting rate).
“The Library of Babel” relies on at least one fancy trick of language — that a finite number of symbols and ideas can be assembled to represent concepts that are infinite (any linguists in the
audience care to comment?) It works because the universe Borges crafted is finite, but the atomic units (characters) can be assembled into something that feels infinite.
But why 410 pages and (say) 25 lines per page, and (just guessing here) 80 characters per line?
Why not update the idea in modern terms and reduce the universe to just 140 characters? In other words, what happens if we rethink Borges’ “Library” in terms of tweets?
Well, here’s what we can say more or less definitively:
• Twitter is pretty serious about the 140 character limit. So we have to stick with that.
• We have a limited number of “choices” as to what character can be put into each available slot in the tweet, but it's a little hard to figure out exactly how many choices we have. Twitter says
that it counts characters only after a tweet has been normalized to something called Normalization Form C. That’s all fine and well, but what does it really mean? Well, it looks like there are
109,975 graphic characters defined in Unicode, which is a lot.
• The total possible number of tweets is therefore 109,975^140 (easiest way to think about this is that there are 109,975 possibilities for the first slot and 109,975 for the second slot, which
means 109,975 * 109,975 possibilities for the first two characters alone; add another character slot and you multiply by 109,975 again; and 109,975 * 109,975 * 109,975 is 109,975^3; that means
for 140 characters, we get 109,975^140. More on permutations here)
This turns out to be a very big number — the library of Babel, Twitter style, has 6.042×10^705 possible tweets. For those keeping track, that's a number that is a little more than a 6 followed by
705 zeros.
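As a quick check on these figures (all of the inputs are the post's own numbers; nothing here is pulled from Twitter itself):

    import math

    alphabet = 109_975        # Unicode graphic characters assumed usable per slot
    slots = 140               # characters per tweet
    tweets_so_far = 29e9      # roughly 29 billion tweets to date, per the post

    digits = slots * math.log10(alphabet)      # ~705.8, i.e. about 6e705 possible tweets
    gap = digits - math.log10(tweets_so_far)   # ~695 orders of magnitude still to go
    print(f"possible tweets ~ 10^{digits:.1f}; shortfall ~ {gap:.0f} orders of magnitude")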
When you compare it to the number of tweets that have happened so far (a little more than 29 billion as of today, per Gigatweet's counter) it's clear we've got a ways to go.
But just how far do we have to go?
Well, if we exclude the fact that a lot of tweets are retweets, we discover something discouraging. 29 billion tweets can be written as 2.9×10^10. That’s a drop in the bucket compared to 10^705.
Actually, its much less than a drop in the bucket — there is no bucket large enough nor drop small enough that would make the metaphor fit. Even the difference between the smallest theoretical
distance (the Planck length, or 1×10^-35 meters) and the estimated diameter of the universe (8.8×10^26 meters) is a mere 61 orders of magnitude, whereas the difference between tweets to date and the
number of tweets we’d need for all possible tweets to be . . . er . . . tweeted is 695 orders of magnitude.
So I’m afraid that the only way that we can hope for a Twitter library of Babel is for the rate of tweets to continue to rise exponentially — maybe someone wants to take a crack at figuring out how
long it would take if the tweet rate continued to grow exponentially? In the meantime, keep tweeting . . .
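For anyone tempted by that last question, here is a back-of-the-envelope sketch; the doubling time is a pure assumption chosen for illustration, not a real Twitter growth figure:

    import math

    target_log10 = 140 * math.log10(109_975)   # log10 of all possible tweets (~705.8)
    current_log10 = math.log10(29e9)           # log10 of tweets so far, per the post
    doubling_years = 1.0                       # assume the cumulative count doubles yearly

    # Each doubling adds log10(2) ~ 0.30 orders of magnitude to the cumulative count.
    years = (target_log10 - current_log10) * doubling_years / math.log10(2)
    print(f"~{years:.0f} years of annual doubling to exhaust every possible tweet")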
1 thought on “What would Borges tweet?”
1. I do not know which of us has written this tweet. | {"url":"http://www.johngirard.com/what-would-borges-tweet/","timestamp":"2024-11-07T21:44:22Z","content_type":"text/html","content_length":"15726","record_id":"<urn:uuid:9162619d-b85e-47d5-b6b7-857191e42863>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00644.warc.gz"} |
Multiplication and Division
These activities are part of our Primary collections, which are problems grouped by topic.
Find a great variety of ways of asking questions which make 8.
How would you find out how many football cards Catrina has collected?
Follow the clues to find the mystery number.
Yasmin and Zach have some bears to share. Which numbers of bears can they share so that there are none left over?
Can you work out how to make each side of this balance equally balanced? You can put more than one weight on a hook.
If you count from 1 to 20 and clap more loudly on the numbers in the two times table, as well as saying those numbers loudly, which numbers will be loud?
Ben and his mum are planting garlic. Can you find out how many cloves of garlic they might have had?
Help share out the biscuits the children have made.
This activity is best done with a whole class or in a large group. Can you match the cards? What happens when you add pairs of the numbers together?
These spinners will give you the tens and unit digits of a number. Can you choose sets of numbers to collect so that you spin six numbers belonging to your sets in as few spins as possible?
"Ip dip sky blue! Who's 'it'? It's you!" Where would you position yourself so that you are 'it' if there are two players? Three players ...?
This investigates one particular property of number by looking closely at an example of adding two odd numbers together.
This activity focuses on doubling multiples of five.
Throw the dice and decide whether to double or halve the number. Will you be the first to reach the target?
Are these statements relating to odd and even numbers always true, sometimes true or never true?
It's Sahila's birthday and she is having a party. How could you answer these questions using a picture, with things, with numbers or symbols?
How will you work out which numbers have been used to create this multiplication square?
Choose a symbol to put into the number sentence.
On Friday the magic plant was only 2 centimetres tall. Every day it doubled its height. How tall was it on Monday?
Four bags contain a large number of 1s, 3s, 5s and 7s. Can you pick any ten numbers from the bags so that their total is 37?
Choose four of the numbers from 1 to 9 to put in the squares so that the differences between joined squares are odd.
This problem challenges you to find out how many odd numbers there are between pairs of numbers. Can you find a pair of numbers that has four odds between them?
Can you work out how many flowers there will be on the Amazing Splitting Plant after it has been growing for six weeks?
At the beginning of May, Tom put his tomato plant outside. On the same day he sowed a bean in another pot. When will the two be the same height?
Frances and Rishi were given a bag of lollies. They shared them out evenly and had one left over. How many lollies could there have been in the bag?
If there are 3 squares in the ring, can you place three different numbers in them so that their differences are odd? Try with different numbers of squares around the ring. What do you notice?
This problem looks at how one example of your choice can show something about the general structure of multiplication. | {"url":"https://nrich.maths.org/multiplication-and-division-0","timestamp":"2024-11-13T17:48:26Z","content_type":"text/html","content_length":"90510","record_id":"<urn:uuid:834be902-30cd-4aad-b242-6754df046b39>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00615.warc.gz"} |
Gelfand-Naimark-Segal construction
This entry used to refer to “Ghez, Lima and Roberts” without more details. I have added pointer to:
• P. Ghez, R. Lima, John E. Roberts, Prop. 1.9 in: $W^\ast$-categories, Pacific J. Math. 120 1 (1985) 79-109 [euclid:pjm/1102703884]
diff, v28, current
Namely you added a link to Functorial Aspects of the GNS Representation (with its own nForum thread now here).
It’s not clear to me yet that this deserves a separate entry. It looks a lot like material for a subsection.
Create link request and reference to yet-to-be-created nLab page
Tom Mainiero
diff, v24, current
added a bunch of textbook references:
diff, v23, current
added some indication of the actual construction, below the statement of the theorem.
(This might deserve to be re-organized entirely, but I don’t have energy for this now.)
diff, v23, current
I have expanded the proof of the standard GNS construction (here), making explicit the use(s) of the Cauchy-Schwarz inequality.
Also I added the assumption that the given state sends the star-involution to complex conjugation, which is needed to make the inner product on the resulting vector space be Hermitian.
diff, v31, current | {"url":"https://nforum.ncatlab.org/discussion/12829/gelfandnaimarksegal-construction/?Focus=92148","timestamp":"2024-11-09T10:40:15Z","content_type":"application/xhtml+xml","content_length":"50221","record_id":"<urn:uuid:141f4bcd-d757-4ab4-a008-c4bb7f6d38a1>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00896.warc.gz"} |
Time duration with days
In the example shown, the goal is to enter a valid time based on days, hours, and minutes, then display the result as total hours.
The key is to understand that time in Excel is just a number. 1 day = 24 hours, and 1 hour = 0.04167 (1/24). That means 12 hours = 0.5, 6 hours = 0.25, and so on. Because time is just a number, you
can add time to days and display the result using a custom number format, or with your own formula, as explained below.
In the example shown, the formula in cell F5 is:
=B5+TIME(C5,D5,0)
On the right side of the formula, the TIME function is used to assemble a valid time from its component parts (hours, minutes, seconds). Hours come from column C, minutes from column D, and seconds
are hardcoded as zero. TIME returns 0.5, since 12 hours equals one half day:
TIME(12,0,0) // returns 0.5
With the number 1 (the day count) in B5, we can simplify the formula to:
=1+0.5
which returns 1.5 as a final result. To display this result as total hours, a custom number format is used:
[h]:mm
The square brackets tell Excel to display hours over 24, since by default Excel will reset to zero at each 24 hour interval (like a clock). The result is a time like "36:00", since 1.5 is a day and a half, or 36 hours.
The formula in G5 simply points back to F5:
=F5
The custom number format used to display a result like "1d 12h 0m" is:
d"d" h"h" m"m"
More than 31 days
Using "d" to display days in a custom number format works fine up to 31 days. However, after 31 days, Excel will reset days to zero. This does not affect hours, which will continue to display
properly with the number format [h].
Unfortunately, a custom number format like [d] is not supported. However, in this example, since days, hours, and minutes are already broken out separately, you can write your own formula to display
days, minutes, and hours like this:
=B5&"d "&C5&"h "&D5&"m"
This is an example of concatenation. We are simply embedding all three numeric values into single text string, joined together with the ampersand (&) operator.
If you want to display an existing time value as a text string, you can use a formula like this:
=INT(A1)&" days "&TEXT(A1,"h"" hrs ""m"" mins """)
where A1 contains time. The INT function simply returns the integer portion of the number (days). The TEXT function is used to format hours and minutes. | {"url":"https://exceljet.net/formulas/time-duration-with-days","timestamp":"2024-11-09T10:34:30Z","content_type":"text/html","content_length":"53217","record_id":"<urn:uuid:09529dd3-11dc-4eea-95f9-d28dda87c519>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00153.warc.gz"} |
Equals column lineage | Dwh.dev documentation
We already have great functionality for displaying relationships from JOIN and WHERE clauses.
But that's not all.
Remember what we learned in school? If we know that x = y, we can substitute one for the other anywhere. Right? Now, take a look at this query:
SELECT T1.C1 FROM
T1 JOIN T2 ON T1.C1 = T2.C3
We get T1.C1 as the lineage result, correct?
But T1.C1 = T2.C3, which means this query is equivalent to:
SELECT T2.C3 FROM
T1 JOIN T2 ON T1.C1 = T2.C3
See what's happening here? The lineage of the upstream column T2.C3 is hidden from your view!
Have you ever encountered a tool that reveals this to you? Sure, you'll see that there's a dependency on table T2 at the object level. But no details. Good luck debugging that!
It gets even worse! If x = y and y = z, then x = z. Right?
SELECT C1 FROM
JOIN T2 ON C1 = C3
JOIN T3 ON C3 = C5;
You get the point…
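As an illustration of how a lineage tool might propagate such equalities (a generic sketch of the idea, not Dwh.dev's actual implementation), grouping the columns connected by = is just a connected-components problem, for example with a tiny union-find:

    # Minimal union-find over columns joined by equality predicates.
    # Columns in the same component share lineage through the equalities.
    parent = {}

    def find(col):
        parent.setdefault(col, col)
        while parent[col] != col:
            parent[col] = parent[parent[col]]  # path compression
            col = parent[col]
        return col

    def union(a, b):
        parent[find(a)] = find(b)

    # Equalities from: JOIN T2 ON C1 = C3  and  JOIN T3 ON C3 = C5
    union("T1.C1", "T2.C3")
    union("T2.C3", "T3.C5")

    # Selecting T1.C1 therefore also touches T2.C3 and T3.C5:
    cols = ["T1.C1", "T2.C3", "T3.C5"]
    print({c for c in cols if find(c) == find("T1.C1")})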
All of this becomes even more complicated when you add mathematical operations, function calls, type conversions, unions…
Can we see additional data lineage generated by the equality conditions in JOIN and WHERE?
Sure! Here's what the full data lineage would look like for the examples above: | {"url":"https://docs.dwh.dev/features/data-lineage/equals-column-lineage","timestamp":"2024-11-07T17:01:22Z","content_type":"text/html","content_length":"206660","record_id":"<urn:uuid:4b109804-0427-45e1-a4b6-fb644f3ebce6>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00059.warc.gz"} |
General settings
Available Settings
A set of options is available in order to customize the behaviour of ydata-profiling and the appearance of the generated report. The depth of customization allows the creation of behaviours highly
targeted at the specific dataset being analysed. The available settings are listed below. To learn how to change them, see the Changing settings documentation.
General settings
Global report settings:
Parameter Type Default Description
title string Pandas Profiling Report Title for the report, shown in the header and title bar.
pool_size integer 0 Number of workers in thread pool. When set to zero, it is set to the number of CPUs available.
progress_bar boolean True If True, pandas-profiling will display a progress bar.
Variable summary settings
Settings related with the information displayed for each variable.
Parameter Type Default Description
sort None, asc or desc None Sort the variables asc(ending), desc(ending) or None (leaves original sorting).
variables.descriptions dict {} Ability to display a description alongside the descriptive statistics of each variable ({'var_name': 'Description'}).
vars.num.quantiles list[float] [0.05,0.25,0.5,0.75,0.95] The quantiles to calculate. Note that .25, .5 and .75 are required for the computation of other metrics (median and IQR).
vars.num.skewness_threshold integer 20 Warn if the skewness is above this threshold.
vars.num.low_categorical_threshold integer 5 If the number of distinct values is smaller than this number, then the series is considered to be categorical. Set to 0
to disable.
vars.num.chi_squared_threshold float 0.999 Set to 0 to disable chi-squared calculation.
vars.cat.length boolean True Check the string length and aggregate values (min, max, mean, media).
vars.cat.characters boolean False Check the distribution of characters and their Unicode properties. Often informative, but may be computationally
vars.cat.words boolean False Check the distribution of words. Often informative, but may be computationally expensive.
vars.cat.cardinality_threshold integer 50 Warn if the number of distinct values is above this threshold.
vars.cat.imbalance_threshold float 0.5 Warn if the imbalance score is above this threshold.
vars.cat.n_obs integer 5 Display this number of observations.
vars.cat.chi_squared_threshold float 0.999 Same as above, but for categorical variables.
vars.bool.n_obs integer 3 Same as above, but for boolean variables.
vars.bool.imbalance_threshold float 0.5 Warn if the imbalance score is above this threshold.
Configuration example
profile = df.profile_report(
    vars={
        "num": {"low_categorical_threshold": 0},
        "cat": {
            "length": True,
            "characters": False,
            "words": False,
            "n_obs": 5,
        },
    }
)

profile.config.variables.descriptions = {
    "files": "Files in the filesystem",
    "datec": "Creation date",
    "datem": "Modification date",
}
Setting dataset schema type
Configure the schema type for a given dataset.
Set the variable type schema and generate the profile report:
import json
import pandas as pd
from ydata_profiling import ProfileReport
from ydata_profiling.utils.cache import cache_file
file_name = cache_file(
df = pd.read_csv(file_name)
type_schema = {"Survived": "categorical", "Embarked": "categorical"}
# We can set the type_schema only for the variables that we are certain of their types.
# All the other will be automatically inferred.
report = ProfileReport(df, title="Titanic EDA", type_schema=type_schema)
Missing data overview plots
Settings related with the missing data section and the visualizations it can include.
Parameter Type Default Description
missing_diagrams.bar boolean True Display a bar chart with counts of missing values for each column.
missing_diagrams.matrix boolean True Display a matrix of missing values. Similar to the bar chart, but might provide overview of the co-occurrence of missing values in rows.
missing_diagrams.heatmap boolean True Display a heatmap of missing values, that measures nullity correlation (i.e. how strongly the presence or absence of one variable affects the presence of
Configuration example: disable heatmap for large datasets
profile = df.profile_report(
    missing_diagrams={
        "heatmap": False,
    }
)
profile.to_file("report.html")
Settings regarding correlation metrics and thresholds.
The default value is auto, but the following correlation matrices are available:
Parameter Description
auto Calculates the column pairwise correlation depending on the type schema:
- numerical to numerical variable: Spearman correlation coefficient
- categorical to categorical variable: Cramer's V association coefficient
- numerical to categorical: Cramer's V association coefficient with the numerical variable discretized automatically
spearman Spearman's correlation measures the strength and direction of monotonic association between two variables. Great to evaluate the strength of the relation between categorical or ordinal
pearson The Pearson correlation coefficient is the most common way of measuring a linear correlation. It is a number between –1 and 1 that measures the strength and direction of the relationship
between two variables.
kendall Kendall rank correlation coefficient is a statistic used to measure the ordinal association between two measured quantities. Kendall's is often used when data doesn't meet one of the
requirements of Pearson's correlation.
phi_k Phi K is especially suitable for working with mixed-type variables. Using this coefficient we can find (un)expected correlation and evaluate their statistical significance.
cramers Cramers is a correlation matrix that is commonly used to examine the association between categorical variables when there is more than 2x2 contingency.
For each correlation matrix you can use the following configurations:
Parameter Type Default Description
correlations.auto.calculate boolean True Whether to compute 'auto' correlation
correlations.auto.warn_high_correlations boolean True Show warning for correlations higher than the threshold
correlations.auto.threshold float 0.9 Warning threshold
correlations.pearson.calculate boolean False Whether to calculate Pearson correlation
correlations.pearson.warn_high_correlations boolean True Show warning for correlations higher than the threshold
correlations.pearson.threshold float 0.9 Warning threshold
correlations.spearman.calculate boolean False Whether to calculate Spearman correlation
correlations.spearman.warn_high_correlations boolean False Show warning for correlations higher than the threshold
correlations.spearman.threshold float 0.9 Warning threshold
correlations.kendall.calculate boolean False Whether to calculate Kendall rank correlation
correlations.kendall.warn_high_correlations boolean False Show warning for correlations higher than the threshold
correlations.kendall.threshold float 0.9 Warning threshold
correlations.phi_k.calculate boolean False Whether to calculate Phi K correlation
correlations.phi_k.warn_high_correlations boolean False Show warning for correlations higher than the threshold
correlations.phi_k.threshold float 0.9 Warning threshold
correlations.cramers.calculate boolean False Whether to calculate Cramer's V association coefficient
correlations.cramers.warn_high_correlations boolean True Show warning for correlations higher than the threshold
correlations.cramers.threshold float 0.9 Warning threshold
For instance, to disable all correlation computations (might be relevant for large datasets):
Disabling all correlation matrices
profile = df.profile_report(
    title="Report without correlations",
    correlations={
        "auto": {"calculate": False},
        "pearson": {"calculate": False},
        "spearman": {"calculate": False},
        "kendall": {"calculate": False},
        "phi_k": {"calculate": False},
        "cramers": {"calculate": False},
    },
)

# or using a shorthand that is available for correlations
profile = df.profile_report(
    title="Report without correlations",
    correlations=None,
)
Settings related with the interactions section.
Parameter Type Default Description
interactions.continuous boolean True Generate a 2D scatter plot (or hexagonal binned plot) for all continuous variable pairs.
interactions.targets list [] When a list of variable names is given, only interactions between these and all other variables are computed.
Report's appearance
Settings related with the appearance and style of the report.
Parameter Type Default Description
html.minify_html bool True If True, the output HTML is minified using the htmlmin package.
html.use_local_assets bool True If True, all assets (stylesheets, scripts, images) are stored locally. If False, a CDN is used for some stylesheets and scripts.
html.inline boolean True If True, all assets are contained in the report. If False, then a web export is created, where all assets are stored in the '[REPORT_NAME]_assets/' directory.
html.navbar_show boolean True Whether to include a navigation bar in the report
html.style.theme string None Select a bootswatch theme. Available options: flatly (dark) and united (orange)
html.style.logo string nan A base64 encoded logo, to display in the navigation bar.
html.style.primary_color string #337ab7 The primary color to use in the report.
html.style.full_width boolean False By default, the width of the report is fixed. If set to True, the full width of the screen is used. | {"url":"https://docs.profiling.ydata.ai/4.5/advanced_settings/available_settings/","timestamp":"2024-11-08T04:28:42Z","content_type":"text/html","content_length":"77039","record_id":"<urn:uuid:a57d1a25-a76c-43fc-9215-ae5bd4f12d68>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00078.warc.gz"} |
ECCC - Joao Ribeiro
All reports by Author Joao Ribeiro:
TR24-093 | 16th May 2024
Omar Alrabiah, Jesse Goodman, Jonathan Mosheiff, Joao Ribeiro
Low-Degree Polynomials Are Good Extractors
We prove that random low-degree polynomials (over $\mathbb{F}_2$) are unbiased, in an extremely general sense. That is, we show that random low-degree polynomials are good randomness extractors for a
wide class of distributions. Prior to our work, such results were only known for the small families of (1) uniform sources, ... more >>>
TR22-156 | 15th November 2022
Huck Bennett, Mahdi Cheraghchi, Venkatesan Guruswami, Joao Ribeiro
Parameterized Inapproximability of the Minimum Distance Problem over all Fields and the Shortest Vector Problem in all $\ell_p$ Norms
Revisions: 2
We prove that the Minimum Distance Problem (MDP) on linear codes over any fixed finite field and parameterized by the input distance bound is W[1]-hard to approximate within any constant factor. We
also prove analogous results for the parameterized Shortest Vector Problem (SVP) on integer lattices. Specifically, we prove that ... more >>>
TR21-090 | 14th June 2021
Divesh Aggarwal, Eldon Chung, Maciej Obremski, Joao Ribeiro
On Secret Sharing, Randomness, and Random-less Reductions for Secret Sharing
Secret-sharing is one of the most basic and oldest primitives in cryptography, introduced by Shamir and Blakely in the 70s. It allows to strike a meaningful balance between availability and
confidentiality of secret information. It has a host of applications most notably in threshold cryptography and multi-party computation. All known ... more >>>
TR19-173 | 28th November 2019
Divesh Aggarwal, Siyao Guo, Maciej Obremski, Joao Ribeiro, Noah Stephens-Davidowitz
Extractor Lower Bounds, Revisited
Revisions: 1
We revisit the fundamental problem of determining seed length lower bounds for strong extractors and natural variants thereof. These variants stem from a ``change in quantifiers'' over the seeds of
the extractor: While a strong extractor requires that the average output bias (over all seeds) is small for all input ... more >>> | {"url":"https://eccc.weizmann.ac.il/author/1273/","timestamp":"2024-11-14T04:44:50Z","content_type":"application/xhtml+xml","content_length":"21325","record_id":"<urn:uuid:2eaa7b02-efcd-442e-8e16-ed8bc72d4f84>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00665.warc.gz"} |
pbeq (c41b1)
Poisson-Bolztmann Equation Module
The PBEQ module allows the setting up and the numerical solution of
the Poisson-Boltzmann equation on a discretized grid for a solute molecule.
Attention: Problems should be reported to
. Benoit Roux at Benoit.Roux@med.cornell.edu, phone (212) 746-6018
. Wonpil Im at Wonpil.Im@cornell.edu
. Dmitrii Beglov at beglovd@moldyn.com
| Syntax of the PBEQ commands
| Purpose of each of the commands
| Usage examples of the PBEQ module
[SYNTAX PBEQ functions]
PBEQ enter the PBEQ module
END exit the PBEQ module
SOLVe PB-theory-specifications
solver-specifications grid-specifications
iteration-specifications charge interpolation-spec.
boundary potential-spec. dielectric boundary-spec.
physical variable-spec. membrane-specifications
spherical droplet-spec. orthorhombic box-spec.
cylinder-specifications solvation force-spec.
ITERate PB-theory-specifications solver-specifications
ENPB [INTE atoms-selection]
WRITE property [[CARD] [write-range]] [UNIT integer]
READ [PHI] [PHIX] [FKAP] [MIJ] [UNIT integer]
COOR coordinate-manipulation-command
SCALar scalar-manipulation-command
PBAVerage [PHI] [ATOM atom-selection] [UPDATE] [units]
PB-theory-specifications::= [NONLinear] [PARTlinear]
default : linear PB by default (no need to specify)
NONLin [.FALSE.] : non-linear PBEQ solver
PARTlin [.FALSE.] : partially linearized PBEQ solver
[OSOR] [UNDER] [[FMGR] [NCYC integer] [NPRE integer] [NPOS integer]]
default : SOR (Successive OverRelaxation) method for linearized PB
OLDPB [.FALSE.] : old PBEQ solver (used in c26a2)
OSOR [.FALSE.] : optimization of the over-relaxation parameter
UNDER [.FALSE.] : Under-relaxation for non-linear and partially linearized
PBEQ solvers with fixed LAMBda value
FMGR [.FALSE.] : full multigrid method
NCYC [100] : maximum number of cycles (in FMGR)
NPRE [2] : number of relaxation for PRE-smoothing (in FMGR)
NPOS [2] : number of relaxation for POST-smoothing (in FMGR)
grid-specifications::= [NCEL integer] [DCEL real]
[NCLX integer] [NCLY integer] [NCLZ integer]
[XBCEN real] [YBCEN real] [ZBCEN real]
NCEL [65] : number of grid point in 1D for a cubic
DCEL [0.1] : size of grid unit cell
NCLX [NCEL] : number of grid point in X for general parallelepiped
NCLY [NCEL] : number of grid point in Y for general parallelepiped
NCLZ [NCEL] : number of grid point in Z for general parallelepiped
XBCEN [0.0] : the center of a box in X
YBCEN [0.0] : the center of a box in Y
ZBCEN [0.0] : the center of a box in Z
iteration-specifications::=[MAXIter integer] [DEPS real]
[DOMEga real] [LAMBda real] [KEEPphi]
MAXIter [2000] : number of iterations
DEPS [0.000002] : parameter (tolerance) of convergence
DOMEga [1.0] : initial mixing factor
LAMBda [1.0] : initial mixing factor (LAMBda = DOMEga)
KEEPphi [.FALSE.] : Use the potential from previous calculation
as an initial guess for the current calculation
charge interpolation-spec.::= [BSPLine]
default : the trilinear interpolation method
BSPLine [.FALSE.] : the Cardinal B-spline method is used?
boundary potential-specifications::= [ZERO] [INTBP] [FOCUS] [PBC] [NPBC]
[NIMGB integer]
default : use the Debye-Huckel approximation at each boundary point
use XY periodic boundary conditions in membrane
INTBP [.FALSE.] : INTerpolation of Boundary Potential is used?
ZERO [.FALSE.] : boundary potential is set to ZERO ?
(metallic conductor boundary conditions)
FOCUS [.FALSE.] : previous potential is used to set up boundary potential?
PBC [.FALSE.] : 3d periodic boundary condition
NPBC [.FALSE.] : supress XY periodic boundary conditions in membrane
NIMGB [0] : use the image atoms for boundary potential
in membrane calculation
(NIMGB=1 means the 8 nearest image cells)
(NIMGB=2 means the 24 nearest image cells, i.e.,
2 shells of images)
dielectric boundary-specifications::= [SMOOTH] [SWIN real] [REEN]
default : the vdW surface is used for the dielectric boundary
SMOOth [.FALSE.] : invoke smoothing dielectric boundary
SWIN [0.5] : solute-solvent dielectric boundary Smoothing WINdow
REEN [.FALSE.] : the molecular (contact+reentrant) surface is created
with WATRadius for the dielectric boundary
physical variable-specifications::= [EPSW real] [EPSP real]
[WATR real] [IONR real]
[CONC real] [TEMP real]
EPSW [80.0] : bulk solvent dielectric constant
EPSP [1.0] : protein interior dielectric constant
WATR [0.0] : solvent probe radius
IONR [0.0] : ion exclusion radius (Stern layer)
CONC [0.0] : salt concentration [moles/liter]
TEMP [300.0] : Temperature [K]
membrane-specifications:: [TMEMb real] [HTMEmb real] [ZMEMb real] [EPSM real]
[EPSH real] [VMEMB real]
TMEMB [0.0] : thickness of membrane (along Z)
HTMEMB [0.0] : thickness of headgroup region
ZMEMB [0.0] : membrane position (along Z)
EPSM [1.0] : membrane dielectric constant
EPSH [EPSM] : membrane headgroup dielectric constant (optional)
VMEMB [0.0] : potential difference across membrane (entered in [volts])
spherical droplet-spec.::= [DROPlet real] [EPSD real]
[XDROplet real] [YDROplet real] [ZDROplet real]
[DTOM] [DKAP]
DROPlet [0.0] : radius of spherical droplet
EPSD [1.0] : dielectric constant of spherical droplet
XDROp [0.0] : position of spherical droplet in X
YDROp [0.0] : position of spherical droplet in Y
ZDROp [0.0] : position of spherical droplet in Z
DTOM [.FALSE.] : the dielectric constant of the overlapped region
with membrane is set to EPSM ?
DKAP [.FALSE.] : the Debye-Huckel factor inside sphere is set to KAPPA ?
orthorhombic box-spec.::= [LXMAx real] [LYMAx real] [LZMAx real]
[LXMIn real] [LYMIn real] [LZMIn real]
[BTOM] [BKAP]
LXMAx [0.0] : maximum position of a box along X-axis
LYMAx [0.0] : maximum position of a box along Y-axis
LZMAx [0.0] : maximum position of a box along Z-axis
LXMIn [0.0] : minimum position of a box along X-axis
LYMIn [0.0] : minimum position of a box along Y-axis
LZMIn [0.0] : minimum position of a box along Z-axis
EPSB [1.0] : dielectric constant inside box
BTOM [.FALSE.] : the dielectric constant of the overlapped region
with membrane is set to EPSM ?
BKAP [.FALSE.] : the Debye-Huckel factor inside box is set to KAPPA?
cylinder-specifications::= [RCYLN real] [HCYLN real] [EPSC real]
[XCYLN real] [YCYLN real] [ZCYLN real]
[CTOM] [CKAP]
RCYLN [0.0] : radius of cylinder
HCYLN [0.0] : height of cylinder
EPSC [1.0] : dielectric constant inside cylinder
XCYLN [0.0] : position of cylinder in X
YCYLN [0.0] : position of cylinder in Y
ZCYLN [0.0] : position of cylinder in Z
CTOM [.FALSE.] : the dielectric constant of the overlapped region
with membrane is set to EPSM ?
CKAP [.FALSE.] : the Debye-Huckel factor inside cylinder is set to KAPPA?
solvation force-spec.::= [FORCE] [STEN real] [NPBEQ integer]
FORCe [.FALSE.] : invoke solvation force calculation
STEN [0.0] : surface tension coefficient (in kcal/mol/A^2)
NPBEQ [1] : the frequency for calculating solvation forces
during minimizations and MD simulations
EPSU [-1] : unit to read given epsilon grid from
xval yval zval epsx epsy epsz
EPSG [-1] : unit to read given epsilon grid from
nx ny nz
xmin ymin zmin
dx dy dz
epsx epsy epsz
write-range::= [XFIRST real] [YFIRST real] [ZFIRST real]
[XLAST real] [YLAST real] [ZLAST real]
property::= [[PHI] [KCAL] [VOLTS]] [[PHIX] [KCAL] [VOLTS]]
[FKAPPA2] [CHRG] [EPSX] [EPSY] [EPSZ] [MIJ] [TITLE]
PHI : electrostatic potential [ KCAL/MOL ] [ VOLTS ]
(default [UNIT CHARGE]/[ANGS])
PHIX : external static electrostatic Potential [ KCAL/MOL ] [ VOLTS ]
(default [UNIT CHARGE]/[ANGS])
FKAPPA2 : Debye screening factor
CHRG : charges on the lattice
EPSX : X sets of dielectric constant
EPSY : Y sets of dielectric constant
EPSZ : Z sets of dielectric constant
MIJ : MIJ matrix
TITLE : formatted title line
atoms-selection::= a selection of a group of atoms
General discussion regarding the PBEQ module
1. SOLVE
Prepare grids and solve PB equation for the selected atoms and return the
electrostatic free energy in ?enpb = (1/2)*Sum Q_i PHI_i over the lattice.
The factor of 1/2 is there for the linear response free energy of charging.
The atomic contributions are returned in WMAIN (destroying the radii).
NOTE: At the first stage of PBEQ or after "RESET", WMAIN should be set to
the atomic radii for the calculation. After a call to SOLVE the atomic
radii are saved in a special array. The atomic contributions to the
electrostatic free energy are returned in WMAIN (destroying the radii).
To modify the value of the radii, the keyword RESET must be issued.
1) PB SOLVERs
(Reference: Klapper et al. Proteins 1, 47 (1986)
A. Nicholls et al; J. Comput. Chem, 12(4),435-445 (1991))
Currently, the PBEQ module supports various PB equation solvers.
The default solver uses the SOR (Successive OverRelaxation) method for
the linearized PB equation.
This is much faster than the old PBEQ solver which was used in c26a2.
With the OSOR keyword, the relaxation parameter will be optimized. This is
especially useful when the system contains a nonzero salt concentration.
Solvers for non-linear and partially linearized PB equations for
1:1 charge-paired salt are now available. Both use the SOR method as a
default. In many cases, the direct use of these solvers may cause
convergence problems, so it is best to use the potential from the
linearized PB equation as an initial guess. In addition, you may want to
use under-relaxation by adjusting the mixing factor (LAMBda).
The partially linearized PB equation means that the linearized form of
one of the two exponential functions is used:
phi > 0 --> exp(phi) = 1 + phi
phi < 0 --> exp(-phi) = 1 - phi
The full multigrid (FMG) method is efficient for a uniform dielectric
medium. When there is a discontinuity in the dielectric function,
the method can be slower than the SOR method. You can improve the
calculation speed by using the smoothing dielectric boundary. A cubic grid
should be used and the number of grid points should be 2**(n+1) where n is
an integer up to 9. Currently, FMG does not support MEMBRANE and PBC.
(see ~chmtest/c28/pbeqtest5.inp and pbeqtest6.inp)
2) Grid
The number of grid points in X, Y, and Z (NCEL,NCLX,NCLY,NCLZ) must
be odd. Otherwise, the number of grid points will be increased by ONE
without any WARNING message.
3) Iteration
The maximum number of iterations (MAXIter) can be specified.
The convergence parameters DEPS should not be modified.
One can use the potential from a previous calculation as an initial
guess for the current calculation using the KEEPphi keyword. This is useful for
the nonlinear (or partially linearized) PB equation. See also ITERate.
4) Charge Distribution Method
The default is the trilinear method, which distributes a charge over the
nearest 8 grid points. The BSPLINE keyword invokes 3rd-order
B-spline interpolation over the nearest 27 grid points.
The B-spline method removes discontinuities in the reaction field forces.
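For example, an invocation that selects the B-spline charge distribution
(the grid values here are purely illustrative) might look like:

SOLVE epsw 80.0 ncel 65 dcel 0.4 BSPLine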
5) Boundary Potential
By default, boundary potential is calculated using the Debye-Huckel
approximation for every boundary point. However, the computational
time increases prohibitively as the number of grid points and of atoms
in the system increases.
INTBP keyword uses the bilinear interpolation to construct
boundary potential in a box with DCEL and (NCLx,NCLy,NCLz) from those
in the same box with 2*DCEL and (NCLx/2+1,NCLy/2+1,NCLz/2+1).
ZERO keyword sets boundary potential at the edge of the grid to zero.
FOCUS keyword uses previously calculated potentials to set up boundary
(Reference: M.K. Gilson et al; J. Comput. Chem. 9(4),327-335 (1987))
(see also an example below)
PBC keyword invokes the full 3d periodic boundary condition so that
no boundary potential is calculated directly using the Debye-Huckel approximation.
(Reference: P.H. Hunenberger and J.A. McCammon JCP v.110(4) p.1856 (1999))
(also, see ~chmtest/c28/pbeqtest4.inp)
The NPBC keyword suppresses the XY periodic boundary conditions in membrane calculations.
Boundary potential of XY plane in membrane calculations can be constructed
using the image atoms. When NIMGB=1, boundary potential includes the
influence of the 8 nearest image cells.
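For example, a calculation that constructs the boundary potential by
interpolation from a coarser preliminary grid (values purely illustrative)
might look like:

SOLVE epsw 80.0 ncel 65 dcel 0.4 INTBP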
6) Dielectric boundary
SMOOTH and REEN change the attribute of the solute-solvent boundary.
By default (NO SMOOTH), the boundary is defined by the van der Waals
surface or the molecular surface (with WATR). The SMOOTH keyword changes
the boundary to a region extending +/- SWIN (Smoothing WINdow) from the
surface of the solute. Within the solute-solvent boundary,
the dielectric constant and the Debye screening factor will be changed
continuously from EPSP and zero to EPSW and the screening factor
at bulk solvent.
REEN keyword with WATR creates the molecular (contact+reentrant) surface
as the dielectric boundary.
NOTE: WATR without REEN simply increases the atomic radii by that amount.
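For example, a calculation using the molecular (contact+reentrant) surface
with a 1.4 Angstrom probe (values purely illustrative) might look like:

SOLVE epsw 80.0 ncel 65 dcel 0.4 WATR 1.4 REEN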
7) Various geometric objects
PBEQ module supports three geometric objects with various options
(see spherical droplet-, orthorhombic box-, and cylinder-spec. above)
When using more than one geometry at the same time, the order of creating
geometries is as follows: first is a droplet, second is a cylinder, and
the last is a box.
8) Solvation force
The FORCe keyword invokes the calculation of the solvation free energy and
forces and must be accompanied by the SMOOTH keyword. The solvation energy is
taken as a sum of electrostatic and nonpolar solvation energy.
The former is calculated from the PB equation and the latter by using
the surface tension coefficient (STEN) that relates free energy with
surface area. Note that the calculated surface is approximately the
van der Waals surface. If membrane is considered, the surface of the
membrane is also approximately included. The corresponding forces are
also calculated and will be used in minimizations and MD simulations
where NPBEQ can be used to specify the frequency for calculating the
solvation forces. Note that SWIN must be equal to or greater than DCEL to
obtain correct solvation free energies and forces.
(Reference: W. Im, D. Beglov and B. Roux
Continuum Solvation Model: computation of electrostatic
forces from numerical solutions to the PB equation,
Comput. Phys. Commun. 109,1-17 (1998))
NOTE: To print out the force on each atom, PRNLEV should be greater
than 6.
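An illustrative invocation combining the smoothed dielectric boundary with
solvation forces (the numerical values are examples only; see also the
smoothing example near the end of this documentation) might look like:

SOLVE epsw 80.0 ncel 100 dcel 0.3 SMOOTH SWIN 0.4 FORCE STEN 0.03 NPBEQ 1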
2. ITERATE
Continue the iteration on the grid. SOLVE must have been called first.
The main difference with the keyword KEEPphi (see above) is that the
physical specifications (e.g., dielectric interface, membrane, etc...)
must remain the same with ITERate. However, it is possible to change
from linear to non-linear PB using ITERate. (see pbeqtest5.inp)
3. ENPB
Compute the electrostatic PB energy Sum Q_i PHI_i over the lattice.
Notice that the electrostatic energy is twice as much as the electrostatic
free energy (see above). The value of the electrostatic energy is passed
through the substitution parameter enpb. With INTE keyword, you can specify
the atoms of interest.
4. CAPACITANCE
Compute the capacitance based on the net induced charge in the double
layer. The induced charge beyond the limits of the box is estimated based on
the analytical solution for a planar membrane.
5. COUNTERION
Compute the counter-ion (1:1 salt) distribution along Z-axis.
6. WRITE
The WRITE command is used to write out the grid properties. By default,
a binary file of the property will be written for the whole grid. The keyword
CARD implies that a formatted output will be produced. In that case, the
spatial range can be specified for the output. By default, the electrostatic
potential PHI is given in [UNIT CHARGE]/[ANGS]. If specified, the PHI can be
given in [VOLTS] or in [KCAL/MOL].
7. READ
The READ command is used to read the electrostatic potential PHI or PHIX
in [UNIT CHARGE]/[ANGS], Debye screening factor FKAPPA2, and
the generalized reaction field MIJ matrix written in a binary file.
8. RESET
Resets all assignments of the PBEQ module and frees the HEAP array.
Destroys all lists and grids. By default, the grids and arrays remain assigned
when exiting and re-entering the PBEQ module. This allows multiple calls
to PBEQ without having to free the HEAP and other arrays if they are going
to be used again. The RESET keyword must be used to re-assign new values for
the atomic radii.
9. Miscellaneous command manipulations
Miscellaneous commands (» miscom) are supported within the PBEQ module,
allowing opening and closing of files, streaming of files, label assignments
(e.g., LABEL), repeated loops (e.g., GOTO), parameter substitutions
(e.g., @1, @2, etc.), control (e.g., IF 1 eq 10.0 GOTO LOOP), and CALC
(e.g., CALC energy = ?enpb).
NOTE: TIMER 2 gives the times of various components in PBEQ module;
the grid parameter preparation (subroutine MAYER),
iterative solution (subroutine PBEQ1), and,
force calculation (subroutine RFORCE and BFORCE).
10. COORMAN and SCALAR commands
The COORMAN (» corman) and SCALAR (» scalar) commands are supported within
the PBEQ module, allowing the easy manipulation of charges, radii, rotations
and translations of molecules, etc.
11. A set of "ATOMIC BORN RADII"
Atomic radii derived from the solvent electrostatic charge distribution may be
used (test/data/radius.str). These radii were tested against free energy
perturbation calculations with explicit solvent.
(Reference: M. Nina, D. Beglov and B. Roux.
Atomic Radii for Continuum Electrostatics Calculations based on
Molecular Dynamics Free Energy Simulations.
J. Phys. Chem. 101(26),5239-5248,1997).
NOTE: A typo for residue HSD was present in the original set of radii.
Check with M. Nina for new updated file.
To get the set of appropriate radii when using SWIN,
the commands are as follows:
SCALAR WMAIN ADD {SWIN}
SCALAR WMAIN SET 0.0 SELE TYPE H* END
The factor has a linear relationship with SWIN.
SWIN 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1.0
FACTOR 0.979 0.965 0.952 0.939 0.927 0.914 0.901 0.888 0.875 0.861
** FACTOR = -0.1296 x SWIN + 0.9914 (a least-square fit)
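For example, for SWIN = 0.5 the fit gives FACTOR = -0.1296 x 0.5 + 0.9914 = 0.9266,
in agreement with the tabulated value of 0.927.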
12. PBAVerage subcommand
This subcommand allows for the averaging of the (precalculated) electrostatic
potential (PHI values) over specified regions of the grid. The region is
specified as a rectangular box, with or without an atom selection. The units
may be specified as KCAL (kcal/mol), VOLT (volts), or not at all, in which
case the default units (charge/angs) are used. The calculated average may
be assigned to a CHARMM parameter through the symbol ?AVPH. The PBAV PHI
subcommand does not calculate the PHI values themselves; hence the electro-
static potential should have already been calculated before this subcommand
is given.
The following calculates the average PHI value over a rectangular-box region
of the grid:
PBAV PHI KCAL xfirst [real] xlast [real] -
yfirst [real] ylast [real] -
zfirst [real] zlast [real]
The grid limits must be specified the first time the PBAV PHI subcommand is
invoked. For subsequent invocations, the command will use the stored limits
unless the limits are respecified.
The following calculates the average PHI values over the grid points that are
both within the grid limits and within the van der Waals radii of the selected atoms:
PBAV PHI KCAL UPDAte xfirst [real] xlast [real] -
yfirst [real] ylast [real] -
zfirst [real] zlast [real] -
ATOM SELE [selection] END
The UPDAte keyword updates the atom-based grid. When the
PBAV PHI ATOM subcommand is given for the first time, the UPDATE keyword
must be used and an atom selection given. For subsequent invocations,
the atom selection (for defining the set of atoms over which the
calculation is to be done) and the UPDATE command (for updating the
grid, based on the position of the selected atoms) are optional.
If UPDATE is specified but the atom selection (or grid limits) are not,
the algorithm will use the atom selection (or grid limits) that were
last specified. If the PBAV PHI subcommand has not been
previously given, the grid limits must be specified.
Generalized Solvent Boundary Potential (GSBP)
GSBP is a boundary potential for simulating a reduced system while
incorporating implicitly the dominant electrostatic forces of the surrounding
atoms. It has been developed in the same spirit as the SBOUND and SSBP
methods (see » sbound and » ssbp).
The current implementation of the method is described in W. Im, S. Berneche,
and B. Roux, J. Chem. Phys. (2000, in preparation). Briefly, the system is
partitioned into two regions: an inner region of interest and an outer region.
The inner region includes all atoms explicitly.
GSBP represents the electrostatic forces from the outer region as the sum of
two components. One is the static external field (PHIX) which arises from
the charge distribution in the outer region (taking into consideration the
solvent as a featureless dielectric medium). The second contribution is
the reaction field which is created by the charge distribution inside the
inner region considering the whole molecular configuration and the dielectric
solvent. In the GSBP, the reaction field is calculated through a generalized
multipolar expansion of the instantaneous charge density in the inner system
coupled with a generalized reaction field matrix MIJ.
The numerical implementation of the GSBP can be divided into two parts:
the SETUP and UPDATE parts. In the SETUP part, the static external field and the
MIJ matrix are calculated once and stored before a simulation. The SETUP part
mostly uses the PBEQ module. In the UPDATE part, the energy and forces are
updated using the stored external field and the MIJ matrix at each step of
the molecular dynamics.
1. GSBP Syntax
GSBP is a subcommand inside PBEQ module like SOLVe and uses all options
(except solvation force-spec.) in SOLVe.
GSBP decomposition-spec. inner region-specifications
basis functions-spec. large box-specifications
cavity potential-spec. all options in SOLVE
decomposition-spec.::= [GTOT] [G_oo] [G_io] [G_ii]
GTOT [.FALSE.] : total electrostatic solvation free energy
G_oo [.FALSE.] : electrostatic solvation free energy in outer region
G_io [.FALSE.] : electrostatic free energy due to the interactions
between inner and outer regions
G_ii [.FALSE.] : electrostatic solvation free energy in inner region
inner region-specifications:: [ [RECTbox]
[XMAX real] [YMAX real] [ZMAX real]
[XMIN real] [YMIN real] [ZMIN real] ]
[ [SPHEre]
[SRDIst real]
[RRXCen real] [RRYCen real] [RRZCen real] ]
RECTbox [.FALSE.] : rectangular (box) inner region
XMAX [0.0] : maximum position of inner region along X-axis
YMAX [0.0] : maximum position of inner region along Y-axis
ZMAX [0.0] : maximum position of inner region along Z-axis
XMIN [0.0] : minimum position of inner region along X-axis
YMIN [0.0] : minimum position of inner region along Y-axis
ZMIN [0.0] : minimum position of inner region along Z-axis
SPHEre [.FALSE.] : spherical inner region
SRDIst [0.0] : radius of spherical inner region
RRXCen [0.0] : X position of spherical inner region
RRYCen [0.0] : Y position of spherical inner region
RRZCen [0.0] : Z position of spherical inner region
basis function-spec.:: [ [XNPOl integer] [YNPOl integer] [ZNPOl integer] ]
[NMPOl integer]
[MAXNpol integer] [NLISt integer] [NOSOrt]
[CGSCal real]
XNPOl [0] : number of Legendre polynomials in X direction
YNPOl [0] : number of Legendre polynomials in Y direction
ZNPOl [0] : number of Legendre polynomials in Z direction
NMPOl [0] : number of multipoles with spherical harmonics
MAXNpol [NTPOL] : maximum number of basis functions which are used in
the energy and forces calculations
NLISt [1] : updating frequency for the ordered list of basis
functions during molecular dynamics
NOSOrt [.FALSE.] : suppress the ordering of basis functions
CGSCale [1.0] : charge scaling factor for the monopole basis
large box-specifications:: [LBOX] [LDCEl real] [LNCEl integer] [FOCUS]
[LXBCen real] [LYBCen real] [LZBCen real]
LBOX [.FALSE.] : invoke large box calculation (see below)
LDCEL [4*DCEL] : grid spacing of large box
LNCEL [33] : number of grid points in 1D for a cubic large box
: this should be smaller than or equal to NCEL
LXBCEN [0.0] : the center of a large box in X
LYBCEN [0.0] : the center of a large box in Y
LZBCEN [0.0] : the center of a large box in Z
FOCUS [.FALSE.] : use the potential from a large box calculation for
the boundary potential in finer calculation
cavity potential spec ::= CAVI atom-selection [DRDI real] [DRCA real]
2. Free energy decomposition
The total electrostatic solvation energy is decomposed into G_oo, G_io, and
G_ii. All decomposition calculations are performed using the PB solver.
With the G_io keyword one can calculate the static external field and save it using
WRITE PHIX. G_ii gives the exact reaction field energy with which we can
compare the basis-set reaction field energy.
3. Inner region & Basis functions
Currently, GSBP supports two shapes for the inner region: an orthorhombic
rectangular box and a sphere. For the rectangular box, Legendre polynomials
are used as a basis set. The number of functions along each Cartesian axis can
be specified using XNPOL, YNPOL, and ZNPOL. The resulting total number of
basis functions (NTPOL) is XNPOL*YNPOL*ZNPOL. For the spherical inner region,
spherical harmonics are used. The number of electric multipoles is specified
as NMPOL, and the resulting total number of basis functions (NTPOL) is
NMPOL*NMPOL (e.g., with NMPOL = 2 one is including the reaction field for the
monopole and dipole of the inner system).
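For example, XNPOL = YNPOL = ZNPOL = 3 gives NTPOL = 27 basis functions for a
rectangular inner region, while NMPOL = 11 gives NTPOL = 121 for a spherical
inner region.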
The calculation of the MIJ matrix can be done in a single job but can also
be restarted. This is convenient since one does not always know how many basis
functions would yield accurate results. For example, one could calculate the
MIJ matrix with NMPOL=11 spherical harmonics. After comparing the result with the
exact PB reaction field, one may decide to increase the number of multipoles
in NMPOL. This procedure is illustrated in the test case gsbptest1.inp.
The list of basis functions can be ordered and sorted such that the number of
multipole basis functions used for the energy and force calculations (MAXNpol)
is reduced.
The focussing method with a large initial box and interpolating boundary
condition (INTBP) is a necessary procedure for computing the MIJ matrix
because the charge distribution corresponding to a given basis function
involves a large number of lattice point charges. All grid points inside the
inner region contain a partial charge assigned by a basis function.
Therefore, it would take a long time to set the boundary potential directly.
In practice, the charge density from a basis function is interpolated onto
a large (coarse) grid to reduce the number of grid-point charges that
increase the computational cost of setting up the boundary conditions.
In this case, the focussing method is much more useful because the boundary
potential can be obtained from the coarse grid calculation.
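An illustrative SETUP invocation for a spherical inner region (all numerical
values are placeholders, not recommendations) might look like:

GSBP G_ii SPHEre SRDIst 15.0 NMPOl 11 -
     LBOX LDCEl 1.6 LNCEl 33 FOCUS INTBP -
     epsw 80.0 ncel 65 dcel 0.4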
4. Cavity Potential
The GSBP cavity potential is a restrictive potential that keeps
water molecules from escaping the simulation region. Usually it is
applied only to the oxygen atom of the water molecules. The DRDI option
specifies the offset from the dielectric boundary at which the restrictive
potential is placed for the spherical geometry.
The DRCA option gives the offset of the quartic potential (same form
as the one in the MMFP module) for the orthorhombic geometry.
Solvent Macromolecule Boundary Potential (SMBP)
The SMBP is a boundary potential that is analogous to the GSBP, yet
can be used in conjunction with ab-initio QM/MM setups. Since, in contrast
to the GSBP, the PB equations have to be solved at every step, it is
targeted at geometry optimizations. The SMBP is especially useful
for higher-level QM/MM optimizations of MD snapshots obtained with the
GSBP using a lower-level QM/MM or pure MM setup. The original method is
described in T. Benighaus and W. Thiel, J. Chem. Theory Comput. 5, 3114 (2009).
The current implementation of the method is described in J. Zienau
and Q. Cui (2012, in preparation). In the SMBP, the electrostatic
interactions between the QM part and all other entities (except for the
inner region MM charges) are handled via a surface charge projection approach,
where the virtual surface charges are situated on the boundary between
the inner and outer regions. As no GSBP type basis set is used, the SMBP
can be viewed as the basis set limit of the GSBP, although divergence
effects when atoms are close to the boundary can still occur even for very
large GSBP basis sets.
As in the GSBP the numerical implementation is divided into SETUP and
UPDATE parts; in the SETUP part, however, only the static external field is
calculated. The UPDATE part is fully analogous to the GSBP.
The SMBP has been interfaced with the Gaussian 09 and Q-Chem codes,
although the Q-Chem interface is currently NOT functional due to problems with
the ESP charge approach implemented in Q-Chem. Therefore, only Gaussian 09
can be used as ab-initio QM method with the SMBP at the present stage.
For benchmark purposes, an interface with the semi-empirical SCC-DFTB method
is provided as well.
(i) It is necessary to source a radius file in the PBEQ module
for BOTH SETUP and UPDATE parts!
(ii) For SMBP/Q-Chem geometry optimizations (future implementation),
the jobtype in the qchem.inp file must be set to "SP" (single point)!
1. SMBP Syntax
SMBP is a subcommand inside PBEQ module like SOLVe and uses all options
(except solvation force-spec.) in SOLVe. It supports all inner region and
large box options of the GSBP. Special or additional options are described below.
SMBP decomposition-spec. inner region-specifications (GSBP and additional)
large box-specifications (GSBP) all options in SOLVE
decomposition-spec.::= [PHIX]
PHIX [.FALSE.] : calculate static outer potential
inner region-specifications:: [ RECTbox (all GSBP options)
[INCX real] [INCY real] [INCZ real] ]
[ SPHEre (all GSBP options)
[NSPT integer] [SPAL integer] ]
[ [IGUE integer] [QCCH integer]
[CGTH real] [CGMX integer] [SCTH real] [SCMX integer] ]
INCX [1.0] : Spacing of surface charges along X for RECTbox
INCY [1.0] : Spacing of surface charges along Y for RECTbox
INCZ [1.0] : Spacing of surface charges along Z for RECTbox
NSPT [90] : Number of surface charges for SPHEre
SPAL [2] : Algorithm for placing surface charges on SPHEre
"1" uses a distribution along circles
"2" uses a distribution along spirals (recommended)
IGUEss [1] : Initial guess for QM atomic charges
"1" uses charges from the previous step if possible
"2" uses zero guess charges always (not recommended)
QCCH [1] : atomic charge representation from QM calculation
(ab-initio only)
"1" uses ESP charges
(default for Gaussian09: Merz-Kollmann)
"2" uses Mulliken charges (not recommended)
CGTH [1.e-6] : Numerical threshold for Conjugate Gradient (CG)
optimizer of the surface charges
CGMX [2000] : Maximum number of iterations for the CG optimizer
SCTH [5.e-4] : Numerical threshold for the Self Consistent
Reaction Field (SCRF) calculation
SCMX [50] : Maximum number of SCRF iterations
2. Free energy decomposition
This part is analogous to the GSBP G_io option, as only the static outer
field is calculated. The option is renamed to PHIX in the SMBP.
3. Inner region
The same geometric shapes as for the GSBP (sphere and box) are currently
supported. As a "perfectly even" distribution of points on a sphere does not
exist, two approximate surface charge distributions are implemented for the
spherical boundary. With a reasonably large number of charges (about 30 or
more), the difference between the two algorithms was found to be negligible,
so the default setting is recommended, as it allows an arbitrary
number of charges to be specified. The default setting for the number of
charges NSPT (90) should be sufficient for most cases. For the rectangular
box shaped boundary, the NSPT and SPAL options are ignored, as the surface
charges are arranged on a rectangular grid on the box surface and their number
is calculated from the INCX, INCY, and INCZ values. The default settings are
recommended for the other options. If the SCRF calculation does not converge,
the SCRF threshold SCTH can be set to a (slightly) larger value.
Concerning the focussing method with the interpolating boundary potential
(INTBP), the same remarks as for the GSBP apply to the SMBP.
No cavity potential has been implemented for the SMBP, but, e.g., MMFP
constraints can be used.
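An illustrative SETUP invocation for a spherical inner region (all numerical
values are placeholders, not recommendations) might look like:

SMBP PHIX SPHEre SRDIst 15.0 NSPT 90 SPAL 2 -
     epsw 80.0 ncel 65 dcel 0.4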
These examples are meant to be a partial guide to setting up
an input file for PBEQ. There are several test files: pbeqtest1.inp,
pbeqtest2.inp, pbeqtest3.inp, and pbeqtest7.inp.
Example (1)
This example shows how to perform two PB calculations, one for a surrounding
dielectric of 80 (water) and one for a surrounding of 1.0 (vacuum). The
difference between the two energies then corresponds to the electrostatic
contribution to the solvation free energy. The salt concentration was zero
in this calculation.
scalar wmain = radius
SOLVE epsw 80.0 conc 0.0 ncel 30 dcel 0.4
set ener80 = ?ENPB
SOLVE epsw 1.0
set ener1 = ?ENPB
CALC total = @ener80 - @ener1
This example shows how to use a set of atomic Born radii with a smoothing window:
set sw 0.4
set factor 0.939
stream radius.str
scalar wmain add @sw
scalar wmain mult @factor
scalar wmain set 0.0 sele type H* end
scalar wmain show
SOLVE epsw 80.0 ncel 100 dcel 0.3 -
smooth swin @sw force sten 0.03 npbeq 1
RESET !! If you consider a minimization or dynamics with PB forces,
!! don't use RESET here.
This example shows how to set up a membrane potential and how to get
the electrostatic contribution to the solvation free energy in the membrane
environment. Note that a non-zero concentration is required for a sensible
system with a membrane potential.
scalar wmain = radius
SOLVE epsw 80.0 ncel 150 dcel 0.5 conc 0.150 -
Tmemb 25.0 Zmemb 0.0 epsm 2.0 vmemb 0.100
set ener80 = ?ENPB
SOLVE epsw 1.0 conc 0.000 -
Tmemb 25.0 Zmemb 0.0 epsm 1.0 vmemb 0.000
set ener1 = ?ENPB
CALC total = @ener80 - @ener1
This example shows how to set up boundary potentials using FOCUS keyword,
how to read the saved potential, and how to calculate the electrostatic
contribution to the solvation free energy using FOCUS.
scalar wmain = radius
SOLVE epsw 1.0 ncel 60 dcel 0.4
open write file unit 40 name phi.dat
write phi unit 40
SOLVE epsw 1.0 dcel 0.2 focus ! boundary potentials from DCEL 0.4 potentials
! NOTE: YOU CAN CHANGE NCEL IN THE FOCUSSED SYSTEM AS FOLLOWS;
! SOLVE epsw 1.0 ncel 80 dcel 0.2 focus
SOLVE epsw 1.0 dcel 0.1 focus ! boundary potentials from DCEL 0.2 potentials
open read file unit 41 name phi.dat
read phi unit 41
SOLVE epsw 1.0 dcel 0.1 focus ! boundary potentials from DCEL 0.4 potentials
scalar wmain = radius
SOLVE epsw 80.0 ncel 60 dcel 0.4
set ener81 = ?ENPB
SOLVE epsw 80.0 dcel 0.2 focus
set ener82 = ?ENPB
SOLVE epsw 80.0 dcel 0.1 focus
set ener83 = ?ENPB
SOLVE epsw 80.0 dcel 0.05 focus
set ener84 = ?ENPB
SOLVE epsw 1.0 dcel 0.4
set ener11 = ?ENPB
SOLVE epsw 1.0 dcel 0.2 focus
set ener12 = ?ENPB
SOLVE epsw 1.0 dcel 0.1 focus
set ener13 = ?ENPB
SOLVE epsw 1.0 dcel 0.05 focus
set ener14 = ?ENPB
calc total = @ener81 - @ener11
calc total = @ener82 - @ener12
calc total = @ener83 - @ener13
calc total = @ener84 - @ener14
SOLVE epsw 80.0 ncel 120 dcel 0.2
set ener80 = ?ENPB
SOLVE epsw 1.0
set ener1 = ?ENPB
calc total = @ener80 - @ener1
This example shows pKa Poisson-Boltzmann calculations, which deal
with the explicit charge distribution on the ionizable site.
(see also ~chmtest/c28/pbeqtest7.inp)
! set residue for pKa calculation and the patch for the ionizable sidechain
set segid = syst
set resid = 2
set patch = GLUP
!Miscellaneous variables
set Dcel = 0.5 ! initial value for the mesh size in the finite-difference
set Ncel = 65 ! maximum number of grid points
set EpsP = 1.0 ! dielectric constant for the protein interior
set EpsW = 80.0 ! solvent dielectric constant
set Conc = 0.0 ! salt concentration
set Focus = Yes
!Note that the resid must be set before streaming into this file
scalar wcomp = charge
patch @patch @Segid @resid setup
hbuild !build any missing hydrogens
scalar wcomp store 1
scalar charge store 2
define SITE select .bygroup. ( resid @resid ) show end
define REST select .not. site end
! Charges of the unprotonated state
scalar wmain recall 1
scalar wmain show
scalar wmain stat select SITE end
! Charges of the protonated state
scalar wmain recall 2
scalar wmain show
scalar wmain stat select SITE end
! Estimate the grid dimensions
format (f15.5)
coor orient norotate
coor stat select all end
calc DcelX = ( ?Xmax - ?Xmin ) / @Ncel
calc DcelY = ( ?Ymax - ?Ymin ) / @Ncel
calc DcelZ = ( ?Zmax - ?Zmin ) / @Ncel
if @DcelX gt @Dcel set Dcel = @DcelX
if @DcelY gt @Dcel set Dcel = @DcelY
if @DcelZ gt @Dcel set Dcel = @DcelZ
coor stat select SITE end
set Xcen = ?xave
set Ycen = ?yave
set Zcen = ?zave
stream @0radii.str
scalar charge recall 2 ! Protonated charge distribution
SOLVE ncel @Ncel Dcel @Dcel EpsP @epsP EpsW @EpsW
if @Focus eq yes -
SOLVE ncel @Ncel Dcel 0.25 EpsP @EpsP EpsW @EpsW focus -
XBcen @Xcen YBcen @Ycen ZBcen @Zcen
set EnerPs = ?enpb ! Protonated side chain in structure
SOLVE ncel @Ncel Dcel @Dcel EpsP @epsP EpsW @EpsW select SITE end
if @Focus eq yes -
SOLVE ncel @Ncel Dcel 0.25 EpsP @EpsP EpsW @EpsW focus -
XBcen @Xcen YBcen @Ycen ZBcen @Zcen select SITE end
set EnerPi = ?enpb ! Protonated side chain isolated
scalar charge recall 1 ! Unprotonated charge distribution
SOLVE ncel @Ncel Dcel @Dcel EpsP @epsP EpsW @EpsW
if @Focus eq yes -
SOLVE ncel @Ncel Dcel 0.25 EpsP @EpsP EpsW @EpsW focus -
XBcen @Xcen YBcen @Ycen ZBcen @Zcen
set EnerUs = ?enpb ! Unprotonated side chain in structure
SOLVE ncel @Ncel Dcel @Dcel EpsP @epsP EpsW @EpsW select SITE end
if @Focus eq yes -
SOLVE ncel @Ncel Dcel 0.25 EpsP @EpsP EpsW @EpsW focus -
XBcen @Xcen YBcen @Ycen ZBcen @Zcen select SITE end
set EnerUi = ?enpb ! Unprotonated side chain isolated
calc Energy = ( @EnerPs - @EnerUs ) - ( @EnerPi - @EnerUi )
calc pKa = -@Energy/( ?KBLZ * 300.0 ) * log10(exp(1)) != log10(exp(-@Energy/(?KBLZ*300)))
Meta-analysis: area under ROC curve
For a short overview of meta-analysis in MedCalc, see Meta-analysis: introduction.
MedCalc uses the methods described by Zhou et al. (2002) for calculating the weighted summary Area under the ROC curve under the fixed effects model and random effects model.
How to enter data
The data of different studies can be entered as follows in the spreadsheet (example taken from Zhou et al., 2002):
Required input
The dialog box for "Meta-analysis: area under ROC curve" can then be completed as follows:
Studies: a variable containing an identification of the different studies.
Area under ROC curve (AUC): a variable containing the Area under the ROC curve reported in the different studies.
Standard error of AUC: a variable containing the Standard error of the Area under the ROC curve reported in the different studies.
Filter: a filter to include only a selected subgroup of cases in the graph.
• Forest plot: creates a forest plot.
□ Marker size relative to study weight: option to have the size of the markers that represent the effects of the studies vary in size according to the weights assigned to the different studies.
You can choose the fixed effect model weights or random effect model weights.
□ Plot pooled effect - fixed effects model: option to include the pooled effect under the fixed effects model in the forest plot.
□ Plot pooled effect - random effect model: option to include the pooled effect under the random effects model in the forest plot.
□ Diamonds for pooled effects: option to represent the pooled effects using a diamond (the location of the diamond represents the estimated effect size and the width of the diamond reflects the
precision of the estimate).
• Funnel plot: creates a funnel plot to check for the existence of publication bias. See Meta-analysis: introduction.
The program lists the results of the individual studies included in the meta-analysis: the area under the ROC curve, its standard error and 95% confidence interval.
The pooled Area under the ROC curve with 95% CI is given both for the Fixed effects model and the Random effects model (Zhou et al., 2002).
The random effects model will tend to give a more conservative estimate (i.e. with wider confidence interval), but the results from the two models usually agree where there is no heterogeneity. See
Meta-analysis: introduction for interpretation of the heterogeneity statistics Cochran's Q and I^2. When heterogeneity is present the random effects model should be the preferred model.
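In outline, and assuming the standard inverse-variance weighting described by Zhou et al. (2002), the fixed effects pooled estimate is the weighted mean AUCpooled = Σ(wi × AUCi) / Σwi with weights wi = 1/SEi², and its standard error is 1/√(Σwi); the random effects model additionally incorporates the estimated between-study variance into the weights, which widens the confidence interval when heterogeneity is present.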
See Meta-analysis: introduction for interpretation of the different publication bias tests.
Forest plot
The results of the different studies, with 95% CI, and the pooled Area under the ROC curve with 95% CI are shown in a forest plot:
• Borenstein M, Hedges LV, Higgins JPT, Rothstein HR (2009) Introduction to meta-analysis. Chichester, UK: Wiley.
• Higgins JP, Thompson SG, Deeks JJ, Altman DG (2003) Measuring inconsistency in meta-analyses. BMJ 327:557-560.
• Zhou XH, Obuchowski NA, McClish DK (2002) Statistical methods in diagnostic medicine. New York: Wiley.
Dynamics of Leukemia Stem-like Cell Extinction in Acute Promyelocytic Leukemia
Many tumors are believed to be maintained by a small number of cancer stem–like cells, where cure is thought to require eradication of this cell population. In this study, we investigated the
dynamics of acute promyelocytic leukemia (APL) before and during therapy with regard to disease initiation, progression, and therapeutic response. This investigation used a mathematical model of
hematopoiesis and a dataset derived from the North American Intergroup Study INT0129. The known phenotypic constraints of APL could be explained by a combination of differentiation blockade of
PML–RARα–positive cells and suppression of normal hematopoiesis. All-trans retinoic acid (ATRA) neutralizes the differentiation block and decreases the proliferation rate of leukemic stem cells in
vivo. Prolonged ATRA treatment after chemotherapy can cure patients with APL by eliminating the stem-like cell population over the course of approximately one year. To our knowledge, this study
offers the first estimate of the average duration of therapy that is required to eliminate stem-like cancer cells from a human tumor, with the potential for the refinement of treatment strategies to
better manage human malignancy. Cancer Res; 74(19); 5386–96. ©2014 AACR.
By combining a mathematical model of hematopoiesis with data from a large randomized trial of acute promyelocytic leukemia, this study offers the first determination of the average duration of
therapy required to eliminate all stem-like cells in a human tumor.
Our multicompartment model of hematopoiesis has been described elsewhere, but here we describe it briefly for clarity (Fig. 1; refs. 1–6). Cells in a given compartment i with probability ϵ[i]
differentiate and produce two cells that migrate to the next downstream compartment |$\left( {i + 1} \right)$| or self-renew and increase compartment i by one cell with probability |$1 - \varepsilon
_i$|. Here, “compartments” are not understood as physical spaces but as an accounting tool to keep track of the replication and differentiation of each cell. Proliferation rates r[i] are intrinsic
for each compartment and with |$r_i < r_{i + 1}$|. Parameters describing cells in non–stem cell compartments i>0 are fixed by allometric scaling arguments (1) and data from human hematopoiesis,
are given by |$\varepsilon _i^h = 0.85, r_i^h = (\gamma ^h )^i r_0^h$| with |$\gamma ^h = 1.26$|. Thus, the rate of proliferation increases exponentially with the compartment [subscripts refer to
the compartment, whereas superscripts refer to healthy (h) or cancerous (c) cells]. In the stem cell compartment |$i = 0, N_0 = 400,\varepsilon _0^h = 0.5$|, and |$r_0^h = 1/365\,\,{\rm days}$| (
7–10). Undisturbed, the above system reaches a steady state, where cell numbers fluctuate around an average cell count. In the absence of disease, the steady state for each compartment is given by
Under this parameter setting, we need 32 compartments to represent hematopoiesis with a daily output of approximately |$3.5 \times 10^{11}$| cells (1–6). This can be viewed as an almost continuous
differentiation process of cells (Fig. 1).
Because of the high cell turnover rates, the dynamics of normal cells in such hierarchies can be captured by a linear system of differential equations (5, 6). An arbitrary number of mutations can be
described analytically by similar equations if mutated cells proliferate independently (5, 6). We model the dynamics of normal and leukemic cell lineages by such systems of differential equations.
The equations for normal cells follow from the influx and output of healthy cells in each compartment and are derived as follows. Cells in compartment i increase in number due to self-renewal at rate
|$\left( {1 - \varepsilon _i^h } \right)r_i^h N_i^h$| and decrease due to differentiation of cells into compartment |$i + 1$| at rate |$\varepsilon _i^h r_i^h N_i^h$|, leading to a change in number
arising from processes within the compartment of |$+ \left( {1 - 2\varepsilon _i^h } \right)r_i^h N_i^h$|. In addition, there is an influx from upstream compartment |$i - 1$| at rate |$+ 2\
varepsilon _{i - 1}^h r_{i - 1}^h N_{i - 1}^h$|; see Fig. 1 for a graphical representation. Collecting all terms gives
Here, l denotes the compartment where the cancer-driving mutation occurs, i.e., the leukemic stem–like cells. Compartments upstream of l are not affected by this mutation and proliferate
independently (Fig. 1). However, the proliferation of healthy cells, |$N_i^h$|, in and downstream of compartment l is potentially inhibited by the leukemic cells, |$N_i^c$|. This interference is
modeled by Hill functions (11). In the absence of malignant cells, homeostasis is normal, but proliferation of healthy cells is suppressed with increasing numbers of leukemic cells (12).
The dynamics of leukemic cells can be described by the following set of equations
In the absence of therapy, the number of leukemic cells in the compartment of origin grows exponentially for ϵ[i]^c < 0.5, leading to an even faster growth in downstream compartments. Leukemic cells
proliferate independently of healthy cells as evidenced by observations in animal models of disease (13). The leukemia-driving cells occupy compartment l (Fig. 1, bottom). Thus, leukemic cells can
only be found in and downstream of compartment l. In addition, the leukemic cell proliferation properties differ significantly from those of healthy cells.
Model Constraints
1. Phenotype: Because PML–RARα expression reduces differentiation and enhances self-renewal of cells (13–17), we supposed that |$\varepsilon ^c < \varepsilon ^h = 0.85$|. Generally, APL is
associated with pancytopenia due to a reduction in bone marrow output. Therefore, we adjusted our parameters to reduce the cells in compartment 31 to 10% to 20% of normal while ensuring that the
intramedullary compartments were hypercellular. ATRA reverses the differentiation block and, therefore, |$\varepsilon ^c \to 0.85$|. In vitro, ATRA slows down the rate of replication of leukemic
cells by at least a factor of 0.61 (18). While chemotherapy kills the majority of cancer cells, the proliferation properties of surviving cancer cells remain unchanged.
2. Time to diagnosis and origin of the disease: The disease must start from a single leukemic stem cell in keeping with the clonal origin of cancer. Using data from Guibal and colleagues (13), we
inferred that the minimum time for diagnosis in mice is approximately 120 days. Using our previously described allometric scaling relationship (19, 20) comparing timescales between mice and men,
|${\frac{{T_{mi} }}{{T_{hu} }}} = \left( {\frac{{M_{mi} }}{{M_{hu} }}} \right)^{1/4}$|, where T and M refer to the time to diagnosis and the species-specific adult mass, respectively, we could
determine that the minimum time from the appearance of the first leukemic stem cell in humans to diagnosis is more than 872 days. An animal model of APL suggests that the disease may originate in
a colony-forming unit, granulocyte-macrophage (CFU-GM cell; ref. 13), which in our model would reside in compartments 13 to 15. A limited study of three patients also suggests that CD34^+CD38^−
cells do not have the t(15q22;17q12), typical of the disease (21).
3. Clonal burden in bone marrow: The average tumor burden of leukemic stem cells at the time of diagnosis is approximately 1% of the tumor population present in the bone marrow (22), and the
leukemic cells represent 65% to 98% of the marrow cellularity (23).
4. The average time for the bone marrow to appear normal after therapy with ATRA is 38 days (range, 25–90 days; ref. 23).
5. Patients treated with ATRA alone for induction often have an increase in their leukocyte count that peaks between 12 and 14 days after initiation of therapy (23).
6. Time to relapse: If patients are treated with ATRA alone until they have a morphologic remission, they relapse on average after approximately 110 days (45–300; ref. 24, 25).
7. ATRA does not alter the dynamic properties of normal hematopoietic cells.
The cancer stem cell (CSC) hypothesis states that at the root of most (perhaps all) tumors, there is a population of cancer stem cells that is not only able to renew itself but also gives rise to the
bulk of the tumor cell population (26–29). These CSCs are essential for the origin and the continued growth and maintenance of the tumor. As a consequence, these cells are an important target of
therapy, as it is thought that eradication of such cells is necessary for a potential cure of the tumor. Various models have been developed that address this hypothesis from a theoretical perspective
and there is increasing evidence for its support from animal models of cancer. Although CSCs were initially isolated from patients with acute leukemia (26–29), they have now been identified in
virtually all types of tumors. It is therefore important to understand the dynamics of these cells under therapy and whether they can be eradicated, leading to cure of the disease and long-term
survival of patients. Here, we utilize data from a clinical trial of therapy for APL to understand the dynamics of leukemic stem cells under therapy. We use a mathematical/computational model of
hematopoiesis together with quantitative data from the North American Intergroup Study INT0129 (30, 31) to determine the probability that the leukemic stem cells are eradicated at a certain time
under this treatment regimen. APL was chosen for a number of reasons: (i) the disease is well defined with most patients having the translocation t(15q22;17q12), leading to PML–RARα oncogene
activation (14–16), (ii) the tumor burden can be quantitated using quantitative real-time PCR (qRT-PCR; refs. 31, 32), (iii) targeted therapy in the form of all-trans retinoic acid (ATRA) is
available and highly effective (33, 34), (iv) the availability of serial quantitative data on disease burden from a large randomized clinical trial that allows us to investigate the effects of ATRA
treatment and chemotherapy separately (31, 32, 35), and (v) the availability of a mathematical/computational model of hematopoiesis that has already been utilized to understand the dynamics of
mutations in other hematologic disorders (1–6). In the following, we provide a brief summary of the clinical trial, a description of the mathematical model together with the justification of the
constraints used to determine the model parameters, followed by a description of the data fitting and presentation of results. Details of the mathematical model as well as the clinical trial are
provided in Materials and Methods. In summary, our mathematical model contains a fixed number of compartments that represent different stages of cell differentiation. At each stage, cells proliferate
with a fixed rate r[i] and differentiate into the next downstream compartment with probability |$\varepsilon _i$|. This general framework allows us to describe normal hematopoiesis as well as the
initiation and progression of different types of leukemia, defined by changes in proliferation and differentiation parameters of malignant cells. As a result, we provide an estimate of the timescale
with which a leukemic stem cell population is eliminated in humans.
Patients and Methods
Clinical trial INT0129
The North American Intergroup trial of ATRA in APL was initially reported in 1997 with subsequent updates (30, 36, 37). Patients were recruited from centers affiliated with the Cancer and Leukemia
Group B, the Southwest Oncology Group, and the Eastern Cooperative Oncology Group. Briefly, patients with newly diagnosed APL were randomly assigned to induction therapy either with combination
chemotherapy: daunorubicin (45 mg/m^2 daily on days 1–3) and cytosine arabinoside (100 mg/m^2 by continuous infusion for 7 days; N = 191) or ATRA alone (45 mg/m^2 orally in two divided doses daily)
that could be given for up to 90 days (N = 188). Pediatric patients less than 3 years of age were treated with the same protocol, but with appropriate dose modifications. Subsequently, patients who
achieved a complete remission with induction therapy received two cycles of consolidation: the first cycle was identical to the initial induction chemotherapy regimen while the second cycle consisted
of high dose cytosine arabinoside (2 g/m^2 every 12 hours for 4 days) with daunorubicin 45 mg/m^2 daily on days 1 and 2. The patients were subsequently randomized to observation or maintenance
therapy with ATRA (45 mg/m^2 orally in two divided doses daily) for up to 1 year. Fifty-four patients who received induction with ATRA were randomized to maintenance with ATRA, whereas 56 patients
initially treated with chemotherapy went on ATRA maintenance after the second randomization. Patients had serial measurement of the PML–RARα oncogene as previously described (31, 32, 35). Our
mathematical/computational model was fitted to this serial data, after normalization of PML–RARα to GAPDH with the pretreatment copy number value of PML–RARα/GAPDH normalized to 1. Serial collection
of blood samples for qRT-PCR quantification was not mandatory for the trial and as a result the data set is incomplete in this respect.
Parameter estimation
We use the initial condition of 1 leukemic stem cell in compartment l with the described constraints to numerically solve the deterministic equations from above for each compartment using standard
numerical procedures implemented in Mathematica. Throughout this process, we assume that the effect of ATRA on leukemic cells is constant. Each numerical solution gives the disease scenario arising
from a given set of model parameters. In our case, we had to determine four key parameters: (i) the proliferation rate and (ii) differentiation probability in the compartment of the cancer-initiating
cell (|$r_{15}^c$| and |$\varepsilon _{15}^c$|) and (iii) the proliferation rate and (iv) differentiation probability of cancer cells in the bone marrow (|$r_{16 - 25}^c$| and |$\varepsilon _{16 -
25}^c$|). We then performed an extensive parameter search, varying these four parameters within a wide range compatible with the phenotypic constraints of the disease (i.e., |$r_{15}^c\,>\,r_{15}^h$|
and |$\varepsilon _{15}^c\,<\,\varepsilon _{15}^h$| as well as |$r_{16 - 25}^c\,<\,r_{16 - 25}^h$| and |$\varepsilon _{16 - 25}^c\,<\, \varepsilon _{16 - 25}^h$|). We compared the properties of the
modeled disease scenario to the restrictive phenotypic constraints of the disease (points i–vii) described above. If the model prediction deviated in a single point from the phenotypic constraints,
the parameter set was discarded. This resulted in a restricted set of parameters for the disease in the absence of therapy.
We assumed that therapy with either ATRA or chemotherapy was started as soon as a diagnosis was made. The proliferation and differentiation parameters of the leukemic cells were altered to fit the
available response data in case of ATRA therapy. Chemotherapy was implemented as a single catastrophic event, in which the majority of cancer and healthy cells are killed, but the proliferation
properties of the surviving cancer cells remain unchanged. Healthy cells are not affected by ATRA therapy and thus their dynamic properties are only indirectly altered by the response of the leukemic
cells to therapy.
Stochastic simulations
We inferred the distribution of extinction times of leukemic stem–like cells by performing stochastic simulations implemented by a Gillespie algorithm (38), based on an agent-based representation of
our hierarchical organization (5). We used parameter estimations from our deterministic fitting procedure, and therefore parameters were fixed during all stochastic simulations. We performed in total
10^4 independent stochastic simulations and recorded the time until all cancer stem–like cells went extinct, leading to a distribution of cancer stem–like extinction times.
Dynamics of untreated APL
Initially, we had to determine the time course and dynamic properties of the leukemic stem and progenitor cells taking into account the constraints that we identified from the literature, in
particular the known biology of the disease (Fig. 1; refs. 13–17, 21, 23). Given that PML–RARα expression leads to a block in differentiation, we had to impose a lower probability for the
differentiation of leukemic cells compared with healthy cells |$\varepsilon _l^c < \varepsilon ^h = 0.85$| in our model. The leukemic stem cell in APL may arise in a CFU-GM cell (13, 21), and
therefore |$l = 15$| was chosen as the founding compartment of APL in our hierarchical model, based on our prior results (1, 39). We estimated that the minimum time between the onset of the first
leukemic stem cell and disease was 872 days (13, 19, 20), where disease was defined as a reduction in bone marrow output to approximately 20% of normal, leading to cytopenias that are typical for
this leukemia. At the same time, the bone marrow compartments will be expanded and the marrow appears hypercellular. The parameter estimates that led to the best fit to the data are presented in
Table 1.
Table 1.
Parameter             i = 15     16 ≤ i ≤ 25     26 ≤ i ≤ 31
|$\gamma _i^c$|       1.34       1.12            1.12
|$\varepsilon _i^c$|  0.45       0.07            0.85
NOTE: Here, i = 15 represents the compartment of the founding APL cancer stem–like cell. Compartments 16 ≤ i ≤ 25 represent the bone marrow cell load and compartments 26 ≤ i ≤ 31 differentiated cells
that can be found in the bloodstream. The parameters |$\varepsilon _i^c$| denote the differentiation probabilities of cancerous cells in compartment i and the parameters |$\gamma _i^c$| the increase
in proliferation rate of cancerous cells per compartment.
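For illustration, with the scaling |$r_i = \gamma ^i r_0$| used in the model, these estimates imply |$r_{15}^c \approx (1.34)^{15} r_0 \approx 81\,r_0$| for the leukemic stem–like cells, compared with |$r_{15}^h \approx (1.26)^{15} r_0 \approx 32\,r_0$| for normal CFU-GM, i.e., roughly 2.5-fold faster cycling in addition to the higher self-renewal probability.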
Our fits suggest that the leukemic stem cells replicate faster and self-renew with a higher probability than normal CFU-GM cells in the same compartment. Even based on their higher self-renewal
capacity alone, they have a considerable fitness advantage compared with the normal progenitor cells. If we count the number of offspring cells produced by a mutant cell in a given compartment and
compare it with the number of offspring of a normal cell, we obtain for relative fitness in our mathematical model (f[j]; ref. 40),
|$f_j = \varepsilon ^h \left( {1 - \varepsilon _j^c } \right)/\left[ {\varepsilon _j^c \left( {1 - \varepsilon ^h } \right)} \right]$|, normalized such that healthy cells have fitness 1.
This translates into a relative fitness advantage of 6.9 compared with normal CFU-GM (17). Moreover, our fitting suggests that the leukemic progenitor cells (downstream of the leukemic stem cells)
have an extremely high fitness advantage (based on the virtual absence of normal cells in the circulation), estimated at 75 compared with their normal counterparts (normalized to 1) due to the block
in differentiation (and enhanced self-renewal). These estimates provide a vivid explanation of the rapid disease progression and early high lethality associated with this disease before the advent of
ATRA therapy.
Dynamics of disease under chemotherapy treatment
We implement chemotherapy treatment as a single catastrophic event, in which the majority of both leukemic and healthy cells are killed instantly. The proliferation parameters of surviving cells
remain unchanged after chemotherapy treatment. Therefore, we only need to infer a single parameter (the fraction of killed cells under chemotherapy) to determine the dynamics of patients with APL
under chemotherapy from our mathematical model. We vary the fraction of killed cells and fit the resulting dynamics to the serial qRT-PCR data of the fraction of patients treated with chemotherapy in
the INT0129 trial. Our best parameter estimates (see Figs. 2 and 3) suggest that only 0.3% of all cancer cells survived, compatible with more than 2 log kill of leukemic cells. However, this also
implies that approximately 10^5 leukemic stem cells survive and relapse is to be expected. This prediction is confirmed by observations from the follow up of the INT0129 trial were the risk of
relapse of chemotherapy-treated patients (without ATRA maintenance) was high. Note, that the characteristic peak of bone marrow output, 10 to 12 days after ATRA treatment, does not occur with
chemotherapy (Fig. 2A vs. C).
Dynamics of disease under therapy with ATRA
ATRA alters the behavior of leukemic cells by (i) inducing differentiation and (ii) slowing down the rate of replication of leukemic stem cells. Fitting of our model to serial RT-PCR data from the
INT0129 trial (Figs. 2 and 3A) provides an estimate for the leukemic cell parameters under therapy as reported in Table 2. We consider that therapy starts immediately after diagnosis or 870 days from
the appearance of the first leukemic stem cell. We assume that the differentiation probabilities return to normal under ATRA therapy. The proliferation rates within the bone marrow, |$\gamma _{16 -
25}^{ATRA}$|, are fixed by the time until the bone marrow output peaks. The proliferation rate of cancer stem cells, |$\gamma _{15}^{ATRA}$|, has only a minor influence on this dynamics due to the
exponential growth characteristics of hematopoiesis, and is thus difficult to estimate from the data. However, it is a crucial parameter to assess the probability of relapse. For the best parameter
estimate, the rate of replication of the leukemic stem cells returns to normal compared with other CFU-GM cells |$\left( {\gamma _{15}^c = 1.34 \to \gamma _{15}^{ATRA} = 1.26} \right)$|. At the same
time, the differentiation block of the cells is removed |$\left( {\varepsilon _{15}^c = 0.45 \to \varepsilon _{15}^{ATRA} = 0.85 = \varepsilon _{15}^h } \right)$|. Zhu and colleagues showed that the
doubling time of NB4 cells treated with ATRA increased in vitro from 25.2 hours to 41.26 hours (slowed the cells by a factor of 0.6; ref. 18). In our in vivo model, there is also such a slowdown
effect. Our proliferation rates in compartment i scale via |$r_i = \gamma ^i r_0$|.
Table 2.
Parameter                 i = 15    16 ≤ i ≤ 25    26 ≤ i ≤ 31
$\gamma_i^h$              1.26      1.26           1.26
$\gamma_i^{ATRA}$         1.26      1.44           1.44
$\varepsilon_i^h$         0.85      0.85           0.85
$\varepsilon_i^{ATRA}$    0.85      0.85           0.85
NOTE: Here, $\varepsilon_i^h$ and $\varepsilon_i^{ATRA}$ denote the differentiation probabilities in compartment i of healthy cells and cancerous cells under ATRA treatment, respectively. The parameters $\gamma_i^h$ and $\gamma_i^{ATRA}$ represent the relative increase of the proliferation rate per compartment for healthy cells and cancer cells under ATRA treatment.
Thus, the slowing of cancer cell replication under ATRA treatment by a factor of 0.6 in the experiment corresponds to $(\gamma_{15}^{ATRA}/\gamma_{15}^{c})^{15} \approx 0.4$ in our theoretical model, based on the scaling of replication rates. With this in mind, our relative reduction in leukemic stem cell replication $(\gamma^{c} = 1.34 \to \gamma^{ATRA} = 1.26)$ is in qualitative agreement with the finding of Zhu and colleagues. We note that the replication rate in vivo would be expected to be slower than what is observed in vitro (41).
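Plugging in the fitted values makes the comparison explicit:
$$\left(\frac{\gamma_{15}^{ATRA}}{\gamma_{15}^{c}}\right)^{15} = \left(\frac{1.26}{1.34}\right)^{15} \approx 0.40, \qquad \frac{25.2\ \mathrm{h}}{41.26\ \mathrm{h}} \approx 0.61\ \text{(in vitro rate ratio)}.$$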
In addition, ATRA therapy affects the more downstream progenitor cells, correcting their differentiation block back to normal $(\varepsilon_{16-25}^{c} = 0.07 \to \varepsilon_{16-25}^{ATRA} = 0.85)$. Fitting also suggests that ATRA increases the rate of replication of downstream leukemic progenitors compared with normal cells. The latter prediction is compatible with the
observation of a rapid increase in the neutrophil count in patients treated with ATRA alone (Figs. 2 and 3C).
In Fig. 4, we provide a comparison of individual fits of the model to patient specific data for a patient induced with ATRA (Fig. 4A) and another patient randomized to chemotherapy only induction (
Fig. 4B). The model fits especially well the ATRA-treated patient.
Extinction time of leukemic stem cells
Our model is in line with in vivo mouse experiments (13) in that, before the initiation of therapy, the leukemic stem cell population $(i = 15)$ increases exponentially (Fig. 3B).
Thus, at the time of diagnosis $(T^{diag})$, we expect to find $N_{15}^c(T^{diag})$ cells, where
For our parameters, we obtain 4 × 10^7 leukemic stem cells at the time of diagnosis in compartment 15 (CFU-GM), where the mutation originates (19, 20). Under ATRA therapy, the number of leukemic stem
cells decreases exponentially such that
Therefore, the average extinction time for the leukemic stem–like cells (i.e., for the elimination of all leukemic stem–like cells) is given by:
On average, the time to clonal extinction under ATRA treatment is approximately 312 days (0.36 × 870 days). However, this time increases exponentially with the number of cells at diagnosis, and, therefore, continued therapy with ATRA is a prerequisite to cure this disease. These results are compatible with clinical observations and justify the need for maintenance therapy for approximately 1 year, which seems to lead to a cure in many patients, as in the INT0129 trial (Fig. 3D).
Relapse of seemingly successfully treated patients is a common phenomenon in acute leukemia. Presumably, this relapse is caused by cancer stem and progenitor cells remaining after therapy, as well as by the selection of mutant cells resistant to therapy through a variety of mechanisms, including mutations in the LBD domain of RARα, increased ATRA catabolism, abnormal trafficking of ATRA to the nucleus, the presence of cytoplasmic retinoic acid binding protein, and overexpression of BP1 (42, 43). Thus, the question of whether and when treatment eradicates all cancer stem cells is critical.
As intrinsic cell properties, such as the exact time of cell proliferation or differentiation, are stochastic, one naturally expects the actual extinction time of the leukemic stem cell pool to
differ in patients (44, 45). To obtain the distribution of these extinction times under ATRA treatment, we implemented a computational representation of the mathematical model (see Materials and
Methods) by utilizing standard Gillespie algorithms (38) and ran exact individual-based stochastic simulations on this model. We assumed that the initial decline from approximately 4 × 10^7 to 10^5
leukemic stem cells under ATRA treatment follows the deterministic equations. We performed 10^4 independent realizations of the stochastic simulations initialized with 10^5 leukemic stem cells under
ATRA treatment that use the parameter estimations from Tables 1 and 2 and recorded the extinction times of the leukemic stem cell pool. The probability of leukemic stem cell eradication under ATRA
therapy that lasted less than 200 days is negligible (Fig. 3D and Fig. 5). Only 42% of patients would be expected to be cured after 312 days (deterministic extinction time), imposing a high risk of
relapse for over half of the patients. However, after one year of ATRA treatment up to 92% of patients are free of leukemic stem cells according to this model and therefore “cured” of their disease.
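The following is a minimal, single-compartment birth-death sketch of the Gillespie approach mentioned above; the rates, pool size, and number of runs are arbitrary placeholders rather than the fitted parameters of the paper's 31-compartment model.

import numpy as np

def extinction_time(n0, birth_rate, death_rate, rng):
    # Exact (Gillespie) simulation of a single birth-death population.
    # Each cell divides at birth_rate and dies/differentiates at death_rate
    # (per cell, per unit time); extinction is certain when death_rate > birth_rate.
    n, t = n0, 0.0
    while n > 0:
        total_rate = n * (birth_rate + death_rate)
        t += rng.exponential(1.0 / total_rate)            # waiting time to the next event
        if rng.random() < birth_rate / (birth_rate + death_rate):
            n += 1                                        # division
        else:
            n -= 1                                        # death / differentiation
    return t

rng = np.random.default_rng(1)
times = [extinction_time(1000, birth_rate=0.8, death_rate=1.0, rng=rng) for _ in range(100)]
print(np.mean(times), np.percentile(times, [5, 95]))    # mean extinction time and its spread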
If we hope to better treat cancer with emerging therapeutic approaches, a detailed understanding of cancer initiation, cancer progression, and response to treatment is essential. Here, we combine a
mathematical/computational approach with clinical trial data of treatment response of patients with APL to ATRA therapy and/or chemotherapy. This approach allows us to model leukemia progression from
the occurrence of the first leukemic stem cell until the potential elimination of the last cancer stem cell under treatment and thus provides a detailed understanding of all phases of APL under two
different treatment regimes. At least one murine model of APL suggests that the disease originates in a CFU-GM cell (13, 22) and thus within the lower half (here compartment 15) of our hierarchical
model. We acknowledge that there is still some disagreement on the true origin of the leukemic stem cell in APL (21, 46, 47), partly based on the animal model used (46, 48) and it is likely that
other mutations in addition to the t(15;17)(q22;q12) translocation are required for APL to develop (49). Although the cancer stem cell hypothesis is increasingly accepted, and cancer-initiating (stem) cells have been
isolated from many tumors, the field is still somewhat controversial (27–29). Several possible explanations exist for the divergent results observed vis-à-vis the presence, frequency, surface marker
expression, and functional properties of these putative cells, including (i) the animal model used for engraftment that provides the complex microenvironment for cells to survive and grow, (ii)
genetic/epigenetic heterogeneity between tumors, (iii) stage of the tumor, and others (47–49). However, the presence of cellular hierarchies within acute leukemia is less controversial and a critical
component of our modeling approach.
We find that an interaction of leukemic and healthy cells is sufficient to explain the known phenotypic constraints of APL. The bone marrow in APL usually appears hypercellular at diagnosis, but a
fraction of patients may have a hypocellular bone marrow at diagnosis. This can be explained by the suppression of healthy cells combined with a differentiation block of APL cells in the bone marrow.
Our model of acute promyelocytic leukemia also reveals that bone marrow failure syndromes are not necessarily due to failures of the normal hematopoietic stem cell pool. They can also occur by
suppression of the proliferation of normal cells by leukemic cells, for example, due to competition for cytokines. This also implies that hematopoiesis returns to normal after the eradication of the
malignant cells as is typical for many acute leukemias and is in line with our model as well as recent observations in vivo (12).
Our model suggests that, in addition to the block of differentiation, APL stem–like cells have an increased proliferation rate, and thus they have a significant fitness advantage (6.9 vs. 1.0 for normal CFU-GM cells) compared with normal cells. Despite this fitness advantage, the disease progresses slowly initially, as cells accumulate in the bone marrow but only weakly affect the output
of normal hematopoietic cells (17). The output of fully differentiated healthy cells only starts to decrease slowly approximately 600 days after the occurrence of the first leukemic stem cell and
diagnosis typically occurs after approximately 870 days (2.4 years). This is substantially shorter than the timescale for the clinical development of chronic myeloid leukemia, multiple myeloma or
solid malignancies such as colon cancer (3, 50).
We find that chemotherapy provides a significant initial response but is unlikely to cure the disease, as a substantial fraction of leukemic stem–like cells is expected to survive treatment and lead to relapse of the disease. Indeed, we find several patients in the INT0129 trial who underwent three cycles of chemotherapy (per protocol) but relapsed within time intervals of approximately 100 to 300 days.
The model suggests that ATRA has differential effects on APL stem–like cells compared with leukemic cells further downstream within its hierarchy. ATRA removes the differentiation block and increases
the proliferation rate of APL cells in the bone marrow. This leads to a rapid expansion of most APL progenitor cells and causes the typical bone marrow peak after 10 to 13 days of ATRA treatment. The
bone marrow will appear free of APL blasts after approximately 20 days. However, cure requires the extinction of all APL stem–like cells. We find, in line with in vitro studies, that ATRA decreases
the proliferation rate of leukemic stem–like cells to about 0.4 of that of untreated leukemic stem–like cells and thus slows down the eradication of APL stem–like cells. Thus, despite a fast initial
response, continued ATRA therapy is necessary to reduce the risk of relapse. Our model suggests that, in the absence of additional mutations, ATRA therapy for one year would result in a high
likelihood of eradication of all leukemic stem–like cells and thus a cure in many patients. This is compatible with long-term follow-up of patients on the INT0129 trial, where only 3.3% of patients
who achieved complete remission experienced a late relapse (defined as occurring >3 years after diagnosis; ref. 37). Our model in its current form does not consider the intrinsic heterogeneity
present in most leukemias or the emergence of mutant subclones that may lead to relapse of the disease. Such extensions of the model may be possible in the future but require more detailed knowledge
about the structure of the tumor. Regardless, our modeling also illustrates the impact of stochastic effects on the response to treatment, and in part explains why outcomes can be vastly different
between patients with similar disease status at diagnosis, even without considerations of tumor evolution. However, our model suggests that continued ATRA treatment can eventually eradicate all
leukemic stem–like cells and thus potentially cure APL, and provides for the first time a time scale for the in vivo eradication of leukemic stem–like cells in humans.
Disclosure of Potential Conflicts of Interest
No potential conflicts of interest were disclosed.
The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Cancer Institute.
Authors' Contributions
Conception and design: B. Werner, A. Traulsen, D. Dingli
Development of methodology: B. Werner, A. Traulsen, D. Dingli
Acquisition of data (provided animals, acquired and managed patients, provided facilities, etc.): R.E. Gallagher, E. Paietta, M.R. Litzow, M.S. Tallman, P.H. Wiernik, J. Slack, C. Willman, Z. Sun
Analysis and interpretation of data (e.g., statistical analysis, biostatistics, computational analysis): B. Werner, R.E. Gallagher, E. Paietta, A. Traulsen, D. Dingli
Writing, review, and/or revision of the manuscript: B. Werner, R.E. Gallagher, E. Paietta, M.R. Litzow, M.S. Tallman, P.H. Wiernik, C. Willman, Z. Sun, A. Traulsen, D. Dingli
Grant Support
This study was coordinated by the ECOG-ACRIN Cancer Research Group (Robert L. Comis, MD, and Mitchell D. Schnall, MD, PhD, Group Co-Chairs) and supported in part by Public Health Service Grants
CA21115, CA14958, CA13650, CA17145, CA86726, and CA56771 from the National Cancer Institute, NIH, and the Department of Health and Human Services.
The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked advertisement in accordance with 18 U.S.C. Section 1734
solely to indicate this fact.
Compartmental architecture and dynamics of hematopoiesis
PLoS ONE
On the origin of multiple mutant clones in paroxysmal nocturnal hemoglobinuria
Stem Cells
Chronic myeloid leukemia: origin, development, response to therapy and relapse
Clin Leukemia
Neutral evolution in paroxysmal nocturnal hemoglobinuria
Proc Natl Acad Sci U S A
Dynamics of mutant cells in hierarchical organized tissues
PLoS Comput Biol
A deterministic model for the occurrence and dynamics of multiple mutations in hierarchically organized tissues
J R Soc Interface
Use of an X-linked human neutrophil marker to estimate timing of lyonization and size of the dividing stem cell pool
J Clin Invest
, et al
Telomere fluorescence measurements in granulocytes and T lymphocyte subsets point to a high turnover of hematopoietic stem cells and memory T cells in early childhood
J Exp Med
Estimating human hematopoietic stem cell kinetics using granulocyte telomere lengths
Exp Hematol
, et al
Hematopoietic stem cell behavior in non-human primates
Statistical theory of cooperative binding to proteins. The Hill equation and the binding potential
J Am Chem Soc
, et al
Acute myeloid leukemia does not deplete normal hematopoietic stem cells but induces cytopenias by impeding their differentiation
Proc Natl Acad Sci U S A
Di Ruscio
, et al
Identification of a myeloid committed progenitor as the cancer-initiating cell in acute promyelocytic leukemia
de The
The t(15;17) translocation of acute promyelocytic leukaemia fuses the retinoic acid receptor alpha gene to a novel transcribed locus
The theory of APL
de The
Revisiting the differentiation paradigm in acute promyelocytic leukemia
PML-RARA can increase hematopoietic self-renewal without causing a myeloproliferative disease in mice
J Clin Invest
, et al
Effect of retinoic acid isomers on proliferation, differentiation and PML relocalization in the APL cell line NB4
Cyclic neutropenia in mammals
Am J Hematol
The allometry of chronic myeloid leukemia
J Theor Biol
, et al
Highly purified primitive hematopoietic stem cells are PML-RARA negative and generate nonclonal progenitors in acute promyelocytic leukemia
, et al
Eradication of acute promyelocytic leukemia-initiating cells through PML-RARA degradation
Nat Med
, et al
A clinical and experimental study on all-trans retinoic acid-treated acute promyelocytic leukemia patients
, et al
Use of all-trans retinoic acid in the treatment of acute promyelocytic leukemia
, et al
All-trans retinoic acid for acute promyelocytic leukemia. Results of the New York Study
Ann Intern Med
, et al
A cell initiating human acute myeloid leukaemia after transplantation into SCID mice
Stem cells, cancer, and cancer stem cells
The increasing complexity of the cancer stem cell paradigm
Evolution of the cancer stem cell model
Cell Stem Cell
, et al
All-trans-retinoic acid in acute promyelocytic leukemia
N Engl J Med
, et al
Quantitative real-time RT-PCR analysis of PML-RAR alpha mRNA in acute promyelocytic leukemia: assessment of prognostic significance in adult patients from intergroup protocol 0129
, et al
Molecular analysis and clinical outcome of adult APL patients with the type V PML-RARalpha isoform: results from intergroup protocol 0129
How I treat acute promyelocytic leukemia
de The
All-trans retinoic acid modulates the retinoic acid receptor-alpha in promyelocytic cells
J Clin Invest
, et al
Association of PML-RAR alpha fusion mRNA type with pretreatment hematologic characteristics but not treatment outcome in acute promyelocytic leukemia: an intergroup molecular study
, et al
All-trans retinoic acid in acute promyelocytic leukemia: long-term outcome and prognostic factor analysis from the North American Intergroup protocol
, et al
All-trans retinoic acid and late relapses in acute promyelocytic leukemia: very long-term follow-up of the North American Intergroup Study I0129
Leuk Res
Exact stochastic simulation of coupled chemical reactions
J Phys Chem
Progenitor cell self-renewal and cyclic neutropenia
Cell Prolif
Reproductive fitness advantage of BCR-ABL expressing leukemia cells
Cancer Lett
Allometric scaling of metabolic rate from molecules and mitochondria to cells and mammals
Proc Natl Acad Sci U S A
Suppl 1
, et al
Overexpression of BP1, a homeobox gene, is associated with resistance to all-trans retinoic acid in acute promyelocytic leukemia cells
Ann Hematol
Mechanisms of action and resistance to all-trans retinoic acid (ATRA) and arsenic trioxide (As2O 3) in acute promyelocytic leukemia
Int J Hematol
Stochastic dynamics of hematopoietic tumor stem cells
Cell Cycle
On the dynamics of neutral mutations in a mathematical model for a homogeneous stem cell population
J R Soc Interface
Acute promyelocytic leukemia: where does it stem from?
, et al
PML-RARalpha initiates leukemia by conferring properties of self-renewal to committed promyelocytic progenitors
, et al
Expression and function of PML-RARA in the hematopoietic progenitor cells of Ctsg-PML-RARA mice
PLoS ONE
, et al
Sequencing a mouse acute promyelocytic leukemia genome reveals genetic events relevant for disease progression
J Clin Invest
Growth rates and responses to treatment in human myelomatosis
Br J Haematol
©2014 American Association for Cancer Research. | {"url":"https://aacrjournals.org/cancerres/article/74/19/5386/597414/Dynamics-of-Leukemia-Stem-like-Cell-Extinction-in","timestamp":"2024-11-07T19:23:29Z","content_type":"text/html","content_length":"371069","record_id":"<urn:uuid:ca645c1e-2952-466d-a5f6-1ce6c5d68b5b>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00422.warc.gz"} |
Jet (mathematics)
In mathematics, the jet is an operation that takes a differentiable function f and produces a polynomial, the truncated Taylor polynomial of f, at each point of its domain. Although this is the
definition of a jet, the theory of jets regards these polynomials as being abstract polynomials rather than polynomial functions.
This article first explores the notion of a jet of a real valued function in one real variable, followed by a discussion of generalizations to several real variables. It then gives a rigorous
construction of jets and jet spaces between Euclidean spaces. It concludes with a description of jets between manifolds, and how these jets can be constructed intrinsically. In this more general
context, it summarizes some of the applications of jets to differential geometry and the theory of differential equations.
Jets of functions between Euclidean spaces
Before giving a rigorous definition of a jet, it is useful to examine some special cases.
One-dimensional case
Suppose that $f\colon\mathbb{R}\to\mathbb{R}$ is a real-valued function having at least k+1 derivatives in a neighborhood U of the point $x_0$. Then by Taylor's theorem,
$$f(x) = f(x_0) + f'(x_0)(x - x_0) + \cdots + \frac{f^{(k)}(x_0)}{k!}(x - x_0)^k + R_{k+1}(x),$$
where the remainder term $R_{k+1}(x)$ is $O(|x - x_0|^{k+1})$ near $x_0$. The k-jet of f at the point $x_0$ is then defined to be the polynomial
$$(J^k_{x_0}f)(z) = f(x_0) + f'(x_0)\,z + \frac{f''(x_0)}{2}z^2 + \cdots + \frac{f^{(k)}(x_0)}{k!}z^k.$$
Jets are normally regarded as abstract polynomials in the variable z, not as actual polynomial functions in that variable. In other words, z is an indeterminate variable allowing one to perform
various algebraic operations among the jets. It is in fact the base-point from which jets derive their functional dependency. Thus, by varying the base-point, a jet yields a polynomial of order at
most k at every point. This marks an important conceptual distinction between jets and truncated Taylor series: ordinarily a Taylor series is regarded as depending functionally on its variable,
rather than its base-point. Jets, on the other hand, separate the algebraic properties of Taylor series from their functional properties. We shall deal with the reasons and applications of this
separation later in the article.
Mappings from one Euclidean space to another
Suppose that $f\colon\mathbb{R}^n\to\mathbb{R}^m$ is a function from one Euclidean space to another having at least (k+1) derivatives. In this case, Taylor's theorem asserts that
$$f(x) = f(x_0) + Df(x_0)\,(x - x_0) + \cdots + \frac{D^k f(x_0)}{k!}(x - x_0)^{\otimes k} + O\!\left(\lVert x - x_0\rVert^{k+1}\right).$$
The k-jet of f is then defined to be the polynomial
$$(J^k_{x_0}f)(z) = f(x_0) + Df(x_0)\,z + \cdots + \frac{D^k f(x_0)}{k!}z^{\otimes k}$$
in $z\in\mathbb{R}^n$, where $z = x - x_0$.
Algebraic properties of jets
There are two basic algebraic structures jets can carry. The first is a product structure, although this ultimately turns out to be the least important. The second is the structure of the composition
of jets.
If $f, g\colon\mathbb{R}^n\to\mathbb{R}$ are a pair of real-valued functions, then we can define the product of their jets via
$$J^k_{x_0}f \cdot J^k_{x_0}g = J^k_{x_0}(f\cdot g).$$
Here we have suppressed the indeterminate z, since it is understood that jets are formal polynomials. This product is just the product of ordinary polynomials in z, modulo the ideal $I$. In other words, it is multiplication in the ring $\mathbb{R}[z]/I$, where $I$ is the ideal generated by polynomials homogeneous of order ≥ k+1.
We now move to the composition of jets. To avoid unnecessary technicalities, we consider jets of functions that map the origin to the origin. If $f\colon\mathbb{R}^m\to\mathbb{R}^\ell$ and $g\colon\mathbb{R}^n\to\mathbb{R}^m$ with f(0) = 0 and g(0) = 0, then $f\circ g\colon\mathbb{R}^n\to\mathbb{R}^\ell$. The composition of jets is defined by
$$J^k_0 f \circ J^k_0 g = J^k_0 (f\circ g).$$
It is readily verified, using the chain rule, that this constitutes an associative noncommutative operation on the space of jets at the origin.
In fact, the composition of k-jets is nothing more than the composition of polynomials modulo the ideal of polynomials homogeneous of order ≥ k+1.
• In one-dimension, let and . Then
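As a concrete illustration (the functions here are an arbitrary choice made for the sake of example), take k = 2, f(x) = x + x^2 and g(x) = x - x^2, both fixing the origin. Then
$$(f\circ g)(x) = (x - x^2) + (x - x^2)^2 = x - 2x^3 + x^4,$$
so, discarding all terms of order ≥ 3,
$$J^2_0 f \circ J^2_0 g = J^2_0 (f\circ g) = x.$$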
Jets at a point in Euclidean space: rigorous definitions
This subsection focuses on two different rigorous definitions of the jet of a function at a point, followed by a discussion of Taylor's theorem. These definitions shall prove to be useful later on
during the intrinsic definition of the jet of a function between two manifolds.
Analytic definition
The following definition uses ideas from mathematical analysis to define jets and jet spaces. It can be generalized to smooth functions between Banach spaces, analytic functions between real or
complex domains, to p-adic analysis, and to other areas of analysis.
Let $C^\infty(\mathbb{R}^n,\mathbb{R}^m)$ be the vector space of smooth functions $f\colon\mathbb{R}^n\to\mathbb{R}^m$. Let k be a non-negative integer, and let p be a point of $\mathbb{R}^n$. We define an equivalence relation on this space by declaring that two functions f and g are equivalent to order k if f and g have the same value at p, and all of their partial derivatives agree at p up to (and including) their k-th order derivatives. In short, $f \sim g$ if and only if $f - g$ vanishes to k-th order at p.
The k-th order jet space of $C^\infty(\mathbb{R}^n,\mathbb{R}^m)$ at p is defined to be the set of equivalence classes of this relation, and is denoted by $J^k_p(\mathbb{R}^n,\mathbb{R}^m)$.
The k-th order jet at p of a smooth function $f\in C^\infty(\mathbb{R}^n,\mathbb{R}^m)$ is defined to be the equivalence class of f in $J^k_p(\mathbb{R}^n,\mathbb{R}^m)$.
Algebraic-geometric definition
The following definition uses ideas from algebraic geometry and commutative algebra to establish the notion of a jet and a jet space. Although this definition is not particularly suited for use in
algebraic geometry per se, since it is cast in the smooth category, it can easily be tailored to such uses.
Let $C^\infty_p$ be the vector space of germs of smooth functions at a point p in $\mathbb{R}^n$. Let $\mathfrak{m}_p$ be the ideal of functions that vanish at p. (This is the maximal ideal for the local ring $C^\infty_p$.) Then the ideal $\mathfrak{m}_p^{k+1}$ consists of all function germs that vanish to order k at p. We may now define the jet space at p by
$$J^k_p(\mathbb{R}^n) = C^\infty_p / \mathfrak{m}_p^{k+1}.$$
If $f\colon\mathbb{R}^n\to\mathbb{R}$ is a smooth function, we may define the k-jet of f at p as the element of $J^k_p(\mathbb{R}^n)$ by setting
$$J^k_p f = f \bmod \mathfrak{m}_p^{k+1}.$$
Taylor's theorem
Regardless of the definition, Taylor's theorem establishes a canonical isomorphism of vector spaces between $J^k_p(\mathbb{R}^n,\mathbb{R}^m)$ and the space of polynomial maps $\mathbb{R}^n\to\mathbb{R}^m$ of degree at most k. So in the Euclidean context, jets are typically identified with their polynomial
representatives under this isomorphism.
Jet spaces from a point to a point
We have defined the space of jets at a point . The subspace of this consisting of jets of functions f such that f(p)=q is denoted by
Jets of functions between two manifolds
If M and N are two smooth manifolds, how do we define the jet of a function ? We could perhaps attempt to define such a jet by using local coordinates on M and N. The disadvantage of this is that
jets cannot thus be defined in an equivariant fashion. Jets do not transform as tensors. Instead, jets of functions between two manifolds belong to a jet bundle.
This section begins by introducing the notion of jets of functions from the real line to a manifold. It proves that such jets form a fibre bundle, analogous to the tangent bundle, which is an
associated bundle of a jet group. It proceeds to address the problem of defining the jet of a function between two smooth manifolds. Throughout this section, we adopt an analytic approach to jets.
Although an algebro-geometric approach is also suitable for many more applications, it is too subtle to be dealt with systematically here. See jet (algebraic geometry) for more details.
Jets of functions from the real line to a manifold
Suppose that M is a smooth manifold containing a point p. We shall define the jets of curves through p, by which we henceforth mean smooth functions $f\colon\mathbb{R}\to M$ such that f(0) = p. Define an equivalence relation as follows. Let f and g be a pair of curves through p. We will then say that f and g are equivalent to order k at p if there is some neighborhood U of p such that, for every smooth function $\varphi\colon U\to\mathbb{R}$, the derivatives of $\varphi\circ f$ and $\varphi\circ g$ agree at 0 up to (and including) order k. Note that these jets are well-defined since the composite functions $\varphi\circ f$ and $\varphi\circ g$ are just mappings from the real line to itself. This equivalence relation is sometimes called that of k-th order contact between
curves at p.
We now define the k-jet of a curve f through p to be the equivalence class of f under this relation. The k-th order jet space at p is then the set of k-jets of curves through p.
As p varies over M, these jet spaces form a fibre bundle over M: the k-th order tangent bundle, often denoted in the literature by T^kM (although this notation occasionally can lead to confusion). In the case k=1, the first-order tangent bundle is the usual tangent bundle: T^1M=TM.
To prove that T^kM is in fact a fibre bundle, it is instructive to examine the properties of these jet spaces in local coordinates. Let (x^i) = (x^1,...,x^n) be a local coordinate system for M in a neighborhood U of p
. Abusing notation slightly, we may regard (x^i) as a local diffeomorphism.
Claim. Two curves f and g through p are equivalent modulo this relation if and only if the coordinate curves $x^i\circ f$ and $x^i\circ g$ have the same derivatives at 0 up to (and including) order k, for i = 1, ..., n.
Indeed, the only if part is clear, since each of the n functions x^1,...,x^n is a smooth function from M to $\mathbb{R}$. So by the definition of the equivalence relation, two equivalent curves must have coordinate curves that agree to order k at 0.
Conversely, suppose that φ is a smooth real-valued function on M in a neighborhood of p. Since every smooth function has a local coordinate expression, we may express φ as a function in the
coordinates. Specifically, if Q is a point of M near p, then
for some smooth real-valued function ψ of n real variables. Hence, for two curves f and g through p, we have
The chain rule now establishes the if part of the claim. For instance, if f and g are functions of the real variable t , then
which is equal to the same expression when evaluated against g instead of f, recalling that f(0)=g(0)=p and f and g are in k-th order contact in the coordinate system (x^i).
Hence the ostensible fibre bundle T^kM admits a local trivialization in each coordinate neighborhood. At this point, in order to prove that this ostensible fibre bundle is in fact a fibre bundle, it
suffices to establish that it has non-singular transition functions under a change of coordinates. Let be a different coordinate system and let be the associated change of coordinates diffeomorphism
of Euclidean space to itself. By means of an affine transformation of , we may assume without loss of generality that ρ(0)=0. With this assumption, it suffices to prove that is an invertible
transformation under jet composition. (See also jet groups.) But since ρ is a diffeomorphism, is a smooth mapping as well. Hence,
which proves that is non-singular. Furthermore, it is smooth, although we do not prove that fact here.
Intuitively, this means that we can express the jet of a curve through p in terms of its Taylor series in local coordinates on M.
Examples in local coordinates:
• As indicated previously, the 1-jet of a curve through p is a tangent vector. A tangent vector at p is a first-order differential operator acting on smooth real-valued functions at p. In local
coordinates, every tangent vector has the form
$$v = v^i\,\frac{\partial}{\partial x^i}.$$
Given such a tangent vector v, let f be the curve given in the x^i coordinate system by $x^i(f(t)) = t\,v^i$. If φ is a smooth function in a neighborhood of p with φ(p) = 0, then $\varphi\circ f$ is a smooth real-valued function of one variable whose 1-jet is given by
$$J^1_0(\varphi\circ f)(t) = t\, v^i\,\frac{\partial\varphi}{\partial x^i}(p),$$
which proves that one can naturally identify tangent vectors at a point with the 1-jets of curves through that point.
• The space of 2-jets of curves through a point.
In a local coordinate system x^i centered at a point p, we can express the second order Taylor polynomial of a curve f(t) by
So in the x coordinate system, the 2-jet of a curve through p is identified with a list of real numbers . As with the tangent vectors (1-jets of curves) at a point, 2-jets of curves obey a
transformation law upon application of the coordinate transition functions.
Let (y^i) be another coordinate system. By the chain rule,
$$\frac{d}{dt}\,y^i(f(t)) = \frac{\partial y^i}{\partial x^j}(f(t))\,\frac{d}{dt}\,x^j(f(t)),$$
$$\frac{d^2}{dt^2}\,y^i(f(t)) = \frac{\partial y^i}{\partial x^j}(f(t))\,\frac{d^2}{dt^2}\,x^j(f(t)) + \frac{\partial^2 y^i}{\partial x^j\,\partial x^k}(f(t))\,\frac{d}{dt}\,x^j(f(t))\,\frac{d}{dt}\,x^k(f(t)).$$
Hence, the transformation law is given by evaluating these two expressions at t=0.
Note that the transformation law for 2-jets is second order in the coordinate transition functions.
Jets of functions from a manifold to a manifold
We are now prepared to define the jet of a function from a manifold to a manifold.
Suppose that M and N are two smooth manifolds. Let p be a point of M. Consider the space consisting of smooth maps defined in some neighborhood of p. We define an equivalence relation on as follows.
Two maps f and g are said to be equivalent if, for every curve γ through p (recall that by our conventions this is a mapping such that ), we have on some neighborhood of 0.
The jet space is then defined to be the set of equivalence classes of modulo the equivalence relation . Note that because the target space N need not possess any algebraic structure, also need not
have such a structure. This is, in fact, a sharp contrast with the case of Euclidean spaces.
If is a smooth function defined near p, then we define the k-jet of f at p, , to be the equivalence class of f modulo .
John Mather introduced the notion of multijet. Loosely speaking, a multijet is a finite list of jets over different base-points. Mather proved the multijet transversality theorem, which he used in
his study of stable mappings.
Jets of sections
This subsection deals with the notion of jets of local sections of a vector bundle. Almost everything in this section generalizes mutatis mutandis to the case of local sections of a fibre bundle, a
Banach bundle over a Banach manifold, a fibered manifold, or quasi-coherent sheaves over schemes. Furthermore, these examples of possible generalizations are certainly not exhaustive.
Suppose that E is a finite-dimensional smooth vector bundle over a manifold M, with projection . Then sections of E are smooth functions such that is the identity automorphism of M. The jet of a
section s over a neighborhood of a point p is just the jet of this smooth function from M to E at p.
The space of jets of sections at p is denoted by . Although this notation can lead to confusion with the more general jet spaces of functions between two manifolds, the context typically eliminates
any such ambiguity.
Unlike jets of functions from a manifold to another manifold, the space of jets of sections at p carries the structure of a vector space inherited from the vector space structure on the sections
themselves. As p varies over M, the jet spaces form a vector bundle over M, the k-th order jet bundle of E, denoted by J^k(E).
• Example: The first-order jet bundle of the tangent bundle.
We work in local coordinates at a point. Consider a vector field
in a neighborhood of p in M. The 1-jet of v is obtained by taking the first-order Taylor polynomial of the coefficients of the vector field:
In the x coordinates, the 1-jet at a point can be identified with a list of real numbers . In the same way that a tangent vector at a point can be identified with the list (v^i), subject to a
certain transformation law under coordinate transitions, we have to know how the list is affected by a transition.
So let us consider the transformation law in passing to another coordinate system y^i. Let w^k be the coefficients of the vector field v in the y coordinates. Then in the y coordinates, the 1-jet
of v is a new list of real numbers . Since
it follows that
Expanding by a Taylor series, we have
Note that the transformation law is second order in the coordinate transition functions.
Differential operators between vector bundles
See the coordinate independent description of a differential operator.
• Krasil'shchik, I. S., Vinogradov, A. M., [et al.], Symmetries and conservation laws for differential equations of mathematical physics, American Mathematical Society, Providence, RI, 1999, ISBN
• Kolář, I., Michor, P., Slovák, J., Natural operations in differential geometry. Springer-Verlag: Berlin Heidelberg, 1993. ISBN 3-540-56235-4, ISBN 0-387-56235-4.
• Saunders, D. J., The Geometry of Jet Bundles, Cambridge University Press, 1989, ISBN 0-521-36948-7
• Olver, P. J., Equivalence, Invariants and Symmetry, Cambridge University Press, 1995, ISBN 0-521-47811-1
• Sardanashvily, G., Advanced Differential Geometry for Theoreticians: Fiber bundles, jet manifolds and Lagrangian theory, Lambert Academic Publishing, 2013, ISBN 978-3-659-37815-7; arXiv:
See also
This article is issued from
- version of the 10/19/2016. The text is available under the
Creative Commons Attribution/Share Alike
but additional terms may apply for the media files. | {"url":"https://ipfs.io/ipfs/QmXoypizjW3WknFiJnKLwHCnL72vedxjQkDDP1mXWo6uco/wiki/Jet_(mathematics).html","timestamp":"2024-11-09T06:14:47Z","content_type":"text/html","content_length":"64711","record_id":"<urn:uuid:67f3c3da-01aa-4199-b0e7-c6a8d7b11283>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00196.warc.gz"} |
How Gauth makes statistical analysis easy for students
Statistical analysis is an element of many disciplines in humanities and sciences, social, natural, economic, and business sciences. However, for many students, the process of mastering statistical
methods and concepts is rather difficult because the subject is complex and abstract. Gauth is an improved version of an educational tool that assists learners in dealing with the challenges of
statistical analysis and makes it less horrifying. This article focuses on how Gauth assists students in understanding and applying statistical
analysis appropriately.
Breaking Down Complex Concepts
The major difficulty that students encounter in statistical analysis is the understanding of such concepts as probability distributions, hypothesis testing, regression analysis, and inferential
statistics. These topics involve heavy computation and abstract forms of reasoning that many students find difficult.
Gauth has made these concepts easy to follow by giving clear and easy-to-follow step-by-step procedures. The AI has basic information examples and illustrations that help the students grasp the basic
knowledge before moving to a higher level of knowledge. Therefore, Gauth assists the students in gaining a good foundation of the basic concepts that will aid them in solving more complex problems.
Interactive Problem Solving
On one hand, it is possible to have a clear understanding of statistical concepts; on the other hand, it is quite a different thing to be able to use them to solve problems. Gauth is most effective
in teaching students how to apply statistical methods in problem-solving. It has problem-solving sections in which students can enter data and select the appropriate statistical techniques, and the system will provide immediate feedback on whether the choice is correct.
For instance, if a student is working on a problem that is connected to linear regression, Gauth can explain to him/her what variables to search for, how to find the regression line, and how to
interpret the results. This makes learning more practical and it also helps the students to understand how statistics is used in real-life situations.
Calculations and Error-Checking Features
Computations that are performed manually in statistics may be time-consuming and may also be prone to several errors especially when working with large data sets or complex equations. Gauth does
these calculations for the students, therefore, the students do not have to bother themselves with the calculations and they can focus on the results. In regards to computations, whether it is means,
standard deviations, or p-values, Gauth does these computations well.
Furthermore, the AI also has an error-checking capability incorporated in it. If a student is wrong, for instance, in the formula used or the interpretation of data, Gauth will detect the mistake and
assist the student in the right approach. This feature not only saves time but also assists the students to know where they made a mistake and how to rectify it hence enhancing their learning.
Accessibility of a variety of materials
Gauth also provides students with several aids including tutorials, practice problems, and references. These resources are meant to improve the features of AI by giving the students other methods by
which they can study and practice statistics. If a student needs to revise what he or she has learned or needs detailed assistance on a subject or level of learning, Gauth has all the material to
assist the student.
In conclusion, Gauth is a useful tool that assists students in learning statistical analysis by explaining the concepts, allowing the student to solve problems, calculating the results, offering the
tools for data visualization, and adjusting to the student’s learning style. Thus, students can apply Gauth to improve their knowledge of statistics and be more confident in the practical
application of the obtained knowledge in their academic and work activities. | {"url":"https://802traders.com/how-gauth-makes-statistical-analysis-easy-for-students/","timestamp":"2024-11-09T03:10:00Z","content_type":"text/html","content_length":"19041","record_id":"<urn:uuid:04870a57-50bb-4d1a-aed6-38f304ae43b4>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00659.warc.gz"} |
Outlier Detection and Handling
What Should We Do with Extreme Values?
Every time we ask for a numerical response like income, expenditure, consumption, etc., and allow people to enter the value freely, we are going to end up with extreme values. Imagine a scenario in
which we asked people about daily milk consumption in their household in liters. Most of the answers would be around one liter per day. But then we get a value of 15 liters per day. Is it possible?
Maybe the family has many children, or perhaps they make craft cheese. The other option is that person omitted the dot, and that what they really meant was 1.5. What if we got a value of 150 liters
per day? Should we report that an average household consumes 7.5 liters per day just because of a couple of extremes? That wouldn't be smart. Those awkward situations are frequent, and don’t happen
only when studying people. Conditions under which technical devices work can also change so that they produce extreme results. Basically, whatever your data source is, an extreme value is bound to
appear at some point. The question is – what do we do with it? In this post, you will learn about several ways to detect extreme values and deal with them.
Key metrics: What do I need?
Any question that requires an open-ended numerical response will most likely generate some outlying values. Basically, any numerical measure, no matter what the source is, is vulnerable to outliers.
The main concepts: What should I know?
The standard deviation approach
A common approach to detecting extreme values is calculating the standard deviation of the results (read here sample) and then flagging all the values that fall outside of ±3SD as outliers. When our
sample size is relatively small (n<1000), we can also use a less strict criterion of ±2.5SD. However, this approach presumes that the data is approximately bell-shaped (read here sample), although
analysts commonly neglect this fact and apply it indiscriminately.
If the distribution of the results drastically deviates from the bell curve,[1] you can use another approach, which uses the interquartile range (IQR). To calculate the IQR, you should sort your data and find the value below which 25% of the results fall (Q1) and the value below which 75% of the results fall (Q3). If you have a result higher than Q3 + ((Q3-Q1)*3) or lower than Q1 – ((Q3-Q1)*3), you can say that you have a clear outlier. If you want to detect moderate-intensity outliers, you should multiply the distance between Q1 and Q3 (the IQR) by 1.5 instead of 3.
[1] 9 out of 10 analysts will likely use the eyeball method to determine whether the distribution deviates from normal, or even worse – they will simply assume that it is normal and proceed as if it were. We advise you to be an outlier. Use the Kolmogorov–Smirnov and Shapiro–Wilk tests to assess the normality of your data's distribution.
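A minimal sketch of both flagging rules in Python (the cutoffs and the toy data are illustrative only):

import numpy as np

def flag_outliers(values, sd_cutoff=3.0, iqr_multiplier=3.0):
    # Flags outliers with the standard-deviation rule and with the IQR rule.
    values = np.asarray(values, dtype=float)
    mean, sd = values.mean(), values.std(ddof=1)
    sd_flags = np.abs(values - mean) > sd_cutoff * sd      # outside mean +/- sd_cutoff * SD
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    iqr_flags = (values < q1 - iqr_multiplier * iqr) | (values > q3 + iqr_multiplier * iqr)
    return sd_flags, iqr_flags

milk = np.array([0.5, 0.8, 0.9, 1.0, 1.1, 1.2, 1.5, 15.0])   # liters/day, one suspicious entry
sd_flags, iqr_flags = flag_outliers(milk)
# With only 8 observations the SD rule can miss 15.0 (the outlier inflates the SD itself);
# the IQR rule flags it.
print(milk[sd_flags], milk[iqr_flags])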
Robust methods
The approaches above are meant to flag outliers, typically for deletion. However, instead of simply deleting outliers, you can also apply special metrics that are less vulnerable to outliers. We say
that these measures are robust. For example, as an alternative to the mean, which is very sensitive to extreme values, we can use median, trimmed mean, winsorized mean and M estimator.
Let’s start with the median. To calculate the median, you should sort your data and find the value below which lies 50% of the data. While the median is very robust, it does not describe the data set
very well – it’s not very informative. The trimmed mean, on the other hand, better summarizes the data. It is computed by cutting off a certain percentage of the most extreme cases from both sides of
the data distribution. The usual trim value is 20%, which means that we cut off 20% of the cases with the highest and 20% of the cases with the lowest values, and then calculate the mean using the
remaining data points. Another similar robust measure is the winsorized mean. Winsorization is similar to the previously described trimming. However, instead of removing the extreme cases, we replace
the bottom unwanted percentage of cases with the lowest accepted value and the top unwanted percentage of cases with the highest accepted value. Similarly to the trimmed mean, the standard percentage
of replaced cases is 20% at the top and at the bottom, but you can use a smaller or higher percent.
M estimators are a more mathematically advanced robust alternative to the mean. The most popular is Huber's M estimator, but others, such as Tukey's biweight, are also commonly used. Simply put, the M estimators are based
on minimizing the function of the distance between each of the data points to the central value. The advantage of M estimators is that they preserve more information about the data set while still
being robust to the existence of the outliers. Their disadvantage is the conceptual complexity, which makes them very rare in business data analysis.
Visualizing distributions with Box plots
The most common way to visualize data with outliers is a Box plot diagram (or Box plot with whiskers). Let’s take a look at the accompanying case-study dashboard below. The data set used contains
data on sales of different beverage brands[2] at various points of sale (PoS). Gray dots represent the sales volumes at each PoS, while the orange dots represent the volume at the currently selected
PoS (you can use the controller in the top right corner to select a different PoS).
Let's focus on brand E. The midline of the blue box is the median, or the 50th percentile. The bottom part of the box (dark blue) represents the sales volumes of the PoSs between the 1st quartile (or 25th percentile) and the median. The top part of the box (light blue) shows the sales volumes of the PoSs between the median and the 3rd quartile (or 75th percentile). The lines extending below and above the box are called whiskers, and they encompass all PoSs whose volume of sales is not extreme. All PoSs that fall outside the whiskers are considered outliers. In this case, the bounds are determined by the following formulas:
Minimum = Q1 – ((Q3-Q1)*1.5)
Maximum = Q3 + ((Q3-Q1)*1.5)
[2] Brands are white labeled by capital letters from A to G
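To draw such a diagram yourself, Matplotlib's boxplot applies the same whisker rule when whis=1.5 (the data below are the milk-consumption example from the introduction):

import matplotlib.pyplot as plt

milk = [0.5, 0.8, 0.9, 1.0, 1.1, 1.2, 1.5, 15.0]
fig, ax = plt.subplots()
ax.boxplot(milk, whis=1.5)   # points beyond Q1 - 1.5*IQR or Q3 + 1.5*IQR are drawn as outliers
ax.set_ylabel("liters per day")
plt.show()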
How to approach outlier detection in practice
We outlined several approaches to identifying and handling outliers, and we might have left you feeling confused about what to do. The strategy we propose is to calculate the mean and some of the
robust measures. If you get a big difference, you should interpret it as a red flag for existence of extreme values. You should also use some kind of visual inspection, such as the box and whiskers
plot. Always keep in mind the distribution of your data – don’t just assume that it is normally distributed.
All this might seem like tedious work, and it is. However, spending some time on outlier detection before sending the report will save you from a much bigger headache that might otherwise come your
way after the report has been released. Also, one final note – it’s a good practice to decide on the approach that you are going to use to handle outliers before you begin the actual analysis.
Otherwise, it can be easy to succumb to the temptation to modify the results according to one’s expectations.
Further considerations
In this post, we covered only univariate outliers, which means we considered only values on one metric. However, outliers can also be defined by their position on multiple metrics simultaneously. In
that case, identifying them can be tricky. Multidimensional outliers do not have to be detectable when looking at any of the measures individually, meaning that the strategies for outlier detection
that we outlined above could fail for such cases. | {"url":"https://www.data-in-practice.com/post/outlier-detection","timestamp":"2024-11-07T06:41:47Z","content_type":"text/html","content_length":"1050485","record_id":"<urn:uuid:857ac1f5-7246-4e83-a93c-f5044338179d>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00414.warc.gz"} |
4.2 Yaw of Repose and Resulting Crossrange Deflection
Referring to Figure 4.1-1, as the bullet flies, the principal aerodynamic force on the bullet acts directly opposite to the velocity vector. The projection of this force along the longitudinal axis
is the drag force on the bullet. The drag force acts through both the center of pressure and the center of mass, and so it does not create a torque on the bullet. With a tiny yaw angle (i.e., the yaw
of repose) a very small component of the aerodynamic force acts horizontally and sideward on the bullet. This small sideforce, acting at the
Figure 4.1-1 Bullet Flight Characteristics
center of pressure, creates a torque on the bullet equal to the force multiplied by the moment arm, that is, the distance between the center of mass and the center of pressure. The direction of this
torque is downward for a right hand spinning bullet, or upward for a left-hand spinning bullet.
To visualize this situation refer to Figure 4.2-1, which is Figure 4.1-1 viewed from directly above the bullet. Note that there is no wind acting in Figure 4.2-1. The principal trajectory parameters
are displayed with the correct relationships for a bullet with a right hand spin. The velocity vector V is in the trajectory plane and tangent to the trajectory path. The aerodynamic force Faero is
directed opposite to the velocity vector. The bullet has a spin angular momentum H that is directed along the longitudinal axis and forward for a right-hand spin. The yaw of repose is the small angle
between the H vector and the V vector. This angle causes a small component of the aerodynamic force, called Fside, to act on the side of the bullet, and which can be thought of as acting at the
center of pressure of the bullet. This sideforce creates a torque vector M on the bullet. The torque vector M is the vector cross product of the moment arm r, which extends from the center of mass to
the center of pressure, and the sideforce Fside. The direction of the torque vector M is downward, that is, perpendicular to the plane containing r and Fside. The torque vector does not point exactly
vertically downward, because of the inclination angle of the trajectory, but it is exactly perpendicular to the plane of r and Fside.
Now consider the angular motion of the bullet, which is governed by the equations of angular motion. A key parameter in these equations is the angular momentum of the bullet, which consists of two
components. The first component is the spin angular momentum H shown in Figure 4.2-1, which is large in order to guarantee stabilization of the bullet. Because the bullet rotates downward in the
pitch direction as it flies, a second component of
angular momentum is directed horizontally in the direction opposite to Fside. This component is so small that it can be considered negligible compared to the spin angular momentum.
The magnitude of the spin angular momentum of a bullet is nearly constant as the bullet flies. It changes very slowly because the rotational frictional force and torque acting on the bullet are
small. Consequently, the change in the vector angular momentum of the bullet as it flies is very nearly limited to a change in direction of the spin angular momentum vector H, with no change in the
magnitude of that quantity. Under this condition, the equations of angular motion tell us that the angular momentum vector H rotates toward the torque vector M applied to the bullet.
Consequently, the spin angular momentum vector H, which is always along the central axis of the bullet, rotates downward toward the torque vector M caused by the sideforce Fside. So, in essence the
sideforce causes the bullet to rotate downward in the pitch direction to keep the axis of the bullet almost exactly tangent to the trajectory curve as the bullet flies along the trajectory arc. Of
course, as the axis of the bullet and the vector H rotate downward, the vector M also rotates at exactly the same rate, so that H always remains perpendicular to M. This entire situation is almost a
steady state motion; everything changes very slowly as the bullet flies. The yaw of repose angle, the spin angular momentum magnitude, the torque magnitude, and the sideforce are nearly, but not
quite, constant as the bullet flies from muzzle to target.
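The relationship described in the preceding paragraphs can be summarized compactly (the "pitch rate" notation is introduced here for convenience, and the magnitude of H is treated as constant, as assumed above):

M = r × Fside,    dH/dt = M,    pitch rate ≈ |M| / |H|

so the small sideforce produces exactly the slow pitch-down rotation of the bullet's axis that keeps it tangent to the trajectory.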
Because the sideforce Fside acts throughout the flight of the bullet, a horizontal (crossrange) deflection of the bullet will result. This deflection is generally small, but it can be noticed,
especially by long-range target shooters. This is because the deflection increases as time of flight to the target grows longer. Usually the observation comes about as follows. A rifle is sighted in
at point of aim, say, at 200 yards. Then the range to the target is changed to 400 yards. The shooter makes an elevation correction to the rifle sights for the longer range, and a sighting shot (or
group) is fired. The shooter notices that the shot (or group) is deflected to the right (for a RH twist barrel) by a few inches, but there is no crosswind to account for this deflection. The shooter
can apply a windage correction for the 400-yard range, and everything goes well at that range distance. Then, if the range is changed to 600 yards, the shooter has the same experience. A satisfactory
sight elevation correction can be made, but shots will be deflected a few inches to the right, necessitating a windage correction even in the absence of a crosswind. The sideforce arising from the
yaw of repose is the cause of this unexpected crossrange deflection of bullets. [Here we assume that the crosshairs in the telescope (and the adjustment axes) are aligned precisely vertically and
horizontally, so that the sight adjustments are precisely vertical and horizontal.] Crossrange deflections occur also for bullets with left-hand spin, but the deflections are toward the left rather
than toward the right. For a bullet with left-hand spin, the spin angular momentum vector is directed out of the tail of the bullet. To cause the bullet to rotate downward in pitch, an upward
vertical torque is necessary so that the angular momentum vector will rotate upward. This in turn requires a sideforce directed from right to left across the trajectory plane, and this can result
only from a yaw of repose angle to the left of the trajectory plane (a small, nose-left angle of the bullet as it flies). Consequently, the sideforce is directed to the left, and the bullet deflects
in the crossrange direction to the left as it flies downrange. | {"url":"https://www.sierrabullets.com/exterior-ballistics/4-2-yaw-of-repose-and-resulting-crossrange-deflection/","timestamp":"2024-11-08T18:50:17Z","content_type":"text/html","content_length":"195636","record_id":"<urn:uuid:6371f63e-a7f4-4322-b7f9-6c7e36b4096b>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00447.warc.gz"} |
Build Tunable Model for Tuning with hinfstruct
This example shows how to construct a tunable model of a control system for tuning with hinfstruct. To do so, build a generalized linear model of your closed-loop system, incorporating weighting
functions that capture your design requirements (see Formulating Design Requirements as H-Infinity Constraints).
1. Use commands such as tf, zpk, and ss to create numeric linear models that represent the fixed elements of your control system and any weighting functions that represent your design requirements.
2. Use tunable models (either control design blocks or generalized LTI models) to represent the tunable elements of your control system. For more information about tunable models, see Models with
Tunable Coefficients.
3. Use model-interconnection commands such as series, parallel, and connect to construct your closed-loop system from the numeric and tunable models.
For this example, build a tunable model of the closed-loop system with weighting functions shown in the following block diagram.
This block diagram represents a head-disk assembly (HDA) in a hard disk drive. The architecture includes the plant G in a feedback loop with a PI controller C and a low-pass filter, F = a/(s+a). The
PI gains of C and the filter parameter a are tunable to achieve a desired response. For hinfstruct, you encode the desired response with the weighting functions LS and 1/LS, which express a target
loop shape. Let T(s) denote the closed-loop transfer function from the inputs $\{r, n_w\}$ to the outputs $\{y, e_w\}$. Then, constraining the $H_{\infty}$ norm to less than 1 ($\|T(s)\|_{\infty} < 1$) approximately enforces the target loop shape.
For this example, use the target loop shape given by:
$LS = \frac{1 + 0.001\,(s/\omega_c)}{0.001 + (s/\omega_c)}$
This value of LS corresponds to the following open-loop response shape.
wc = 1000;
s = tf('s');
LS = (1+0.001*s/wc)/(0.001+s/wc);
To prepare for tuning the controller and filter, construct a tunable model of the closed-loop system T(s). First, load the plant model G, a ninth-order SISO state-space (ss) model.
Create a tunable model of the PI controller, using the predefined control design block tunablePID.
C = tunablePID('C','pi');
There is no predefined control design block for the filter structure F = a/(s+a). You can create the tunable filter using realp.
a = realp('a',1);
F = tf(a,[1 a]);
To build the closed-loop model, first label all the inputs and outputs of the system components.
G.InputName = 'u';
G.OutputName = 'y';
We = LS;
We.InputName = 'e';
We.OutputName = 'ew';
Wn = 1/LS;
Wn.InputName = 'nw';
Wn.OutputName = 'n';
C.InputName = 'e';
C.OutputName = 'u';
F.InputName = 'yn';
F.OutputName = 'yf';
Specify the summing junctions in terms of the I/O labels of the other components of the control system. One junction takes the difference between the reference signal and the filtered output,
producing the error signal e. The other junction adds noise to the plant output, producing the noisy output yn.
Sum1 = sumblk('e = r - yf');
Sum2 = sumblk('yn = y + n');
Finally, use connect to combine all the elements into a complete model of the closed-loop system.
T0 = connect(G,Wn,We,C,F,Sum1,Sum2,{'r','nw'},{'y','ew'});
T0 is a genss object representing the entire closed-loop control system incorporating the loop-shaping weighting functions. The Blocks property of T0 contains the tunable blocks C and a. (In this
example, the control system model T0 is a continuous-time model, with T0.Ts = 0. You can also use hinfstruct with a discrete-time model, provided that you specify a definite sample time, T0.Ts ≠ –1.)
ans = struct with fields:
C: [1x1 tunablePID]
a: [1x1 realp]
You can now use hinfstruct to tune a and the free parameters of C. See Tune and Validate Controller Parameters.
See Also
Related Topics | {"url":"https://it.mathworks.com/help/robust/gs/build-tunable-model-tuning-with-hinfstruct.html","timestamp":"2024-11-10T09:49:54Z","content_type":"text/html","content_length":"78589","record_id":"<urn:uuid:e29a33e8-b9ed-44c2-8a31-0c268bb243d4>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00813.warc.gz"} |
What is the difference between norm 1 and norm 2?
Specifically, you learned: The L1 norm that is calculated as the sum of the absolute values of the vector. The L2 norm that is calculated as the square root of the sum of the squared vector values.
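To make the two definitions concrete, here is a small illustrative sketch in Python (the vector values are arbitrary, chosen only for the example):

import numpy as np

x = np.array([3.0, -4.0, 1.0])

# L1 norm: sum of the absolute values of the vector components
l1 = np.sum(np.abs(x))            # 3 + 4 + 1 = 8
# L2 norm: square root of the sum of the squared components
l2 = np.sqrt(np.sum(x ** 2))      # sqrt(9 + 16 + 1) = sqrt(26) ~ 5.10

# The same results via numpy's built-in norm function
print(l1, np.linalg.norm(x, 1))   # 8.0 8.0
print(l2, np.linalg.norm(x, 2))   # 5.099... 5.099...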
Should I use L1 or L2 norm?
From a practical standpoint, L1 tends to shrink coefficients to zero whereas L2 tends to shrink coefficients evenly. L1 is therefore useful for feature selection, as we can drop any variables
associated with coefficients that go to zero. L2, on the other hand, is useful when you have collinear/codependent features.
What is a 1-norm?
The 1-norm of a vector is simply the sum of the absolute values of its components.
What does L1 norm tell you?
The L1 norm is the sum of the magnitudes of the vectors in a space. It is a natural way of measuring distance between vectors: the sum of the absolute differences of the components of the vectors. In this norm, all the components of the vector are weighted equally.
Why is L1 more robust than L2?
Robustness: L1 > L2. The L1 norm is more robust than the L2 norm, for fairly obvious reasons: the L2 norm squares values, so it increases the cost of outliers quadratically; the L1 norm only takes the absolute value, so it considers them linearly.
What are the advantages of L1 over L2 normalization?
Advantages of L1 over L2 norm (explanation on Quora) This means the L1 norm performs feature selection and you can delete all features where the coefficient is 0. A reduction of the dimensions is
useful in almost all cases. The L1 norm optimizes the median. Therefore the L1 norm is not sensitive to outliers.
What’s the difference between L1 and L2 regularization and why would you use each?
L1 regularization tends to drive many of the model's feature weights exactly to zero and is adopted for decreasing the number of features in a high-dimensional dataset. L2 regularization disperses the error terms across all the weights, which leads to more accurate, customized final models.
What is a L2 norm?
The L2-norm (also written "ℓ2-norm") is a vector norm defined for a complex vector.
What are some examples of norms?
Examples include:
• Acknowledge others in the elevator with a simple nod or say hi.
• Stand facing the front.
• Never push extra buttons, only the one for your floor.
• Never stand right by someone if you are the only two people on board.
• Do not act obnoxiously on the elevator.
What is the difference between L1 L2 regularization?
The differences between L1 and L2 regularization: L1 regularization penalizes the sum of absolute values of the weights, whereas L2 regularization penalizes the sum of squares of the weights. The L1
regularization solution is sparse. The L2 regularization solution is non-sparse.
Why is L2 norm more stable than L1 norm?
The L2-norm is more stable under a small adjustment of a data point because it is smooth (differentiable everywhere). The L1 norm involves an absolute value, which makes it a non-differentiable piecewise function. | {"url":"https://www.kembrel.com/essay-guide/what-is-the-difference-between-norm-1-and-norm-2/","timestamp":"2024-11-03T15:55:17Z","content_type":"text/html","content_length":"65969","record_id":"<urn:uuid:459a2482-1b43-4375-b3f5-90bc7b11e890>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00402.warc.gz"}
A new branching tree model has been proposed for the first time in the direction of increasing degree 2^n (merging in the reverse direction), which coincides with the direction of increasing total
stopping time. It has been shown that each time corresponds to a sequence of individual numbers n(tst)→∞, the volume of which increases with time. Thus, it is proven that each time corresponds to a
finite number of Collatz sequences of the same length. The reason for the formation of a histogram or spectrum tst(q) with two peaks has been established. It is shown that the double structure is
formed by the regularities of Jacobsthal recurrence numbers at the nodes of the sequences. It has been established that the graph tst(q) with the numbers of active nodes in semi-logarithmic
coordinates tst, logm(p) appears as a straight line, while the graph for the numbers of inactive nodes appears as a scattered spectrum. Based on the established statistical regularities tst(q), a new
recurrent model of trivial cycles is proposed.
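For readers who want to reproduce the basic quantity discussed here, the sketch below computes the total stopping time tst(q) of the standard Collatz map; it implements only the textbook definition, not the branching-tree model proposed in the paper:

def total_stopping_time(q):
    # number of n/2 and 3n+1 steps needed to reach 1 from q
    steps = 0
    while q != 1:
        q = q // 2 if q % 2 == 0 else 3 * q + 1
        steps += 1
    return steps

# the first few values of tst(q)
print([total_stopping_time(q) for q in range(1, 11)])
# [0, 1, 7, 2, 5, 8, 16, 3, 19, 6]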
[1] R. Terras. A stopping time problem on the positive integers. Acta Arith. 30: 241–252, 1976.
[2] J. C. Lagarias, The (3x+1)-problem and its generalizations, American Mathematical Monthly 92 (1985), 3–23.
[3] K. A. Borovkov and D. Pfeifer, Estimates for the Syracuse Problem via a probabilistic model, Theory of Probability and its Applications 45, N2 (2000), 300–310.
[4] G. J. Wirsching, The Dynamical System generated by the (3x+ 1)–function, Lecture Notes in Mathematics, N1681, Springer–Verlag, Berlin, 1998, 158p.
[5] B.Gurbaxani. An Engineering and Statistical Look at the Collatz (3n + 1) Conjecture. arXiv preprint arXiv:2103.15554
[6] M. Rasool, S.Belhaouari. From Collatz Conjecture to chaos and hash function. Chaos, Solitons and Fractals 176 (2023) 114103, 2023. http://creativecommons.org/licenses/by/4.0/
[7] A. Grubiy. Automation implementations of the process of generating Collatz sequence. Vol.48, pp.108-116,2012
[8] Y. Sinai. Statistical (3x+1) problem, Dedicated to the memory of Jurgen K. Moser. Communications in Pure & Applied Math., 56(7), 1016–1028, 2003.
[9] T. Tao. Almost all orbits of the Collatz map attain almost bounded values. Forum of Mathematics, Pi,Volume (10),2022.
[10] http://en.wikipedia.org/wiki/File:CollatzStatistic100million.png
[11] C. AllenMc. Histogram of total stopping times for the numbers 1 to 100 million (2013). Link: https://en.wikipedia.org/wiki/Collatz_conjecture#/media/File: CollatzStatistic100million.png
[12] Thomas e Silva. Computational Verification of the 3x+1 conjecture, Universidade de Aveiro (2015). Link: http://sweet.ua.pt/tos/3x+1.html
[13] U. Rinat. Collatz Conjecture: calculation in reverse with JavaScript. https://blog.rinatussenov.com/collatz-conjecture-calculation-in-reverse-with-javascript-a768fab10425
[14] J. Miller. Reversing the Collatz Conjecture Linearly. https://medium.com/@jordan.kay/reversing-the-collatz-conjecture-linearly...
[15] N. Fabiano, Z. Mitrovic, N. Mirkov, S. Radenović. A discussion on two old standing number theory problems: Collatz hypothesis, together with its relation to Planck's black body radiation, and Kurepa's conjecture on left factorial function. Chapter 1. October 2022. https://www.researchgate.net/publication/364284245
[16] P. Kosobutskyy. The Collatz problem as a reverse problem on a graph tree formed from Q*2^n (Q=1,3,5,7,…) Jacobsthal-type numbers .arXiv:2306.14635v1
[17] P. Kosobutskyy. Comment from article ”Two different scenarios when the Collatz Conjecture fails”. General Letters in Mathematics. 2022. Vol. 12, iss. 4. P. 179–182.
[18] P. Kosobutskyy, D. Rebot. Collatz conjecture 3n ± 1 as a Newton Binomial Problem. Computer Design Systems. Theory and Practice, Vol. 5, No. 1, 2023, pp. 137–145.
[19] P. Kosobutskyy, A. Yedyharova, T. Slobodzyan. From Newton's binomial and Pascal's triangle to Collatz's problem. Computer Design Systems. Theory and Practice, Vol. 5, No. 1, 2023, pp. 121–127.
[20] P. Kosobutskyy, V. Karkulovskyy. Recurrence and structuring of sequences of transformations 3n+1 as arguments for confirmation of the Collatz hypothesis. Computer Design Systems. Theory and Practice, Vol. 5, No. 1, 2023, pp. 28–33.
[21] J. Choi. Ternary Modified Collatz Sequences And Jacobsthal Numbers. Journal of Integer Sequences, Vol. 19 (2016), Article 16.7.5
[22] Sloan's On-Line Encyclopedia of Integer Sequences (OEIS, http://oeis.org/). | {"url":"https://science.lpnu.ua/cds/all-volumes-and-issues/volume-6-number-2-2024/statistical-modeling-kq-1-discrete-data","timestamp":"2024-11-09T04:49:24Z","content_type":"text/html","content_length":"35081","record_id":"<urn:uuid:15d4ed79-0e64-4799-8e0a-351d796e5b6a>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00715.warc.gz"} |
One to One Functions - Graph, Examples | Horizontal Line Test
What is a One to One Function?
A one-to-one function is a mathematical function where each input corresponds to just one output. So, for each x, there is only one y and vice versa. This signifies that the graph of a one-to-one function never intersects a horizontal line more than once.
The input value in a one-to-one function is noted as the domain of the function, and the output value is noted as the range of the function.
Let's look at the pictures below:
For f(x), any value in the left circle corresponds to a unique value in the right circle. In conjunction, any value on the right side corresponds to a unique value on the left. In mathematical
jargon, this implies every domain has a unique range, and every range holds a unique domain. Thus, this is an example of a one-to-one function.
Here are some different examples of one-to-one functions:
Now let's study the second image, which displays the values for g(x).
Notice that the inputs in the left circle (domain) do not own unique outputs in the right circle (range). For example, the inputs -2 and 2 have equal output, in other words, 4. In conjunction, the
inputs -4 and 4 have identical output, i.e., 16. We can see that there are identical Y values for numerous X values. Therefore, this is not a one-to-one function.
Here are different representations of non one-to-one functions:
What are the qualities of One to One Functions?
One-to-one functions have these characteristics:
• The function owns an inverse.
• The graph of the function is a line that does not intersect itself.
• It passes the horizontal line test.
• The graph of a function and its inverse are mirror images of each other across the line y = x.
How to Graph a One to One Function
In order to graph a one-to-one function, you are required to find the domain and range for the function. Let's examine a straight-forward example of a function f(x) = x + 1.
Immediately after you possess the domain and the range for the function, you need to chart the domain values on the X-axis and range values on the Y-axis.
How can you tell whether a Function is One to One?
To indicate whether or not a function is one-to-one, we can leverage the horizontal line test. Once you plot the graph of a function, trace horizontal lines over the graph. In the event that a
horizontal line passes through the graph of the function at more than one spot, then the function is not one-to-one.
Since the graph of every non-constant linear function is a straight line that no horizontal line intersects at more than one place, we can also conclude that all non-constant linear functions are one-to-one functions. Remember that we do not apply the vertical line test for one-to-one functions.
Let's look at the graph for f(x) = x + 1. Once you graph the values for the x-coordinates and y-coordinates, you need to consider whether or not a horizontal line intersects the graph at more than
one spot. In this instance, the graph does not intersect any horizontal line more than once. This signifies that the function is a one-to-one function.
On the other hand, if the function is not a one-to-one function, its graph will intersect the same horizontal line more than one time. Let's examine the figure for f(x) = x^2. Here are the domain and the range values for the function:
Here is the graph for the function:
In this instance, the graph crosses several horizontal lines more than once. Case in point, for both inputs -1 and 1, the output is 1. Additionally, for both -2 and 2, the output is 4. This means that f(x) = x^2 is not a one-to-one function.
What is the inverse of a One-to-One Function?
Considering the fact that a one-to-one function has just one input value for each output value, the inverse of a one-to-one function also happens to be a one-to-one function. The opposite of the
function essentially undoes the function.
For example, in the case of f(x) = x + 1, we add 1 to each value of x in order to get the output, or y. The opposite of this function will subtract 1 from each value of y.
The inverse of the function is known as f−1.
What are the properties of the inverse of a One to One Function?
The properties of an inverse one-to-one function are the same as any other one-to-one functions. This signifies that the opposite of a one-to-one function will possess one domain for each range and
pass the horizontal line test.
How do you find the inverse of a One-to-One Function?
Determining the inverse of a function is not difficult. You just have to switch the x and y values. For instance, the inverse of the function f(x) = x + 5 is f-1(x) = x - 5.
Considering what we learned earlier, the inverse of a one-to-one function undoes the function. Since the original output value required adding 5 to each input value, the new output value will require
us to subtract 5 from each input value.
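As a rough numerical illustration of these ideas (the sample points and functions below are arbitrary choices), you can test whether a function repeats an output over a set of inputs and check that an inverse undoes the original function:

def is_one_to_one(f, inputs):
    # one-to-one on these inputs means no output value repeats
    outputs = [f(x) for x in inputs]
    return len(outputs) == len(set(outputs))

xs = range(-5, 6)
print(is_one_to_one(lambda x: x + 5, xs))    # True  -> one-to-one
print(is_one_to_one(lambda x: x ** 2, xs))   # False -> -2 and 2 give the same output

f = lambda x: x + 5         # original function
f_inv = lambda y: y - 5     # its inverse "undoes" the +5
print(all(f_inv(f(x)) == x for x in xs))     # True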
One to One Function Practice Questions
Contemplate the following functions:
• f(x) = x + 1
• f(x) = 2x
• f(x) = x2
• f(x) = 3x - 2
• f(x) = |x|
• g(x) = 2x + 1
• h(x) = x/2 - 1
• j(x) = √x
• k(x) = (x + 2)/(x - 2)
• l(x) = 3√x
• m(x) = 5 - x
For each of these functions:
1. Determine whether the function is one-to-one.
2. Plot the function and its inverse.
3. Find the inverse of the function numerically.
4. Specify the domain and range of both the function and its inverse.
5. Employ the inverse to determine the value for x in each equation.
Grade Potential Can Help You Master Your Functions
If you happen to be facing difficulties using one-to-one functions or similar topics, Grade Potential can set you up with a one on one teacher who can support you. Our Alpharetta math tutors are
experienced professionals who assist students just like you advance their understanding of these types of functions.
With Grade Potential, you can learn at your own pace from the convenience of your own home. Book a call with Grade Potential today by calling (770) 999-9794 to learn more about our tutoring services.
One of our team members will get in touch with you to better inquire about your needs to set you up with the best tutor for you!
Or answer a few questions below to get started | {"url":"https://www.alpharettainhometutors.com/blog/one-to-one-functions-graph-examples-horizontal-line-test","timestamp":"2024-11-09T03:17:35Z","content_type":"text/html","content_length":"80375","record_id":"<urn:uuid:39647b5c-6771-457d-b753-f57f66553fa9>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00396.warc.gz"} |
Number of zeros of a certain type of rational function - Zelaron Gaming Forum
Perhaps you can assist me with some intuition. I'm looking for the number of zeros of $f_{m,n}(z) := \frac{\mathrm{d}^m}{\mathrm{d}z^m}\left[\left(\frac{P(z)}{Q(z)}\right)^n\right]$, where $P(z)$ and $Q(z)$ are nonconstant polynomials with no common zeros, and $m,\,n$ are nonnegative integers. As an example, the case $P(z) = 2(z-1)^4(z-3)(z-7),\: Q(z) = (z+1)(z-2),\: m=0,\: n=5$ yields $f_{0,5}(z) = \frac{32(z-1)^{20}(z-3)^5(z-7)^5}{(z+1)^5(z-2)^5}$, which has thirty zeros (the zeros of the polynomial in the numerator).
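For what it's worth, the count in that example can be double-checked with a quick SymPy sketch, counting zeros with multiplicity as the degree of the numerator:

import sympy as sp

z = sp.symbols('z')
P = 2 * (z - 1)**4 * (z - 3) * (z - 7)
Q = (z + 1) * (z - 2)

num, den = ((P / Q)**5).as_numer_denom()   # numerator and denominator of f_{0,5}
print(sp.degree(sp.expand(num), z))        # 30 -> thirty zeros, counted with multiplicity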
For some convenient notation, let $p := \deg{P(z)},\: q := \deg{Q(z)}$ and $Q(z) = A(z-z_1)(z-z_2)\dots(z-z_q)$, where $A$ is a complex number. I tried to decompose the rational function $f_{m,n}(z)$ in terms of its poles as follows:
$f_{m,n}(z) = H_{m,n}(z) + \frac{C_{z_1,1}}{(z-z_1)^{m+1}} + \frac{C_{z_1,2}}{(z-z_1)^{m+2}} + \dots + \frac{C_{z_1,n}}{(z-z_1)^{m+n}} + \frac{C_{z_2,1}}{(z-z_2)^{m+1}} + \frac{C_{z_2,2}}{(z-z_2)^{m+2}} + \dots + \frac{C_{z_q,n}}{(z-z_q)^{m+n}}. \quad (1)$
Here, the $C_{z_i,j}$ are complex numbers, and $H_{m,n}(z)$ is a polynomial of degree $\max\{(p-q)n-m,\,0\}$. Specifically, if $p\ge q$ and $0\le m\le (p-q)n$ (which is precisely when $H_{m,n}(z)$ does not vanish), we can write the terms in the right-hand side of equation $(1)$ as a fraction with a common denominator, such that the polynomial $H_{m,n}(z)[(z-z_1)(z-z_2)\dots(z-z_q)]^{m+n}$ dominates the degree in its numerator. Hence, $f_{m,n}(z)$ has $(p-q)n-m+q(m+n) = pn + (q-1)m$ zeros in this case.
There seem to be two more distinct cases, but I'm not sure how to prove what the number of zeros is in them. The answer in those (remaining) cases should (probably, based on my numerical experiments) be:
$pn + (q-1)m$ if $p < q,$
$q(m+n)-(m+1),$ if $p \ge q$ and $m > (p-q)n.$
Any ideas?
Last edited by Chruser; 2017-10-07 at 07:30 AM. | {"url":"http://chat.zelaron.com/forum/showthread.php?s=658796a7e644a6291c87f5f367da9c6b&t=53960","timestamp":"2024-11-12T10:21:48Z","content_type":"text/html","content_length":"77956","record_id":"<urn:uuid:4ae1e0eb-783b-4ea5-987e-1ed94a892b41>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00447.warc.gz"} |
Calculus Cheat Sheet: Get Some Calculus Hack Tricks
Get Some Useful Examples of Calculus Cheat Sheet
by Caitlin Worth | Feb 15, 2023 | Guide To MyOpenMath | 0 comments
Are you searching for the calculus cheat sheet? Let us tell you how to get a calculus 1 cheat sheet, pre-calculus formulas cheat sheet, derivative cheat sheet, integration cheat sheet, and more.
The term cheat sheet is quite popular. Teachers in school and college watch out for them to catch students doing malpractice. Students carry micro photocopies of the contents they wish to reproduce in exams.
The calculus cheat sheet is about the concise set of calculus formulas or sums. Students must know the procedure of mathematics before attempting the question. Thus, some students carry formula
sheets for calculus to the examination hall.
Why do Students Need Calculus Sheets?
Calculus is a specific branch of mathematics dealing with changes in a continuous fashion. You will find two significant calculus concepts:
Derivatives are the part of calculus dealing with the rate of change of a function. Also, with the help of derivatives, you can describe the behavior of the function at a particular point.
The following calculus derivatives cheat sheet will be helpful.
An integral measures the area under the curve of a function. In addition, it accumulates the values of a process over a particular range of values. You must
know proper math formulas on integration to solve every homework assignment correctly.
Image source: https://math.colorado.edu/
Basic Integration Formulas
In the meantime, know several interesting facts about online vs. in Person Classes.
Students find solving definite integrals or differentiation challenging because they do not practice regularly. A lack of focus in class and gaps in math knowledge make things more difficult.
Some students even think learning calculus is just a waste of time. That is why they always search for the precalculus final cheat sheet or derivative and integral cheat sheet. The best solution is
to hire our online calculus class takers and achieve academic goals.
Are you struggling with your homework for a long time? The help with online classes will provide the best help.
The Important Derivatives and Integration Students Must Know
The following table shows the differentiation and integration of the most common functions. Integration is the reverse process of differentiation. It returns the function to its original shape.
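Since the table itself appears only as an image in the original page, here is a small sketch of the same idea for one common function (integration undoing differentiation), using SymPy as an illustrative tool:

import sympy as sp

x = sp.symbols('x')
f = x**3

df = sp.diff(f, x)         # derivative: 3*x**2
F = sp.integrate(df, x)    # integrating the derivative returns x**3 (up to a constant)
print(df, F)               # 3*x**2  x**3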
Let us go through the precalculus formula cheat sheet:
Some Calculus Cheat Sheet
Precise Definition:
Calculus 1- Know About Its Constituents
The primary focus of calculus 1 is on differential calculus. The concepts associated with it are limits and continuity. The vital topics that come under this category are:
• Derivatives
• Limits
• Integrations
• Application of the derivatives.
We know many students who find calculus 1 challenging. Even most students majoring in mathematics find calculus 1 the hardest because they lack knowledge of the basic formulae.
Though it is just an introduction to higher-level mathematics, some find it hard. If you look for a cheat sheet for calculus 1, the following image will clarify it.
Connect Math Answers is the place you should be where you get a vivid idea of several mathematical applications and functions.
Image source: https://math.colorado.edu/
Have you ever encountered some tricky calculus factors? Mathxl answers calculus is the best for you.
Calculus 2- What Does It Cover?
Calculus 2 covers integral calculus. It deals with functions of one variable and their applications. The topics included in Calculus 2 are:
• The specific method of integration
• Parametric equations
• Separable differential equations
• Polar coordinates
Examples of Calculus 2
PERIMETER, AREA & VOLUME
Do you want to make your concepts on calculus stronger? The mymathlab answers calculus is the best one for you.
Now, most of the students search for the calculus 2 cheat sheet. If you are one of them, the following examples will help you relate to the math terminology cheat sheet.
In the meantime, explore the next blog on the counterargument.
Some Examples of Pre-Calculus Cheat Sheet
The primary focus of precalculus is on several properties and functions of trigonometry, exponential function, and logarithm. Also, the precalculus course covers the complex processes of matrices,
vectors, probability, and conic sections. Thus, students will invariably require the precalculus cheat sheet.
What Is The Method To Calculate Conic Section Equations In Precalculus?
One of the challenging parts of precalculus is to differentiate between the equations and the conic section. It is essential to understand the difference between hyperbolas and parabolas. Also, it is
challenging to know about the diversification of a circle and an ellipse. The calculus formula sheet will help you get the best result. You can also refer to the calculus equation sheet.
With the help of the following equations, you can get a clear concept.
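Because those equations are shown as images in the original page, here is a stand-in sketch. For the general second-degree equation Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0, the discriminant B^2 - 4AC tells the (non-degenerate) conic sections apart; the helper function below is only illustrative:

def conic_type(A, B, C):
    # classify Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 by its discriminant
    disc = B**2 - 4*A*C
    if disc < 0:
        return "circle" if A == C and B == 0 else "ellipse"
    if disc == 0:
        return "parabola"
    return "hyperbola"

print(conic_type(1, 0, 1))    # circle:    x^2 + y^2 = r^2
print(conic_type(4, 0, 9))    # ellipse:   4x^2 + 9y^2 = 36
print(conic_type(1, 0, 0))    # parabola:  x^2 = 4py
print(conic_type(1, 0, -1))   # hyperbola: x^2 - y^2 = 1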
Frequently Asked Questions
How to cheat in calculus?
If you are answering the questions online, the calculus formula sheet is something you can find online. Also, in the physical examination hall, you can carry the cheat sheet. This calculus reference
sheet can be helpful as well.
What are the 4 concepts of calculus?
The 4 concepts of calculus are:
a. Limits
b. Differential calculus
c. Integral calculus
d. Multivariable calculus
Do you get a formula sheet for calculus?
You can get it in your textbook or from any online sources.
Is college calculus hard?
Many students say college calculus is more complicated than high school calculus. But, the opinion differs among various groups of students.
How long will it take to learn calculus?
Since the IQ level of each individual is different, the learning tenure will also be different.
The Amazing Guide On How To Cheat On Proctorio! | {"url":"https://takeonlineclasshelp.com/calculus-cheat-sheet/","timestamp":"2024-11-05T11:58:32Z","content_type":"text/html","content_length":"374567","record_id":"<urn:uuid:60064d42-9c29-4215-a209-098f3123005d>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00480.warc.gz"} |
template<typename DERIVED>
class gtsam::Basis< DERIVED >
CRTP Base class for function bases.
static Matrix WeightMatrix (size_t N, const Vector &X)
Calculate weights for all x in vector X. More...
static Matrix WeightMatrix (size_t N, const Vector &X, double a, double b)
Calculate weights for all x in vector X, with interval [a,b]. More...
static double Derivative (double x, const Vector &p, OptionalJacobian< -1, -1 > H=boost::none)
template<typename DERIVED >
static Matrix gtsam::Basis< DERIVED >::WeightMatrix ( size_t N, const Vector & X )  [inline], [static]
Calculate weights for all x in vector X.
Returns M*N matrix where M is the size of the vector X, and N is the number of basis functions. | {"url":"https://gtsam.org/doxygen/a02812.html","timestamp":"2024-11-07T07:25:55Z","content_type":"application/xhtml+xml","content_length":"16750","record_id":"<urn:uuid:3ee2e439-9fce-4f4e-9e91-63c36e644d78>","cc-path":"CC-MAIN-2024-46/segments/1730477027957.23/warc/CC-MAIN-20241107052447-20241107082447-00769.warc.gz"} |
What should be explained in the Dutch SBR-B Guideline! - Ritchie Vink
The Dutch SBR guideline is intended to help you process vibration data and help you determine when a vibration signal can cause discomfort to persons. It seems to me however, that the SBR-B guideline
does not have the intention to be understood. They seem to help you by making a super abstract of scientific papers and by giving you a few keywords so you can Google it yourself.
This post will elaborate on two formula’s given in the guideline. It took me a while to find out what they really ment. But thanks to some help from my colleague Lex van der Meer, and some papers he
found, I could make sense of it.
The guideline gives two formula’s that should be used to turn your raw data from a vibrations measurement into design values for further processing. Roughly translated, in about the same amount of
words, it says:
The vibration data needs to be weighed by:
\[|H_a(f)| = \frac{1}{v_0} \cdot \frac{1}{\sqrt{1 + (f_0/f)^2}}\]
In which:
f frequency in Hz
f[0] 5.6 Hz
v[0] 1 mm/s
From the result of the formula above the effective value is determined by:
\[v_{eff}(t) = \sqrt{ \frac{1}{\tau} \int_0^tg(\xi)v^2(t-\xi)d\xi}\]
In which:
\[\tau = 0.125 s\]
\[g(\xi) = e^{-\xi/\tau}\]
In the first formula a frequency in Hz is required. They do not specify which frequency. I thought my measured vibration signal had infinitely many frequencies, or at least more than one? In the second formula we integrate the output of the first formula over dξ, again not specifying what ξ is. That's about all the attention the guideline spends on it. Well, good luck with that!
Time signal
First we need some ‘measured’ data. In the following code snippet some fake data is created by adding 5 sine waves. The sine waves’ amplitudes and frequency are random.
import numpy as np
import matplotlib.pyplot as plt
t = np.linspace(0, 2.5, 500)
vibrations = np.zeros_like(t)
for i in range(5):
    vibrations += np.sin((10 * np.random.rand())**2 * 2 * np.pi * t) * 10 * np.random.rand()
fig = plt.figure(figsize=(12, 6))
plt.title("Measured vibration")
plt.ylabel("v [mm/s]")
plt.xlabel("t [s]")
plt.plot(t, vibrations)
Vibration signal
Above is the fake vibration data we’ve just created shown. Now we have a vibrations signal we can take apart the two give formula’s and see what their use is.
Weighted signal
The first formula is used to weight the signal by the frequencies that are most likely to cause hindrance. This becomes more clear if we plot the function first. Let’s plot the result of the function
in the frequency range 1 - 100 Hz. Note that 1 / v[0] = 1, thus let’s ignore that.
f = np.arange(0, 100)
f0 = 5.6
y = 1 / np.sqrt(1 + (f0/f)**2)
plt.plot(f, y)
plt.xlabel("f [Hz]")
Weight function
By plotting \(\frac{1}{\sqrt{1 + (5.6/f)^2}}\) in the range 1-100 Hz we get the curve shown above. Apparently the lower frequencies will be weighted much less than the higher ones, as the curve tends to go to zero for decreasing frequency. This weighting is done by multiplying the original signal with this function.
What the guideline does not mention is that before you are able to do so, you must convert the signal from the time domain to the frequency domain. Well, this can be done by taking the Fast Fourier
Transform! You can read more about this in the last post.
By taking the FFT we retrieve the frequency bins. Each bin can be multiplied with the weights function. Shown below is the frequency spectrum and the curve that will scale down this spectrum.
vibrations_fft = np.fft.fft(vibrations)
T = t[1] - t[0]
N = t.size
f = np.linspace(0, 1 / T, N)
a = N // 2
weight = 1 / np.sqrt(1 + (5.6/f)**2)
vibrations_fft_w = weight * vibrations_fft
fig = plt.figure(figsize=(14, 4))
plt.subplot(1, 2, 1)  # give the two titled plots their own axes
plt.title("Frequency spectrum")
plt.xlabel("f [Hz]")
# take the absolute value before the max: the FFT output is complex
plt.bar(f[:a], np.abs(vibrations_fft[:a]) / np.max(np.abs(vibrations_fft[:a])), width=0.3)
plt.plot(f[:a], weight[:a], c="r")
plt.subplot(1, 2, 2)
plt.title("Weighted frequency spectrum")
plt.xlabel("f [Hz]")
plt.bar(f[:a], np.abs(vibrations_fft_w[:a]) / np.max(np.abs(vibrations_fft[:a])), width=0.3)
plt.ylim(0, 1)
Weighted frequency spectrum
The above figure shows that all frequencies are downscaled. However the lower frequencies are downscaled most. The higher frequencies are probably leading to the most hindrance. And very low
frequencies will probably just rock you to sleep.
By transforming the signal back to the time spectrum we can see how the frequency scaling affected the signal.
vibrations_w = np.fft.ifft(vibrations_fft_w).real
plt.title("Weighted vibration")
plt.ylabel("v [mm/s]")
plt.xlabel("t [s]")
plt.plot(t, vibrations_w)
Weighted time spectrum
By comparing the above figure with the original signal we can see it has changed a bit. By weakening the lower frequencies the signal has decreased in amplitude. The maximum amplitude has dropped
from ~ 30 mm/s to ~ 25 mm/s.
Effective value
The second formula describes how you can compute the effective value of the vibration signal, or ‘voortschrijdende effectieve waarde’ in Dutch. This formula looks a lot like the formula of the Root
Mean Square (RMS) of a signal. The formula of the RMS given by:
\[RMS = \sqrt{\frac{1}{T}\int^T_0 v(t)^2dt}\]
It resembles the first formula. However, for \(v(t - \xi)\) the velocity signal is multiplied with \(e^{-\xi/\tau}\). Also the signal is not an integral with steps dt, but an integral with steps dξ.
The ξ is actually another parameter for the time t. For every increment in time dt, a new integral is computed from t[0] to t[i] with steps dξ (which are the same size as dt). The function \(e^{-\xi/\tau}
\) is another scaling function. The larger ξ becomes, the smaller the multiplication factor becomes.
v_sqrd_w = vibrations_w**2
a = 55
c = ["#1f77b4" for i in range(a)]
current = 51
c[current] = "#d62728"
xi = t[:current + 2]
g = np.exp(-xi / 0.125)
plt.title("Squared signal")
plt.plot(t[:current + 2], g[::-1][:a] * v_sqrd_w[current + 1], color="r")
plt.bar(t[1:a], v_sqrd_w[1:a], width=0.002, color=c)
plt.ylim(0, np.max(v_sqrd_w))
for i in range(g.size):
v_sqrd_w[i] *= g[-i]
plt.title("Weighted squared signal")
plt.xlabel("t [s]")
plt.bar(t[1:a], v_sqrd_w[1:a], width=0.002, color=c)
plt.ylim(0, np.max(vibrations_w**2))
Scaled down time signal per time step
In the code snippet above the signal is squared and plotted. The red bar in the plot is the current time inteval t[i]. All the preceding values of, and including the value for v(t)^2, will be
multiplied with the red function. This function ranges from 0 to 1. Values close to the current time interval t[i] will keep their value. Values further away will be scaled down more. This
multiplication is done for every time step t[i]. The scaled down signal for the current time step is shown the in the second figure.
When the weighted value for every time step is determined the RMS can be computed for this weighted signal.
Ts = 0.125
v_eff = np.zeros(t.size)
dt = t[1] - t[0]
for i in range(t.size - 1):
    g_xi = np.exp(-t[:i + 1][::-1] / Ts)
    v_eff[i] = np.sqrt(1 / Ts * np.trapz(g_xi * v_sqrd_w[:i + 1], dx=dt))
plt.plot(t, v_eff)
plt.title("Effective value")
plt.ylabel("v [mm/s]")
plt.xlabel("t [s]")
Effective value time signal
We have computed the effective value (voortscrhijdende effectieve waarde) for a random time signal. It was quite a hassle for me to find out what should be done. The guideline does not mention that
you need to switch between the frequency and the time spectrum two times. Also the interpretation of ξ in the second formula could really use a calculation example.
What you eventually can do with the computed effective value is something I will leave to the guideline. I hope this helps someone a few hours when dealing with the SBR! | {"url":"https://www.ritchievink.com/blog/2017/05/07/what-should-be-explained-in-the-dutch-sbr-b-guideline/","timestamp":"2024-11-13T15:14:31Z","content_type":"text/html","content_length":"30044","record_id":"<urn:uuid:31342b34-0dab-4f89-bdfa-e1a26d68c28f>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00206.warc.gz"} |
Area to Z Score Calculator - Online Calculators
To calculate a Z score, subtract the mean (Xˉ) from the raw score (X), and then divide the result by the standard deviation (σ). This formula helps you find how far the value is from the mean in
standard deviation units.
Area to Z Score Calculator
The Area to Z Score Calculator is used to convert a specific area under the standard normal curve into its corresponding Z score. Z scores help in standardizing data and comparing individual scores
to a larger dataset. This calculator is essential for solving problems related to probabilities in a normal distribution, finding percentiles, and making statistical inferences. Whether calculating
confidence intervals or finding areas under the curve, understanding the Z score is crucial for analyzing normally distributed data.
$Z = \frac{X - \bar{X}}{\sigma}$
Variable Description
Z Z score, representing the number of standard deviations a value is from the mean
X Raw score
Xˉ Mean of the data set
σ Standard deviation
Solved Calculation:
Example 1:
Step Calculation
Raw Score (X) 75
Mean (Xˉ) 65
Standard Deviation (σ) 10
Z Calculation (75 − 65) ÷ 10
Result Z = 1.0
Answer: The Z score is 1.0, meaning the score is 1 standard deviation above the mean.
Example 2:
Step Calculation
Raw Score (X) 50
Mean (Xˉ) 65
Standard Deviation (σ) 10
Z Calculation (50 − 65) ÷ 10
Result Z = −1.5
Answer: The Z score is -1.5, meaning the score is 1.5 standard deviations below the mean.
What is Area to Z Score Calculator?
An Area to Z Score Calculator helps you convert the area under the standard normal curve into a corresponding z-score, which is essential in statistics when working with normal distributions. To
calculate a z-score from an area, you use standard z-tables or a calculator that performs this conversion automatically.
For example, if you know the area to the left of a z-score, the cumulative area to z score calculator can provide the corresponding z-value. This is useful for determining probabilities or
percentiles in a data set. The z-score represents how many standard deviations a value is from the mean.
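As a programmatic illustration of the same conversion (using SciPy here purely as an example; the 0.975 area is an arbitrary sample value), the inverse CDF maps an area to a z-score and the CDF maps it back:

from scipy.stats import norm

area = 0.975                      # cumulative area to the left of the unknown z
z = norm.ppf(area)                # area -> z score (inverse CDF)
print(round(z, 2))                # 1.96

print(round(norm.cdf(1.96), 3))   # z score -> area, back again: ~0.975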
To calculate z-scores step by step, you can also use a z score calculator with steps or input values into calculators like those available on TI-84 or Excel. The z score to percentile calculator is
commonly used to convert a z-score into a percentile, while tools like the z score graph maker can visually represent the distribution.
Final Words:
For example, a z-score for a 95% confidence interval is commonly found using standard z-tables or an online calculator. This helps in determining the area under the curve and understanding how data
points are distributed relative to the mean. | {"url":"https://areacalculators.com/area-to-z-score-calculator/","timestamp":"2024-11-04T00:54:57Z","content_type":"text/html","content_length":"106672","record_id":"<urn:uuid:ad1a74d6-92bd-4832-a9df-11e1a3959ccb>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00653.warc.gz"} |
Grid 3 system
"a system with the strongest bet selection"
Grid 3 System for Even Bets
by Izak Matatya
"has a strong flat bet advantage and works with any progression"
Here's a brand new system with a brand new concept Grid 3 System for Even Bets. It's one of the best systems having the strongest flat bet advantage.
As the name implies, it explores all possibilities for a 3 decision segment or grid and gives you the best solution for winning on the long run betting nothing but 1 unit only throughout thousands of
shoes and with no interruption.
In a 3-decision segment, you could have 8 combinations of wins and losses, which are:
1) L L L
2) L L W
3) L W L
4) L W W
5) W L L
6) W L W
7) W W L
8) W W W
On the long run, one should have an equal number of each such combination if you are always betting the same way.
If you would be betting 1 unit throughout, you would break even, because you have an equal number of wins and losses.
If you would apply a progression of any type, you would also break even.
If you would interrupt your bets, say after one or two wins, you would also break even. Try it by inserting numbers and you will see.
And breaking even is without considering zeros in roulette or commissions in Baccarat, concluding that betting the same way all the time would yield to the house edge.
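If you want to check that break-even claim yourself, a small enumeration over the 8 equally likely win/loss patterns shows that any fixed betting scheme nets zero before house edge; the 1-2-6 values below are only an example progression, not part of the system:

from itertools import product

def net(pattern, bets):
    # +bet for a win, -bet for a loss, one bet per decision in the 3-decision grid
    return sum(b if r == 'W' else -b for r, b in zip(pattern, bets))

patterns = list(product('WL', repeat=3))           # the 8 possible W/L combinations

print(sum(net(p, [1, 1, 1]) for p in patterns))    # flat 1-unit bets       -> 0
print(sum(net(p, [1, 2, 6]) for p in patterns))    # any fixed progression  -> 0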
That's when creativity comes handy and Grid 3 system finds solutions for you to break this regularity of betting the same way and comes up with a very strong flat bet advantage.
Grid 3 comes with 2 solid solutions each described under a different system, each presenting you a flat bet advantage.
Just to give you an idea, here's a performance chart of Grid 3 - System 1 for the second 500 shoe set of 1K Zumma shoes:
Number of wins:18493
Number of losses:18085
Flat bet advantage:408
End profit: 408
There are 408 more wins than losses, much better than breaking even.
The first 500 shoes of the 1K Zumma set generates 277 units betting flat with 1 unit only.
Number of wins:18402
Number of losses:18125
Flat bet advantage:277
End Profit:277
Totaling both 500 shoe set, Grid 3 - System 1 generates 408 + 277 = 685 units for 1000 shoes.
With a small $10 unit size, this amounts to winning $6,850.
With flat bets you can bet any amount up to table's maximum.
With $100 unit size, you win $68,500 for those shoes.
You should know that if a system has a flat bet advantage, it wins on progressions, too.
The beauty of the Grid 3 systems is that because the bet selection is so strong, that ANY progression of your choice will generate huge profits.
Say, you are using a 1, 2, 6, progression with $10 units or $10, $20, $60, 6 units being your highest bet amount, Grid 3 System 1 will generate: $10,460 for the first 500 shoes set and another
$14,450 for the second 500 shoes, totaling to $24,910 for 1000 shoes.
Since any progression will generate profits, chose the one with which you are comfortable with. The system will take the 3 values of the progression as a parameter and you can experiment with any 3
Number of wins:18493
Number of losses:18085
Flat bet advantage:408
End profit: 4170
We just chose 8, 2, and 14, for instance and the system generated 4170 units for 500 shoes.
This is not really a progression but 3 values you chose to bet on each decision of the Grid 3, that is on 3 bets and you will see that ANY 3 values will work and will generate profits. The flat
betting profit will always be the same: 408 units for 500 shoes.
Grid 3 - System 2 will differ from system 1 in such a way, that the bet selection is fully dynamic and changes from bet to bet. If it is more complex? Not really. Depending if you are winning or
losing your bet, you will have a criterion on how the bet selection will change.
System 2 will also provide a flat bet advantage. And you could use any mild progression of your own, the same way as for System 1.
Here's a performance chart of System 2:
using the following parameters:
Number of wins:18440
Number of losses:18138
Flat bet advantage:302
End profit: 15990
Generates $15,990, with 10, 20, 60 being $ values or 1, 2, 6 as the units of each step of the Grid 3 for 500 shoes with a unit size of $10.
The other 500 shoes generate also $10,500 for the same Grid values, totaling to $26,490 for 1000 shoes.
Betting flat, the first 500 shoes generate 207 units and the second 500 shoes generate 302, as you can see in the above table, totaling to 507 units for 1000 shoes.
The flat bet advantage is a bit less than System 1. However, the drawdown has been cut to 32 units only in System 2 versus 87 in System 1.
I'm sure you will like both systems and you will use either both or the one which appeals to you more.
Taking commissions into account, the system document will illustrate full examples on how you can make $10,000 within a few shoes. And on the long run, the systems will generate each 1682 units for
500 shoes.
Grid 3 - system 1 and 2 go for $4,500 and they are worth each penny.
The system consists of 20 pages of easy reading and understanding and lots of examples, which will make it crystal clear.
Upon your purchase you will receive the system document and three simulations: complete results for the 1000 Zumma shoes for System 1 and System 2 and a simulation showing how the system handles
The simulations are parametric, where you can enter the 3 values for the Grid of 3 decisions and witness that any 3 value will generate profits on the long run.
As usual, your satisfaction is guaranteed and you have a full money back guarantee, should the system is not to your liking and you can return it within 30 days of your purchase, no strings attached
and no questions asked.
Wishing you all the best!
Izak Matatya
Email: webmaster@letstalkwinning.com, izak.matatya@videotron.ca or matatya.izak@gmail.com
Systems Gallery: http://www.letstalkwinning.com/gallery.htm
Newsletter: http://www.letstalkwinning.com/winalert.htm
Newsletter Archive: http://www.letstalkwinning.com/archive.htm
Recommended On-Line Casinos: http://www.letstalkwinning.com/bestonline.htm
Discussion Forum: http://www.letstalkwinning.com/forum/
Izak Matatya's new contemporary digital art online store: https://www.izakmatatya-digitalart.com, www.izakmatatya.com | {"url":"https://shop.letstalkwinning.com/products/grid-3-system","timestamp":"2024-11-02T05:11:07Z","content_type":"text/html","content_length":"113057","record_id":"<urn:uuid:6f767b68-4ada-4766-9518-934220d64231>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00005.warc.gz"} |
Adjacent Angles Worksheets - 15 Worksheets.com
Adjacent Angles Worksheets
About These 15 Worksheets
These worksheets will help students understand and identify pairs of angles that share a common vertex and a common side but do not overlap. These worksheets provide structured exercises that guide
students through the fundamental concepts of adjacent angles, reinforcing their geometric reasoning and spatial awareness. They typically include a variety of problems, from basic identification
tasks to more complex exercises that require analytical thinking and application of geometric principles.
What Are Adjacent Angles?
Adjacent angles are pairs of angles that share a common vertex and a common side but do not overlap. These angles sit next to each other in a plane; when their non-common sides extend in opposite directions, they form a straight line and are called a linear pair. The sum of adjacent angles that form a linear pair is always 180 degrees, making them supplementary. This property is useful in solving geometric
problems and proving theorems related to angles and parallel lines. Understanding adjacent angles is essential for accurately constructing and analyzing geometric shapes. They are widely used in
various fields, such as engineering, architecture, and design, where precise angle measurements are necessary. Recognizing adjacent angles aids in developing spatial awareness and problem-solving
skills in geometry.
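As a quick illustration of the supplementary property (the 65-degree measure is an arbitrary example), the missing angle of a linear pair can be computed directly:

def other_angle_in_linear_pair(angle_deg):
    # adjacent angles forming a linear pair sum to 180 degrees
    return 180 - angle_deg

print(other_angle_in_linear_pair(65))        # 115
print(65 + other_angle_in_linear_pair(65))   # 180 -> supplementary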
The Math Skills Explored
The primary math skill explored in Adjacent Angles worksheets is the identification and classification of adjacent angles. Students learn to recognize pairs of angles that share a vertex and a side,
distinguishing them from other types of angle pairs. This skill is foundational in geometry, as understanding the relationships between angles is crucial for more advanced topics such as the
properties of polygons, congruence, and similarity.
In addition to identifying adjacent angles, these worksheets often incorporate other related skills, including:
Angle Measurement – Students practice measuring angles using a protractor, enhancing their precision and accuracy in geometric calculations.
Angle Addition Postulate – Understanding that the sum of adjacent angles forms a larger angle helps students grasp how individual angles contribute to a composite figure.
Complementary and Supplementary Angles – These concepts are often intertwined with adjacent angles, allowing students to explore how angles can be classified based on their sums.
Problem-Solving and Analytical Thinking – Many worksheets include problems that require students to apply their knowledge of adjacent angles to solve puzzles and real-world scenarios, promoting
critical thinking.
Types of Exercises
Adjacent Angles worksheets feature a wide range of exercises designed to cater to different learning stages and abilities. Here are some common types of problems you might find:
Identification Exercises – These exercises are fundamental and typically the starting point in Adjacent Angles worksheets. Students are presented with various geometric figures, and their task is to
identify and mark pairs of adjacent angles. For instance, a figure might show several intersecting lines, and students need to circle or label all pairs of adjacent angles. This type of exercise
reinforces the basic definition and visual identification of adjacent angles.
True or False Questions – True or False questions are straightforward yet effective in assessing students’ understanding. For example, a worksheet might present several angle pairs and ask whether
each pair is adjacent. These questions prompt students to apply their knowledge and make quick judgments, reinforcing their conceptual understanding.
Matching Problems – In matching problems, students are given two columns – one with geometric figures and another with descriptions or properties of angles. Their task is to match each figure with
the correct description. This type of problem enhances students’ ability to connect visual representations with theoretical concepts, solidifying their understanding of adjacent angles and related
Fill-in-the-Blank – Fill-in-the-blank exercises require students to complete sentences or equations related to adjacent angles. For example, a worksheet might present an incomplete statement like
“The angles ___ and ___ are adjacent because they share a common ___.” These exercises test students’ recall and understanding of geometric terminology and relationships.
Angle Measurement – Worksheets often include problems that involve measuring angles with a protractor. Students might be asked to measure given angles and determine if they are adjacent. This
practice hones their precision in using geometric tools and reinforces the concept of adjacent angles through hands-on activities.
Angle Addition Problems – Angle addition problems involve scenarios where students must use the Angle Addition Postulate to find unknown angles. For instance, a worksheet might present two adjacent
angles and provide the measure of one, asking students to calculate the measure of the second angle. These problems develop students’ ability to work with angle sums and understand the additive
nature of adjacent angles.
Real-World Scenarios – Applying geometric concepts to real-world scenarios is a crucial aspect of learning. Adjacent Angles worksheets often include problems that relate to everyday situations. For
example, students might be asked to find adjacent angles in architectural designs, road intersections, or various objects. These scenarios make learning more relevant and demonstrate the practical
applications of geometry.
Higher-level worksheets include complex problem-solving challenges that require a deeper understanding of adjacent angles and related concepts. These might involve multi-step problems where students
must identify adjacent angles, measure them, and apply their knowledge to find missing values or solve geometric puzzles. Such challenges promote critical thinking and analytical skills, preparing
students for advanced mathematical studies.
Benefits of These Worksheets
Using Adjacent Angles worksheets offers several educational benefits that go beyond simple practice and repetition. These worksheets serve as valuable tools in enhancing various aspects of students’
mathematical abilities and cognitive skills. Here are some of the key advantages:
Reinforces Foundational Geometry Concepts
Adjacent Angles worksheets provide a structured approach to learning fundamental geometry concepts. By concentrating on adjacent angles, students build a strong foundational knowledge that is crucial
for understanding more complex geometric relationships and properties. These worksheets help students grasp the basic principles of angles, their measurement, and their interaction within geometric
figures. This foundational knowledge is essential as students progress to more advanced topics in geometry, such as theorems involving parallel lines and polygons.
Enhances Spatial Reasoning
Identifying and working with adjacent angles requires students to visualize geometric shapes and comprehend spatial relationships. This practice significantly enhances their spatial reasoning
abilities, a critical skill not only in mathematics but also in fields such as engineering, architecture, and computer graphics. By regularly engaging with problems that involve visualizing and
manipulating geometric figures, students develop the ability to think in three dimensions and understand how different shapes interact in space.
Promotes Precision and Accuracy
Measuring angles and working with geometric figures demand precision and accuracy. Adjacent Angles worksheets provide ample opportunities for students to practice using protractors and other
geometric tools. This repeated practice helps students develop a high level of precision and accuracy, which are crucial skills for success in various academic and professional fields. By mastering
the use of these tools, students learn to approach problems methodically and with attention to detail.
Encourages Critical Thinking
Problem-solving exercises and real-world scenarios included in Adjacent Angles worksheets encourage students to think critically and apply their knowledge in various contexts. These exercises promote
analytical thinking and problem-solving abilities, skills that are invaluable in both academic and real-world settings. By facing challenging problems that require them to apply geometric principles,
students learn to approach complex situations logically and creatively.
Provides Hands-On Learning
Many Adjacent Angles worksheets incorporate hands-on activities, such as measuring angles and constructing geometric figures. These activities actively engage students in the learning process, making
geometry more interactive and enjoyable. Hands-on learning helps students to better retain information and understand abstract concepts by applying them in tangible ways. This engagement fosters a
deeper appreciation for the subject and enhances overall learning outcomes.
Demonstrates Real-World Applications
Adjacent Angles worksheets often include real-world examples that show the practical applications of geometric concepts. For instance, consider a scenario where a student is asked to design a simple
garden layout with several pathways intersecting at different angles. Using their knowledge of adjacent angles, the student can accurately measure and label the angles at each intersection. This
ensures that the pathways meet at precise angles, creating a well-organized and visually appealing garden layout.
In this real-world example, the student applies their understanding of adjacent angles to a practical task, utilizing tools like a protractor to ensure accuracy. This exercise highlights the
relevance of geometric concepts in everyday life and underscores the importance of learning about adjacent angles. It demonstrates how geometric principles are not just theoretical but have tangible
applications that can enhance the quality of our surroundings and contribute to various professional fields.
Real-World Applications of Adjacent Angles
The concept of adjacent angles is not just confined to the pages of mathematics textbooks; it finds practical applications in numerous real-world situations. Understanding and using adjacent angles
is fundamental in various fields, from construction and design to technology and everyday problem-solving.
Architecture and Construction
In architecture and construction, adjacent angles are crucial in designing and constructing buildings and other structures. Architects and engineers must accurately measure and use adjacent angles to
ensure that the corners and joints of a structure are precise and secure. For example, when designing floor plans, the angles between walls must be measured and constructed accurately to ensure that
rooms are perfectly shaped and the overall structure is stable. This precision is essential for the safety and aesthetic quality of buildings.
Interior Design
Interior designers frequently use the concept of adjacent angles when arranging furniture and other elements within a space. For instance, when placing furniture in a room, designers must consider
the angles at which pieces intersect to maximize space usage and create visually appealing arrangements. This involves measuring the angles between walls, furniture, and other objects to ensure a
harmonious and functional layout. Accurate angle measurements help designers create spaces that are both practical and aesthetically pleasing.
Engineering and Manufacturing
In engineering and manufacturing, adjacent angles play a vital role in the design and assembly of various components. Mechanical engineers, for example, must understand how different parts of a
machine fit together at precise angles to ensure smooth operation and efficiency. In the automotive industry, the alignment of parts like the steering mechanism or suspension system often relies on
accurate measurements of adjacent angles. This precision ensures that the machinery operates correctly and efficiently, reducing wear and tear and improving overall performance.
Carpentry and Woodworking
Carpenters and woodworkers regularly use adjacent angles when cutting and joining pieces of wood. When creating furniture, cabinets, or other wooden structures, they must measure and cut wood at
specific angles to ensure that pieces fit together seamlessly. For instance, when constructing a picture frame, the angles between the sides must be precisely measured and cut to form perfect
corners. Mastery of adjacent angles allows woodworkers to produce high-quality, durable pieces with tight, precise joints.
Urban Planning and Landscaping
Urban planners and landscape designers use adjacent angles to design efficient and visually pleasing layouts for public spaces, parks, and gardens. When planning pathways, roads, and intersections,
they must consider the angles at which these elements meet to ensure smooth traffic flow and aesthetic appeal. For example, the layout of a garden with intersecting pathways requires careful
measurement of adjacent angles to create a harmonious and functional design. Accurate angle measurements help planners and designers create spaces that are safe, accessible, and enjoyable for the public.
Technology and Computer Graphics
In technology and computer graphics, adjacent angles are used in the creation of digital images, animations, and models. Graphic designers and animators rely on geometric principles, including
adjacent angles, to create realistic and proportionate visuals. For example, when designing a 3D model, understanding how adjacent angles affect the shape and appearance of the object is crucial for
creating lifelike representations. This knowledge is also essential in programming computer algorithms that generate images and animations.
Everyday Problem-Solving
Even in everyday problem-solving, understanding adjacent angles can be beneficial. For instance, when hanging a picture on a wall, measuring the angles between the picture, the wall, and surrounding
objects ensures that the picture is level and positioned correctly. Similarly, when arranging furniture or planning a home renovation, knowing how to measure and use adjacent angles can help create
functional and aesthetically pleasing spaces. This practical application of adjacent angles demonstrates their relevance in day-to-day activities. | {"url":"https://15worksheets.com/worksheet-category/adjacent-angles/","timestamp":"2024-11-14T13:58:19Z","content_type":"text/html","content_length":"141677","record_id":"<urn:uuid:1870a349-5828-428c-a2f8-48ad2b66edc6>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00615.warc.gz"} |
Moment capacity (Section) (Beams: AS 4100)
The (section) moment capacity check is performed according to AS 4100 clause 5.1 for the moment about the x-x axis (Mx) and about the y-y axis (My), at the point under consideration.
For (member) moment capacity refer to the section Lateral torsional buckling resistance (Member moment capacity).
Note that for all section types, the effective section modulus about the major axis (Z[ex]) will be based on the minimum slenderness ratio considering both flange and web. Internally Tekla Structural
Designer will calculate the following:
• flange slenderness ratio, z[f] = (λ[eyf] - λ[ef]) / (λ[eyf] - λ[epf])
• web slenderness ratio, z[w] = (λ[eyw] - λ[ew]) / ( λ[eyw] - λ[epw])
For sections which have flexure major class either Compact or Non-compact, the effective section modulus about the major axis (Zex) will then be calculated by:
• Z[ex] = Z[x] + [MIN(z[f], z[w], 1.0) * (Z[c] - Z[x])] where Z[c] = MIN(S[x], 1.5 * Z[x])
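As a minimal illustration of how the expression above combines the two slenderness ratios, the following Python sketch (not taken from Tekla Structural Designer; the function name and argument names are our own) computes Z[ex] from Z[x], S[x] and the ratios z[f] and z[w]:

def effective_section_modulus(Z_x, S_x, z_f, z_w):
    """Z_ex per the expression above; inputs use the notation of the text."""
    Z_c = min(S_x, 1.5 * Z_x)      # Z_c = MIN(S_x, 1.5 * Z_x)
    z = min(z_f, z_w, 1.0)         # governing slenderness ratio, capped at 1.0
    return Z_x + z * (Z_c - Z_x)   # Z_ex = Z_x + MIN(z_f, z_w, 1.0) * (Z_c - Z_x)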
Note that for Channel sections under minor axis bending:
• if there is single curvature with the flange tips in compression then Z[ey] will be based on Z[eyR]
• if there is single curvature with the web in compression then Z[ey] will be based on Z[eyL]
• if there is double curvature then Z[ey] will be based on the minimum of Z[eyR] and Z[eyL] | {"url":"https://support.tekla.com/doc/tekla-structural-designer/2024/ref_momentcapacitysectionbeamsas4100","timestamp":"2024-11-10T12:42:31Z","content_type":"text/html","content_length":"53554","record_id":"<urn:uuid:b227a93d-6623-4b7a-9199-e3a409c3aa4c>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00706.warc.gz"} |
How to Increase Speed in Solving Physics for NEET 2023-24?
Students who are about to give their NEET EXAMS find it much difficult to increase speed in solving complicated subjects like Maths or Physics. Students usually get confused about how to prepare or
how to become fast in solving the questions, especially for physics. But this article is here to help students who are right now struggling with questions like how to prepare and what to do. Learn
some ways to increase NEET speed in solving Physics or Maths problems quickly and efficiently.
Many students fear Mathematics and try to avoid the subject. In higher classes, when Physics is introduced and the first few chapters are taught, that fear of Mathematics returns. Students should,
however, recognize the difference between the two subjects: Mathematics deals largely with formulas and numbers, whereas Physics deals with definitions and understanding. Students who are unclear
about the concepts of Physics find the subject difficult, but for those who understand each concept thoroughly, Physics becomes interesting and easy. The first rule for the NEET exam, therefore, is
to concentrate on concepts: if your concepts are crystal clear, you can solve the questions quickly and easily.
Strategies to Increase Speed in Solving Physics Questions for NEET 2023-24
The strategies discussed below can help you increase your speed in solving Physics questions for NEET 2023-24.
At the first stage, students must keep their concepts clear; otherwise their preparation will go off track, which is why the preparation process is so important. There are certain tips and
tricks that you can follow to prepare well.
• There are two kinds of students preparing for the NEET Physics exam: those who spend a great deal of time preparing their lessons, and those who spend very little time and do not solve
questions. Neither extreme is good. Spending too long on preparation only adds confusion and stress and leaves no time for the other NEET subjects, while not studying at all leads to
failure. So divide your time and give each subject an equal share.
• You need to prepare to keep in mind the competition that's going on right now. Always try to compete with people who are appearing for JEE.
• Try to maintain a copy where you would have all the formulas are written down, which will help you deal with your exams.
• Practice from the NCERT books and understand the chapters present there.
• Solve the last year's question paper as that will increase your knowledge to some level.
Tips to Solve Physics Speedily in the NEET Examination
The tricks and tips discussed below might help the students to solve Physics at a faster rate in the NEET examination.
• First and foremost, be clear about the theory, so that when you see a question in the paper you can quickly recognize which chapter it belongs to and which concept to apply.
• After reading through the paper, if you find that certain questions involve heavy calculation, set them aside and attempt them at the end, since calculations are time-consuming.
• When you get the question paper in your hand, go through it very carefully and attend to every detail so that you miss nothing.
• Always pay attention to units, which often decide your marks. For example, students frequently forget to convert meters to centimeters in numerical problems and lose marks as a result;
careful practice prevents this.
• The subject of Physics completely depends on imagination. Hence students should always try to imagine the situation and the concept that the question carries. This would make it all the easier to
solve the questions.
• While calculating, try to write it simply; for example, if a value is mentioned as 314, 98, etc., try to write it as 3.14 × 100 or 9.8 × 10. The reason is that if you write it in this way, that
will make your calculations much more easy and fast and you would not have to spend much time on numerical.
• While you sit to give your Examination, always think and decide which are the questions you can do and which you cannot. In simple words, try to sort out the questions you can answer, then do
them first, and then take your time and solve the other questions you have left out.
• The last but not least tip to keep in mind is that you need to practice more and more questions regularly, whether from the NCERT books or from previous years' papers,
which you can easily download from the internet. The more you practice, the more your solving speed will increase, and you will be able to analyze your own performance.
These are some of the best tips and tricks to increase your speed in the NEET Physics examination. I hope this article will be helpful to the aspirants who are about to take their NEET exam. Have a look at
these and start your preparation strategically in an effective and quick way. | {"url":"https://www.vedantu.com/blog/how-to-increase-speed-in-solving-physics-for-neet","timestamp":"2024-11-11T04:17:24Z","content_type":"text/html","content_length":"151052","record_id":"<urn:uuid:ad95ea2b-7a47-4f69-b3fb-4a31365bf75c>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00648.warc.gz"} |
1579. The graph can be completely traversed
Alice and Bob share an undirected graph, which contains n nodes and 3 kinds of edges
Type 1: can only be traversed by Alice.
Type 2: can only be traversed by Bob.
Type 3: both Alice and Bob can traverse.
Give you an array edges, where edges[i] = [typei, ui, vi] indicates that there is a bidirectional edge of type typei between nodes ui and vi. Find the maximum number of edges that can be deleted on
the premise that the graph can still be fully traversed by both Alice and Bob. The graph is considered fully traversable if Alice and Bob can each reach every other node starting from any node.
Returns the maximum number of edges that can be deleted, and - 1 if Alice and Bob cannot traverse the graph completely.
Example 1:
Input: n = 4, edges = [[3,1,2],[3,2,3],[1,1,3],[1,2,4],[1,1,2],[2,3,4]]
Output: 2
Explanation: if you delete [1,1,2] and [1,1,3], Alice and Bob can still traverse the graph completely. Deleting any other edges does not guarantee that the graph can be completely traversed. So the
maximum number of edges that can be deleted is 2.
Example 2:
Input: n = 4, edges = [[3,1,2],[3,2,3],[1,1,4],[2,1,4]]
Output: 0
Explanation: note that deleting any edge will prevent Alice and Bob from completely traversing the graph.
Example 3:
Input: n = 4, edges = [[3,2,3],[1,1,2],[2,3,4]]
Output: - 1
Explanation: in the current graph, Alice cannot reach node 4 from other nodes. Similarly, Bob cannot reach node 1. Therefore, the graph cannot be completely traversed.
1 <= n <= 10^5
1 <= edges.length <= min(10^5, 3 * n * (n-1) / 2)
edges[i].length == 3
1 <= edges[i][0] <= 3
1 <= edges[i][1] < edges[i][2] <= n
All tuples (typei, ui, vi) are distinct from each other
The basic idea is to use union-find to check connectivity (count == 1): first add the type-3 edges to build the shared framework, then add the type-1 edges for Alice and the type-2 edges for Bob
and check whether each of their graphs is fully connected.
• Why add the type-3 edges first? The number of edges actually used by Alice and Bob together is f = (n-1) + (n-1) - k, where k is the number of shared edges, and shared edges can only be of type 3.
The number of deleted edges is g = edges.size() - f = edges.size() - 2(n-1) + k, so to maximize the number of deletions we want k as large as possible, which is why the type-3 edges get priority.
#include <vector>
using namespace std;
class UF {
public:
    vector<int> parent, psize;
    int count;                         // number of connected components
    UF(int n) : parent(n), psize(n, 1), count(n) {
        for (int i = 0; i < n; i++) parent[i] = i;
    }
    int find(int x) {
        return x == parent[x] ? x : parent[x] = find(parent[x]);
    }
    // Returns true when the edge (x, y) joins two previously separate components.
    bool connect(int x, int y) {
        int rx = find(x), ry = find(y);
        if (rx == ry) return false;
        parent[rx] = ry;
        psize[ry] += psize[rx];
        count--;
        return true;
    }
    int getCount() { return count; }
};
class Solution {
public:
    int maxNumEdgesToRemove(int n, vector<vector<int>>& edges) {
        UF u1(n), u2(n);               // u1: Alice's graph, u2: Bob's graph
        int k1 = 0, k2 = 0, k3 = 0;    // useful (kept) edges of each type
        // Pass 1: shared (type 3) edges go into both structures first.
        for (auto& e : edges) {
            if (e[0] != 3) continue;
            bool a = u1.connect(e[1] - 1, e[2] - 1);
            bool b = u2.connect(e[1] - 1, e[2] - 1);
            if (a || b) k3++;
        }
        // Pass 2: Alice-only (type 1) and Bob-only (type 2) edges.
        for (auto& e : edges) {
            if (e[0] == 1 && u1.connect(e[1] - 1, e[2] - 1)) k1++;
            if (e[0] == 2 && u2.connect(e[1] - 1, e[2] - 1)) k2++;
        }
        if (u1.getCount() == 1 && u2.getCount() == 1)
            return (int)edges.size() - k1 - k2 - k3;
        return -1;
    }
};
An alternative approach follows the same idea but uses DFS with an adjacency list to store the graph. | {"url":"https://www.fatalerrors.org/a/1579-the-graph-can-be-completely-traversed.html","timestamp":"2024-11-15T04:32:39Z","content_type":"text/html","content_length":"13275","record_id":"<urn:uuid:8176e9b9-2517-4382-b5a6-ed418ce9eef5>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00576.warc.gz"} |
Hypothesis test for the mean of a sample with unknown standard deviation.
[h,pvalue,ci] = ttest(x,m)
[h,pvalue,ci] = ttest(x,m,name,value)
Data sample to be tested.
Type: double
Dimension: vector | matrix
Hypothesis mean.
Type: double
Dimension: scalar
Name of an option whose value follows. Multiple name/value pairs are allowed. The supported options are alpha and dim.
alpha is the level of significance (default: 0.05).
dim is the dimension on which the test is performed (default: first non-singular dimension).
Type: string
Value for the preceding option name.
Type: double | integer
Dimension: scalar
0 if the null hypothesis is accepted. 1 if the null hypothesis is rejected.
The p-value of the test.
A 100*(1-alpha)% confidence interval for the population mean.
Vector ttest example.
x = [7.3 9.4 5.9 6.5 5.5 7 5.3 7.3 2.6 8.7 8.1 2.5 3.9 1.4 0.2 4 6 6.1 10.3 1.2];
[h,pvalue,ci] = ttest(x,5)
h = 0
pvalue = 0.47499
ci = [Matrix] 1 x 2
4.139 6.781
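As an independent cross-check (a Python sketch, not part of OML), the quantities reported above can be reproduced by hand; for the data in this example it gives a mean of 5.46, an interval very close to [4.139, 6.781], and a p-value near 0.475:

import math
from scipy import stats   # used only for the t quantile and tail probability

x = [7.3, 9.4, 5.9, 6.5, 5.5, 7, 5.3, 7.3, 2.6, 8.7,
     8.1, 2.5, 3.9, 1.4, 0.2, 4, 6, 6.1, 10.3, 1.2]
m, alpha, n = 5, 0.05, len(x)
mean = sum(x) / n
s = math.sqrt(sum((xi - mean) ** 2 for xi in x) / (n - 1))   # sample standard deviation
se = s / math.sqrt(n)
t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)
ci = (mean - t_crit * se, mean + t_crit * se)                # ~ [4.139, 6.781]
pvalue = 2 * stats.t.sf(abs((mean - m) / se), df=n - 1)      # ~ 0.475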
ttest is a 1 sample t test and assumes that the sample comes from a normally distributed population. The null hypothesis is that the hypothesized mean is the population mean.
The paired t test is not supported by ttest at this time. | {"url":"https://www.openmatrix.org/help/topics/reference/oml_language/StatisticalAnalysis/ttest.htm","timestamp":"2024-11-08T02:24:00Z","content_type":"application/xhtml+xml","content_length":"9185","record_id":"<urn:uuid:1f4e9de7-bc36-409e-807b-853557753dda>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00055.warc.gz"} |
We extend the results of our previous paper "C*-algebras and numerical linear algebra" to cover the case of "unilateral" sections. This situation bears a close resemblance to the case of Toeplitz
operators on Hardy spaces, in spite of the fact that the operators here are far from Toeplitz operators. In particular, there is a short exact sequence 0 --> K --> A --> B --> 0 whose properties are
essential to the problem of computing the spectra of self-adjoint operators. Comment: 12 pages, AMS-TeX 2.
A mathematical notion of interaction is introduced for noncommutative dynamical systems, i.e., for one parameter groups of *-automorphisms of \Cal B(H) endowed with a certain causal structure. With
any interaction there is a well-defined "state of the past" and a well-defined "state of the future". We describe the construction of many interactions involving cocycle perturbations of the CAR/CCR
flows and show that they are nontrivial. The proof of nontriviality is based on a new inequality, relating the eigenvalue lists of the "past" and "future" states to the norm of a linear functional on
a certain C^*-algebra. Comment: 22 pages. Replacement corrects misnumbering of formulas in section 4. No change in mathematical content.
Starting with a unit-preserving normal completely positive map L: M --> M acting on a von Neumann algebra - or more generally a dual operator system - we show that there is a unique reversible system
\alpha: N --> N (i.e., a complete order automorphism \alpha of a dual operator system N) that captures all of the asymptotic behavior of L, called the {\em asymptotic lift} of L. This provides a
noncommutative generalization of the Frobenius theorems that describe the asymptotic behavior of the sequence of powers of a stochastic n x n matrix. In cases where M is a von Neumann algebra, the
asymptotic lift is shown to be a W*-dynamical system (N,\mathbb Z), which we identify as the tail flow of the minimal dilation of L. We are also able to identify the Poisson boundary of L as the
fixed point algebra of (N,\mathbb Z). In general, we show the action of the asymptotic lift is trivial iff L is {\em slowly oscillating} in the sense that $\lim_{n\to\infty}\|\rho\circ L^{n+1}-\rho\circ L^n\|=0,\qquad \rho\in M_* .$
Hence \alpha is often a nontrivial automorphism of N. Comment: New section added with an application to the noncommutative Poisson boundary. Clarification of Sections
3 and 4. Additional references. 23 p
A numerical index is introduced for semigroups of completely positive maps of \Cal B(H) which generalizes the index of E_0-semigroups. It is shown that the index of a unital completely positive
semigroup agrees with the index of its dilation to an E_0-semigroup, provided that the dilation is minimal.Comment: 26 pp. AMS-TeX 2. | {"url":"https://core.ac.uk/search/?q=author%3A(Arveson)","timestamp":"2024-11-08T05:10:44Z","content_type":"text/html","content_length":"93213","record_id":"<urn:uuid:7d3e7090-a6f2-4a3f-9313-bf3a9cc07d8f>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00011.warc.gz"} |
Abacus and Its Relatives
The man whose buddy is an abacus
has a friend he can count on.
G. Patrick Vennebush
Math Jokes 4 Mathy Folks
Robert D. Reed Publishers, 2010
Abacus and Its Relatives
Abacus is probably the first of calculating devices. Encyclopædia Britannica traces the word abacus to the Phoenician abak (sand). American Heritage Dictionary points to the Greek word abax, which
might have originated from the Hebrew avak (dust). The etymology of the word abacus takes a sharp turn when it comes to the plural form. Abacuses is an expected and the most natural English plural.
However, it is just as common to use abaci, as if the word abacus were of Latin origin (like locus/loci, focus/foci, etc.). For example, in the Mathematics Dictionary edited by Glenn and Robert C. James
(D. Van Nostrand, 1949, 1959, 1960, 1963, 1964, 1966), the plural is abaci with no equivocation.
There is little doubt that the Ancients used a flat surface with sand strewn evenly over it as a disposable tool for writing and counting. It's said that the great Archimedes was slain by a Roman
soldier while concentrating on figures drawn in sand.
Later day abaci had grooves for small pebbles and later yet wires or rods on which counters could freely move back and forth. Each wire corresponded to a digit in a positional number system, commonly
in base 10. A very curious state of affairs was mentioned by M. Gardner with a reference to K.Menninger. For more than 15 centuries the Greek and Romans and then Europeans in the Middle Ages and
early Renaissance calculated on devices with authentic place-value system in which zero was represented by an empty line, wire or groove. Yet the written notations did not have a symbol for zero
until it was borrowed by the Arabs from the Hindus and eventually introduced into Europe in 1202 by Leonardo Fibonacci of Pisa in his Liber Abaci (The Book of Abacus). According to D. Knuth, counting with
abaci was so convenient and easy that, at the time when only few knew how to write, it might have seemed preposterous to scribble some symbols on expensive papyrus when an excellent calculating
device was readily available.
Chinese suan pan is different from the European abacus in that the board is split into two parts. The lower part contains only five counters on each wire, the upper contains two. Digits from 0
through 4 are represented solely by counters in the lower part. The other five digits need an upper counter. E.g., 8 is represented by 3 lower counters and 1 upper counter.
I was reminded by Scott Brodie that Japanese soroban differs from its chinese relative in that it enforces carrying by containing only 4 counters "below the bar" and only 1 counter "above the bar" on
each "wire". This eliminates dual representations of "fives" and "tens". The Japanese call the low portion "earth" and the upper portion "heaven".
Here is an old wood print from [Sacred Mathematics: Japanese Temple Geometry] depicting use of suan pan.
The print came from a 1715 edition of Jinko-ki (Large and Small Numbers) which was first introduced in Japan in 1627. The book that grew with every edition was originally a translation of the Chinese
classic Sunfa Tong Zong (Systematic Treatise on Arithmetic) by Cheng Da-Wei (1593). The 1627 edition of Jinko-ki was an already expanded version of the Sunfa Tong Zong. It was published by Yoshida
Mitsuyoshi (1598-1672) who modified many problems and added numerous illustrations to the translation.
The applet below represents an abacus close to the Russian variant where, for the ease of use, middle counters differ in color from all the rest. Given the advantage of home computers, the applet
allows you to select a number system with bases from 2 through 16. Carrying of 1 to the next digit on the left is automatic. The entire interface with the applet is by clicking the mouse button. You
can move a group of counters with a single click. Just point at the last counter you wish to move.
You can also display two abaci at the same time. If the Sync button is checked, then your actions on one of the abaci, is reflected in reverse on the other. This is how it is used. Place some numbers
on the two abaci, and then check the Sync button. Start removing counters from one of the abaci. The same quantity will be added to the other abacus.
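Since the applet lets you pick bases from 2 through 16, it may help to see how a number breaks into digits in an arbitrary base; the sketch below (an illustration only, independent of how the applet itself works) lists the counters that would sit on each wire, most significant first:

def digits_in_base(n, base):
    """Digits of a non-negative integer n in the given base, most significant first."""
    if n == 0:
        return [0]
    digits = []
    while n > 0:
        digits.append(n % base)   # counters on the current wire
        n //= base
    return digits[::-1]

print(digits_in_base(2019, 16))   # [7, 14, 3], i.e. 0x7E3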
1. H. Fukagawa, A. Rothman, Sacred Mathematics: Japanese Temple Geometry, Princeton University Press, 2008
2. M. Gardner, Mathematical Circus, Vintage Books, NY, 1981
3. D. Knuth, The Art of Computer Programming, v2, Seminumerical Algorithms, Addison-Wesley, 1969
4. K. Menninger, Number Words and Number Symbols: A Cultural History of Numbers, M.I.T. Press, 1969
Related material
A Broken Calculator
A Broken Calculator Has Its Uses
Suan pan in Various Number Systems
Abacus in Various Number Systems
Soroban in Various Number Systems
Pythagorean Triples Calculator
Napier Bones in Various Bases
Base (Binary, Decimal, etc.) Converter
Copyright © 1996-2018 Alexander Bogomolny | {"url":"https://www.cut-the-knot.org/blue/Abacus.shtml","timestamp":"2024-11-08T06:23:28Z","content_type":"text/html","content_length":"17849","record_id":"<urn:uuid:f9d17732-e1ac-42d2-93e1-7b77e8fd2ee6>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00475.warc.gz"} |
Micron to Mesh Size Calculator
How do you convert micron to mesh size? Mesh size can be roughly converted to microns using the equation: Microns = 15,000 ÷ mesh size. Conversely, mesh size can be calculated from microns using the
formula: Mesh size = 15,000 ÷ microns.
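The two conversions quoted above are one-liners; the sketch below simply applies the approximate 15,000 rule of thumb used on this page (it is a rough estimate, not a sieve-standard lookup table):

def mesh_to_microns(mesh):
    return 15000 / mesh       # e.g. 200 mesh -> 75 microns

def microns_to_mesh(microns):
    return 15000 / microns    # e.g. 37.5 microns -> 400 mesh

print(mesh_to_microns(200), microns_to_mesh(37.5))   # 75.0 400.0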
What is mesh size of 1 micron? The mesh size of 1 micron is approximately 15,000 mesh.
What is 200 mesh in microns? 200 mesh is roughly equivalent to 75 microns.
How many microns is a 20 mesh screen? A 20 mesh screen is approximately 750 microns.
What is 400 mesh in microns? 400 mesh is approximately 37.5 microns.
What is the formula for mesh size? The formula for converting mesh size to microns is: Microns = 15,000 ÷ mesh size.
What is the difference between mesh size and micron? Mesh size refers to the number of openings in a linear inch in a screen, while micron is a unit of length equal to one-millionth of a meter, used
to measure particle size.
How do you calculate micron? Micron can be calculated by converting mesh size to microns using the formula: Microns = 15,000 ÷ mesh size.
What is 150 mesh in microns? 150 mesh is approximately 100 microns.
Which is finer 200-mesh or 400 mesh? 400 mesh is finer than 200 mesh.
What is finer 100 mesh or 200-mesh? 200 mesh is finer than 100 mesh.
Which is finer 125 microns or 190 microns? 190 microns is finer than 125 microns.
Which is thicker 100 micron or 200 micron? 200 microns is thicker than 100 microns.
How small is 200 mesh? 200 mesh is quite small, with each opening measuring about 75 microns across.
Is 40 mesh finer than 100 mesh? No, 100 mesh is finer than 40 mesh.
What is the difference between 100 micron and 400 micron? The difference between 100 microns and 400 microns is 300 microns.
What is the best mesh size? The best mesh size depends on the specific application and the desired outcome.
How do I choose mesh size? Choose mesh size based on the desired particle size distribution and the requirements of the application.
What size is a 40 mesh sieve? A 40 mesh sieve typically has openings that are about 420 microns in size.
Why is a finer mesh better? Finer mesh allows for more precise particle separation and filtration.
Does mesh size matter? Yes, mesh size matters as it determines the size of particles that can pass through a screen or sieve.
How many microns is a coffee filter? A typical coffee filter has pores around 20 microns in size.
What is the relationship between mesh and micron? Mesh and micron are both units of measurement for particle size, with mesh referring to the number of openings per linear inch and micron referring
to one-millionth of a meter.
What does 20 µm mean? 20 µm means 20 microns, which is equal to 0.02 millimeters.
What is 300gsm in microns? 300gsm (grams per square meter) does not directly convert to microns, as it measures weight rather than thickness or particle size.
Is 60 mesh or 40 mesh finer? 60 mesh is finer than 40 mesh.
What does 160 mesh mean? 160 mesh refers to a screen with 160 openings per linear inch.
What thickness is 500 microns? 500 microns is approximately 0.5 millimeters thick.
How many microns is a human hair? A human hair typically ranges from 50 to 100 microns in diameter.
What is 200 mesh good for? 200 mesh is commonly used for fine particle filtration, sieving, and classification.
What is grit vs mesh vs micron? Grit refers to the size of abrasive particles, mesh refers to the number of openings per linear inch in a screen, and micron is a unit of length used to measure
particle size.
Can a mesh be too fine? Yes, a mesh can be too fine for certain applications, leading to clogging and inefficient filtration.
What is the difference between 80 mesh and 200 mesh xanthan gum? The difference is the size of the particles retained by the mesh. 80 mesh xanthan gum has larger particles than 200 mesh xanthan gum.
What does 200 mesh screen mean? A 200 mesh screen has 200 openings per linear inch, with each opening approximately 75 microns in size.
Which is thicker 125 microns or 250 microns? 250 microns is thicker than 125 microns.
Which is thicker 80 micron or 100 micron? 100 microns is thicker than 80 microns.
What is the best micron size? The best micron size depends on the specific application and desired outcome.
What is better 100 or 200 micron filter? It depends on the requirements of the filtration process. A 200 micron filter will allow smaller particles to pass through compared to a 100 micron filter.
What is an example of 100 micron size? An example of a 100 micron size is fine sand.
How big is 200 mesh? 200 mesh is quite small, with each opening measuring about 75 microns across.
What mesh size is a No 200 sieve? A No 200 sieve corresponds to 200 mesh.
What size mesh is best for sifting flour? A mesh size of around 40 to 60 is commonly used for sifting flour.
Which is smaller 50 mesh or 100 mesh? 50 mesh is smaller than 100 mesh.
What size hole is 100 mesh? The size of a hole in a 100 mesh screen is approximately 150 microns.
Is 200 mesh finer than 400 mesh? No, 400 mesh is finer than 200 mesh.
What does µm stand for? µm stands for micrometer (micron), which is equal to one-millionth of a meter.
What is better 50 micron or 100 micron? It depends on the specific requirements of the application. Generally, a 50 micron filter will remove smaller particles than a 100 micron filter.
Leave a Comment | {"url":"https://gegcalculators.com/micron-to-mesh-size-calculator/","timestamp":"2024-11-09T13:07:52Z","content_type":"text/html","content_length":"171540","record_id":"<urn:uuid:9b7191f8-b51d-4fdb-9b3c-0ce5a70e2a15>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00034.warc.gz"} |
UChicago Instructional Physics Laboratories
Geometrical Optics
The term geometrical optics refers to the study of light propagation in the limit as the wavelength of light is much smaller than any of the optical components of the system (e.g., apertures, lenses,
or mirrors). Another simplifying assumption is that each medium through which the light travels (e.g., air, water, glass) is homogeneous and that all changes between media are abrupt at the
interfaces. A consequence of these assumptions is that light travels in straight lines through each medium and that changes in the direction the light travels occur only at the interfaces between
media. The direction light travels is conveniently described by the term rays.
In this lab, we will do the following:
• study image production by mirrors and lenses;
• apply Snell’s law;
• test the relationship among surface curvature and focal length;
• test the relationship among focal length, objective and image distances; and
• construct a telescope and determine its magnification.
Law of reflection
When light strikes a mirror and reflects from its surface, the angle of reflection is equal to the angle of incidence (with both angles being measured from the normal to the mirror surface). Also,
the incident ray, the reflected ray, and the normal to the surface all lie in the same plane.
Law of refraction (Snell's law)
When light passes from one transparent medium into another, in general the light will change speed at the interface between the two media. This change in speed is accompanied by a change in direction
or refraction of the light. The angle through which the light changes direction depends on the angle of incidence at which the light strikes the surface and a characteristic of the media at the
interface. This characteristic is known as the index of refraction, $n$, which is defined as
$n = \dfrac{c_{\textrm{vacuum}}}{c_{\textrm{medium}}}$ (1)
where $c_{\textrm{vacuum}}$ is the speed of light in a vacuum and $c_{\textrm{medium}}$ is the speed of light in the medium.
The relationship between the direction of travel of light and the indices of refraction of the media is known as Snell's law,
$n_1\sin\theta_1 = n_2\sin\theta_2$, (2)
where the angles $\theta_1$ and $\theta_2$ are measured between the light rays and the normal to the surface in each medium.
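As a quick numerical illustration of Eq. (2) (a sketch added for this write-up, not part of the lab apparatus or its software), the refracted angle can be computed directly, with total internal reflection flagged when no real solution exists:

import math

def refracted_angle(n1, n2, theta1_deg):
    """Angle of refraction from Snell's law; returns None for total internal reflection."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1:
        return None                       # total internal reflection
    return math.degrees(math.asin(s))

print(refracted_angle(1.0, 1.33, 30.0))   # roughly 22.1 degrees for air into water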
Focal point, focal length, and images
When reflecting or refracting materials like mirrors or clear glass are shaped in special ways, they can be used to re-direct light to form images. If the reflecting or refracting surfaces are
spherical, this geometry (together with the laws of reflection and refraction) give rise to the ray diagrams illustrated in Fig 1. The lenses shown in Fig 1 are considered thin lenses for simplicity
and it is assumed that all of the refraction takes place at the center of the lenses.
Note that the double convex lens and the concave mirror of Figs. 1(a) and 1(b) redirect the light so that the light rays converge at the focal points. Images formed in this way are called real images
, since light actually passes through them. Real images can be projected onto a screen.
Note also that the double concave lens and the convex mirror of Figs. 1(c) and 1(d) cause the rays to diverge. Images formed this way must be inferred by extending the light rays back to where they
appear to have come from as the dashed lines show. Since no light actually passes through these images they are referred to as virtual images and they cannot be projected onto a screen.
The magnification of a lens or mirror is defined as the ratio of the image diameter to the object diameter.
A consequence of the laws of reflection and refraction and the spherical shape of the mirror or lens surface is the relationship
$\dfrac{1}{f} = \dfrac{1}{OD} + \dfrac{1}{ID}$, (3)
where $f$ is the focal length, $OD$ is the object distance (the distance from the object to the lens or mirror), and $ID$ is the image distance (the distance from the lens or mirror to the image). It
is remarkable that Eq. (3) applies both to mirrors and lenses with spherical surfaces even though the physics of refraction and reflection is quite different.
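Equation (3) is easy to rearrange for whichever quantity is unknown. The short sketch below (illustrative only; the sign conventions mentioned later still have to be applied by the user) solves for the focal length or the image distance:

def focal_length(OD, ID):
    """Focal length from object and image distances, Eq. (3)."""
    return 1.0 / (1.0 / OD + 1.0 / ID)

def image_distance(f, OD):
    """Image distance given focal length and object distance."""
    return 1.0 / (1.0 / f - 1.0 / OD)

print(focal_length(30.0, 15.0))    # 10.0 (same length units as the inputs)
print(image_distance(10.0, 30.0))  # 15.0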
When nearby objects are viewed from different vantage points, they seem to shift positions relative to more distant objects. This phenomenon is called parallax and is illustrated in Fig. 2.
Conversely if two objects are observed from different vantage points and they do not appear to change relative positions, then the objects must be at the same position. (See Fig. 3.)
We shall make use of parallax to judge when two objects are at the same position in several parts of this experiment.
Experimental procedure
NOTE: There are two distinct experiment setups: the optical rail and the pins and ray-tracing station. These experiments may be completed in either order.
Lab notebook template
One member of the group should click on the link below to start your group lab notebook. (You may be asked to log into your UChicago Google account if you are not already logged in.) Make sure to
share the document with everyone in the group (click the “Share” button in the top right corner of the screen) so each member has access to the notebook after you leave lab.
Station 1: Pins and ray-tracing
Plane mirror
Find the image of a pin by the method of parallax. To do so, place a piece of paper on the rubberized board and set up the apparatus shown in Fig. 4 on top of the paper.
Stick a pin (the object pin) into the rubberized board a few centimeters in front of the mirror. While looking at the image of this pin in the mirror, try to place a second pin on the board behind
the mirror so that the second pin and the image of the object pin stay together while you move your head from side to side. Once you have found the position of the image stick the second pin in
What kind of image has been formed in the mirror (real or virtual)?
How does the distance from the object pin to the rear surface of the mirror (where the reflection takes place) compare with the distance from the image to the rear surface of the mirror?
Snell's Law
Find the index of refraction, $n$, of water by setting up the apparatus shown in Fig. 5.
Place a piece of paper on a rubberized board and a water box on the paper. Trace the edges of the water box onto the paper. Sight through the clear edges of the box. Stick pins 1, 2, 3 and 4 into the
board so that all four pins appear to lie in a straight line. (Note that you will be looking at pins 1 and 2 through the water while viewing pins 3 and 4 directly.)
Remove the box and draw a line from pin 1 through pin 2 and stopping at the edge of the box. Do the same for pins 3 and 4.
What does the line from pin 1 to pin 2 represent? What does the line from pin 3 to pin 4 represent?
What path did the light take through the water? Draw that line on your paper, too.
Using a protractor and straight edge, construct the normals to the water box where lines 1,2 and 3,4 intersect the refracting surfaces.
Measure the angles of incidence and refraction at both interfaces. Using Snell's law, calculate the index of refraction of water. (Assume the index of refraction of air is 1.0).
Index of refraction of water by the apparent depth of an object pin
Here we shall use the same water box and rubberized board as in Fig. 5. This time, however, place a single object pin at the rear of the water tray as shown in Fig. 6.
Next, place two pins at the front surface of the water tray near the center (a few centimeters apart). Look through the water at the object pin from two different points of view as in Fig. 6. Place
additional pins as shown so that they appear to be in-line with the object pin and with the pin you placed at the front of the tray. The pins thus added actually are in line with the virtual image of
the object pin. Trace the water box onto the paper and remove the box. Use the ray tracing pins to find the position of the virtual image of the object pin. Trace the rays and use Snell’s law to
find the index of refraction of water.
At small angles, the index of refraction inside the water is $n = h/h^{\prime}$. Determine the index of refraction of the water in this way.
Where does this approximation come from?
In Fig. 6a, we we draw a line normal to the edges of the tray that passes through the object pin in order to form two right triangles – one with height $h$ (with a hypotenuse formed by the refracted
ray) and one with height $h^{\prime}$ (with a hypotenuse formed by the unrefracted (virtual) ray).
Using geometry, we have $sin\theta = x/\sqrt{x^2 + h^2}$ and $sin\theta^{\prime} = x/\sqrt{x^2 + (h^{\prime})^2}$. In the limit that both angles are small, $h \gg x$ and $h^{\prime} \gg x$, so this
simplifies to $sin\theta \approx x/h$ and $sin\theta^{\prime} \approx x/h^{\prime}$.
From Snell's law, we therefore have
$n^{\prime}\sin\theta^{\prime} = n\sin\theta$
which becomes (in the limit of small angles)
$n^{\prime}x/h^\prime = nx/h.$ Cancelling $x$ and taking $n^{\prime} = 1$ for air gives $n = h/h^{\prime}$, the approximation quoted above.
Station 2: Optical rail
Convex lens and focal length
Using an optical rail, set up the apparatus as shown in Fig. 7.
Use the thinner, red-edged lens here. Move the lens and plastic screen along the optical rail until a sharp image is formed on the screen. Note that there is an infinite number of such configurations
which will produce images.
Measure the object distance (the distance from the light source to the lens), and the image distance (the distance from the lens to the screen).
Using Eq. (3), calculate the focal length of the lens. (Watch the sign conventions for the object and image distances!)
Is the image upright or inverted? Is the image real or virtual?
Measure the magnification for this configuration. How is the magnification related to the image and object distances?
A more direct measurement of the focal length may be made by observing the image formed by a distant object. Use a distant light source to form an image on some convenient surface.
Measure the image distance for this special case. Apply Eq. (3) to this limiting case of large object distance to find the focal length of the convex lens.
Recall that light emanating from a point in the focal plane of a convex lens and then passing through the lens will emerge in a parallel bundle of rays. It follows that if we place a plane mirror
after the lens so as to reflect that parallel bundle back through the lens, the light will be brought to a focus in the focal plane once more.
Set up the optical bench as shown in Fig. 8.
For this measurement, use the thicker, shorter focal length lens with the blue edge. The white metal mask has three small holes which will serve as point light sources. Place a mirror a few
centimeters to the right of the lens. While moving the lens back and forth along the optical rail, look for the image of the point light sources on the white mask surface.
The ray diagram in Fig. 9 shows a technique of ray tracing to locate the image.
Ray 1, drawn parallel to the optic axis, is refracted as it passes through the lens. Ray 2 passes through the focal point on the right side of the lens and strikes the mirror at angle $\theta$. Ray 3
is reflected from the mirror at angle $\theta$ and returns to the lens. Imaginary Ray 4 is drawn passing through the focal point, parallel to Ray 3. Imaginary Ray 5 is drawn parallel to the optic
axis. Since Rays 3 and 4 are parallel to each other, they must intersect in the focal plane. Thus, we can draw Ray 6, locating the image.
What is the focal length of the lens?
Lensmakers' formula
A consequence of Snell's law is that there is a predictable relationship among the focal length of a lens, the index of refraction of the glass, and the radii of curvature of the lens surfaces. This
relationship is called the lensmakers' formula,
$\dfrac{1}{f} = (n-1)\left(\dfrac{1}{R_1}+\dfrac{1}{R_2}\right)$. (4)
Use the spherometer to find the radii of curvature of the plano-convex lens with the blue edge. (See Fig. 10.). To do so, first place the spherometer on a flat glass plate and adjust the micrometer
so that all four points contact the glass surface. Record the micrometer reading. Now place the spherometer onto one surface of the lens and re-adjust the micrometer until all four points contact the
lens surface. Using the geometry of Fig. 10, one may derive the relationship
$R = \dfrac{S^2}{6a} +\dfrac{a}{2}$, (5)
where $R$ is the radius of curvature, $S$ is the distance between any two of the spherometer legs which form an equilateral triangle, and $a$ is the elevation of the spherometer screw necessary to
make contact with the lens surface at all four points. Repeat the measurement for the other side of the blue-edged lens. The index of refraction of the blue-edged lens has been independently measured
and is $n = 1.53\pm0.02$, which is typical for many glasses.
Calculate the focal length of the lens using the lensmakers' formula.
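To combine Eqs. (4) and (5) in one place, here is a brief sketch (added for illustration; the spherometer readings shown are made-up placeholder values, not measured data) that turns two spherometer measurements into a predicted focal length:

def radius_from_spherometer(S, a):
    """Radius of curvature from leg spacing S and screw elevation a, Eq. (5)."""
    return S**2 / (6.0 * a) + a / 2.0

def lensmaker_focal_length(n, R1, R2):
    """Focal length from the lensmakers' formula, Eq. (4)."""
    return 1.0 / ((n - 1.0) * (1.0 / R1 + 1.0 / R2))

R1 = radius_from_spherometer(S=3.0, a=0.05)   # example readings in cm (assumed values)
R2 = radius_from_spherometer(S=3.0, a=0.04)
print(lensmaker_focal_length(1.53, R1, R2))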
Measure the focal length of the blue-edged lens by forming an image of a distant light source.
Are these focal lengths consistent, within uncertainties?
Measure the focal length of the thinner, red-edged lens by forming an image of a distant light source. To do so, remove the lens from its holder and hold the lens near a wall so that the light from a
distant window or other light source forms an image on the wall. The red lens should have a longer focal length than the blue lens which you used earlier. This shorter focal length lens will be the
eyepiece of your telescope.
Arrange the two lenses, light source and screen as shown in the top portion of Fig. 11. First use the longer focal length lens (objective lens) to form the sharpest possible image of the light source
on the ground plastic screen. Then place the shorter focal length convex lens (eyepiece) to act as a magnifier of the image on the screen. The position of the eyepiece will be a bit subjective, since
your eye will try to accommodate to form the magnified image on your retina.
The magnification ray diagram is shown in the bottom portion of Fig. 11. In this diagram, $h$ is the object height, $h^{\prime}$ is the height of the image formed by the objective lens, $s^{\prime}$
is the distance from the image to the eyepiece, and $OF$ is the distance from the object to the focal point of the eyepiece. The dashed lines form an angle at the eyepiece and represent the angular
size of the un-magnified object. The magnification is defined to be $M \equiv \theta\,' /\theta$.
Measure the object ($h$) and image ($h^{\prime}$) sizes, and the distances from the object to where it appears in focus ($OF$) and from the screen to the eyepiece ($s'$), as defined in Fig. 11.
Calculate the angles $\theta$ and $\theta^{\prime}$ and thus the expected magnification.
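A sketch of that calculation follows (illustrative only; it takes $OF$ and $s'$ as the distances defining $\theta$ and $\theta'$ per the Fig. 11 description, which is an assumption you should check against your own diagram, and the numbers in the example are placeholders):

import math

def telescope_magnification(h, h_prime, OF, s_prime):
    theta = math.atan2(h, OF)                    # angular size of the un-magnified object
    theta_prime = math.atan2(h_prime, s_prime)   # angular size of the magnified image
    return theta_prime / theta                   # M = theta' / theta

print(telescope_magnification(h=2.0, h_prime=1.0, OF=100.0, s_prime=5.0))   # about 9.9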
While looking through the eyepiece, remove the plastic screen.
What do you see now? What was the function of the ground plastic screen?
Note that the crosshairs of a telescope would be placed at the location of the plastic screen.
Next, try to estimate the observed magnification. To do so, look at the light source directly with one eye and through the telescope with the other eye. With practice you can superimpose the two
images and compare their relative sizes and estimate the magnification.
What magnification do you estimate using this method? Compare this observed magnification with the geometric analysis done above.
Is the image formed by the telescope upright or inverted? Is the image real or virtual?
Submit your lab notebook
Make sure to submit your lab notebook by the end of the period. Download a copy of your notebook in PDF format and upload it to the appropriate spot on Canvas. Only one member of the group needs to
submit to Canvas, but make sure everyone's name is on the document!
When you're finished, don't forget to log out of both Google and Canvas, and to close all browser windows before leaving!
Post-lab assignment
Write your conclusion in a new document (not your lab notebook) and submit that as a PDF to the appropriate assignment on Canvas when you are done. You should write your conclusion by yourself (not
in a group), though you are welcome to talk to your group mates to ask questions or to discuss.
The conclusion is your interpretation and discussion of your data. In general, you should ask yourself the following questions as you write (though not every question will be appropriate to every
• What do your data tell you?
• How do your data match the model (or models) you were comparing against, or to your expectations in general? (Sometimes this means using the $t^{\prime}$ test, but other times it means making
qualitative comparisons.)
• Were you able to estimate uncertainties well, or do you see room to make changes or improvements in the technique?
• Do your results lead to new questions?
• Can you think of other ways to extend or improve the experiment?
In about one or two paragraphs, draw conclusions from the data you collected today. Address both the qualitative and quantitative aspects of the experiment and feel free to use plots, tables or
anything else from your notebook to support your words. Don't include throw-away statements like “Looks good” or “Agrees pretty well”; instead, try to be precise.
REMINDER: Your post-lab assignment is due 24 hours after your lab concludes. Submit a single PDF on Canvas. | {"url":"https://physlab-wiki.com/phylabs/lab_courses/phys-120_130-wiki-home/spring-experiments/geometrical-optics/geometrical-optics","timestamp":"2024-11-13T07:48:32Z","content_type":"text/html","content_length":"45165","record_id":"<urn:uuid:bcf25d1e-50cf-4118-9be9-31002f8c6ceb>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00632.warc.gz"} |
In this paper, we consider the fractional critical Schrödinger equation (FCSE) $$(-\Delta)^su-|u|^{2^*_s-2}u=0,$$ where $u\in\dot{H}^s(\mathbb{R}^N),$ $N\geq 4,$ $0<s<1$ and $2^*_s=\frac{2N}{N-2s}$ is the
critical Sobolev exponent of order $s.$ By virtue of the variational method and the concentration compactness principle with the equivariant group action, we obtain a new type of nonradial,
sign-changing solutions of (FCSE) in the energy space $\dot{H}^s(\mathbb{R}^N)$. The key step is to use the equivariant group action to construct several subspaces of $\dot{H}^s(\mathbb{R}^N)$
with trivial intersection, then combine the concentration compactness argument in the Sobolev space of fractional order to show the compactness property of Palais-Smale sequences in each
subspace and obtain the multiple solutions of (FCSE) in $\dot{H}^s(\mathbb{R}^N).$ | {"url":"https://global-sci.org/intro/articles_list/aam/2574.html","timestamp":"2024-11-08T01:06:49Z","content_type":"text/html","content_length":"73665","record_id":"<urn:uuid:41161ef7-2459-48f6-bad1-6b672e40c44a>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00263.warc.gz"} |
Exploring Statistical Functions in Pandas for Data Analysis Mastery
Pandas, a linchpin in Python’s data analysis toolkit, is equipped with an array of statistical functions. These functions are indispensable for exploring, understanding, and deriving insights from
datasets. This article introduces some of the most crucial statistical functions available in Pandas.
Core Statistical Functions in Pandas
1. Descriptive Statistics
a. .describe()
Offers a quick overview of the central tendencies, dispersion, and shape of a dataset’s distribution.
b. .mean()
Calculates the mean of the values for the requested axis.
c. .median()
Finds the median, which is the value separating the higher half from the lower half of a data sample.
d. .mode()
Determines the mode or the value that appears most frequently in a dataset.
2. Measures of Spread
a. .std()
Computes the standard deviation, a measure of the amount of variation or dispersion in a set of values.
b. .var()
Calculates the variance, quantifying the degree of spread in a set of data points.
c. .quantile()
Finds the quantile, a value below which a certain percent of observations fall.
3. Correlation and Covariance
a. .corr()
Evaluates the correlation between columns in a DataFrame, offering insights into the relationship between variables.
b. .cov()
Computes the covariance, indicating the direction of the linear relationship between variables.
Practical Application with Sample Data
To illustrate these functions, let’s use a simple dataset:
import pandas as pd
# Learning @ Freshers.in
data = {
    'Age': [25, 30, 35, 40, 45],
    'Salary': [50000, 55000, 60000, 65000, 70000]
}
df = pd.DataFrame(data)
# Applying Statistical Functions
print("Describe:\n", df.describe())
print("Mean:\n", df.mean())
print("Standard Deviation:\n", df.std())
print("Correlation:\n", df.corr())
When to Use Statistical Functions
• Exploratory Data Analysis (EDA): To get a quick overview and understand the basic properties of the dataset.
• Data Cleaning: Identifying outliers or errors in the data.
• Data Modeling: Understanding relationships between variables before building predictive models. | {"url":"https://www.freshers.in/article/python/exploring-statistical-functions-in-pandas-for-data-analysis-mastery/","timestamp":"2024-11-04T05:03:41Z","content_type":"text/html","content_length":"113161","record_id":"<urn:uuid:33cc00c0-ce6f-49cd-bd58-a5e0aea2fc98>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00430.warc.gz"} |
How do I find a decay rate constant(K) and ultimate BOD if all I am given are the temperature, ml of solution, initial and the final DO of two samples after 7 days. (One diluted one not).
The rate constant (k) depends upon several factors:
it depends upon the reaction involved,
it depends upon the type of bacteria present (in the case of BOD, the higher the BOD, the poorer the water quality), and
it also depends upon the temperature.
For first-order BOD decay, Rate = -Δ[organic]/Δt = k[organic].
On the other hand, the "ultimate BOD" is the amount of oxygen (O[2]) required to decompose all of the organic material after "infinite time".
It is usually calculated from the 5- or 20-day data using
BOD[t] (BOD exerted at time t) = L(1 - e^(-kt)), where L is the ultimate BOD.
Since the temperature and the initial and final DO of the two samples (one diluted, one not) are given, the rate is calculated from these quantities. | {"url":"https://justaaa.com/chemistry/137437-how-do-i-find-a-decay-rate-constantk-and-ultimate","timestamp":"2024-11-04T19:47:23Z","content_type":"text/html","content_length":"41211","record_id":"<urn:uuid:f72407ea-92df-44c0-9601-19956d7a3182>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00304.warc.gz"} |
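To make the calculation concrete, here is a hedged Python sketch of how k and the ultimate BOD follow from the first-order model above. It assumes BOD readings at two different times are available (the question itself only gives 7-day readings, so treat the two-time version as an extension); for an unseeded dilution, each BOD reading would first be obtained as (initial DO - final DO) multiplied by the dilution factor, and the numbers used below are purely hypothetical:

import math

def ultimate_bod(bod_t, k, t):
    """Ultimate BOD (L) from a single reading, if the rate constant k is known or assumed."""
    return bod_t / (1.0 - math.exp(-k * t))

def solve_k(bod1, t1, bod2, t2, lo=1e-4, hi=2.0):
    """Rate constant k (1/day) from readings at two times, by bisection on
    bod1/bod2 = (1 - exp(-k*t1)) / (1 - exp(-k*t2))."""
    f = lambda k: (1 - math.exp(-k * t1)) / (1 - math.exp(-k * t2)) - bod1 / bod2
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Hypothetical numbers, purely to show usage (not taken from the question):
k = solve_k(bod1=120.0, t1=5.0, bod2=150.0, t2=7.0)
print(k, ultimate_bod(150.0, k, 7.0))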
AM 207 Final Project
The traveling salesman problem
One question of particular interest is how to route emergency aid to locations where it is needed. For concreteness, let’s postulate a Red Cross medical or food supply caravan that originates from
the organization’s in-country headquarters. This caravan wishes to visit all $n$ emergent locations in order to deliver needed supplies. They wish to do so in the most efficient manner possible.
This is the traveling salesman problem (TSP), an optimization problem that is quite well known. It was first described in 1932 by Karl Menger and has been studied extensively ever since. Here is the
traditional convex optimization specification of the problem:
$$\begin{aligned} \min &\sum_{i=0}^n \sum_{j\ne i,j=0}^n c_{ij}x_{ij} && \\ \mathrm{s.t.} & \\ & x_{ij} \in \{0, 1\} && i,j=0, \cdots, n \\ & \sum_{i=0,i\ne j}^n x_{ij} = 1 && j=0, \cdots, n \\ & \sum_{j=0,j\ne i}^n x_{ij} = 1 && i=0, \cdots, n \\ & u_i-u_j +nx_{ij} \le n-1 && 1 \le i \ne j \le n\end{aligned}$$
As is clear from the constraints, this is an integer linear program (ILP) where:
• $x_{ij}$ is a binary decision variable indicating whether we go from location $i$ to location $j$.
• $c_{ij}$ is the distance between location $i$ and location $j$. (Note: in our application, we deal with geospatial data on a large enough scale that the Euclidean distance is actually very
imprecise. In order to model distances over the planet's surface, we use the Haversine formula; a sketch of this computation appears just after this list.)
• The objective function is the sum of the distances for routes that we decide to take.
• The final constraint (the Miller-Tucker-Zemlin subtour-elimination condition, with auxiliary ordering variables $u_i$) ensures that the chosen edges form a single tour visiting every location exactly once, rather than several disjoint subtours.
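Here is the distance computation referred to in the $c_{ij}$ bullet above, a standard implementation of the Haversine formula (the Earth-radius constant and the latitude/longitude convention are our own choices, not something fixed by the TSP formulation):

import math

def haversine(lat1, lon1, lat2, lon2, radius_km=6371.0):
    """Great-circle distance in kilometres between two (latitude, longitude) points."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_km * math.asin(math.sqrt(a))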
The problem, of course, is that brute force solution of the TSP is $\mathcal{O}$$(n!)$. Traditional, deterministic algorithm approaches such as branch-and-bound or branch-and-cut are still
impractical for larger numbers of nodes. In many cases, exhaustive search for global optimality is not even particularly helpful as long as the solution found is good enough. We will use simulated
annealing (SA) to get acceptable solutions to the TSP.
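To make the SA idea concrete, here is a minimal, self-contained sketch of simulated annealing on a TSP instance. It is not the project's implementation: the move type (segment reversal), the cooling schedule, and all parameter values are illustrative assumptions.

import math, random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def anneal_tsp(dist, iters=50_000, t0=1.0, cooling=0.9995, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    tour = list(range(n))
    best = cur = tour_length(tour, dist)
    best_tour = tour[:]
    temp = t0
    for _ in range(iters):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]      # reverse one segment
        delta = tour_length(cand, dist) - cur
        if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-12)):
            tour, cur = cand, cur + delta                          # accept (always if better)
            if cur < best:
                best, best_tour = cur, tour[:]
        temp *= cooling                                            # geometric cooling
    return best_tour, best

# Tiny random instance, just to show usage.
pts = [(random.random(), random.random()) for _ in range(20)]
d = [[math.dist(p, q) for q in pts] for p in pts]
print(anneal_tsp(d)[1])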
Figure 8 shows a sample draw of conflict data (the blue points), and a near-optimal TSP route found through 50,000 iterations of simulated annealing.
Packing the aid truck — the Knapsack Problem
We extend the TSP into a multi-objective optimization problem where the contents of the aid trucks also have an optimization component. Therein lies the knapsack problem: subject to a volume or
weight constraint, and given that different locations might have very different needs such as food, vaccinations, or emergent medical supplies, which supplies do we pack on the trucks?
Often, this problem is formulated such that you can only bring one of each item, but that does not make sense in our application. Rather, we want to be able to bring as many units of each type of aid
as we think necessary, and we’ll assume that as many as desired are available to load on the trucks before starting out from HQ. Here’s the unbounded version of the knapsack problem:
$$\begin{aligned} \max &\sum_{i=1}^n v_i x_i && \\ \mathrm{s.t.} & \\ & x_i \in \mathbb{Z} \\ & x_i \geq 0 \\ & \sum_{i=1}^n w_ix_i \leq W\end{aligned}$$
In this formulation:
• $x_{i}$ is a zero or positive integer decision variable indicating how many units of item $i$ we load on the truck.
• $v_i$ is the utility we get from bringing along item $i$.
• $w_i$ is the weight of item $i$.
• $W$ is the maximum weight the truck can carry.
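As an aside (separate from the simulated-annealing treatment used in the project), when weights are integers the unbounded knapsack on its own can be solved exactly by dynamic programming. The values and weights below are made up purely to make the formulation concrete.

def unbounded_knapsack(values, weights, capacity):
    # max total value with unlimited copies of each item; weights and capacity are integers
    best = [0] * (capacity + 1)
    for w in range(1, capacity + 1):
        for v_i, w_i in zip(values, weights):
            if w_i <= w:
                best[w] = max(best[w], best[w - w_i] + v_i)
    return best[capacity]

# Three aid types with (utility, weight) = (10, 4), (7, 3), (3, 1) and truck capacity 10.
print(unbounded_knapsack([10, 7, 3], [4, 3, 1], 10))   # -> 30 (ten units of the lightest item)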
A brief detour for modeling assumptions
Before we can optimize this aid delivery mechanism, we will need to decide a way to model humanitarian aid needs at a given conflict.
Let us assume that there are $K$ distinct types of humanitarian aid to be delivered. (Without loss of generality, we will use three categories for all of our examples — perhaps we can think of them
as food aid, first aid supplies, and medicines for concreteness.) We can model each conflict’s aid needs as
$$\boldsymbol x \sim \mathrm{Dir}(\boldsymbol \alpha)$$
where $\boldsymbol \alpha$ parameterizes the distribution to generate vectors of length $K$ representing the relative proportions of needs. For example, in our three category example we might draw
the vector $(0.11,\,0.66,\,0.23)$ for a certain conflict, meaning that 11% of the aid needed at this conflict is food aid, 66% is first aid supplies, and 23% is medicines. Now that we know the
proportions for the given conflict, how might we turn this unitless vector into absolute amounts?
To set this scale, let’s assign each conflict a scaled size $s \in [1, 10]$ based on the number of casualties (a proxy for the severity of the conflict). We can use this size scalar to turn our
proportion vector into a vector of absolute needs.
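A minimal sketch of this generative step might look as follows; the alpha vector, the number of conflicts, and the uniform size draw are illustrative placeholders rather than the project's actual choices.

import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([1.0, 1.0, 1.0])                      # symmetric prior over K = 3 aid categories
n_conflicts = 5

proportions = rng.dirichlet(alpha, size=n_conflicts)   # n x K, each row sums to 1
sizes = rng.uniform(1, 10, size=n_conflicts)           # stand-in for the casualty-based size s
needs = proportions * sizes[:, None]                   # n x K matrix of absolute needs
print(needs)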
It should be noted that both of these modeling methods for proportions and size are “plug-and-play” — because of purposely designed loose coupling in our model, these methods could trivially be
replaced by a different method of calculating or predicting the needs of each conflict. For example, if an independent model was used to calculate each of $K$ needs based on the features of each
conflict, those quantities could easily be plugged in to this model. Ultimately, the only quantities that our TSP/Knapsack model needs is an $n \times K$ matrix of aid needs for $n$ cities and $K$
categories of aid.
A new objective function to integrate TSP and Knapsack
For the vanilla TSP, we simply try to minimize the total distance. Now that we are adding a new objective, we will need to integrate the two into a coherent loss function. Here is the function we
will actually try to minimize in the combined TSP/Knapsack:
$$L(\boldsymbol x) = \text{total distance} + \text{sum of squared aid shortfalls}$$
The effect of squaring aid shortfalls acts as a weight, causing greater importance to be placed on minimizing this aspect of the problem first. Proposals wherein aid shortfalls occur are heavily
penalized. As we will see in later graphs, once the SA algorithm is able to avoid all shortfalls and the concurrent massive loss function penalties, a much slower descent begins to take place wherein
the distance is slowly optimized. See figure 10 for a depiction of this phenomenon.
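In code, a loss of this shape could be sketched as below; the way routes, needs, and deliveries are encoded here is invented for illustration and is not the authors' exact representation.

import numpy as np

def combined_loss(route, dist, needs, deliveries):
    # route: list of city indices; needs and deliveries: n x K arrays of aid amounts
    total_distance = sum(dist[route[i]][route[i + 1]] for i in range(len(route) - 1))
    shortfall = np.maximum(needs - deliveries, 0.0)     # unmet need per city and aid category
    return total_distance + np.sum(shortfall ** 2)      # squared shortfalls dominate the distance term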
Implementing the Knapsack aspect
Figure 9 shows the same draw of cities as in figure 8, this time factoring in limited carrying capacity for aid supplies on the aid delivery mechanism and using our new loss function. As we can see,
the huge penalty incurred when supplies run out quickly induces the simulated annealing algorithm to converge on a solution with multiple stops at HQ to reload.
Figures 11, 12, and 13 use some uniformly distributed points on the $[0,50]$ plane to demonstrate how the proposed TSP/Knapsack routes converge as the number of iterations increases.
Finding the optimal site for the resupply location
Our initial assumption was that the HQ was located in the capital city of Kampala. However, we should ask whether our HQ could be more conveniently located. We can answer this question by treating
the reload location as another parameter and continuing to sample HQ locations using SA. Figure 14 shows the TSP/Knapsack optimized once again, this time using the optimal HQ location, while figure
15 compares the loss function as each method converges to its best possible configuration. | {"url":"http://pjbull.github.io/civil_conflict/optimization.html","timestamp":"2024-11-08T08:16:51Z","content_type":"text/html","content_length":"15331","record_id":"<urn:uuid:dcabf398-f775-49d9-bf91-6f49754d4d99>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00121.warc.gz"} |
Centrad conversion
Worldwide use:
Centrad is a unit of measurement used worldwide to quantify angles. Its prefix comes from the Latin word "centum," meaning hundred, and the centrad is a metric unit that represents one-hundredth of a radian. Radians
are the preferred unit for measuring angles in mathematics and physics due to their simplicity and compatibility with trigonometric functions. However, in practical applications, Centrad provides a
more convenient and user-friendly alternative.
The centrad is a unit of measurement used to quantify angles. It is a non-SI unit, but it is widely used in certain fields, particularly in surveying and geodesy.
A centrad or centiradian is a unit of angular measurement that is equal to 1/100th of a radian. Radians are the standard unit for measuring angles in the SI system, and they are used in various
scientific and mathematical calculations.
The centrad, also known as the centiradian, is a unit of measurement that represents one hundredth of a radian. The radian is a fundamental unit used to measure angles in the International System of
Units (SI). It is defined as the angle subtended at the center of a circle by an arc that is equal in length to the radius of the circle.
The centrad is derived by dividing the radian into one hundred parts, so that a full circle of 2π radians contains 200π centrads. This subdivision allows for more precise measurements of angles, especially in scientific and engineering applications. The use of the centrad
is particularly common in fields such as optics, where small angles are often encountered.
The origin of the centrad can be traced back to the need for a finer unit of angular measurement than the radian. By dividing the radian into smaller increments, the centrad provides a more granular
scale for measuring angles. This allows for greater accuracy and precision in calculations and measurements involving angles. The centrad is a valuable tool in various scientific and technical
disciplines, enabling researchers and professionals to work with angles at a more detailed level.
Common references:
50π centrads = Right angle (90 degrees)
100π centrads = Angle over a straight line (180 degrees)
200π centrads = Full circle (360 degrees)
Usage context:
Centrad finds extensive use in various fields, including engineering, surveying, navigation, and astronomy. Its widespread adoption can be attributed to its compatibility with the metric system,
which is widely used across the globe. The use of Centrad ensures consistency and ease of communication among professionals working in different countries and industries.
One of the significant advantages of Centrad is its ease of conversion to other angle units. For instance, it is straightforward to convert Centrad to degrees by multiplying the value by 0.5729578.
Similarly, converting Centrad to radians involves multiplying the value by 0.01. This flexibility allows for seamless integration with different measurement systems, making Centrad a versatile unit
for angle measurement.
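The two conversion factors quoted above can be captured in a couple of lines; this is just an illustrative Python sketch of the stated relationships.

import math

def centrad_to_radians(c):
    return c * 0.01

def centrad_to_degrees(c):
    return math.degrees(centrad_to_radians(c))

print(centrad_to_degrees(1))              # about 0.573 degrees
print(centrad_to_degrees(50 * math.pi))   # a right angle: about 90 degrees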
The centrad is also utilized in some countries for land measurement and navigation purposes. For instance, in Germany, the centrad is commonly used in road signs to indicate the direction of a curve
or a bend. This unit provides a more precise and accurate representation of angles, especially in situations where small deviations can have significant consequences. | {"url":"http://metric-conversions.com/angle/centrad-conversion.htm","timestamp":"2024-11-14T23:09:10Z","content_type":"text/html","content_length":"39431","record_id":"<urn:uuid:ab2cad0b-61c0-4bc3-a32d-f1feb98fbff1>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00816.warc.gz"} |
Hash Array Mapped Tries (HAMT)
(Phil Bagwell, `Ideal Hash Trees', Technical Report, 2001)
Provided that the hash function has no collisions and yields hash values in a bounded interval, hash tries admit search and update in constant worst-case time (!), bounded by a somewhat larger
constant than what one usually finds for the worst-case time of search or replacement, and the amortized time of insertion or deletion, in hash tables. There is no complicated incremental rehashing
going on like in some real-time hash tables to attain these constant time bounds; in fact, there is never any rehashing. A little more precisely, `constant' means logarithmically proportional to the
length of the interval of the hash values, or, practically, linearly proportional to the number of bits in the hash, with small constant factors.
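The bound can be pictured with a small conceptual sketch, written here in Python rather than Scheme and unrelated to this egg's actual implementation: each trie level consumes a fixed-width slice of the key's hash, so lookup depth is bounded by the number of hash bits divided by the bits consumed per level. (Real HAMTs pack children with a bitmap and a dense array rather than the dictionary used below.)

BITS_PER_LEVEL = 5            # 2**5 = 32-way branching, a common choice

def lookup(node, key, key_hash):
    shift = 0
    while isinstance(node, dict) and node.get("kind") == "branch":
        index = (key_hash >> shift) & ((1 << BITS_PER_LEVEL) - 1)
        node = node["children"].get(index)     # child selected by this 5-bit hash slice
        shift += BITS_PER_LEVEL
    if node is None:
        return None
    for k, v in node["pairs"]:                 # collision bucket: keys sharing a hash prefix
        if k == key:
            return v
    return None

# Tiny demo: one branch level with a single bucket underneath.
leaf = {"kind": "bucket", "pairs": [("spam", 1)]}
root = {"kind": "branch", "children": {hash("spam") & 0b11111: leaf}}
print(lookup(root, "spam", hash("spam")))      # -> 1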
Yes, this is the same data structure as Clojure uses to implement its hash maps and hash sets. Similarly to Clojure, this code uses hash collision buckets rather than the paper's suggestion of
repeating hashes salted by the trie depth.
Although the pronunciation is identical, and despite the title of Bagwell's paper, a hash trie is not a hash tree. Sorry. Nor do these hash tries have any relation to what Knuth calls hash tries.
Except where this is obviously not the case (hash-table/fold, for example), every procedure here runs in constant expected time under the assumption of a good key hash function, ignoring the time
taken by garbage collection and the time taken by the key hash function and the key equality predicate of the relevant hash trie type. When searching in a hash trie, the key equality predicate is
applied to only as many keys extra as share a common hash value with the key whose association is sought. Thus in a hash trie with no collisions, every search involves at most one invocation of the
key hash function, and at most one invocation of the key equality predicate.
(import hash-trie)
make-hash-trie-type KEY=? KEY-HASH (procedure)
Constructor for hash trie types. KEY=? must be a key equality predicate, a procedure of two arguments that returns true to indicate that they are equal and false to indicate that they are not,
and that behaves transitively, symmetrically, and reflexively. KEY-HASH must be a key hash function that preserves the key equality predicate, i.e. for keys A and B, it must be that if (KEY=? A
B), then (= (KEY-HASH A) (KEY-HASH B)).
hash-trie-type? OBJECT (procedure)
Disjoint type predicate for hash trie types.
Accessors for the key equality predicates and key hash functions of hash trie types.
make-hash-trie HASH-TRIE-TYPE (procedure)
Hash trie constructor.
hash-trie/type HASH-TRIE (procedure)
Returns the <hash-trie-type> of HASH-TRIE.
hash-trie/count HASH-TRIE (procedure)
Returns the number of associations in HASH-TRIE.
hash-trie/empty? HASH-TRIE (procedure)
Returns true if HASH-TRIE has no associations, or false if it has any.
Searches for an association for KEY in HASH-TRIE. If there is one, tail-calls IF-FOUND with one argument, the datum associated with key. If not, tail-calls IF-NOT-FOUND with zero arguments.
Searches for an association for KEY in HASH-TRIE. If there is one, returns its associated datum; otherwise returns DEFAULT.
hash-trie/member? HASH-TRIE KEY (procedure)
Returns true if HASH-TRIE has an association for KEY, or false if not.
Searches for an association for KEY in HASH-TRIE. If there is one, tail-calls IF-FOUND with three arguments:
□ the associated datum
□ a procedure (REPLACE DATUM) that returns a new hash trie with all the associations in HASH-TRIE, but with DATUM substituted for the datum associated with KEY
□ a procedure (DELETE) that returns a new hash trie with all the associations in HASH-TRIE excluding the association for KEY
If there is no such association, tail-calls IF-NOT-FOUND with one argument, a procedure (INSERT DATUM) that returns a new hash trie with all the associations in HASH-TRIE as well as an
association of DATUM with KEY.
hash-trie/insert HASH-TRIE KEY DATUM (procedure)
Returns a hash trie with all the associations in HASH-TRIE, but associating DATUM with KEY, whether HASH-TRIE had an association for KEY or not.
Returns a hash trie with all the associations in HASH-TRIE, but associating (MODIFIER D) with KEY if HASH-TRIE associated a datum D with KEY, or associating (MODIFIER DEFAULT) with KEY if
HASH-TRIE had no association for KEY.
If HASH-TRIE has an association for KEY, returns its associated datum and HASH-TRIE. Otherwise, calls (GENERATOR KEY) to obtain a datum D, and returns D and a hash trie with all the associations
in HASH-TRIE as well as an association of D with KEY.
hash-trie/delete HASH-TRIE KEY (procedure)
Returns a hash trie with all the associations in HASH-TRIE, excluding its association, if any, for KEY.
Folds HASH-TRIE by COMBINATOR, starting with an initial value V of INITIAL-VALUE and updating it for each association of a datum D with a key K, in no particular order, by (COMBINATOR K D V).
hash-trie->alist HASH-TRIE (procedure)
hash-trie/key->list HASH-TRIE (procedure)
hash-trie/datum->list HASH-TRIE (procedure)
hash-trie->alist returns a list of pairs, in no particular order, corresponding with the associations in HASH-TRIE, with keys in the cars and associated data in the respective cdrs.
hash-trie/key-list returns a list of all the keys in HASH-TRIE, in no particular order.
hash-trie/datum-list returns a list of all the data in HASH-TRIE, in no particular order.
Returns a hash trie of the given type with the associations listed in ALIST, taking keys from the cars and corresponding data from the respective cdrs.
Hash functions for various types of data. exact-integer-hash, real-number-hash, and complex-number-hash all agree where their domains coincide. The current implementations of these hash functions
are all based on the FNV (Fowler-Noll-Vo) family of hash functions, tweaked slightly so that it is more likely to fit into the range of fixnums (small exact integers that can be represented
immediately) for most Scheme systems on 32-bit machines.
<hash-trie-type> of the above hash functions, with appropriate key equality predicates: string=? for strings, eq? for symbols, and = for the numeric types.
HASH-TRIE-NODE : (or HASH-TRIE-BUCKET-NODE HASH-TRIE-BRANCH-NODE)
vector ; hash-trie-branch-count returns last index
(import hash-trie)
(import (srfi 41))
;see https://mumble.net/~campbell/scheme/hash-trie.scm
(define (hash-trie->stream hash-trie)
(define (bucket->stream list tail)
(if (pair? list)
(stream-cons (car list) ((stream-lambda () (bucket->stream (cdr list) tail))))
tail ) )
(define (branch->stream branch tail)
(let recur ((index (sub1 (vector-length (hash-trie-branch-vector branch)))) (tail tail))
(if (positive? index)
(node->stream (vector-ref branch index) ((stream-lambda () (recur (sub1 index) tail))))
tail ) ) )
(define (node->stream node tail)
(cond ((hash-trie-branch? node) (branch->stream node tail))
((hash-trie-bucket? node) (bucket->stream (hash-trie-bucket-list node) tail))
(else (error 'hash-trie->stream "(internal) invalid hash-trie node" node hash-trie)) ) )
(let ((root (hash-trie-root hash-trie)))
(if root
((stream-lambda () (node->stream root stream-null)))
stream-null ) ) )
(define (stream->hash-trie stream hash-trie-type)
(let loop ((stream stream) (hash-trie (make-hash-trie hash-trie-type)))
(if (stream-pair? stream)
(let ((cell (stream-car stream)))
(loop (stream-cdr stream) (hash-trie/insert hash-trie (car cell) (cdr cell))) )
hash-trie ) ) )
Taylor R. Campbell
This egg is hosted on the CHICKEN Subversion repository:
If you want to check out the source code repository of this egg and you are not familiar with Subversion, see this page.
Provide Iterator API.
C5 release.
Copyright (c) 2009, Taylor R. Campbell
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
• Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
• Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
• Neither the names of the authors nor the names of contributors may be used to endorse or promote products derived from this software without specific prior written permission.
OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. | {"url":"https://api.call-cc.org/5/doc/hash-trie","timestamp":"2024-11-05T15:22:47Z","content_type":"text/html","content_length":"32429","record_id":"<urn:uuid:a954e1c9-4f03-4e25-b2b0-25a991aa1a3b>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00762.warc.gz"} |
Activities to help you teach combinations | Cambridge
Activities to help you teach combinations
Published 8 February 2019
We recently wrote a post about teaching permutations, and today we turn to the topic of combinations. The challenges that learners face in regard to problems on combinations are, of course, quite
similar to those they face with permutations. The logical thinking required often turns out to be a major obstacle in both these areas of the syllabus.
Once a particular problem has been correctly identified as a permutation-type or a combination-type, there will usually still be some distance to go in all but the most basic of questions. Simply
making the correct decision to press the ^nC[r] button rather than the ^nP[r] button on the calculator is rarely sufficient to assure a correct answer.
Again, learners need to develop an appreciation that there is seldom only one route to a correct solution. Also, in dealing with combinations, it is crucial to understand that two or more different
arrangements of a set of objects are, in fact, just one single combination of those objects.
In our Cambridge International AS & A Level Mathematics: Probability and Statistics 1 Coursebook, this type of problem is illustrated in Worked example 5.14 and learners will meet it in several of
the questions in Exercise 5F, End-of-Chapter Review Exercise 5 and Cross-topic Review Exercise 2.
The most appropriate point at which to offer this presentation to your learners would be after attempting Question 13 in Exercise 5F, which is where they will first need to avoid using one of the
methods featured in the presentation.
The presentation looks at three different approaches that might be used to solve one particular combination problem. The problem concerns combinations in which at least one of each of two types of
object must be included. Only two of the three given approaches are successful. There is a brief discussion on this and, needless to say, learners will benefit greatly if they can appreciate the
reason why one of the approaches fails.
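For readers who want a concrete instance of this type of problem, here is one classic example together with the classic pitfall; it may or may not match the question used in the presentation. Choose 4 people from 5 men and 3 women so that at least one of each is included: counting by complement gives the right answer, while the tempting "reserve one of each, then choose the rest freely" product over-counts.

from math import comb

total = comb(8, 4)
all_men = comb(5, 4)                  # selections with no women
all_women = comb(3, 4)                # selections with no men (zero here)
correct = total - all_men - all_women # 70 - 5 - 0 = 65

naive = 5 * 3 * comb(6, 2)            # pick one man, one woman, then any 2 of the remaining 6
print(correct, naive)                 # 65 vs 225: the product counts each valid group several times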
Four extension activities with solutions are also offered at the end of the presentation to stretch your students.
To view the PowerPoint, click here.
About the author
Dean Chalmers is an experienced author and teacher having previously taught mathematics in the UK, Vietnam, Malaysia and Botswana. Dean is the author of our Cambridge International AS & A Level
Probability & Statistics 1 and Cambridge O Level Statistics coursebooks and has also contributed to our UK AS/A Level Further Mathematics Statistics coursebooks. | {"url":"https://www.cambridge.org/co/education/blog/2019/02/08/activities-help-you-teach-combinations/","timestamp":"2024-11-07T19:54:31Z","content_type":"text/html","content_length":"80084","record_id":"<urn:uuid:bceaedd6-a33e-44cf-9dbc-1cc6e2108e7f>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00665.warc.gz"} |
What is the SI unit for flow rate?
cubic metres per second
In physics and engineering, in particular fluid dynamics, the volumetric flow rate (also known as volume flow rate, rate of fluid flow, or volume velocity) is the volume of fluid which passes per
unit time; usually it is represented by the symbol Q (sometimes V̇). The SI unit is cubic metres per second (m3/s).
What is the formula for GPM?
The formula to find GPM is 60 divided by the seconds it takes to fill a one gallon container (60 / seconds = GPM). Example: The one gallon container fills in 5 seconds. 60 / 5 = 12 GPM. (60 divided
by 5 equals 12 gallons per minute.)
How do you calculate fluid power?
Hydraulic power is defined as flow multiplied by pressure. The hydraulic power supplied by a pump is: Power = (P x Q) ÷ 600 – where power is in kilowatts [kW], P is the pressure in bars, and Q is the
flow in litres per minute. ** based upon 100% efficiency; 90% efficiency would equate to 75 ÷ 0.9 = 83.3kW.
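The rule of thumb above translates directly into code; the pressures and flows in the example calls are made up purely to show usage.

def hydraulic_power_kw(pressure_bar, flow_lpm, efficiency=1.0):
    # kW = (bar x L/min) / 600, optionally divided by pump efficiency
    return (pressure_bar * flow_lpm) / 600.0 / efficiency

print(hydraulic_power_kw(200, 150))         # about 50 kW at 100% efficiency
print(hydraulic_power_kw(200, 150, 0.9))    # about 55.6 kW of input power at 90% efficiency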
How do you calculate pump LPM?
Flow = (Displacement x RPM)/1000 Flow is in Liter per minute (LPM), Displacement is cc per revolution while RPM is rotation per minute. Q = (90 x 2222)/1000 = 200 l/min (approx).
What is the unit of pressure in SI unit?
pascal (Pa)
The SI unit for pressure is the pascal (Pa), equal to one newton per square metre (N/m2, or kg·m−1·s−2). Pascal is a so-called coherent derived unit in the SI with a special name and symbol.
How do you calculate pressure from fluid velocity?
Pressure To Velocity Calculator
1. Formula. V = Sqrt [ (2*q/p) ]
2. Dynamic Pressure (pascals)
3. Fluid Density (kg/m^3)
How do you calculate gpm from pressure?
To calculate GPM from pressure in PSI for water, follow these steps:
1. Measure the pressure inside the tank using a pressure gauge.
2. Subtract the atmospheric pressure from the tank pressure.
3. Multiply the result from step 2 by 2 and divide by the density of water.
What is the formula of hydraulic?
Basic Hydraulic Formulas
Formula For: Word Formula: Letter Formula:
FLUID POWER IN HORSEPOWER: HORSEPOWER = PRESSURE (PSIG) x FLOW (GPM) / 1714; HP = PQ / 1714
VELOCITY THROUGH PIPING in Feet/Second: VELOCITY = 0.3208 x FLOW RATE THROUGH I.D. (GPM) / INTERNAL AREA (Square Inches); V = 0.3208 Q / A
What is LPS unit?
lps – metric liters per second. 1 lps is about 16 gallons per minute.
How do you calculate HP to LPM?
Conversion chart – horsepower – Metric to litres atmosphere per minute
1. horsepower – Metric to litres atmosphere per minute = 435.53 L atm/min.
2. horsepower – Metric to litres atmosphere per minute = 871.06 L atm/min.
3. horsepower – Metric to litres atmosphere per minute = 1,306.59 L atm/min. | {"url":"https://cowetaamerican.com/2022/06/29/what-is-the-si-unit-for-flow-rate/","timestamp":"2024-11-09T21:04:25Z","content_type":"text/html","content_length":"58336","record_id":"<urn:uuid:60fde079-6ab8-4e1b-8a0e-00d4a2db8062>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00555.warc.gz"} |
Year 4 Unit 3 - work out sums and differences of multiples of 100 or 1000 - FREE maths resources
Year 4
FREE maths resources for all
We can work out sums and differences of multiples of 100 or 1000
Year 4 Unit 3
What we are learning:
• ‘Sum’ is a different (mathematical) way of saying ‘the total found by adding’.
• ‘Difference’ is a different (mathematical) way of saying ‘the value between the two numbers’ which is found by subtracting the lower value from the higher value.
• Multiples of 10 have 0 units. 10, 20, 30, 40 ….
• Multiples of 100 have 0 tens and 0 units. 100, 200, 300, 400 …
• Multiples of 1000 have 0 hundreds, 0 tens and 0 units. 1000, 2000, 3000, 4000 …
• Calculating sums and differences of multiples of 100 and 1000 can seem scary – just because of the size of the numbers involved.
ACTIVITY 1: ADDING MULTIPLES OF 100 AND 1000
Activities you can do at home:
Practice speedy mental recall of addition and subtraction (one-digit numbers), e.g. 3+4=7, 9-2=7, 5+4=9, 7-2=5. Now make links to multiples of 100.
Say, If 3 add 4 equals 7, what will 3 hundreds add 4 hundreds equal?
Make similar links between subtraction of one-digit number and multiples of 100.
Extend this practice to multiples of 1000 when your child is ready.
Use a cut out set of question cards “Hundreds and Thousands”. Ask your child to sort the questions by looking for addition and subtraction patterns. Ask your child to explain the patterns they see.
Good questions to ask:
Can you count up/backwards in multiples of 10/100/1000 from….?
If 4 add 7 is 11 what is 400 add 700? What is 4000 add 7000?
If your child:
Is slow to work out answers because their knowledge of number bonds is insecure
Practice speedy mental recall of addition and subtraction (one-digit numbers). Time spent on this skill is a good investment as it makes all further maths easier and faster
Extension Activity
Please use this activity when you think your child understands the unit of work. It will deepen and extend your child’s understanding of this unit. | {"url":"https://famlearn.co.uk/year-4/year-4-unit-3-work-out-sums-and-differences-of-multiples-of-100-or-1000/","timestamp":"2024-11-12T08:44:29Z","content_type":"text/html","content_length":"191604","record_id":"<urn:uuid:79e339f5-4712-4f56-9072-e13e2f1e369e>","cc-path":"CC-MAIN-2024-46/segments/1730477028249.89/warc/CC-MAIN-20241112081532-20241112111532-00297.warc.gz"} |
Applied Mathematics Archives - EssayParlour
1.A student has received a $30,000 loan from a wealthy aunt in order to finance his 4-year college program. The terms are that the student repay his aunt in full at the end of 8 years with simple
Read More
1.A company has issued a 5-year loan of $90,000 to a new vice president to finance a home improvement project. The terms of the loan are that his to be paid back in full at the end of 5 years with
Read More
A major airline is planning to purchase new airplanes. It wants to borrow $800 million by issuing bonds. The bonds are for a 10-year period with simple interest computed quarterly at a rate of 2
Read More
Let V be a finite-dimensional vector space over F, and let S, T ∈ L(V) be linear operators on V. Suppose that T has dim(V) distinct eigenvalues and that, given any eigenvector v ∈ V for T associated
Read More
show that the points A, B, C, D are concyclic.
Read More
Hi, I have to do a project in biomathematical modeling, so if someone can help me with that please contact me. I still have three weeks to do the project. Also, I do not have to build the model. I
Read More
Is the multiplication of two imaginary numbers imaginary or real? If I multiply 2i and 3i then it is −6, which is real. If −3 and −2 are multiplied then it gives 6, which is real. But if I do the
Read More
1.8424= (1/1+y) + (1/(1+y)^2) how did you get y =? need step by step Thanks
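One way to read this question, assuming "(1/1+y)" was meant as 1/(1+y), is the two-period present-value identity 1.8424 = 1/(1+y) + 1/(1+y)^2. Under that reading, substituting x = 1/(1+y) turns it into the quadratic x^2 + x - 1.8424 = 0, which a short sketch can solve:

import math

target = 1.8424
x = (-1 + math.sqrt(1 + 4 * target)) / 2   # positive root of x**2 + x - target = 0
y = 1 / x - 1                              # undo the substitution x = 1/(1+y)
print(x, y)                                # x is about 0.9465, so y is about 0.0565 (5.65%)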
Read More | {"url":"https://essayparlour.com/blog/category/applied-mathematics/","timestamp":"2024-11-15T00:53:50Z","content_type":"text/html","content_length":"54792","record_id":"<urn:uuid:5026b83c-001e-4ceb-bbee-d19bf8cffd3c>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00041.warc.gz"} |
Analyze minidump files after BSOD - ifconfig.dk
Analyze minidump files after BSOD
Microsoft Blue Screen Of Death (BSOD) – everyone knows them, everyone hates them. When the system crashes, and a BSOD is shown, a mini-dump file is created which contains a crash report. This tutorial
shows how to use Microsoft Debugging Tools to analyze this file and hopefully find the reason for the system crash.
1. Download and install Debugging Tools for Windows
You'll only need to install the Debugging Tools for Windows package.
Ones you are done installing start up Windows Debugging Tools:
2. Navigate to All Programs | Windows Kits | Debugging Tools for Windows (x86)| WinDbg (x86)
If you are on a 64 bit system you might want to use the WinDbg (x64) instead. When Windows Debugging Tools are up and running the first thing you want to do is to define how the program should
download debugging symbols.
3. Click on File | Symbol File Path…
The dialog box Symbol Search Path opens.
4. Click in the input field and type SRV*C:\Symbols*
5. Click the OK button
The dialog box Symbol Search Path closes. You are now ready to open the mini-dump file containing the crash report.
6. Click on File | Open Crash Dump…
The mini-dump files are stored in C:\Windows\minidump and are named according to the date of the crash. The dialog box Open Crash Dump opens.
7. Navigate to C:\Windows\minidump and click the newest mini-dump file
8. Click the Open button
The dialog box Open Crash Dump closes and the dialog box Workspace ‘base’ opens.
9. Click the NO button
The dialog box Workspace ‘base’ closes. A new window opens with a command prompt telling that the debugger is downloading the symbols, and loading the dump file. This will take at little while, so be
patient. Note that symbols and drivers from third parties cannot be loaded, and they can therefore result in errors and warnings. But often they are not needed to finder the driver or program causing
the problem.
Ones the dump file has been loaded, the cause of the problem is already becoming clearer: Probably caused by: ntoskrnl.exe ( nt+7cc40 ). To get even more information you can do a detailed analyze.
10. Click in the input field and type !analyze -v and press Enter
The first piece of valuable information is the BSOD error code MEMORY_MANAGEMENT (a1), this is worth googling, and might tell you what caused the crash.
A little further down, you get the DEFAULT_BUCKET_ID – which tells us which category the error is in. This is sometimes misleading, but it can give a general hint whether it is a software or hardware
Near the bottom you will see something called MODULE_NAME
11. Click the link next to MODULE_NAME
This gives you further information about the module which failed. Especially Image path tells you where the module was while it was running. In this case it was in the /system32 folder, which tells
you that it is a critical system process. In other cases it might have been in the /system32/drivers folder or somewhere else, which could have given you a clue on what it does.
You now know that it is a system process, and it is called ntoskrnl.exe. There is nothing left than go to Google and try out some suggested solutions. A quick search on the term ntoskrnl.exe + BSOD
gives quite a few results which looks promising.
I am in no way an expert, so everything in this tutorial is written from my own experience.
Thanks to my friend Martin for lending me the mini-dump file for the tutorial.
3 Responses to Analyze minidump files after BSOD
1. Might I suggest using BlueScreenView? It’s a light weight tool that does the most important, find the program that caused the BSOD. Once that’s done, the rest is up to your google’ing skillz 🙂
□ For the people asking how to point it to the minidump folder of another PC, check out windbg, part of the Windows Debugging Tools package. For some reason everyone seems to think it is hard to
use, but it’s not. Run it, click File -> Open Crash Dump and choose a minidump (or the MEMORY.DMP file). It loads, and at the end of the output tells you which driver is probably responsible
for the crash.To run it as a portable app, copy windbg.exe, dbgeng.dll and dbghelp.dll to your USB stick.
2. Cool, I’ll try it out…
Bookmark the permalink. | {"url":"https://ifconfig.dk/bsod/","timestamp":"2024-11-04T18:29:56Z","content_type":"text/html","content_length":"58845","record_id":"<urn:uuid:9bf5377a-5212-4f96-8d9c-a4dc2216460a>","cc-path":"CC-MAIN-2024-46/segments/1730477027838.15/warc/CC-MAIN-20241104163253-20241104193253-00660.warc.gz"} |
SUMIF - Sum Values Based on Criteria in Excel
The SUMIF function allows you to sum values based on a single criteria. This function works in all versions of Excel.
If you need to sum on more than 1 criteria, take a look at the SUMIFS function (note the S at the end of it).
=SUMIF(range, criteria, [sum_range])
Argument Description
Range This can be both the range against which the criteria is checked and the range that contains the values to sum or, if you use the sum_range argument, then this argument is only used to
check criteria.
The criteria that is used to determine which values to add together. This criterion is checked against the range in the range argument.
Criteria If you use text or numbers with operators, you must enclose them with double quotation marks, as shown in the below example.
To view all possible criteria operators and wildcards, go to the Criteria Comparison Operators and Wildcards sections lower in this tutorial.
[Sum_range] Optional. This tells the function which range of values to sum. Use this argument when the range of values to sum is different from the range in the range argument.
[] means the argument is optional.
Example 1 - Text
Let's sum all values that are for "west" in a data table.
This is a very simple example that matches text. Remember to enclose text criteria with double quotation marks.
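For instance, with the region names in A2:A10 and the values to add in B2:B10 (these cell ranges are illustrative, since the original worksheet screenshot is not reproduced here), the formula would be =SUMIF(A2:A10, "west", B2:B10).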
Example 2 - Numbers
Let's sum all values that equal 10. Now this is a rather silly example given my current data set, but it illustrates a couple things, mentioned below.
Notice that the third argument is not needed since the range against which to check the criteria is the same as the range to sum.
Also notice that there are no quotation marks around the number for the criteria argument. In the next example, the number will need quotation marks because it will be using a comparison operator,
but here that is not the case.
In this example, the result is 10 because there is only one 10; however, if you change another number to 10, the result will be 20, the sum of both numbers that equal 10.
Example 3 - Operators
Operators allow you to sum values that are greater than, less than, or not equal to a value.
Let's sum all numbers greater than 10.
">10" is the criteria argument. You can see the greater than sign that is paced before the number but within the double quotation marks.
It is very important that criteria operators are placed within the quotation marks!
Below, you will find an explanation of criteria operators and also wildcards. These help you make more sophisticated SUMIF functions.
Criteria Comparison Operators
You don't just have to check if a criterion is equal to a value; you can check if the criterion is greater than or less than a value or not equal to a value and more.
Here is a list of the comparison operators that you can use:
> Greater than. Means the values must be greater than whatever you put after this sign. This was used in the above example.
< Less than. Means the values must be less than whatever number you put after this sign.
>= Greater than or equal to. Means the values must be equal to or greater than the number that you put after this sign.
<= Less than or equal to. Means the values must be equal to or less than the number that you put after this sign.
<> Not equal to. This works for both numbers and text and says that the value must not be equal to whatever you put after this sign.
Note: it is very important that you put these signs inside of double quotation marks, even when you use them with a number (illustrated in the example above).
To learn more, check out our tutorial on comparison operators in Excel.
You can also use what are called wildcards in this function.
? Question mark. Matches any character. So, "Th?n" would match "Then" "Than" "Th8n" etc.
* Asterisk. Matches any number of any characters that come before, after, or in the middle of a value. So, "*hi*" would match "oh hi there" "hi there" "oh hi" etc. As you can see, you can put the
asterisk before and after the value, but you can also put it only before the value if you want to match everything that comes before the value or after it to match everything that comes after it.
~ Tilde. This allows you to literally match a ? or a * character. To match a question mark, do this "~?" and to match an asterisk do this "~*" and to match a tilde with a question mark or with an
asterisk do this "~~?" or this "~~*".
Wildcards can be confusing at first and they are not often used in Excel but they can be very helpful.
To learn more, check out our tutorial on wildcards in Excel.
The SUMIF function is a very helpful function that allows you to perform quick filters on data in order to sum only what you want to sum.
If you need to have more than 1 criterion for the SUMIF function, use the SUMIFS function (note the S at the end of it).
Make sure to download the file for this tutorial so you can work with these examples in Excel. | {"url":"https://www.teachexcel.com/excel-tutorial/1906/sumif-sum-values-based-on-criteria-in-excel?nav=sim_side_col","timestamp":"2024-11-04T08:15:49Z","content_type":"application/xhtml+xml","content_length":"49356","record_id":"<urn:uuid:e6d7c95b-7145-4727-9f21-33222b4241b6>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00749.warc.gz"} |
Principles of working with models in Data Science
Today Data Science tools are more in demand than ever, because they give analysts tremendous opportunities for modeling and classifying the real world. As we recall, the more closed the system is (with variables and relationships that stay roughly constant), the better such a model works. In a world of chaos, where the variables and their relationships change significantly, such models will certainly not work.
Model training is carried out much like training an expert: you collect a set of relevant data, classify it, analyze the relationships, and accumulate relevant experience. To solve a problem with machine learning methods, the algorithm must be fed a sufficient amount of input data from which it can learn. This is called the training dataset, or training sample.
In order to make predictions, it is necessary to identify the relationship between the features of the original data and the responses (the desired value). The Data Scientist starts by making a guess
about exactly how these relationships work. Then, based on this assumption, he makes predictions. If they correspond to reality, this means that the assumption is correct. This approach is called
“modeling”, and the assumptions and prediction methods themselves are called: “machine learning models”.
Today we will get acquainted with the basic utilitarian models that can be used for forecasting and classification, these are:
• Decision tree
• Random forest
• Logistic regression
A decision tree describes a decision-making process for almost any problem. Based on the values of the features, specific answers are given, and a tree is formed from the "Yes" / "No" answers and the different options for decisions or actions.
A random forest is a learning algorithm in which a number of mutually independent trees are built, and the algorithm then combines their answers by voting. In many cases, a random forest improves the quality of prediction and helps to avoid overfitting.
Logistic regression is a classification algorithm that predicts the probability of some event by fitting a logistic curve. In logistic regression, the number of parameters is usually limited, so it is harder for the algorithm to adapt too closely to the features in the formula, which reduces the probability of overfitting.
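As a hands-on illustration (the article itself does not mention any particular library), the three model families can be fitted in a few lines with scikit-learn on a toy dataset:

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (DecisionTreeClassifier(random_state=0),
              RandomForestClassifier(n_estimators=100, random_state=0),
              LogisticRegression(max_iter=1000)):
    model.fit(X_train, y_train)                                 # learn from the training sample
    print(type(model).__name__, model.score(X_test, y_test))   # accuracy on held-out data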
In the next article we will take a closer look at how to compare models with each other and evaluate their quality. I continue to publish articles about business development incl. digital and
information technologies, and also continue to provide business consulting. If you are interested in articles on this topic, then subscribe to my Telegram channel: https://t.me/biz_in. If you need
business consulting support, then I am waiting for you on my website: https://akonnov.ru/. | {"url":"https://akonnov.ru/tpost/n7t1uener1-principles-of-working-with-models-in-dat","timestamp":"2024-11-03T04:34:56Z","content_type":"text/html","content_length":"40209","record_id":"<urn:uuid:7d3d1034-2ffc-43ef-b5a7-ef2df832afa3>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00871.warc.gz"} |
optimal algorithms
We consider a general class of second-order iterations for unconstrained optimization that includes regularization and trust-region variants of Newton’s method. For each method in this class, we
exhibit a smooth, bounded-below objective function, whose gradient is globally Lipschitz continuous within an open convex set containing any iterates encountered and whose Hessian is $\alpha-$Holder
continuous (for … Read more
We describe three algorithms for solving differentiable convex optimization problems constrained to simple sets in $ \R^n $, i.e., sets on which it is easy to project an arbitrary point. The first
two algorithms are optimal in the sense that they achieve an absolute precision of $ \varepsilon $ in relation to the optimal value … Read more | {"url":"https://optimization-online.org/tag/optimal-algorithms/","timestamp":"2024-11-13T21:42:07Z","content_type":"text/html","content_length":"84764","record_id":"<urn:uuid:3b84d092-3869-482a-aae8-224607241c9d>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00301.warc.gz"} |
Buffalo Bill wishes to cross a 1000 × 1000 square field. A number of snakes are on the field at various positions, and each snake can strike a particular distance in any direction. Can Bill make the trip without being bitten?
Input
The input begins with a single positive integer on a line by itself indicating the number of the cases following, each of them as described below. This line is followed by a blank line, and there is also a blank line between two consecutive inputs.
Assume that the southwest corner of the field is at (0,0) and the northwest corner at (0,1000). The input consists of a line containing n ≤ 1000, the number of snakes. A line follows for each snake, containing three real numbers: the (x,y) location of the snake and its strike distance. The snake will bite anything that passes closer than this distance from its location. Bill must enter the field somewhere between the southwest and northwest corner and must leave somewhere between the southeast and northeast corners.
Output
For each test case, the output must follow the description below. The outputs of two consecutive cases will be separated by a blank line.
If Bill can complete the trip, give coordinates at which he may enter and leave the field. If Bill may enter and leave at several places, give the most northerly. If there is no such pair of positions, print ‘Bill will be bitten.’
Sample Input
1

3
500 500 499
0 0 999
1000 1000 200
Sample Output
Bill enters at (0.00, 1000.00) and leaves at (1000.00, 800.00). | {"url":"https://ohbug.com/uva/10376/","timestamp":"2024-11-03T15:33:39Z","content_type":"text/html","content_length":"2764","record_id":"<urn:uuid:91d85334-795e-4caf-a214-ff1f08f3b7a0>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00676.warc.gz"} |
Attometers to Miles (Roman) Converter
How to use this Attometers to Miles (Roman) Converter
Follow these steps to convert given length from the units of Attometers to the units of Miles (Roman).
1. Enter the input Attometers value in the text field.
2. The calculator converts the given Attometers into Miles (Roman) in real time using the conversion formula, and displays the result under the Miles (Roman) label. You do not need to click any button. If
the input changes, Miles (Roman) value is re-calculated, just like that.
3. You may copy the resulting Miles (Roman) value using the Copy button.
4. To view a detailed step by step calculation of the conversion, click on the View Calculation button.
5. You can also reset the input by clicking on the reset button present below the input field.
What is the Formula to convert Attometers to Miles (Roman)?
The formula to convert given length from Attometers to Miles (Roman) is:
Length[(Miles (Roman))] = Length[(Attometers)] / 1.4798039318982393e+21
Substitute the given value of length in attometers, i.e., Length[(Attometers)] in the above formula and simplify the right-hand side value. The resulting value is the length in miles (roman), i.e.,
Length[(Miles (Roman))].
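A direct transcription of this formula into Python (the conversion constant is taken from the page as-is):

ATTOMETERS_PER_ROMAN_MILE = 1.4798039318982393e+21

def attometers_to_roman_miles(am):
    return am / ATTOMETERS_PER_ROMAN_MILE

print(attometers_to_roman_miles(1))   # about 6.8e-22, which rounds to 0.00 at two decimal places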
Consider that the wavelength of a gamma-ray photon is around 1 attometer.
Convert this wavelength from attometers to Miles (Roman).
The length in attometers is:
Length[(Attometers)] = 1
The formula to convert length from attometers to miles (roman) is:
Length[(Miles (Roman))] = Length[(Attometers)] / 1.4798039318982393e+21
Substitute given weight Length[(Attometers)] = 1 in the above formula.
Length[(Miles (Roman))] = 1 / 1.4798039318982393e+21
Length[(Miles (Roman))] ≈ 6.76 × 10^(-22)
Final Answer:
Therefore, 1 am is approximately 6.76 × 10^(-22) mi (roman), which the converter displays as 0 at the precision shown.
The length is approximately 6.76 × 10^(-22) mi (roman).
Consider that the scale of nuclear interactions is on the order of 10 attometers.
Convert this scale from attometers to Miles (Roman).
The length in attometers is:
Length[(Attometers)] = 10
The formula to convert length from attometers to miles (roman) is:
Length[(Miles (Roman))] = Length[(Attometers)] / 1.4798039318982393e+21
Substitute given weight Length[(Attometers)] = 10 in the above formula.
Length[(Miles (Roman))] = 10 / 1.4798039318982393e+21
Length[(Miles (Roman))] ≈ 6.76 × 10^(-21)
Final Answer:
Therefore, 10 am is approximately 6.76 × 10^(-21) mi (roman), which the converter displays as 0 at the precision shown.
The length is approximately 6.76 × 10^(-21) mi (roman).
Attometers to Miles (Roman) Conversion Table
The following table gives some of the most used conversions from Attometers to Miles (Roman).
Attometers (am) Miles (Roman) (mi (roman))
0 am 0 mi (roman)
1 am 0 mi (roman)
2 am 0 mi (roman)
3 am 0 mi (roman)
4 am 0 mi (roman)
5 am 0 mi (roman)
6 am 0 mi (roman)
7 am 0 mi (roman)
8 am 0 mi (roman)
9 am 0 mi (roman)
10 am 0 mi (roman)
20 am 0 mi (roman)
50 am 0 mi (roman)
100 am 0 mi (roman)
1000 am 0 mi (roman)
10000 am 0 mi (roman)
100000 am 0 mi (roman)
An attometer (am) is a unit of length in the International System of Units (SI). One attometer is equivalent to 0.000000000000000001 meters, or 1 × 10^(-18) meters.
The attometer is defined as one quintillionth of a meter, making it an extremely small unit of measurement used for measuring subatomic distances.
Attometers are used in advanced scientific fields such as particle physics and quantum mechanics, where precise measurements at the atomic and subatomic scales are required.
Miles (Roman)
A mile (Roman) is an ancient unit of length used in the Roman Empire. One Roman mile is equivalent to approximately 1,481.5 meters or about 4,856.7 feet.
The Roman mile, known as "mille passus," is defined as 1,000 paces (or "passus"), where each pace is considered to be about 5 feet long.
Roman miles were used for various purposes, including surveying and road construction within the Roman Empire. Although no longer in common use, the Roman mile is of historical interest and is
occasionally referenced in discussions of ancient measurements and Roman history.
Frequently Asked Questions (FAQs)
1. What is the formula for converting Attometers to Miles (Roman) in Length?
The formula to convert Attometers to Miles (Roman) in Length is:
Attometers / 1.4798039318982393e+21
2. Is this tool free or paid?
This Length conversion tool, which converts Attometers to Miles (Roman), is completely free to use.
3. How do I convert Length from Attometers to Miles (Roman)?
To convert Length from Attometers to Miles (Roman), you can use the following formula:
Attometers / 1.4798039318982393e+21
For example, if you have a value in Attometers, you substitute that value in place of Attometers in the above formula, and solve the mathematical expression to get the equivalent value in Miles | {"url":"https://convertonline.org/unit/?convert=attometers-miles_roman","timestamp":"2024-11-06T21:52:45Z","content_type":"text/html","content_length":"90410","record_id":"<urn:uuid:a0fd6a95-49d4-44ab-a885-0420ed12051b>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00360.warc.gz"} |
Assume you have two assets with the same returns, A and B. Asset A has a higher standard deviation. How do their Sharpe ratios compare?
Understand the Problem
The question is asking for a comparison of the Sharpe ratios of two assets with the same returns, specifically focusing on how their different standard deviations affect the ratios. To solve this, we
will need to apply the formula for the Sharpe ratio, which is the difference between the asset's return and the risk-free rate divided by the standard deviation. Since asset A has a higher standard
deviation, its Sharpe ratio will be lower compared to asset B, which has the same return but a lower standard deviation.
Asset B has a higher Sharpe ratio because it has lower volatility.
Steps to Solve
1. Understanding the Sharpe Ratio Formula
The Sharpe ratio is calculated using the formula: $$ Sharpe \ Ratio = \frac{(R_a - R_f)}{\sigma_a} $$ where $R_a$ is the return of the asset, $R_f$ is the risk-free rate, and $\sigma_a$ is the
standard deviation of the asset's returns.
2. Applying the Formula to Both Assets
Given that both assets, A and B, have the same return ($R_a$) and risk-free rate ($R_f$), we can express the Sharpe ratios for both:
• For Asset A: $$ Sharpe \ Ratio_A = \frac{(R_a - R_f)}{\sigma_A} $$
• For Asset B: $$ Sharpe \ Ratio_B = \frac{(R_a - R_f)}{\sigma_B} $$
3. Comparison Based on Standard Deviation
Since Asset A has a higher standard deviation ($\sigma_A > \sigma_B$), the Sharpe ratio for Asset A will be lower: $$ Sharpe \ Ratio_A < Sharpe \ Ratio_B $$ This indicates that Asset B has a higher
Sharpe ratio because it has lower volatility.
Asset B has a higher Sharpe ratio because it has lower volatility.
More Information
A higher Sharpe ratio indicates better risk-adjusted returns. Since volatility (risk) is higher for Asset A, its Sharpe ratio decreases compared to Asset B, which offers the same return but lower volatility.
• Confusing returns with risk; it’s essential to understand that a higher return does not always indicate a better Sharpe ratio if the risk is significantly higher.
• Forgetting to account for the standard deviation when comparing ratios. | {"url":"https://quizgecko.com/q/assume-you-have-two-assets-with-the-same-returns-a-and-b-asset-a-has-a-higher-m81d5","timestamp":"2024-11-03T06:52:40Z","content_type":"text/html","content_length":"169953","record_id":"<urn:uuid:506ce42a-5939-4dac-8414-146f34c2c813>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00569.warc.gz"} |
Phase portraits of Jacobi elliptic functions
System of Jacobi elliptic functions
Jacobi’s elliptic functions are sorta like trig functions. His functions sn and cn have names that are reminiscent of sine and cosine for good reason. These functions come up in applications such as the
nonlinear pendulum (i.e. when θ is too
large to assume θ is a good enough approximation to sin θ) and in conformal mapping.
I ran across an article [1] yesterday that shows how Jacobi’s three elliptic functions—sn, cn, and dn—could be defined by one dynamical system
x' = yz, y' = -zx, z' = -k²xy
with initial conditions x(0) = 0, y(0) = 1, and z(0) = 1.
The parameter k is the modulus. (In Mathematica’s notation below, k² is the parameter. See this post on parameterization conventions.) As k decreases to 0, sn converges to sine, cn to cosine, and dn
to 1. As k increases to 1, sn converges to tanh, and cn and dn converge to sech. So you could think of k as a knob you turn to go from being more like circular functions (ordinary trig functions) to
more like hyperbolic functions.
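As a cross-check, the same system can be integrated numerically outside Mathematica; the sketch below uses SciPy and is not from the original post.

import numpy as np
from scipy.integrate import solve_ivp

def jacobi_system(t, state, k):
    x, y, z = state
    return [y * z, -z * x, -k**2 * x * y]

k = 0.707
sol = solve_ivp(jacobi_system, (0, 10), [0.0, 1.0, 1.0], args=(k,),
                rtol=1e-9, atol=1e-9)
print(sol.y[:, -1])   # numerical (x, y, z), which tracks (sn, cn, dn) at t = 10 for this modulus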
Since we have a dynamical system, let’s plot the solution, varying the modulus each time. The Jacobi functions are periodic (in fact they’re doubly periodic) and so the plots will be closed loops.
f[t_, m_] = {JacobiSN[t, m], JacobiCN[t, m], JacobiDN[t, m]}
ParametricPlot3D[f[t, 0.707], {t, 0, 10}, AspectRatio -> 1]
ParametricPlot3D[f[t, 0.99], {t, 0, 20}, AspectRatio -> 1]
ParametricPlot3D[f[t, 0.01], {t, 0, 20}, AspectRatio -> 1]
Note that this last plot is nearly flat because the modulus is small and so z is nearly constant. The small modulus also makes the phase portrait nearly circular because x is approximately sine and y
is approximately cosine.
[1] Kenneth R. Meyer. Jacobi Elliptic Functions from a Dynamical Systems Point of View. The American Mathematical Monthly, Vol. 108, No. 8 (Oct., 2001), pp. 729-737
3 thoughts on “System of Jacobi elliptic functions”
1. Nice! I usually get sn from the anharmonic oscillator, but this seems more fundamental.
2. So can we change k so that we end up with a spherical geometry?
As stated in the blog, sn and cn converge to sin and cos as k -> 0. That means if k = 0 you have a circle in the xy-plane located at z = 1.
Third Grade Free Printable 3rd Grade Math Word Problems Worksheets Pdf
Third Grade Free Printable 3rd Grade Math Word Problems Worksheets Pdf
These math sheets can be printed as extra teaching material for teachers, extra math practice for kids, or as homework material parents can use. This is a suitable resource page for third graders, teachers and parents.
Free Printable Worksheets For Second Grade Math Word Problems Word Problem Worksheets Math Words Math Word Problems
Name date 4 feet 4 feet 2 feet 2 feet 4 feet 2 feet 4 feet 2 feet perimeter 12 feet. Find the perimeter of each figure.
Third grade free printable 3rd grade math word problems worksheets pdf. Our word problem worksheets review skills in real world scenarios. 258 3rd grade math worksheets: Addition word problems 1. In this math worksheet your child will solve word problems using addition of 2-digit numbers. Third grade math worksheets: free pdf printables with no login.
Free printable worksheets and activities for 3rd grade in pdf. Multiplication and division are introduced along with fun math pages that are kid tested. Our third grade math worksheets continue
earlier numeracy concepts and introduce division decimals roman numerals calendars and new concepts in measurement and geometry.
3rd grade math worksheets printable pdf activities for math practice. Your third grade students will find themselves challenged with these math worksheets. Math english number addition subtraction
multiplication division grammar activity.
How many wheels are there in all. Perimeter the perimeter of a polygon is the distance around it. These free 3rd grade math word problem worksheets can be shared at home or in the classroom and they
are great for warm ups and cool downs transitions extra practice homework and credit assignments.
And if you're looking for more free 3rd grade math worksheets, check out this free library. Take the problem out of word problems with these math worksheets for third graders. Question 2: there are 60
cars in the parking lot.
Social studies science and the olympics are just some of the themes that will stimulate third graders as they apply addition subtraction and multiplication to these. This collection of worksheets
will help kids grasp how math applies in real world situations. Grade 3 mixed bag i word problems name.
Worksheets math grade 3. Choose your grade 3 topic. Free grade 3 math worksheets.
James has already 1 765 dollars. All worksheets are printable pdf files. How much more money does he need.
The following worksheets contain a mix of grade 3 addition subtraction multiplication and division word problems. Mixing math word problems is the ultimate test of understanding mathematical concepts
as it forces students to analyze the situation rather than mechanically apply a solution. Question 1 a new motorbike costs 3 000 dollars.
Each car has 4 wheels.
Grade 3 Maths Worksheets 13 6 Measurement Of Capacity Word Problems On Litres And Milli Word Problem Worksheets 3rd Grade Math Worksheets Math Word Problems
Word Problems Worksheets Dynamically Created Word Problems Subtraction Word Problems Word Problems 3rd Grade Word Problems
Spring Into Multiplication Multiplication Word Problems Using The Springtime Theme Math Word Multiplication Word Problems Math Word Problems Math Words
Boost Your 3rd Grader S Math Skills With These Printable Word Problems Math Word Problems 3rd Grade Math Worksheets Math Words
Grade 3 Maths Worksheets 8 5 Time Problems Lets Share Knowledge 3rd Grade Math Worksheets Math Word Problems Mathematics Worksheets
3rd Grade Math Word Problems Best Coloring Pages For Kids Word Problems Math Word Problems Math Words
Measurement Of Capacity Word Problems On Litres And Millilitres Worksheet Word Problem Worksheets 3rd Grade Math Worksheets 2nd Grade Worksheets
Grade 3 Addition Word Problem Worksheet Addition Word Problems Addition Words Word Problems
Word Problem Worksheets Word Problems 3rd Grade Word Problems Multiplication Word Problems
Grade 3 Maths Worksheets 12 7 Word Problems On Grams And Kilograms Word Problem Worksheets 3rd Grade Math Worksheets 2nd Grade Worksheets
4th Grade Number Math Word Problems Math Words Multi Step Word Problems
4 Free Math Worksheets Third Grade 3 Addition Word Problems 043 Addition Word Problems Worksh Math Words Math Word Problems Word Problem Worksheets
25 3rd Grade Math Word Problems Worksheets Pdf 5th Grade Worksheets Pdf Word Problem Worksheets Word Problems Math Word Problems
Word Problems In Fractions Grade 3 Math 3rd Grade Math Worksheets 3rd Grade Math Word Problems
Grade 3 Counting Money Worksheets Free Printable Money Word Problems Money Worksheets Word Problem Worksheets
The Division Word Problems With Division Facts From 5 To 12 A Math Worksheet From The Math Division Word Problems Word Problem Worksheets Money Word Problems | {"url":"https://askworksheet.com/third-grade-free-printable-3rd-grade-math-word-problems-worksheets-pdf/","timestamp":"2024-11-09T09:34:22Z","content_type":"text/html","content_length":"134959","record_id":"<urn:uuid:28817985-3d46-4926-a454-00483caca750>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.75/warc/CC-MAIN-20241109085148-20241109115148-00520.warc.gz"} |
Crystal E. Owens - Pinewood Derby Racecar
Overall goal: Use insights from solving dynamics equations to design and make a winning racecar propelled only by gravity on a set track. (We won second place!)
In the Pinewood Derby car project, students design wooden cars with the intent of achieving maximum velocity (and the fastest race time) in a race down a pre-built track. Certain forces and moments,
both conservative and dissipative, act on the car during its travel on the track. These forces and moments both accelerate and decelerate the car. The forces affect the change in energy of the car,
which ultimately dictates the velocity of the car. Throughout the Dynamics course, the effects of such forces on moving bodies are studied in detail. Understanding these principles allowed us to
design a car with optimum parameters to
maximize velocity.
The track is composed of one angled section, one curved section, and one flat section. The sections are continuous, but the exact equation of motion on each part of the track differs. Throughout the
entire motion of the car, the car experiences a number of forces. These forces include gravitational force (weight), normal force, drag force, and frictional force exerted on the wheel as a moment.
These forces contribute to changes in the car’s energy and thus affect the motion of the car. Mathematical analysis of energy terms allows for derivation of the EOM for the car, which allows for
understanding of how each parameter and force contributes to the motion of the car. Utilizing this information, the student can determine how to alter the parameters of the car in order to optimize
its velocity.
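As a purely illustrative sketch of the kind of equation of motion described above (and not the students' actual model or measured values), one can integrate a one-dimensional EOM for the angled section with gravity, rolling friction, and quadratic aerodynamic drag. Every parameter value below is assumed for illustration only.

# Illustrative only: 1-D model of the car on the angled track section.
# All numbers are invented, not the lab-measured values described above.
import numpy as np
from scipy.integrate import solve_ivp

g = 9.81                     # m/s^2
m = 0.142                    # car mass, kg (assumed)
theta = np.radians(25.0)     # incline angle (assumed)
mu = 0.08                    # effective rolling/axle friction coefficient (assumed)
rho, Cd, A = 1.2, 0.8, 0.0025  # air density, drag coefficient, frontal area (assumed)

def rhs(t, y):
    s, v = y
    drag = 0.5 * rho * Cd * A * v**2 / m          # aerodynamic deceleration
    a = g * (np.sin(theta) - mu * np.cos(theta)) - drag
    return [v, a]

sol = solve_ivp(rhs, (0.0, 2.0), [0.0, 0.0], max_step=0.01)
print(f"speed after a 1.2 m ramp ~ {np.interp(1.2, sol.y[0], sol.y[1]):.2f} m/s")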
Experiments were conducted in the lab in order to measure values for the center of mass, mass moment of inertia (MMOI), drag coefficient, and coefficient of kinetic friction of both a standard car
(provided by instructor) and the custom designed car. Ultimately, it was confirmed that the values for each of these parameters were optimized in the custom design versus the standard car.
Through mathematical derivations of the EOMs, it was discovered that a low center of mass would increase the velocity of the car in angled parts of track, while a large mass moment of inertia would
decrease the velocity of the car through the curved portion of track. A relatively high mass, however, allows the car a higher acceleration through the curved and angled section of track. Further,
equations of motion confirmed that reducing surface area of the car exposed to air would reduce the magnitude of the dissipative drag force. Finally, reduction of the coefficient of friction via
lubrication and sanding of axles corresponds to an increase in car velocity. | {"url":"https://www.crystalowens.com/class-projects/pinewood-derby-racecar","timestamp":"2024-11-09T16:52:10Z","content_type":"text/html","content_length":"89468","record_id":"<urn:uuid:bf94c0d1-97c7-4e0d-a0f0-bcf4a6f26a0a>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00645.warc.gz"} |
An Exceptional Generalization of the Poisson Distribution
Open Journal of Statistics, 2012, 2, 313-318
http://dx.doi.org/10.4236/ojs.2012.23039 Published Online July 2012 (http://www.SciRP.org/journal/ojs)
An Exceptional Generalization of the Poisson Distribution
Per-Erik Hagmark
Department of Mechanics and Design, Tampere University of Technology, Tampere, Finland
Email: per-erik.hagmark@tut.fi
Received May 4, 2012; revised June 10, 2012; accepted June 23, 2012
A new two-parameter count distribution is derived starting with probabilistic arguments around the gamma function and
the digamma function. This model is a generalization of the Poisson model with a noteworthy assortment of qualities.
For example, the mean is the main model parameter; any possible non-trivial variance or zero probability can be attained by changing the other model parameter; and all distributions are visually natural-shaped. Thus, exact modeling to
any degree of over/under-dispersion or zero-inflation/deflation is possible.
Keywords: Count Data; Gamma Function; Poisson Generalization; Discretization; Modeling; Over/Under-Dispersion;
1. Introduction and the Main Result
In count data modeling the Poisson distribution is usually the first option, but real data can indicate a variety of discrepancies. These can be genuine features or secondary consequences of e.g. censoring, clustering, approximations or correlations. Specifically, the Poisson model has no dispersion flexibility because the mean determines the variance and the zero probability, σ2 = μ, p0 = e–μ, while the real data can display over- or under-dispersion, σ2 ≠ μ, or zero-inflation or deflation, p0 ≠ e–μ [1]. Such situations are usually handled e.g. by randomizing the Poisson mean, by mixtures, by adding a new parameter, by reweighing the Poisson point probabilities, or via generalizing the exponential increments in the homogeneous Poisson process [2-5]. Our approach will be different.
We recall an elementary fact. The mean-deviation pair (μ, σ) of a non-binary count variable (non-negative integer-valued random variable) always satisfies the inequality
σ2 > (μ – [μ])(1 – μ + [μ]), (1)
where [μ] is the largest integer not exceeding μ. Thus, we will say that a count model (parameterized count variable) has full dispersion flexibility if every positive solution (μ, σ) of the inequality (1) is the mean-deviation pair for some parameter values.
In [6] we called for a mathematically unified count model N(μ, β) with two independent parameters, μ > 0, β > 0, and the following properties:
1) Comfortable parameterization: E(N(μ, β)) = μ, for all μ and β.
2) Generalization of the Poisson model: For β = 1, Pr{N(μ, 1) = n} = e^(–μ) μ^n/n!, n = 0, 1, ···.
3) Full dispersion flexibility: If the numbers μ > 0 and σ > 0 satisfy inequality (1), then there is a β such that Var(N(μ, β)) = σ2.
The solution to be presented in this paper obeys the following cumulative probabilities:
1, 1,1,
nG ng
nG ng
where g(t, x) and G(t, x) are the one-parameter gamma probability and cumulative distribution functions, respectively, with parameter x and variable t (Section 2).
We begin with the derivation of fundamental inequalities in Section 2. These inequalities lead to a cumulative distribution H(x, μ), where the parameter μ > 0 is the mean. Then the insertion of a new independent parameter β > 0 provides an extended cumulative distribution H(x/β, μ/β) and the related non-negative two-parameter random variable X(μ, β), where μ is still the mean. Now the proclaimed count model N(μ, β) is defined as a mean-preserving discretization of X(μ, β), and the above properties 1), 2), 3) are proved. Thereafter the most immediate applications are given; namely, exact modeling of over/under-dispersion or zero-inflation/deflation to any possible degree. In the last section, we propose motives for further research, and we compare N(μ, β) with well-established Poisson generalizations.
2. Derivation of Two Inequalities
We start with notation: Gamma function Γ(x) as Euler's second integral, digamma function Ψ(x), some related functions and immediate interrelations:
g(t, x) := e^(–t) t^(x–1)/Γ(x), t > 0, x > 0,
a(t, x) := ∂g(t, x)/∂x = g(t, x)(ln(t) – Ψ(x)),
b(t, x) := ∂a(t, x)/∂x = a(t, x)(ln(t) – Ψ(x)) – g(t, x) dΨ(x)/dx,
A(t, x) := ∂G(t, x)/∂x = ∫_0^t a(s, x) ds,
B(t, x) := ∂A(t, x)/∂x,
lim_(t→∞) A(t, x) = 0, lim_(t→∞) B(t, x) = 0, x > 0.
There is a nice probabilistic perspective on the gamma function: If the random variable T has a gamma density g(t, x), then E(ln(T)) = Ψ(x) and Var(ln(T)) = dΨ(x)/dx [7]. In terms of our notation above, these simple observations can be written in the form
∫_0^∞ a(t, x) dt = 0, ∫_0^∞ b(t, x) dt = 0. (3)
Additional work leads to a stronger result,
∫_0^∞ A(t, x) dt = –1, ∫_0^∞ B(t, x) dt = 0. (4)
Namely, integration by parts, the functional equation t g(t, x) = x g(t, x + 1), formula (3), and l’Hospital’s rule allow us to write
tx t
x t
x t
,dAtx t
Atx tgtx
xgtx t
xx xx
0,dBtx t
lim ,, ln
Btx tgtxtx
xxx x
Next we derive two fundamental inequalities. For every fixed x > 0, the function a(t, x) has exactly one root, and it is increasing there. This and (3, left side) imply
A(t, x) < 0, t > 0, x > 0. (5)
Now, taking into account (5) and (4, left side), we obtain the first inequality
–1 < ∫_0^t A(s, x) ds < 0, t > 0, x > 0. (6)
Further, for every fixed x > 0, the function b(t, x) has exactly two roots, t0 < t1, and it is decreasing at t0 and increasing at t1. From this one can conclude that B(t, x) has, for every x > 0, a positive local maximum at t0 and, because of (3, right side), a negative local minimum at t1. Considering (4, right side) too, we finally arrive at the second inequality
0 < ∫_0^t B(s, x) ds, t > 0, x > 0. (7)
3. A Mean-Preserving Discretization
We will also need a certain discretization procedure: If X is a non-negative random variable with cumulative distribution F(x), the discretization of X is a count variable N with cumulative probabilities equal to the mean of F(x) on the interval (n, n + 1), i.e.
Pr{N ≤ n} = ∫_n^(n+1) F(x) dx. (8)
We shortly quote the basic properties from [6]: The mean and the variance of N exist (are finite) if and only if the mean and the variance of X exist, and in that case
E(N) = E(X), (9)
4. A Generalization of the Poisson Model
In our construction of a new generalization of the Poisson model, the following one-parameter function will be the central ingredient:
H(x, μ) := 1 + ∫_0^μ A(t, x) dt, x > 0, μ > 0. (11)
Recalling (5) and the notation A(t, x) = ∂G(t, x)/∂x from Section 2, we derive
∫_0^∞ (1 – H(x, μ)) dx = –∫_0^∞ ∫_0^μ A(t, x) dt dx = –∫_0^μ (G(t, ∞) – G(t, 0)) dt = μ. (12)
In (12) we first changed the integration order (as the integrand is positive) and then employed the limits
G(t, 0) := lim_(x→0) G(t, x) = 1, (13a)
G(t, ∞) := lim_(x→∞) G(t, x) = 0. (13b)
The limits (13) follow from Chebyshev’s inequality and the simple fact that the parameter x of the one-parameter gamma density g(t, x) equals the mean and the variance.
By employing the inequalities (6) and (7), we have 0 < H(x, μ) < 1 and ∂H(x, μ)/∂x > 0. Hence, H(x, μ) is a cumulative probability distribution with mean μ (12) and zero probability p0 := lim_(x→0) H(x, μ). We proceed by adding an independent parameter β > 0, so defining a two-parameter cumulative distribution,
F(x, μ, β) := H(x/β, μ/β), x ≥ 0, μ, β > 0. (14)
Now, let X(μ, β) be the non-negative random variable determined by F(x, μ, β), and let N(μ, β) be the discretization of X(μ, β), according to Section 3. We form an integral function of (14) and get the cumulative probabilities of N(μ, β) using (8):
,, :,IxFx , d
Gs s
,,:Pr, )
1, ,, ,
PnN n
In In
tn t
E, E, 1,d
E,1 ,
Nμβ Xμβ
proving Property 1). Next, we fix β = 1 in (16) and employ the identities G(t, x) – G(t, x + 1) = g(t, x + 1) and G(t, 0) = 1 (13a). Now Pr{N(μ, 1) ≤ n} = 1 – G(μ, n + 1), so the point probabilities are Pr{N(μ, 1) = n} = G(μ, n) – G(μ, n + 1) = e^(–μ) μ^n/n!, n = 0, 1, ···. This means that the
sub-model N(μ, 1) is the Poisson model, so Property 2)
holds true (see case β = 1 in Figure 1).
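As a quick numerical sanity check of the identity used in this proof (my own check, not the paper's, using SciPy's regularized lower incomplete gamma function as G), one can verify that G(μ, n) – G(μ, n + 1) reproduces the Poisson point probabilities:

# Check: G(mu, n) - G(mu, n+1) = exp(-mu) mu^n / n!,
# where G(t, x) is the gamma cdf with shape x, i.e. the regularized
# lower incomplete gamma function P(x, t).
import math
from scipy.special import gammainc   # gammainc(x, t) = P(x, t)

mu = 3.2
for n in range(8):
    G_n   = 1.0 if n == 0 else gammainc(n, mu)   # G(t, 0) = 1, by (13a)
    G_np1 = gammainc(n + 1, mu)
    print(n, G_n - G_np1, math.exp(-mu) * mu**n / math.factorial(n))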
5. Full Dispersion Flexibility
Property 3), Section 1, remains to be proved. Given any
positive pair (μ, σ) satisfying
we have to prove that there is a β > 0 such that Var(N(μ,
β)) = σ2. Figure 2 is an illustration.
First, one obtains an upper bound for the variance of
X(μ, β) by employing Properties 1) and 2), (10, left side)
and routines:
The pair X(μ, β) and N(μ, β) is illustrated in Figure 1.
Proof of Properties 1) and 2), Section 1. By consider-
ing (9, 12, 14) one can see that the mean does not change
during the process from H(x, μ) to N(μ, β):
Var,2 1,d
Var ,1
Var,1 .
Then (18) and (10, right side) imply Var(N(μ, β)) < ∞.
After noting that Var(N(μ, β)) is a continuous function of
β (for fixed μ) and recalling inequality (1), it is enough to
prove the following limits:
lim_(β→0) Var(N(μ, β)) = (μ – [μ])(1 – μ + [μ]), (19)
lim_(β→∞) Var(N(μ, β)) = ∞. (20)
Figure 1. Cumulative distributions of X(μ, β) and N(μ, β), for μ = 3.2 and β = 1, 0.6, 4, 0.1.
Figure 2. The variance Var(N(μ, β)) as a function of β, for μ = 3.2 and μ = 0.7. Poisson point (β = 1, σ2 = μ); lower bound
Proof of (19). From (18) it follows that Var(X(μ, β))
tends to zero as β→0. This means that X(μ, β) approaches
the constant µ (in distribution). This again means that the
discretization N(μ, β) approaches μ if this is an integer,
and otherwise a binary count variable with the values [μ]
and [μ]+1; see [6]. In both cases the limit of Var(N(μ, β))
obeys (19).
Proof of (20). Definition (11) and partial integration
yield the identity
xHx x
GtM t
21 ,d
The first term on the right side vanishes when M→∞,
since M·G(t, M) ≤ t^M/Γ(M). Now by changing the integration order in the latter term, one obtains
E,1 2
xHx x
Lsgsxx esxx
Then, by using (21) and part of (18), and changing in-
tegration variable, z = βt, one arrives at
E, E
Further, the inequality
ln ,
sCD s
, s > 0, x
> 0, yields a lower bound for L(s):
0,dLsgsxx e
d0, d0.
This means that L(s) tends to ∞ as s→0, and so the average of L in the interval (0, z/β) approaches ∞ as β→∞ (22). Thereby, E(X(μ, β)2) grows to ∞, so (17) and (10, left side) complete the proof of (20).
6. Computing and Applications
When working with N(μ, β), the following numbers are
,,1, 0,1,
nG ngn
The latter faster version follows from partial integra-
tion and the identities G(t, x) – G(t, x + 1) = g(t, x + 1),
G(t, 0) = 1 (13a). Note also that most mathematical soft-
ware offers fast computation of G(t, x). Employing (23)
in (16), basic formulas can be written in the following
Pr,) 1,, ,
NnK K
12 1,,
Var, 2,.
We consider exact modeling of count variables. (For
numerical examples, see Table 1).
Application 1. Generally, a non-binary count variable
with desired mean μ and variance σ2 exists if and only if
σ2 > (μ – [μ])(1 – μ + [μ]). (27)
In that case N(μ, β) always provides a solution. Indeed,
because of full dispersion flexibility, Property 3), there
Table 1. Under/over-dispersion and zero-deflation/inflation.
Phenomenon General range Numerical example Solution
Under-dispersion (μ – [μ])(1 – μ + [μ]) < σ2 < μ μ = 3.2 σ2 = 2.4 β = 0.7253
Poisson σ2 = μ (equi-dispersion) μ = 3.2 σ2 = 3.2 β = 1
Over-dispersion μ < σ2 < ∞ μ = 3.2 σ2 = 4.5 β = 1.4644
Zero-deflation max{0,1 – μ}< p0 < e–μ μ = 3.2 p0 = 0.01 β = 0.5622
Poisson p0 = e–μ μ = 3.2 p0 = 0.04076... β = 1
Zero-inflation e–μ < p0 < 1 μ = 3.2 p0 = 0.15 β = 2.2949
is a β > 0 such that Var(N(μ, β)) = σ2 (26).
Application 2. Likewise, a non-binary count variable
with desired mean μ and zero probability p0 exists if and
only if
max{0, 1 – μ} < p0 < 1. (28)
Again N(μ, β) provides a solution. Arguments like those in Section 5 would show that there is a β > 0 such that Pr{N(μ, β) = 0} = p0 (24, n = 0).
Application 3. Suppose there is a real non-censored
random sample available of the unknown non-binary
count variable to be modeled. Let x̄ be the sample mean, s2 the standard variance and p̂0 the zero fraction. It is easy to prove that these UMVU estimates also meet (27, 28). Thus, there is a β1 that satisfies Var(N(x̄, β1)) = s2 and a β2 that satisfies Pr{N(x̄, β2) = 0} = p̂0 (both exactly), but of course, usually β1 ≠ β2. Importance weighing provides a compromise β and an approximate solution
7. Further Research and Discussion
Additional work is needed to enlarge the applicability of
N(μ, β). The computational behavior of the central for-
mulas 23-26 should be further explored, and tools for
stochastic simulation and statistical inference should be
developed. We put forward two concrete problems.
Problem 1. Numerical experimentation indicates that
the numbers Kn (23, n ≥ 1) increase with β (K0 = μ). If
this is true, all moments (25, k ≥ 2) increase with β, so
the iteration of β in the applications in Section 6 can be
made faster.
Problem 2. Find an algorithm for generation of ran-
dom variates from N(μ, β). The alias method [8] can of
course be used for truncated versions, but a tailor-made
method would be welcome. Actually, a generation meth-
od for X(μ, β) would be enough since, according to [6],
this can immediately be transformed to the discretization
N(μ, β).
Finally, we return to the main qualities of N(μ, β). As mentioned, the finite mean-deviation pair (μ, σ) of any non-binary count variable satisfies inequality (1), i.e. σ2 > (μ – [μ])(1 – μ + [μ]). Conversely, if (μ, σ) is a positive solution of (1), then it is the mean-deviation pair of a non-binary count variable; and as we have shown, there is always an N(μ, β) with this mean-deviation pair. Since the mean is an original model parameter of N(μ, β), only β needs to be solved from the equation Var(N(μ, β)) = σ2. We have called this feature “full dispersion flexibility”, because it enables exact modeling for the first two moments, or for mean and zero probability.
Full dispersion flexibility seems to be very rare even among well-established Poisson generalizations. The generalization of Consul and Jain [2], the negative binomial [3], the COM-Poisson distribution [4] and many others have severe shortcomings in dispersion flexibility, and also partly bad-shaped distribution functions. A positive exception is the General Poisson Law [5]. However, here the mean is not a model parameter, so, if a certain pair (μ, σ) is wanted, the original parameters must be solved simultaneously from two equations, which both include laborious infinite series.
Also note that the invariants (4) and (5), the inequalities (6) and (7), and the distribution (11) comprise, as such, a contribution to probabilistic treatment of the gamma function.
[1] J. Castillo and M. Perez-Casany, “Over-Dispersed and
Under-Dispersed Poisson Generalizations,” Journal of
Statistical Planning and Inference, Vol. 134, No. 2, 2005,
pp. 486-500. doi:10.1016/j.jspi.2004.04.019
[2] P. C. Consul and G. C. Jain, “A Generalization of the
Poisson Distribution,” Technometrics, Vol. 15, No. 4, 1973,
pp. 791-799. doi:10.2307/1267389
[3] N. L. Johnson, S. Kotz and A. W. Kemp, “Univariate
Discrete Distributions,” 2nd Edition, John Wiley & Sons,
New York, 1992.
[4] R. W. Conway and W. L. Maxwell, “A Queuing Model
with State Dependent Service Rates,” Journal of Indus-
trial Engineering, Vol. 12, 1962, pp. 132-136.
[5] G. Morlat, “Sur Une Généralisation de la loi de Poisson,”
Comptes Rendus, Vol. 235, 1952, pp. 933-935.
[6] P.-E. Hagmark, “On Construction and Simulation of Count
Data Models,” Mathematics and Computers in Simulation,
Vol. 77, No. 1, 2008, pp. 72-80.
[7] L. Gordon, “A Stochastic Approach to the Gamma Func-
tion,” The American Mathematical Monthly, Vol. 101, No.
9, 1994, pp. 858-865.
[8] A. J. Walker, “An Efficient Method for Generating Dis-
crete Random Variables with General Distributions,” ACM
Transactions on Mathematical Software, Vol. 3, 1977, pp.
253-256. doi:10.1145/355744.355749
Copyright © 2012 SciRes. OJS | {"url":"https://file.scirp.org/Html/20668.html","timestamp":"2024-11-04T05:05:43Z","content_type":"text/html","content_length":"139037","record_id":"<urn:uuid:f4bb16dc-6a14-4ddf-aa07-bdabd4935080>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00135.warc.gz"} |
date: 2021-06-13 21:33:14
Whether you are the owner of a company or an investor in one, the same story applies: "The numbers tell the story."
When it comes to analysing a company, there are various levels of details. The basic level is to understand revenue, expenses, assets, liabilities and the relationship of all of the aforementioned
with your cash flow. The second level though is to understand the health of a company.
So, what are financial ratios? They are simply figures extracted from a company’s income statement, balance sheet and cash flow and compared to one another in order to provide a clear financial
picture of a company’s earnings strength, liquidity, profitability or debt usage.
The following are 5 important financial ratios you have to know:
Return on equity (ROE)
Calculation: Net income / Shareholders Equity
ROE is considered to be one of the most important financial ratios, as it allows you to compare a company’s return relative to the amount invested. This could assist you in understanding the return
percentage and compare it to other alternative investments.
Gross profit margin
Calculation: Gross Profit / Revenue
As a rule of thumb, “margin” means dividing by revenue. Gross profit means revenue minus direct expenses. So for example, if you bought an apple for 2 US$ and sold it for 3 US$, your gross profit is 1 US$, while your gross profit margin is 33.3%.
Gross profit margin is also very important to understand, because if a company’s gross profit margin is negative, the current business model will never be profitable. For example, let us assume you sell oranges for 5 US$ each, but each orange actually costs 6 US$, while your other expenses are 500 US$. That means that even if you sell one million oranges you are never going to cover your expenses. In fact, every orange you sell means your total loss grows even further beyond 500 US$. How high your gross profit margin should be partly depends on how high your other expenses are.
EBITDA margin
Calculation: EBITDA / Revenue
EBITDA means Earnings (i.e. net profit) before interest, taxes, depreciation and amortization. This is also an important ratio because, continuing the example above, it captures not only the cost of the oranges sold but also the 500 US$ of other expenses that you pay.
Current ratio
Calculation: Current assets / current liabilities
The current ratio basically reveals whether the company is able to pay its current liabilities using its current assets. Current assets mainly include cash, inventory and accounts receivable. On the other hand, current liabilities mainly include accounts that are due to be paid within one year, like accounts payable and debt payments. In a nutshell, the current ratio simply asks: if you collected all your receivables, sold all your inventory and added both of these items to your cash balance, would you have enough cash to pay out all your current liabilities? As a rule of thumb, if this ratio is below one, then there is definitely a cash flow problem in the short term.
Financial leverage
Calculation: Total capital employed / Shareholders equity
Most businesses usually resort to borrowing in order to operate.
Financial leverage is an important ratio that reveals the degree in which a company uses debt (i.e. borrowed money).
Total capital employed is the accounting value of all interest-bearing debt plus all owners’ equity. So as a quick example, if you have 100 US$ in debt and the company invested 100 US$ from its
shareholders, that means your financial leverage would be 2x (200 / 100).
Usually, the higher this figure the riskier the company is. This is because there is more debt to be repaid. However, it should be noted that this depends on the sector and the country in which your
company is operating in. Additionally, the loan terms (for example the interest rate of the loan) are also important factors that should be taken into consideration when looking at this ratio.
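To make the five definitions concrete, here is a small illustrative calculation (all figures invented; this is not from the article):

# Toy example computing the five ratios described above.
def ratios(revenue, cogs, ebitda, net_income, equity,
           current_assets, current_liabilities, interest_bearing_debt):
    gross_profit = revenue - cogs
    return {
        "ROE": net_income / equity,
        "Gross profit margin": gross_profit / revenue,
        "EBITDA margin": ebitda / revenue,
        "Current ratio": current_assets / current_liabilities,
        "Financial leverage": (interest_bearing_debt + equity) / equity,
    }

example = ratios(revenue=1_000, cogs=600, ebitda=250, net_income=120,
                 equity=400, current_assets=300, current_liabilities=200,
                 interest_bearing_debt=400)
for name, value in example.items():
    print(f"{name}: {value:.2f}")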
Usually if you are planning to invest in a company, it is best to take into consideration the last three historical years of these ratios. This would be enough to tell you the direction and trends of
the company. These ratios can also be compared to other comparable companies so you can understand the relative strength of a company.
However, if you are analysing these figures for your own company for internal purposes, these figures should be much more thoroughly analysed and understood in order to improve them, if possible.
Do you need to value your company or find out the financial feasibility of your new idea? Check out Front Figure. It is quicker and more cost effective compared to other traditional methods. | {"url":"https://www.frontfigure.com/blog-show/8","timestamp":"2024-11-07T16:12:29Z","content_type":"text/html","content_length":"16558","record_id":"<urn:uuid:6f26ad2e-e6c3-4c63-bf1e-e45740b5ac52>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00415.warc.gz"} |
Four Layers
Can you create more models that follow these rules?
This may be used to follow on from Cubes Here and There.
Looking at the three models here you may see that they have a lot in common although they are obviously different.
The things that are the same produce the rules.
So the rules are;
$1$/ Each colour stays at the same level in each model.
$2$/ Cubes of the same colour are not separated - they stay together.
$3$/ The number of cubes of each colour is fixed at $1, 2, 3$ and $4$.
$4$/ The cubes sit squarely face to face with no twists or slides.
Your challenge is to create more shapes that follow the four rules.
When you have done so, compare them and show similarities and differences.
Getting Started
Have you checked that your model obeys the rules?
How will you make sure that you don't repeat any?
Teachers' Resources
Why do this problem?
This activity challenges the most able pupils in their spatial awareness abilities. It also enables them to have something before them to explore and compare.
Possible approach
As this is intended for the most able I would suggest printing out the activity and discussing together first of all.
You could get started by asking the group to give you instructions to make the second or third model. Then let them produce their creations.
Key questions
Tell me about your shapes.
So what have you found when comparing them?
What can you now explore about these?
Possible extension
Pose the question about balance, asking "Does it matter if the model is stable?".
You could encourage children to explore models containing an archway/bridge. | {"url":"https://nrich.maths.org/problems/four-layers","timestamp":"2024-11-02T15:55:14Z","content_type":"text/html","content_length":"40858","record_id":"<urn:uuid:65466826-7959-4e79-83b7-298c92a1d68d>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00072.warc.gz"} |
SNuBIC Research Unit
Project B3
Principal Investigator:
Christiane Helzel, Applied Mathematics, Heinrich-Heine University Düsseldorf
Main Collaborators in the Research Unit:
Gregor Gassner, Division of Mathematics, University of Cologne
Manuel Torrilhon, Applied and Computational Mathematics, RWTH Aachen University
Structure-Preserving Methods for Complex Fluids
We construct numerical methods for a system of partial differential equations consisting of a kinetic equation coupled to a macroscopic flow equation, which models sedimentation in suspensions of
rod-like particles. A hierarchy of moment equations will be derived which approximates the high-dimensional scalar kinetic equation with a lower dimensional system. The number of moment equations
will be adjusted locally based on the dynamics of the problem. This requires an interface coupling of moment systems with different resolution. For the underlying mathematical model thermodynamic
consistency can be shown. We will investigate whether this structure can be preserved by the numerical method. | {"url":"https://www.snubic.io/projects/focus-area-b/project-b3","timestamp":"2024-11-05T16:15:34Z","content_type":"text/html","content_length":"20264","record_id":"<urn:uuid:ae60f48f-9998-43ee-95d0-653bdc9a992a>","cc-path":"CC-MAIN-2024-46/segments/1730477027884.62/warc/CC-MAIN-20241105145721-20241105175721-00680.warc.gz"} |
[tlaplus] Proving properties of recursive function
[tlaplus] Proving properties of recursive function
I am again stuck with proving properties of a recursive function.
I pretty much dug into the well-founded induction proofs library, but I do not know how to "connect" the Def(_,_) operator with the function I want to work with.
I tried to derive something from the fold functions, but actually the proofs of the well-founded induction library do not include ternary functions like Def(sum.fun, set, transform), which would be necessary to do something like a "transform reduce" operation on a set.
Therefore I just transformed the set I want to process into a set of 2-tuples (like a zip iterator). But this is just to explain the code below.
What I do not understand is how I can make TLAPS understand that my DefBagZipSum actually does what is behind the BagSum. These CHOOSE constructs that appear when expanded by TLAPS are really tricky.
I am stuck in LEMMA OneElementBagSum step <1>4.
I expect that the solution of this problem will also help with the LEMMA SumOfBagsAdditive.
My BagSum is meant to summarize the indices of graph edges of a graph.
Thank you in advance,
--------------------------- MODULE BagWellFounded ---------------------------
EXTENDS TLC, Integers, FiniteSets, Sequences, Bags, TLAPS
\* Definitions
\* Define a finite set of edges
Set_of_all_possible_edges(N) == (1..N) \X (1..N)
\* derive a set of possible multisets (bags) with Set_of_all_possible_edges as (finite) domain
Set_of_all_possible_bags(N) == UNION {[SB -> {n \in Nat : n > 0}] : SB \in SUBSET Set_of_all_possible_edges(N)}
\* this is the original definition of BagSum
LOCAL Bag_sum_zipper(S) ==
LET iter[s \in SUBSET S] ==
IF s = {} THEN 0
ELSE LET x == CHOOSE x \in s : TRUE
IN iter[s \ {x}] + x[2]
IN iter[S]
\* This operator transforms a bag into a set of pairs to work with the function definitions
LOCAL Bag_zip_up(bag) == { <<x, bag[x] * (x[1] + x[2])>> : x \in BagToSet(bag) }
\* This is the function we are using
BagSum(bag) == Bag_sum_zipper(Bag_zip_up(bag))
\* Zipper Set
LOCAL Zipper_Set(N) == { Bag_zip_up(y) : y \in Set_of_all_possible_bags(N) }
\* translation of proofs from WellFoundedInduction to ternary operators
LOCAL DefBagZipSum(fun, xset) == IF xset = {}
THEN 0
ELSE LET x == CHOOSE x \in xset : TRUE
IN fun[xset \ {x}] + x[2]
\* define the subset operator
A \subset B == /\ A \subseteq B
/\ A # B
\* Lemmas
\* We first assure that our definition of the Set_of_all_possible_edges is actually finite
LEMMA EdgeSetIsFinite ==
ASSUME NEW N \in Nat
PROVE IsFiniteSet(Set_of_all_possible_edges(N))
BY FS_Interval, FS_Product
DEF Set_of_all_possible_edges
\* Then go on and show that subset is actually well founded
LEMMA SubSetIsWellFounded ==
ASSUME NEW N \in Nat,
NEW S, S = Set_of_all_possible_edges(N),
NEW T, T = SUBSET S
IsWellFoundedOn(OpToRel(\subset, T), T)
<1>1. T = FiniteSubsetsOf(S) BY FS_FiniteSubsetsOfFinite, EdgeSetIsFinite DEF FiniteSubsetsOf
<1>2. OpToRel(\subset, T) = StrictSubsetOrdering(S)
BY DEF OpToRel, \subset, StrictSubsetOrdering
<1>3. QED
BY <1>1, <1>2, FS_StrictSubsetOrderingWellFounded
\* Proof, that bag sum of EmptyBag is zero
LEMMA EmptyBagSumZero == BagSum(EmptyBag) = 0
BY DEF BagSum, EmptyBag, SetToBag , BagToSet, Bag_sum_zipper, Bag_zip_up
\* Proof, that a single element bag sums to this single element
LEMMA _OneElementBagSum_ ==
ASSUME NEW N \in Nat,
NEW S, S = Zipper_Set(N),
NEW x_edge \in Set_of_all_possible_edges(N),
NEW fun,
OpDefinesFcn(fun, S, DefBagZipSum),
WFInductiveDefines(fun, S, DefBagZipSum), \* ASSUME THIS FOR DEVELOPING THE PROOF
fun \in [S -> Nat] \* ASSUME THIS FOR DEVELOPING THE PROOF
PROVE BagSum(SetToBag({x_edge})) = x_edge[1] + x_edge[2]
<1> DEFINE ZipElement(edge) == Bag_zip_up(SetToBag({x_edge}))
<1>1. ZipElement(x_edge) = {<<x_edge, x_edge[1] + x_edge[2]>>}
BY DEF Bag_zip_up, SetToBag, BagToSet, Set_of_all_possible_edges
<1>2. ZipElement(x_edge) \in S
BY DEF Zipper_Set, SetToBag, BagToSet, Set_of_all_possible_edges,
<1>3. fun[ZipElement(x_edge)] \in Nat BY <1>2
<1>4. \A s_elem \in S: Bag_sum_zipper(s_elem) = DefBagZipSum(fun,s_elem)
BY DEF Bag_sum_zipper, DefBagZipSum, OpDefinesFcn, WFInductiveDefines
<1>10. QED BY
DEF BagSum, SetToBag, BagToSet, Set_of_all_possible_edges, Bag_sum_zipper, Bag_zip_up
\* Proof, that addition works
LEMMA SumOfBagsAdditive ==
ASSUME NEW N \in Nat,
NEW A \in Set_of_all_possible_bags(N),
NEW B \in Set_of_all_possible_bags(N)
BagSum(A (+) B) = BagSum(A) + BagSum(B)
\* Modification History
\* Last modified Sun Dec 04 22:20:56 CET 2022 by andreas
\* Created Sun Dec 04 13:44:16 CET 2022 by andreas
To view this discussion on the web visit https://groups.google.com/d/msgid/tlaplus/04d2b7a9-2938-494a-9c0d-be68803dbbd9n%40googlegroups.com. | {"url":"https://discuss.tlapl.us/msg05160.html","timestamp":"2024-11-14T12:21:16Z","content_type":"text/html","content_length":"9731","record_id":"<urn:uuid:d4fb8385-06eb-4f0b-a4b3-fe95d1504944>","cc-path":"CC-MAIN-2024-46/segments/1730477028558.0/warc/CC-MAIN-20241114094851-20241114124851-00066.warc.gz"} |
The focus of the discussion
06-24-2015 12:49 PM
Given matrix A of size MxN, I would like to sort column-wise. In Matlab this can be solved easily and quickly by sort(A) and it can even sort row-wise by sort(A,2).
In fortran, I don't find this and thus far I just iteratively sort each column using dlsart2 but this is definitely much slower than MATLAB. I am sure someone must have run into this issue and I hope
you can help me speed this up, perhaps along these lines:
1. Is there any column sorting subroutine that works faster then iterating dlsart2, which costs MN log M assuming that each iterate is M log M.
2. Is dlsart2 faster than any other sorting functions? Have anyone compare this it with dpsort.f90 of slatec or some sorting functions from orderpack? I found that dlsart2 is two times faster than
3. Is there anything else that is faster than dlsart2 that can beat MATLAB sort function?
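For reference, the MATLAB semantics being asked about look like this in NumPy (this does not answer the Fortran/MKL question, it just pins down the target behaviour):

# Column-wise and row-wise sorting, matching MATLAB's sort(A) and sort(A,2).
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 4.0],
              [2.0, 0.0]])

col_sorted = np.sort(A, axis=0)   # like MATLAB sort(A): sort each column
row_sorted = np.sort(A, axis=1)   # like MATLAB sort(A,2): sort each row
print(col_sorted)
print(row_sorted)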
06-30-2015 09:52 AM | {"url":"https://community.intel.com/t5/Intel-oneAPI-Math-Kernel-Library/Column-sorting/m-p/1053343/highlight/true","timestamp":"2024-11-02T00:16:30Z","content_type":"text/html","content_length":"442615","record_id":"<urn:uuid:579cc6b6-e7a4-47ae-8f36-ca080245b3fc>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00226.warc.gz"} |
Analysis of a trapped Bose-Einstein condensate in terms of position, momentum, and angular-momentum variance
We analyze, analytically and numerically, the position, momentum, and in particular the angular-momentum variance of a Bose-Einstein condensate (BEC) trapped in a two-dimensional anisotropic trap for
static and dynamic scenarios. Explicitly, we study the ground state of the anisotropic harmonic-interaction model in two spatial dimensions analytically and the out-of-equilibrium dynamics of
repulsive bosons in tilted two-dimensional annuli numerically accurately by using the multiconfigurational time-dependent Hartree for bosons method. The differences between the variances at the
mean-field level, which are attributed to the shape of the BEC, and the variances at the many-body level, which incorporate depletion, are used to characterize position, momentum, and
angular-momentum correlations in the BEC for finite systems and at the limit of an infinite number of particles where the bosons are 100% condensed. Finally, we also explore inter-connections between
the variances.
Bibliographical note
Funding Information:
This research was funded by Israel Science Foundation (Grant No. 600/15).
Publisher Copyright:
© 2019 by the authors.
• Angular-momentum variance
• Bose-Einstein condensates
• Density
• Harmonic-interaction model
• MCTDHB
• Momentum variance
• Position variance
ASJC Scopus subject areas
• Computer Science (miscellaneous)
• Chemistry (miscellaneous)
• General Mathematics
• Physics and Astronomy (miscellaneous)
Dive into the research topics of 'Analysis of a trapped Bose-Einstein condensate in terms of position, momentum, and angular-momentum variance'. Together they form a unique fingerprint. | {"url":"https://cris.haifa.ac.il/en/publications/analysis-of-a-trapped-bose-einstein-condensate-in-terms-of-positi","timestamp":"2024-11-10T14:42:20Z","content_type":"text/html","content_length":"55101","record_id":"<urn:uuid:a282cf3b-b18c-44a0-9e34-c1c588ad6be1>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00299.warc.gz"} |
An inexact scalarized proximal algorithm with quasi-distance for convex and quasiconvex multi-objective minimization
In the paper of Rocha et al., J Optim Theory Appl (2016) 171:964-979, the authors introduced a proximal point algorithm with quasi-distances to solve unconstrained convex multi-objective minimization problems. They proved that all accumulation points are efficient solutions of the problem. In this paper we analyze an inexact proximal point algorithm to solve convex and quasiconvex unconstrained multi-objective minimization problems using quasi-distances. For the convex case, we extend the result obtained by the exact algorithm of Rocha et al., and for the quasiconvex case we prove that all accumulation points are Pareto-Clarke critical points of the problem. Finally, to show the practicality of the introduced algorithm, we present numerical examples that confirm the convergence of our method.
View An inexact scalarized proximal algorithm with quasi- distance for convex and quasiconvex multi-objective minimization | {"url":"https://optimization-online.org/2020/04/7714/","timestamp":"2024-11-04T13:40:07Z","content_type":"text/html","content_length":"84381","record_id":"<urn:uuid:491595b3-2357-4908-8711-0cd43d6ff4d5>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00738.warc.gz"} |
Expectile regression
i often find myself extremely embarrassed by myself, because i learn of concepts in machine learning that i should’ve known as a professor in machine learning but had never even heard of before. one
latest example was expectile regression; i ran into this concept while studying Kostrikov et al. (2021) on implicit Q learning for offline reinforcement learning together with Daekyu who is visiting
me from Samsung.
in their paper, Kostrikov et al. present the following loss function to estimate the $\tau$-th expectile of a random variable $X$:
$$\arg\min_{m_{\tau}} \mathbb{E}_{x \sim X}\left[ L_2^\tau (x – m_{\tau}) \right],$$
where $L_2^\tau(u) = | \tau – \mathbf{1}(u < 0) | u^2$ and $\tau \in (0.5, 1]$.
i couldn’t tell where this loss function comes from and together with Daekyu tried to reason our way toward this loss function. to be frank, i had never heard of “expectile” as a term before this …
first, i decided to figure out the definition of “expectile” and found it inside the scipy.stats.expectile documentation. based on the documentation, the $\tau$-th expectile $m_{\tau}$ satisfies
$$\tau \mathbb{E}_{x \sim X} \left[ \max(0, x – m_\tau) \right] = (1-\tau) \mathbb{E}_{x \sim X} \left[ \max(0, m_\tau-x) \right].$$
now, let’s rewrite this equation a bit by first moving the right hand side to the left hand side:
$$\tau \mathbb{E}_{x \sim X} \left[ \max(0, x – m_\tau) \right] + (\tau – 1)\mathbb{E}_{x \sim X} \left[ \max(0, m_\tau-x) \right] = 0.$$
i love expectation (not expectile) because it is linear:
$$\mathbb{E}_{x \sim X} \left[ \tau \max(0, x – m_\tau) + (\tau – 1) \max(0, m_\tau-x) \right] = 0.$$
let’s use the indicator function $\mathbb{1}(a) = 1$ if $a$ is true and $0$ otherwise:
$$\mathbb{E}_{x \sim X} \left[ \mathbb{1}(x > m_{\tau}) \tau(x – m_\tau) – \mathbb{1}(x \leq m_{\tau}) (\tau – 1) (x-m_\tau) \right] = 0.$$
moving things around a bit, i end up with
$$\mathbb{E}_{x \sim X} \left[ \left(\mathbb{1}(x > m_{\tau}) \tau - \mathbb{1}(x \leq m_{\tau}) (\tau - 1)\right) (x-m_\tau) \right] = 0.$$
at this point, i can see that for this equation to hold, i need to make $m_\tau$ very very close to $x$ on expectation. being a proud deep learner, i naturally want to minimize $(x – m_\tau)^2$. but
then, i notice that i don’t want to make $m_{\tau}$ close to $x$ equally across all $x$. rather, there is a weighting factor:
$$\mathbb{1}(x > m_{\tau}) \tau – \mathbb{1}(x \leq m_{\tau}) (\tau – 1)$$
if $x > m_{\tau}$, the weight term is same as $\tau$. otherwise, it is $1 – \tau$ which is equivalent to $| \tau – 1|$, because $\tau \in [0, 1]$. also because of this condition, $\tau = |\tau|$. in
other words, we can combine these two cases into:
$$| \tau – \mathbb{1}(x \leq m_{\tau})|.$$
finally, by multiplying the $L_2$ loss $(x – m_\tau)^2$ with this weighting coefficient, we end up with the loss function from Kostrikov et al. (2021):
$$\mathbb{E}_{x \sim X} \left[ | \tau – \mathbb{1}(x \leq m_{\tau})| (x – m_\tau)^2 \right].$$
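a quick numerical sanity check of the derivation (mine, not Kostrikov et al.'s): minimizing the asymmetric squared loss above does recover the value satisfying the expectile balance equation. the distribution and the choice of tau below are arbitrary.

# check: the minimizer of the asymmetric L2 loss satisfies the balance equation.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
x = rng.gamma(shape=2.0, scale=1.5, size=100_000)   # any distribution works
tau = 0.8

def loss(m):
    u = x - m
    w = np.where(u < 0, 1.0 - tau, tau)   # |tau - 1(u < 0)|
    return np.mean(w * u**2)

m_hat = minimize_scalar(loss, bounds=(x.min(), x.max()), method="bounded").x

# balance equation: tau * E[(x - m)_+] should equal (1 - tau) * E[(m - x)_+]
lhs = tau * np.mean(np.maximum(0.0, x - m_hat))
rhs = (1 - tau) * np.mean(np.maximum(0.0, m_hat - x))
print(m_hat, lhs, rhs)   # lhs and rhs agree at the minimizer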
ugh … why did i derive it myself without trusting our proud alumnus Ilya and decide to write a blog post …? waste of time … but it was fun.
You must be logged in to post a comment. | {"url":"https://kyunghyuncho.me/expectile-regression/","timestamp":"2024-11-03T02:47:07Z","content_type":"text/html","content_length":"18521","record_id":"<urn:uuid:10acd295-b3c3-42a8-a8c4-a33aeca68bf2>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00605.warc.gz"} |
EViews Help: instsum
Shows a summary of the equation instruments.
Changes the view of the equation to the Instrument Summary view. Note this is only available for equations estimated by TSLS, GMM, or LIML.
equation eq1.tsls sales c adver lsales @ gdp unemp int
creates an equation EQ1 and estimates it via two-stage least squares, then shows a summary of the instruments used in estimation.
“Instrument Summary” for discussion | {"url":"https://help.eviews.com/content/equationcmd-instsum.html","timestamp":"2024-11-12T22:53:36Z","content_type":"application/xhtml+xml","content_length":"7795","record_id":"<urn:uuid:3adf62ff-e9ec-4b23-bf1a-b9b070c443ed>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00475.warc.gz"} |
Amazing Card Trick
How is it possible to predict the card?
My third holiday to southern Turkey this year was hot, very hot, but great fun and as stunning as ever. Our day trips to local archaeological sites remain amongst the most memorable experiences. The
Turkish people are friendly and hospitable and great fun.
Amongst our trips was a second visit to the area around the island of Kekova (Turkey's largest, but quite small, island). The boat trip interspersed swimming and snorkelling with more intellectual
activities including a look at the sunken city, views of Lycian tombs, and a trip to the medieval castle at Kaleucagiz. Last but not least, a sharing of card tricks with our tour guide Mehmet. My
daughter and I showed Mehmet the Best Card Trick and in exchange he shared the following amazing card trick with us. Mehmet not only made the day memorable and enjoyable but also left me with the problem of working out how the trick works. The mathematics is easy so I hope you will try to get to grips with it...
Give a full deck of cards to someone in the audience and ask them to shuffle and cut them.
Take the pack face down and count out the first half of the pack, turning them face up onto a pile in front of the member of the audience.
When you have done this - pick up the 26 cards and place them face down back at the bottom of the pile you have in your hand.
Take three cards from the top of the pack and place them face up on the table. Then add enough cards to each (all face down) to make a total of 10.
So, if you turn up the 3 , K and 8 you would put seven cards face down below the 3 (as you count from 3, 4, 5, 6, 7, 8, 9, 10), none below the K (since this already has a value of 10), and two cards
face down below the 8 (as you count from 8, 9, 10).
The three cards showing (face up) on the table are the 3 , K and 8 , making a total value of 3+10+8 = 21.
You should now be able to predict the 21st card down the rest of the pack sitting in your hand .
"And the 21st card will be..."
How is it possible to predict this card no matter what the three cards you turn over are?
Watch Charlie and Alison performing the Amazing Card Trick:
Getting Started
Do you need to use any of the 26 cards that you turned up at the start of the trick?
Is this always the case?
Student Solutions
Felix, Matthew, Alice, Robert, Hayden, Jenna, Catherine, James, James, Nick, Kieran, Kayleigh, Bethany, Luke and Matthew, all from Cupernham School sent in explanations which involved a similar
argument. Correct explanations were also recieved from Andrei of School 205 Bucharest, Sophia of the Maths Club at Stamford High School and Matthew of Finley Middle School. Here is an explanation
based on all of yours.
This card trick has nothing to do with magic, just mathematical thinking. When you do this trick, the total number of cards on the table plus the cards you count down the pack is 33.
It works because there are the 3 cards you put down in the first place, the cards you add to them to make 10, and the number of cards you count down the pack. These make a total of 33 if you add them together. The card you memorise is the 7th card into the original half of 26. That is, the top 26 plus 7 into the original half makes 33, so that is how it works.
e.g. if you lay down the 9 of diamonds, 3 of hearts and 6 of clubs, it would look like this:
9 3 6 (three cards on the table)
You then lay down 1, 7 and 4 cards respectively (making a total of 12 + 3 = 15 cards).
You then count 9 + 3 + 6 = 18 cards down the pack.
That is 15 cards on the table plus 18 cards down the pack, making 33.
The last two rows show the cards that make up each original card to 10. These cards add up to 30. The top row is the original three cards, so if you add them on, the final total is 33. This is always true, because the three cards are made up to ten every time. This is because you have to make up the numbers on the cards, then deal out the original number again when you have made the prediction.
The trick is that when you deal out the 26 cards at the beginning, you take note of the 7th card. This is the card you predict.
If you do this trick properly it can be very entertaining!
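A quick simulation (not part of the original problem page) confirms the explanation above: whichever three cards come up, the predicted card is always the 7th card counted out at the start. It assumes court cards count as ten and aces as one.

# Simulate the trick many times and check the 7th-card prediction always holds.
import random

def run_trick(rng):
    deck = [(rank, suit) for suit in "SHDC" for rank in range(1, 14)]
    rng.shuffle(deck)
    value = lambda card: min(card[0], 10)      # J, Q, K count as 10; ace as 1

    counted = deck[:26]                        # dealt face up one at a time
    noted = counted[6]                         # the 7th card counted: the prediction
    pack = deck[26:] + counted                 # counted pile returned to the bottom

    three = [pack.pop(0) for _ in range(3)]    # three face-up cards on the table
    for card in three:                         # make each card up to 10
        for _ in range(10 - value(card)):
            pack.pop(0)
    target = sum(value(c) for c in three)      # count this many cards down
    return pack[target - 1] == noted

rng = random.Random(0)
print(all(run_trick(rng) for _ in range(10_000)))   # True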
Teachers' Resources
Here is a silent video with written instructions for each phase of the trick. It can be displayed on a loop while students are figuring out the trick, to remind them of the instructions if they | {"url":"https://nrich.maths.org/problems/amazing-card-trick-0","timestamp":"2024-11-15T00:20:50Z","content_type":"text/html","content_length":"48345","record_id":"<urn:uuid:253306c0-9478-4b90-b1e6-4d14f9926ac2>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00082.warc.gz"} |
Some of you may have heard of the name L’Hôpital whilst you were at school, but why was it so important? L’Hôpital’s rule, more pedantically known as “la régle de L’Hôpital”, is a highly useful
technique for finding the limit of complicated expressions. To refresh your memory, the explicit definition reads:
lim[x-> c] f(x) / g(x) = lim[x-> c] f'(x) / g'(x),
where direct evaluation of f(x)/g(x) at x = c yields an indeterminate form. This definition utilises some complex concepts. Let us break this definition down and decipher what it really means. Beginning with the limit,
this is a concept that essentially forms the basis of all calculus. Limits describe how a function behaves near, but not at, a specified point. For example, if we have the function f(x) = -x^2 + 4
(pictured below), and we take the limit of this function at x = 0. This means that we will look at values of the function approaching x = 0. We can do this from two sides, left and right. If we begin
from the left, for example at the point x = -2, we see that f(-2) = 0 . If we take another step left, closer to x = 0, for example x = -0.5 , we have that f(-0.5)= 3.75, and another step at x = -0.2
reveals that f(-0.2) = 3.96. We can do a similar technique coming from the right-hand side, and if we continue taking infinite steps from each side, without ever taking the value x = 0, we can see
where the two sides coincide: at f(x) = 4. So, we have now taken the limit of this function at x = 0, and we see that the value of this is lim[x-> 0] -x^2 + 4 = 4.
Now, you may wonder why we use the limit if we have lim[x-> c] f(x) = f(c), as we had in our previous example, but that is because this is not always the case. For example, if we take the same
function as before, but we choose to not define our function at x = 0, i.e., we do not know what value the function takes at this point, we can still use the limit.
Here, we can see that the limit of this function is still the same, but we do not have that lim[x-> 0] f(x) = f(0). One final thing to note: limits from each side must approach the same value. If our
left-hand side steps approach a value a , and our right-hand steps approach a value b not equal to a, then we say that the limit does not exist.
Now that we have concluded the basics of limits, we can move on to the other most important part of L’Hôpital’s rule: the indeterminate form. An indeterminate form is a difficult concept to grasp. In layman’s terms, it is an expression that involves an operation between two functions (multiplication, division, exponents, etc.) such that when the limit is taken, we do not find a well-defined answer. This trouble with deciding what the limit should be has led to the conclusion that we are left with an indeterminate form. There are seven indeterminate forms, namely:
0/0, ∞/∞, 0 × ∞, ∞ - ∞, 0^0, 1^∞ and ∞^0.
Looking at these, it is indeed difficult to find a limit that makes sense, so naming these indeterminate forms is quite useful. We should note that there are also cases where you may think we have
an indeterminate form, but this is not the case. Specifically:
The reason in most cases is that the limit of expressions such as these is simply undefined, or has a logical answer.
Hopefully we have this concept down pat, and we can begin to use it. If we are working with limits, indeterminate forms are a tell-tale sign that further analysis is needed. It is an indication that
our answer does not just stop at our indeterminate form, leaving our conclusion undefined or having no solution, but instead that we need to delve deeper.
Combining the two concepts we have just discussed, limits and indeterminate forms, we can see L’Hôpital’s rule emerge. Our limit tells us how the function f(x)/g(x) behaves as this approaches a
specified value c, and when this gives an indeterminate form, we know we have to dig further in our function analysis.
So, it is clear we have to do something more, and theory tells us we use L’Hôpitals rule, but why? Well, what we are trying to find is information about our limit. We know that if our limit goes to
an indeterminate form such as ∞/∞, we cannot say much about our limit, as we explained before. However, we may be able to find something about how quickly our limit goes to ∞/∞. The numerator could
go really quickly to infinity and the denominator slowly. So, we should find a way to measure the slope of our functions, or more simply put: the rate of change. Those of you paying close attention
that this is exactly what a derivative is. A derivative tells us the rate of change of our function. So, instead of looking at our function value at our limit, we can examine the derivative of each
function at our limit, i.e., take the derivative of each function and then examine the limit. This should give us a specific limit value. We can repeat this as many times as we like until we can find
a limit value.
This may be confusing, so let’s carry out an example. Let us examine lim[x-> 0] sin(x) / x. When we try to find this limit using conventional methods, we see that our function approaches 0/0, which
is an indeterminate form. But, we do not know how quickly the numerator and denominator each approach 0, perhaps one approaches more quickly. So, let us look at the rate of change of the slope of the
numerator and the denominator. Firstly, let us find the derivatives we need: f(x)=sin(x) , f'(x) = cos(x), g(x) = x, g'(x) = 1. When we fill in L’Hôpital’s rule we have that: lim[x-> 0] (sin(x) / x )
= lim[x-> 0] (cos(x) / 1), which very clearly equals 0. Thus, we can see how our function behaves as x approaches 0 , or more formally, we have found our limit. This concludes our analysis.
After a rather in-depth discussion of L’Hôpital’s rule, you should be able to see why it is important and why it is so useful. If we could not use this, then we would have no way of telling how
certain functions behave at specific points. L’Hôpital’s rule opens up a wealth of analysis that we can use for mathematics. It also allows us to side-step indeterminate forms, which we have
discovered are not particularly nice. Now that we have learnt all this, perhaps you have gained a certain appreciation for this rule, and suddenly understand why this was so important in school.
The importance of statistics in sport | {"url":"https://www.deeconometrist.nl/lhopitals-rule/page/2/?et_blog","timestamp":"2024-11-02T21:00:07Z","content_type":"text/html","content_length":"264053","record_id":"<urn:uuid:a8c3b9b0-7b70-4b92-87d8-aff664ead74a>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00279.warc.gz"} |
Volatility of Interest Rates
Sharon Rogner, CFA is evaluating three bonds for inclusion in fixed income portfolio for one of her pension fund clients. All three bonds have a coupon rate of 3%, maturity of five years and are
generally identical in every respect except that bond A is an option-free bond, bond B is callable in two years and bond C is putable in two years. Rogner computes the OAS of bond A to be 50bps using
a binomial tree with an assumed interest rate volatility of 15%.
If Rogner revises her estimate of interest rate volatility to 10%, the computed OAS of Bond B would most likely be:
A) lower than 50bps.
B) equal to 50bps.
C) higher than 50bps.
Answer C.
Explanation - The OAS of the three bonds should be the same, as they are given to be identical bonds except for the embedded options (OAS is after removing the option feature and hence would not be
affected by embedded options). Hence the OAS of bond B would be 50 bps absent any changes in assumed level of volatility.
When the assumed level of volatility in the tree is decreased, the value of the call option would decrease and the computed value of the callable bond would increase. The constant spread now needed
to force the computed value to be equal to the market price is therefore higher than before. Hence a decrease in the volatility estimate increases the computed OAS for a callable bond.
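Here is a rough, self-contained numeric sketch of that mechanism (no binomial tree, just a straight-bond pricer and a made-up call value): when the call value shrinks, the spread needed to pull the computed value down to the market price gets bigger.

```python
# 5-year, 3% annual-pay bond priced at (base rate + spread); the callable value
# is approximated as straight value minus an assumed call option value.

def straight_value(spread, base_rate=0.03, coupon=3.0, face=100.0, years=5):
    r = base_rate + spread
    return sum(coupon / (1 + r) ** t for t in range(1, years + 1)) + face / (1 + r) ** years

def solve_spread(market_price, call_value):
    # Bisection: find the spread making (straight value - call value) = market price.
    lo, hi = -0.05, 0.20
    for _ in range(60):
        mid = (lo + hi) / 2
        if straight_value(mid) - call_value > market_price:
            lo = mid      # computed value too high -> need a bigger spread
        else:
            hi = mid
    return (lo + hi) / 2

market_price = 97.0
print(solve_spread(market_price, call_value=1.5))  # higher vol, bigger call -> smaller spread
print(solve_spread(market_price, call_value=0.8))  # lower vol, smaller call -> bigger spread (higher OAS)
```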
Can someone please explain this? I am not getting it. Thanks in advance.
Does a lower interest rate volatility imply falling interest rates?
Sharp A1 bounds for Calderón-Zygmund operators and the relationship with a problem of Muckenhoupt and Wheeden
For any Calderón-Zygmund operator $T$ the following sharp estimate is obtained for $1 < p < \infty$:
$$\|T\|_{L^p(\omega)} \le c\,\nu_p\,\|\omega\|_{A_1}, \qquad \nu_p = \frac{p^2}{p-1}\,\log\!\left(e + \frac{1}{p-1}\right).$$
In the case where $p = 2$ and $T$ is a classical convolution singular integral, this result is due to R. Fefferman and J. Pipher [7]. Then, we deduce the following weak type $(1,1)$ estimate related to a problem of Muckenhoupt and Wheeden [11]:
$$\sup_{\lambda>0}\;\lambda\,\omega\{x \in \mathbb{R}^n : |Tf(x)| > \lambda\} \le c\,\varphi\!\left(\|\omega\|_{A_1}\right)\int_{\mathbb{R}^n} |f|\,\omega\,dx,$$
where $\omega \in A_1$ and $\varphi(t) = t\,(1+\log^+ t)(1+\log^+\log^+ t)$.
Long time scale molecular dynamics subspace integration method applied to anharmonic crystals and glasses
A subspace dynamics method is presented to model long time dynamical events. The method involves determining a set of vectors that span the subspace of the long time dynamics. Specifically, the
vectors correspond to real and imaginary low frequency normal modes of the condensed phase system. Most importantly, the normal mode derived vectors are only used to define the subspace of low
frequency motions, and the actual time dependent dynamics is fully anharmonic. The resultant projected set of Newton's equations is numerically solved for the subspace motions. Displacements along
the coordinates outside the subspace are then constrained during the integration of the equations of motion in the reduced dimensional space. The method is different from traditional constraint
methods in that it can systematically deduce and remove both local and collective high frequency motions of the condensed phase system with no a priori assumptions. The technique is well suited to
removing large numbers of degrees of freedom, while only keeping the very low frequency global motions. The method is applied to highly anharmonic Lennard-Jones crystal and glass systems. Even in
these systems with no intramolecular degrees of freedom or obvious separation of time scales, the subspace dynamics provides a speed up of approximately a factor of 5 over traditional molecular
dynamics through use of a larger integration time step. In the cases illustrated here a single set of subspace vectors was adequate over the full time interval, although this is not expected to be
true for all systems.
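To illustrate the idea (a toy sketch under simplifying assumptions, not the authors' implementation), one can take a small harmonic chain, keep only its lowest-frequency normal modes, and integrate Newton's equations projected onto that subspace:

```python
import numpy as np

n, kspr, dt, n_modes, n_steps = 10, 1.0, 0.05, 3, 200

# Hessian of a fixed-end harmonic chain with unit masses
H = 2 * kspr * np.eye(n) - kspr * np.eye(n, k=1) - kspr * np.eye(n, k=-1)
w2, V = np.linalg.eigh(H)          # squared frequencies (ascending) and normal modes
P = V[:, :n_modes]                 # subspace spanned by the lowest-frequency modes

def force(x):
    return -H @ x                  # anharmonic force terms would be added here

q = P.T @ (0.1 * np.random.default_rng(0).normal(size=n))  # subspace coordinates
v = np.zeros(n_modes)

for _ in range(n_steps):           # velocity Verlet integrated in the reduced space
    a = P.T @ force(P @ q)
    v += 0.5 * dt * a
    q += dt * v
    a = P.T @ force(P @ q)
    v += 0.5 * dt * a

print("final subspace coordinates:", q)
```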
All Science Journal Classification (ASJC) codes
• General Physics and Astronomy
• Physical and Theoretical Chemistry
Young’s Double Slit Experiment
Learning Objectives
By the end of this section, you will be able to:
• Explain the phenomena of interference.
• Define constructive interference for a double slit and destructive interference for a double slit.
Although Christiaan Huygens thought that light was a wave, Isaac Newton did not. Newton felt that there were other explanations for color, and for the interference and diffraction effects that were
observable at the time. Owing to Newton’s tremendous stature, his view generally prevailed. The fact that Huygens’s principle worked was not considered evidence that was direct enough to prove that
light is a wave. The acceptance of the wave character of light came many years later when, in 1801, the English physicist and physician Thomas Young (1773–1829) did his now-classic double slit
experiment (see Figure 1).
Why do we not ordinarily observe wave behavior for light, such as observed in Young’s double slit experiment? First, light must interact with something small, such as the closely spaced slits used by
Young, to show pronounced wave effects. Furthermore, Young first passed light from a single source (the Sun) through a single slit to make the light somewhat coherent. By coherent, we mean waves are
in phase or have a definite phase relationship. Incoherent means the waves have random phase relationships. Why did Young then pass the light through a double slit? The answer to this question is
that two slits provide two coherent light sources that then interfere constructively or destructively. Young used sunlight, where each wavelength forms its own pattern, making the effect more
difficult to see. We illustrate the double slit experiment with monochromatic (single λ) light to clarify the effect. Figure 2 shows the pure constructive and destructive interference of two waves
having the same wavelength and amplitude.
When light passes through narrow slits, it is diffracted into semicircular waves, as shown in Figure 3a. Pure constructive interference occurs where the waves are crest to crest or trough to trough.
Pure destructive interference occurs where they are crest to trough. The light must fall on a screen and be scattered into our eyes for us to see the pattern. An analogous pattern for water waves is
shown in Figure 3b. Note that regions of constructive and destructive interference move out from the slits at well-defined angles to the original beam. These angles depend on wavelength and the
distance between the slits, as we shall see below.
To understand the double slit interference pattern, we consider how two waves travel from the slits to the screen, as illustrated in Figure 4. Each slit is a different distance from a given point on
the screen. Thus different numbers of wavelengths fit into each path. Waves start out from the slits in phase (crest to crest), but they may end up out of phase (crest to trough) at the screen if the
paths differ in length by half a wavelength, interfering destructively as shown in Figure 4a. If the paths differ by a whole wavelength, then the waves arrive in phase (crest to crest) at the screen,
interfering constructively as shown in Figure 4b. More generally, if the paths taken by the two waves differ by any half-integral number of wavelengths [(1/2)λ, (3/2)λ, (5/2)λ, etc.], then
destructive interference occurs. Similarly, if the paths taken by the two waves differ by any integral number of wavelengths (λ, 2λ, 3λ, etc.), then constructive interference occurs.
Take-Home Experiment: Using Fingers as Slits
Look at a light, such as a street lamp or incandescent bulb, through the narrow gap between two fingers held close together. What type of pattern do you see? How does it change when you allow the
fingers to move a little farther apart? Is it more distinct for a monochromatic source, such as the yellow light from a sodium vapor lamp, than for an incandescent bulb?
Figure 5 shows how to determine the path length difference for waves traveling from two slits to a common point on a screen. If the screen is a large distance away compared with the distance between
the slits, then the angle θ between the path and a line from the slits to the screen (see the figure) is nearly the same for each path. The difference between the paths is shown in the figure; simple
trigonometry shows it to be d sin θ, where d is the distance between the slits. To obtain constructive interference for a double slit, the path length difference must be an integral multiple of the
wavelength, or d sin θ = mλ, for m = 0, 1, −1, 2, −2, . . . (constructive).
Similarly, to obtain destructive interference for a double slit, the path length difference must be a half-integral multiple of the wavelength, or
[latex]d\sin\theta=\left(m+\frac{1}{2}\right)\lambda\text{, for }m=0,1,-1,2,-2,\dots\text{ (destructive)}\\[/latex],
where λ is the wavelength of the light, d is the distance between slits, and θ is the angle from the original direction of the beam as discussed above. We call m the order of the interference. For
example, m = 4 is fourth-order interference.
The equations for double slit interference imply that a series of bright and dark lines are formed. For vertical slits, the light spreads out horizontally on either side of the incident beam into a
pattern called interference fringes, illustrated in Figure 6. The intensity of the bright fringes falls off on either side, being brightest at the center. The closer the slits are, the more the
bright fringes spread out. We can see this by examining the equation d sin θ = mλ, for m = 0, 1, −1, 2, −2, . . . .
For fixed λ and m, the smaller d is, the larger θ must be, since [latex]\sin\theta=\frac{m\lambda}{d}\\[/latex]. This is consistent with our contention that wave effects are most noticeable when the
object the wave encounters (here, slits a distance d apart) is small. Small d gives large θ, hence a large effect.
Example 1. Finding a Wavelength from an Interference Pattern
Suppose you pass light from a He-Ne laser through two slits separated by 0.0100 mm and find that the third bright line on a screen is formed at an angle of 10.95º relative to the incident beam. What
is the wavelength of the light?
The third bright line is due to third-order constructive interference, which means that m = 3. We are given d = 0.0100 mm and θ = 10.95º. The wavelength can thus be found using the equation d sin θ =
mλ for constructive interference.
The equation is d sin θ = mλ. Solving for the wavelength λ gives [latex]\lambda=\frac{d\sin\theta}{m}\\[/latex].
Substituting known values yields
[latex]\begin{array}{lll}\lambda&=&\frac{\left(0.0100\text{ mm}\right)\left(\sin10.95^{\circ}\right)}{3}\\\text{ }&=&6.33\times10^{-4}\text{ mm}=633\text{ nm}\end{array}\\[/latex]
To three digits, this is the wavelength of light emitted by the common He-Ne laser. Not by coincidence, this red color is similar to that emitted by neon lights. More important, however, is the fact
that interference patterns can be used to measure wavelength. Young did this for visible wavelengths. This analytical technique is still widely used to measure electromagnetic spectra. For a given
order, the angle for constructive interference increases with λ, so that spectra (measurements of intensity versus wavelength) can be obtained.
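As a quick numerical check of this example (a supplementary sketch, not part of the original text), the same calculation can be scripted:

```python
import math

# Example 1: wavelength from the third-order bright line.
d = 0.0100e-3               # slit separation in metres (0.0100 mm)
m = 3                       # order of the bright line
theta = math.radians(10.95)

wavelength = d * math.sin(theta) / m
print(f"{wavelength * 1e9:.0f} nm")   # prints 633 nm, the He-Ne laser line
```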
Example 2. Calculating Highest Order Possible
Interference patterns do not have an infinite number of lines, since there is a limit to how big m can be. What is the highest-order constructive interference possible with the system described in
the preceding example?
Strategy and Concept
The equation d sin θ = mλ (for m = 0, 1, −1, 2, −2, . . . ) describes constructive interference. For fixed values of d and λ, the larger m is, the larger sin θ is. However, the maximum value that
sin θ can have is 1, for an angle of 90º. (Larger angles imply that light goes backward and does not reach the screen at all.) Let us find which m corresponds to this maximum diffraction angle.
Solving the equation d sin θ = mλ for m gives [latex]m=\frac{d\sin\theta}{\lambda}\\[/latex].
Taking sin θ = 1 and substituting the values of d and λ from the preceding example gives
[latex]\displaystyle{m}=\frac{\left(0.0100\text{ mm}\right)\left(1\right)}{633\text{ nm}}\approx15.8\\[/latex]
Therefore, the largest integer m can be is 15, or m = 15.
The number of fringes depends on the wavelength and slit separation. The number of fringes will be very large for large slit separations. However, if the slit separation becomes much greater than the
wavelength, the intensity of the interference pattern changes so that the screen has two bright lines cast by the slits, as expected when light behaves like a ray. We also note that the fringes get
fainter further away from the center. Consequently, not all 15 fringes may be observable.
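The same limit can be checked numerically (again, a supplementary sketch rather than part of the original solution):

```python
import math

# Example 2: largest order m with sin(theta) <= 1, i.e. m <= d / wavelength.
d = 0.0100e-3               # slit separation in metres
wavelength = 633e-9         # metres

m_max = math.floor(d / wavelength)
print(m_max)                # prints 15
```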
Section Summary
• Young’s double slit experiment gave definitive proof of the wave character of light.
• An interference pattern is obtained by the superposition of light from two slits.
• There is constructive interference when d sin θ = mλ (for m = 0, 1, −1, 2, −2, . . . ), where d is the distance between the slits, θ is the angle relative to the incident direction, and m is the
order of the interference.
• There is destructive interference when d sin θ = (m + 1/2)λ (for m = 0, 1, −1, 2, −2, . . . ).
Conceptual Questions
1. Young’s double slit experiment breaks a single light beam into two sources. Would the same pattern be obtained for two independent sources of light, such as the headlights of a distant car?
2. Suppose you use the same double slit to perform Young’s double slit experiment in air and then repeat the experiment in water. Do the angles to the same parts of the interference pattern get
larger or smaller? Does the color of the light change? Explain.
3. Is it possible to create a situation in which there is only destructive interference? Explain.
4. Figure 7 shows the central part of the interference pattern for a pure wavelength of red light projected onto a double slit. The pattern is actually a combination of single slit and double slit
interference. Note that the bright spots are evenly spaced. Is this a double slit or single slit characteristic? Note that some of the bright spots are dim on either side of the center. Is this a
single slit or double slit characteristic? Which is smaller, the slit width or the separation between slits? Explain your responses.
Problems & Exercises
1. At what angle is the first-order maximum for 450-nm wavelength blue light falling on double slits separated by 0.0500 mm?
2. Calculate the angle for the third-order maximum of 580-nm wavelength yellow light falling on double slits separated by 0.100 mm.
3. What is the separation between two slits for which 610-nm orange light has its first maximum at an angle of 30.0º?
4. Find the distance between two slits that produces the first minimum for 410-nm violet light at an angle of 45.0º.
5. Calculate the wavelength of light that has its third minimum at an angle of 30.0º when falling on double slits separated by 3.00 μm.
6. What is the wavelength of light falling on double slits separated by 2.00 μm if the third-order maximum is at an angle of 60.0º?
7. At what angle is the fourth-order maximum for the situation in Question 1?
8. What is the highest-order maximum for 400-nm light falling on double slits separated by 25.0 μm?
9. Find the largest wavelength of light falling on double slits separated by 1.20 μm for which there is a first-order maximum. Is this in the visible part of the spectrum?
10. What is the smallest separation between two slits that will produce a second-order maximum for 720-nm red light?
11. (a) What is the smallest separation between two slits that will produce a second-order maximum for any visible light? (b) For all visible light?
12. (a) If the first-order maximum for pure-wavelength light falling on a double slit is at an angle of 10.0º, at what angle is the second-order maximum? (b) What is the angle of the first minimum?
(c) What is the highest-order maximum possible here?
13. Figure 8 shows a double slit located a distance x from a screen, with the distance from the center of the screen given by y. When the distance d between the slits is relatively large, there will
be numerous bright spots, called fringes. Show that, for small angles (where [latex]\text{sin}\theta\approx\theta\\[/latex], with θ in radians), the distance between fringes is given by [latex]\Delta{y}=\frac{x\lambda}{d}\\[/latex].
14. Using the result of the problem above, calculate the distance between fringes for 633-nm light falling on double slits separated by 0.0800 mm, located 3.00 m from a screen as in Figure 8.
15. Using the result of the problem two problems prior, find the wavelength of light that produces fringes 7.50 mm apart on a screen 2.00 m from double slits separated by 0.120 mm (see Figure 8).
coherent: waves are in phase or have a definite phase relationship
constructive interference for a double slit: the path length difference must be an integral multiple of the wavelength
destructive interference for a double slit: the path length difference must be a half-integral multiple of the wavelength
incoherent: waves have random phase relationships
order: the integer m used in the equations for constructive and destructive interference for a double slit
Selected Solutions to Problems & Exercises
1. 0.516º
3. 1.22 × 10^−6 m
5. 600 nm
7. 2.06º
9. 1200 nm (not visible)
11. (a) 760 nm; (b) 1520 nm
13. For small angles, sin θ ≈ tan θ ≈ θ (in radians).
For two adjacent fringes we have d sin θ[m] = mλ and d sin θ[m+1] = (m + 1)λ
Subtracting these equations gives
[latex]\begin{array}{}d\left(\sin{\theta }_{\text{m}+1}-\sin{\theta }_{\text{m}}\right)=\left[\left(m+1\right)-m\right]\lambda \\ d\left({\theta }_{\text{m}+1}-{\theta }_{\text{m}}\right)=\lambda \\
\text{tan}{\theta }_{\text{m}}=\frac{{y}_{\text{m}}}{x}\approx {\theta }_{\text{m}}\Rightarrow d\left(\frac{{y}_{\text{m}+1}}{x}-\frac{{y}_{\text{m}}}{x}\right)=\lambda \\ d\frac{\Delta y}{x}=\lambda
\Rightarrow \Delta y=\frac{\mathrm{x\lambda }}{d}\end{array}\\[/latex]
15. 450 nm
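As an illustrative check of the fringe-spacing result Δy = xλ/d (a supplementary sketch, not part of the original solutions), the numbers of Problem 15 reproduce the listed answer:

```python
# Problem 15: lambda = delta_y * d / x from the fringe-spacing formula.
delta_y = 7.50e-3           # fringe spacing in metres
x = 2.00                    # slit-to-screen distance in metres
d = 0.120e-3                # slit separation in metres

wavelength = delta_y * d / x
print(f"{wavelength * 1e9:.0f} nm")   # prints 450 nm, matching solution 15
```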
Alternative Answer
We know it is 16:9. So we know that 16 is the length and 9 is the height. Ben gave us the height, which is 4. Now, we must find the length. To do this, we set up the proportion:
16/9 = x/4
If you look at the equation, you will see that the height is 9 on the left and 4 on the right. We know that the length is supposed to be 16, but we don't know what it should be if the HEIGHT is to
be 4. And so, we place an 'x'.
By cross multiplying, we end up with x = 4(16)/9
Which gives us 7.11
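For what it's worth, the same proportion can be evaluated in a couple of lines (just an illustration):

```python
# Width of a 16:9 frame with a height of 4 units.
height = 4
width = height * 16 / 9
print(round(width, 2))      # 7.11
```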
Hope that helps
Journal of Semiconductor Technology and Science
This work develops an analytical thermal model for a power chip that uses thermal Through Silicon Vias (TSVs) for heat mitigation, whose thermal path differs considerably from those reported in the published literature. The thermal spreading angle and transverse heat transfer of the thermal TSVs, as well as the impact of their thermal stress on carrier mobility in active areas, are considered. The traditional one-dimensional thermal model used in three-dimensional integrated circuits and finite element analysis results are used to verify the accuracy of the proposed model. The temperature rise of the proposed structure with respect to the filling-via radius, bulk Si thickness, TSV liner thickness, and bonding layer thickness is investigated. The results show that the proposed thermal model agrees with the simulation results better than the one-dimensional model, indicating an improvement in the thermal management of thermal-TSV-based three-dimensional integrated circuits and in the associated thermo-mechanical reliability.
(S * B) / (P * D^2)
30 Aug 2024
Equation: (S * B) / (P * D^2)
Variable: P
Impact of Telescope Power on Pmin Function
Title: The Impact of Telescope Power on the Pmin Function: An Analysis of the Equation (S * B) / (P * D^2)
In the field of telescope design, the Pmin function plays a crucial role in determining the minimum power required to achieve a certain level of performance. Recent studies have shown that telescope
power can significantly impact the Pmin function, leading to improved or degraded performance depending on various factors. This article presents an analysis of the equation (S * B) / (P * D^2),
which is central to understanding the relationship between telescope power and the Pmin function. We examine the effects of varying the variable P (power) on the Pmin function, providing valuable
insights for engineers designing and optimizing telescopes.
Telescopes are complex systems that rely on a delicate balance of various components to achieve optimal performance. The Pmin function, which represents the minimum power required by the telescope to
operate within specified parameters, is a critical factor in this balance. As telescope technology continues to advance, understanding the impact of telescope power on the Pmin function has become
increasingly important.
The Equation:
The equation (S * B) / (P * D^2) is a fundamental expression that underlies the relationship between telescope power and the Pmin function. Here, S represents the surface area of the primary mirror
or reflector, B denotes the bandwidth of the system, P stands for the power input to the telescope, and D signifies the diameter of the primary aperture.
Variable Analysis:
We focus on analyzing the impact of varying the variable P (power) on the Pmin function. To do this, we must first understand how changes in power affect each component of the equation:
• Surface Area (S): The surface area of the primary mirror or reflector remains constant in most telescopes.
• Bandwidth (B): This parameter also tends to be fixed for a given telescope design.
• Power (P): Changes in power input can significantly affect the Pmin function, as this variable is directly multiplied by D^2 in the denominator of the equation.
By varying P while holding S and B constant, we can observe how changes in power impact the overall value of the Pmin function. Specifically, we examine the behavior of the Pmin function as P
increases or decreases.
Our analysis reveals that increasing P produces an inversely proportional decrease in the Pmin function, whereas decreasing P produces a corresponding increase. This effect is demonstrated in
Figure 1:
P (Power) Pmin Function Value
10 kW 0.01 W
50 kW 0.005 W
100 kW 0.0025 W
As P increases, the Pmin function value decreases, and vice versa.
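As a rough illustration of this trend, the equation can be evaluated directly; the values of S, B, and D below are placeholders chosen only to show the 1/P behaviour and are not taken from the article:

```python
# Pmin = (S * B) / (P * D**2) evaluated over a range of powers.
S, B, D = 1.0, 1.0, 10.0           # placeholder surface area, bandwidth, diameter

for P in (10e3, 50e3, 100e3):      # power in watts (10, 50, 100 kW)
    p_min = (S * B) / (P * D ** 2)
    print(f"P = {P / 1e3:.0f} kW -> Pmin = {p_min:.2e} (arbitrary units)")
```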
This analysis demonstrates that varying the variable P (power) has a direct impact on the Pmin function of telescopes. Our findings have significant implications for engineers designing and
optimizing telescopes, as they indicate that careful consideration must be given to power input when aiming to achieve optimal performance.
Based on our results, we recommend that telescope designers prioritize minimizing power requirements while maintaining acceptable levels of performance. Furthermore, incorporating adjustable power
management systems can help mitigate the impact of changing environmental conditions and ensure reliable operation within specified parameters.
By understanding the relationship between telescope power and the Pmin function, engineers can develop more efficient, effective, and high-performance telescopes for various applications.
Uniaxial Compressive Strength Prediction for Rock Material in Deep Mine Using Boosting-Based Machine Learning Methods and Optimization Algorithms
School of Resources and Safety Engineering, Central South University, Changsha, 410083, China
* Corresponding Author: Diyuan Li. Email:
Computer Modeling in Engineering & Sciences 2024, 140(1), 275-304. https://doi.org/10.32604/cmes.2024.046960
Received 20 October 2023; Accepted 24 January 2024; Issue published 16 April 2024
Traditional laboratory tests for measuring rock uniaxial compressive strength (UCS) are tedious and time-consuming. There is a pressing need for more effective methods to determine rock UCS,
especially in deep mining environments under high
stress. Thus, this study aims to develop an advanced model for predicting the UCS of rock material in deep mining environments by combining three boosting-based machine learning methods with four
optimization algorithms. For this purpose, the Lead-Zinc mine in Southwest China is considered as the case study. Rock density, P-wave velocity, and point load strength index are used as input
variables, and UCS is regarded as the output. Subsequently, twelve hybrid predictive models are obtained. Root mean square error (RMSE), mean absolute error (MAE), coefficient of determination (R2), and the proportion of the mean absolute percentage error less than 20% (A-20) are selected as the evaluation metrics. Experimental results showed that the hybrid model consisting of the extreme gradient boosting method and the artificial bee colony algorithm (XGBoost-ABC) achieved satisfactory results on the training dataset and exhibited the best generalization performance on the testing dataset. The values of R2, A-20, RMSE, and MAE on the training dataset are 0.98, 1.0, 3.11 MPa, and 2.23 MPa, respectively. The highest values of R2 and A-20 (0.93 and 0.96), and the smallest RMSE and MAE values of 4.78 MPa and 3.76 MPa, are observed on the testing dataset. The proposed hybrid model can be considered a reliable and effective method for predicting rock UCS in deep mines.
Graphic Abstract
ρ Density
VP P-wave velocity
Is50 Point load strength
UCS Uniaxial compressive strength
As one of the most critical parameters of rock strength, uniaxial compressive strength (UCS) is widely used in geotechnical engineering, tunneling, and mining engineering. The reliability of
acquiring rock UCS in situ directly influences subsequent operations, such as drilling, digging, blasting, and support. Typically, the UCS value of rock materials can be obtained by following the
well-established regulations of the International Society for Rock Mechanics (ISRM) [1] and the American Society for Testing Materials (ASTM) [2]. Laboratory testing has strict requirements for
specimen preparation. It is challenging to obtain high-quality rock samples from layered sedimentary rocks, highly weathered rocks, and fractured rock masses [3–6]. Moreover, the accuracy of
laboratory testing depends on the professionalism of the operators. For this reason, other highly efficient and straightforward methods for rock UCS prediction have been developed in various studies,
such as statistical methods (single regression analysis methods and multiple variable regression methods) and soft computing-based methods.
Several studies have employed statistical equations to investigate the relationship between a single variable and UCS [7,8]. Fener et al. [9] conducted several laboratory tests and developed an
equation for rock UCS prediction based on point load test results, achieving better performance with an R2 value of 0.85. Yasar et al. [10] investigated the relationships between hardness (Shore
Scleroscope hardness and Schmidt hammer hardness) and UCS, revealing that the hardness property showed a high correlation coefficient with rock UCS. Yilmaz [11] introduced a new testing method to
determine the UCS of rock, called the core strangle test (CST). The UCS prediction results obtained from CST were more accurate than those from the point load tests. Basu et al. [12] pointed out that
point load strength could be used to predict the UCS of anisotropic rocks, and in their study, the final R2 result reached 0.86. Khandelwal [13] adopted the linear regression method to fit the
relationships between P-wave velocity (Vp) and rock mechanical properties, indicating that Vp was highly correlated with UCS. Amirkiyaei et al. [14] developed a statistical model to predict the UCS
of building rock materials after a freeze-thaw operation. The porosity (n) and Vp of the fresh stone were used as input variables, and the model achieved acceptable accuracy. Moreover, the Schmidt
hammer rebound value (SHR), another nondestructive test index, was widely accepted for rock UCS prediction due to the data accessibility. Yagiz [15] investigated the correlation between SHR and UCS
on nine rock types. The developed equation showed that SHR was strongly correlated with UCS. Nazir et al. [16] proposed a new correlation to predict UCS based on L-type SHR, achieving an R2 value of
up to 0.91, but the correlation was not recommended for highly weathered rock. In another study, Wang et al. [17] also concluded that L-type SHR could be considered an effective parameter for
predicting UCS.
However, predicting UCS using a single related factor is not advised because rock strength is determined by a combination of physical and mechanical properties [5]. Multiple regression models have
shown better performance in rock UCS prediction than single regression methods. Azimian et al. [5] compared the prediction performance of UCS between the standalone regression model and the multiple
regression method. The R2 values of the single regression equations using only Vp and point load strength index (Is50) were 0.90 and 0.92, respectively. In contrast, the prediction result obtained
through the multiple regression model was more accurate (R2=0.94). Similarly, Minaeian et al. [18] compared the performance of a multivariate regression model with two single regression models in
estimating rock UCS. Vp and SHR were employed in the multiple regression method, and then used in the two single regression models, respectively. The variance accounted for (VAF%) for the
multivariate regression model and the two single prediction models were 97.24%, 94.34%, and 94.39%, respectively. Farhadian et al. [19] developed a simple nonlinear multiple regression method for
predicting rock UCS based on Vp and SHR, and the results showed that the model exhibited strong prediction capability. In addition, Mishra et al. [20] confirmed that the multiple regression method
outperformed the simple regression model in rock UCS prediction. Although the statistical methods presented above showed improved prediction performance, their ability to generalize to new datasets
is limited.
In recent years, artificial intelligence (AI) methods have become widely used in engineering to tackle complex nonlinear problems due to their superior capabilities. They are recommended for solving
other complex problems, such as earth pressure calculation and rock profile reconstruction [21,22]. Research reviews have revealed that AI-based methods have been successfully applied in areas such
as tunnel squeezing analysis [23,24], rock mass failure mode classification [25], rock lithology classification [26,27], and rock burst prediction and assessment [28–30]. The outstanding results
achieved by AI-based methods have garnered significant attention in the prediction of rock mechanical parameters. For example, Ghasemi et al. [31] proposed the M5P model tree method to predict the
UCS of carbonate rocks. SHR, n, dry unit weight, Vp, and slake durability index were considered as input parameters. Wang et al. [32] employed the random forest algorithm to estimate the UCS value
based on L-type SHR and Vp. The proposed method achieved comparable results to the measured values. Cao et al. [4] combined the extreme gradient boosting method with the firefly optimization
algorithm to predict rock UCS. Input data included rock dry density (ρ), Vp, and the proportion of crystals in the rocks (such as quartz (Qtz), biotite (Bi), plagioclase (Plg), chlorite (Chl), and
alkali feldspar (Kpr)). The model achieved high-precision results. Barzegar et al. [6] developed an ensemble tree-based method to estimate rock UCS using Vp, SHR, n, and Is50. The research results
revealed that the ensemble model outperformed both the standalone machine learning (ML) models and multivariate regression models. Jin et al. [33] created a hybrid model combining the extreme
learning machine approach with the grey wolf optimization algorithm. Four rock properties (n, Vp, SHR, Is50) were used. The hybrid model achieved better prediction performance (R2=0.951) on the
testing dataset than the other four models. Saedi et al. [34] investigated six input factors, including n, cylindrical punch index, block punch index, Brazilian tensile strength, Is50, and Vp, and
created six prediction models. The findings revealed that the fuzzy inference system and multivariate regression models outperformed the single regression technique. Li et al. [35] adopted six
optimization algorithms to enhance the random forest model’s performance for rock UCS prediction, employing five parameters: SHR, Vp, Is50, n, and ρ. The proposed model achieved excellent performance
with an R2 value of 0.9753 after optimization. Mahmoodzadeh et al. [36] compared six AI-based methods for rock UCS prediction using four input variables (n, SHR, Vp, and Is50) in the models. Among
all predictive models, the Gaussian process regression model performed the best (R2=0.9955). Skentou et al. [37] developed an artificial neural network model with three optimization algorithms.
Three parameters, such as pulse velocity, SHR, and effective n, were applied in their study, and the best hybrid model obtained outstanding performance (R2=0.9607).
AI-based methods offer distinct advantages over traditional empirical formulas for rock UCS prediction. Traditional approaches to determining UCS through laboratory testing face limitations. These
include challenges in obtaining high-quality rock samples from certain geological conditions, such as severely fragmented rock masses, in-situ core disking, and lower efficiency in high-quality rock
sample collection. Moreover, few studies have been conducted on predicting the UCS of rock materials in deep mines using boosting-based approaches. Therefore, this study aims to develop a simple and
robust rock UCS predictive model tailored for deep mining environments. The Lead-Zinc mine in Southwest China serves as the case study. Three easily accessible parameters ρ, Vp, and Is50 are used as
input parameters, with UCS considered the output. The main contents are as follows: (1) constructing a comprehensive database based on previous studies and new field data obtained from deep mines in
southwestern China; (2) developing a straightforward and reliable model for rock UCS prediction, integrating three boosting-based machine learning methods, adaptive boosting (AdaBoost), gradient
boosting (GBoost), extreme gradient boosting (XGBoost), and four optimization algorithms: Bayesian optimization algorithm (BOA), artificial bee colony (ABC), grey wolf optimization (GWO), and whale
optimization algorithm (WOA); (3) systematically comparing the performance of all hybrid models in rock UCS prediction.
This study considers boosting-based ML methods such as AdaBoost, GBoost, and XGBoost base models for rock UCS prediction. Boosting, an advanced ensemble method, combines several weaker learners to
create a strong learner, ensuring improved overall performance of the final model. In addition, four optimization algorithms, namely BOA, ABC, GWO, and WOA, are employed to obtain the optimal
parameters for all the base models. These algorithms are renowned for their robustness and efficiency in solving complex optimization problems. They excel in multidimensional and nonlinear search
spaces, providing effective solutions in various fields, from engineering to data science. Detailed descriptions of the Boosting-based approaches and the optimization algorithms are provided in the
subsequent sections.
2.1 Boosting-Based ML Algorithms
2.1.1 Adaptive Boosting
AdaBoost, a typical ensemble boosting algorithm, combines multiple weak learners to form a single strong learner. By adequately considering the weights of all learners, it produces a model that is
more accurate and less prone to overfitting. The procedure of the algorithm is outlined below:
(1) Determine the regression error rate of the weak learner.
• Obtain the maximum error:
$$E_t = \max_{i} |y_i - h_t(x_i)|, \quad i = 1, 2, \ldots, N \tag{1}$$
where $(x_i, y_i)$ denotes the input data; $N$ is the number of samples; $h_t(x)$ represents the weak learner of the $t$-th iteration.
• Estimate the relative error of each sample using the linear loss function:
$$e_{ti} = \frac{|y_i - h_t(x_i)|}{E_t} \tag{2}$$
• Determine the regression error rate:
$$\epsilon_t = \sum_{i=1}^{N} w_{ti}\, e_{ti} \tag{3}$$
where $w_{ti}$ is the weight of the sample $x_i$.
(2) Determine the weight coefficient $\alpha$ of the weak learner:
$$\alpha_t = \frac{\epsilon_t}{1 - \epsilon_t} \tag{4}$$
(3) Update the sample weights for iteration round $t+1$:
$$w_{t+1,i} = \frac{w_{ti}\,\alpha_t^{\,1 - e_{ti}}}{Z_t} \tag{5}$$
$$Z_t = \sum_{i=1}^{N} w_{ti}\,\alpha_t^{\,1 - e_{ti}}, \quad Z_t \text{ being the normalization factor} \tag{6}$$
(4) Build the final learner:
$$H(x) = \sum_{t=1}^{T} \ln\!\left(\frac{1}{\alpha_t}\right) f(x) \tag{7}$$
where $f(x)$ is the median value of $\alpha_t h_t(x)$, $t = 1, 2, \ldots, T$.
Additionally, the AdaBoost method requires only a few parameter adjustments, such as the decision tree depth, the number of iterations, and the regression loss function. Consequently, it has been
extensively used in various fields, including rock mass classification [38], rockburst prediction [39], rock strength estimation [40,41], and tunnel boring machine performance prediction [42].
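For readers who want to experiment, a minimal scikit-learn sketch of such a regressor is shown below; the feature matrix is synthetic and merely stands in for (ρ, Vp, Is50), and the learning rate and estimator count echo the values reported later in Section 5.1 rather than being prescriptive:

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor

rng = np.random.default_rng(0)
X = rng.random((100, 3))                                    # stand-in for (rho, Vp, Is50)
y = 50 + 30 * X.sum(axis=1) + rng.normal(0, 2, size=100)    # synthetic "UCS" values

# The default base learner is a CART decision tree; linear loss as in the text.
model = AdaBoostRegressor(n_estimators=6, learning_rate=0.197,
                          loss="linear", random_state=0)
model.fit(X, y)
print(model.predict(X[:3]))
```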
2.1.2 Gradient Boosting
The GBoost method, another ensemble algorithm, is inspired by gradient descent. It trains a new weak learner based on the negative gradient of the current model loss. This well-trained weak learner
is then combined with the existing model. The final model is constructed by repeating these accumulation steps, as shown below:
(1) Initialize the base learner:
$$H_0(x) = \arg\min_{c} \sum_{i=1}^{m} L(y_i, c)$$
where $L$ is the squared error loss function; $c$ is a constant; $y_i$ is the measured value; $m$ is the number of samples; $H_0(x)$ is the initial weak learner.
(2) Calculate the negative gradient:
$$r_{ti} = -\frac{\partial L(y_i,\, h_{t-1}(x_i))}{\partial h_{t-1}(x_i)}, \quad i = 1, 2, \ldots, m \tag{10}$$
Then, the leaf node regions $R_{tj}$ of the $t$-th regression tree ($J$ is the number of leaf nodes of regression tree $t$) can be obtained based on $(x_i, r_{ti})$.
(3) Obtain the optimal fitted value $c_{tj}$ for each leaf region $j = 1, 2, \ldots, J$, and update the strong learner:
$$c_{tj} = \arg\min_{c} \sum_{x_i \in R_{tj}} L(y_i,\, h_{t-1}(x_i) + c) \tag{11}$$
$$h_t(x) = h_{t-1}(x) + \sum_{j=1}^{J} c_{tj}\, I(x \in R_{tj}) \tag{12}$$
where $h_{t-1}(x)$ is the regression learner obtained after $t-1$ iterations and $I(\cdot)$ is the indicator function.
(4) Finally, the strong learner is obtained:
$$f(x) = f_T(x) = \sum_{t=1}^{T} \sum_{j=1}^{J} c_{tj}\, I(x \in R_{tj}) \tag{13}$$
An advantage of GBoost is its flexibility in choosing the loss function, allowing for the use of any continuously differentiable loss function. This characteristic makes the model more resilient to
noise. Due to these advantages, GBoost-based methods have achieved considerable success, resulting in many improved versions [43–45].
2.1.3 Extreme Gradient Boosting
XGBoost, an extension of the gradient boosting algorithm, was developed by Chen et al. [46]. This model has robust applications in classification and regression tasks. The primary idea behind the
XGBoost approach is to continually produce new trees, with each decision tree being updated based on the difference between the previous tree’s result and the target value, thereby minimizing model
bias. Given a dataset D={(xi,yi)}, where xi is the sample, the prediction result of the well-trained XGBoost model is calculated as follows:
$$\tilde{y}_i = \sum_{k=1}^{K} f_k(x_i) \tag{14}$$
where $x_i$ is the sample; $f_k(x_i)$ is the prediction result of the $k$-th tree for the sample $x_i$; $K$ is the number of decision trees; $\tilde{y}_i$ is the sum of the prediction results of all the decision trees. The model is trained by minimizing an objective that adds a regularization term to the training loss:
$$\mathrm{Obj} = \sum_{i=1}^{n} l(y_i, \tilde{y}_i) + \sum_{k=1}^{K} \Omega(f_k), \qquad \Omega(f_k) = \gamma T + \frac{1}{2}\lambda \sum_{j=1}^{T} w_j^2 \tag{15}$$
where $l(y_i, \tilde{y}_i)$ is the loss function, such as the mean square error or the cross-entropy loss; $y_i$ is the target value; $n$ is the number of samples; $\Omega(f_k)$ is the complexity of the tree; $T$ is the number of leaves; $w_j$ are the leaf scores, penalized through their squared $L_2$ norm; $\gamma$ and $\lambda$ are the factors that aim to change the complexity of the tree.
The XGBoost method combines the loss function and regularization factor into its objective function, resulting in higher generalization than other models, as shown in Eqs. (14), (15). Another
distinctive feature of the XGBoost method is its incorporation of a greedy algorithm and an approximation algorithm for searching the split nodes of the tree. The main optimization parameters are the
decision tree depth, the number of estimators, and the maximum features. Chang et al. [47] developed an effective model for credit risk assessment using the XGBoost method. Zhang et al. [48] used the
XGBoost method to forecast the undrained shear strength of soft clays. In another work, Nguyen-Sy et al. [49] used the XGBoost method to predict the UCS of concrete. The XGBoost model outperforms the
artificial neural network (ANN) model, support vector machine (SVM), and other ML methods.
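A comparable minimal sketch with the xgboost library is given below; the hyperparameter values are illustrative placeholders rather than the ABC-optimised settings of Table 4:

```python
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.random((100, 3))                                    # stand-in for (rho, Vp, Is50)
y = 50 + 30 * X.sum(axis=1) + rng.normal(0, 2, size=100)    # synthetic "UCS" values

model = XGBRegressor(n_estimators=200, max_depth=3,
                     subsample=0.8, colsample_bytree=0.8, reg_alpha=0.1)
model.fit(X, y)
print(model.predict(X[:3]))
```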
2.2 Optimization Algorithms
Optimization algorithms are usually designed to automatically find the best global solution within the given search space, shortening the model development cycle and ensuring model robustness. Hence,
four well-performing optimization algorithms, i.e., BOA, ABC, GWO, and WOA, were implemented in this study to optimize the parameters of the aforementioned boosting-based ML models for the prediction
of rock UCS.
2.2.1 Bayesian Optimization Algorithm
Compared to the commonly used grid search and random search algorithms, the BOA method, proposed by Pelikan et al. [50], fully utilizes prior information to find the parameters that maximize the
target function globally. The algorithm comprises two parts: (1) Gaussian process regression, which aims to determine the values of the mean and variance of the function at each point, and (2)
constructing the acquisition function, which is employed to obtain the search position of the next iteration, as shown in Fig. 1. BOA has the advantages of fewer iterations and a faster processing
speed and has been applied in several fields. Díaz et al. [51] studied the UCS prediction of jet grouting columns based on several ML algorithms and the BOA method. The optimized model obtained
significant improvement compared to existing works. Li et al. [26] proposed an intelligent model for rockburst prediction using BOA for hyperparameter optimization. Lahmiri et al. [52] used BOA to
obtain the optimal parameters of models for house price prediction. Bo et al. [53] developed an ensemble classifier model to assess tunnel squeezing hazards, with the optimal values of the seventeen
parameters obtained utilizing the BOA method. Additionally, Díaz et al. [54] investigated the correlations between activity and clayey soil properties. Thirty-five ML models were introduced in their
research to predict the activity using the clayey soil properties, with the BOA method being used to fine-tune the ML models’ hyperparameters, producing promising results.
2.2.2 Artificial Bee Colony Algorithm
The ABC method, developed for multivariate function optimization problems by Karaboga [56], divides bees in a colony into three groups (employed, onlookers, and scouts) based on task assignment [57],
as illustrated in Fig. 2. Employed bees are tasked with finding available food sources and gathering information. In contrast, onlookers collect good food sources based on data transferred from
employed bees and perform further searches for food. Scouts are responsible for finding valuable honey sources around the beehive. In mathematical terms, the food source represents the problem’s
solution, and the nectar level equates to the fitness value of the solution [58]. Parsajoo et al. [59] adopted the ABC method to tune and improve the model performance for rock brittleness index
prediction. Zhou et al. [60] recommended an intelligent model for rockburst risk assessment and applied the ABC method to obtain optimal hyperparameters for the model. The results revealed ABC to be
a valuable and successful strategy.
2.2.3 Grey Wolf Optimization Algorithm
GWO is a metaheuristic method developed by Mirjalili et al. [61]. The algorithm simulates grey wolf predation in nature, and the wolves are divided into four hierarchies: $\alpha$, $\beta$, $\delta$, and $w$; $\alpha$, $\beta$, and $\delta$ are the head wolves leading the other wolves ($w$) toward the destination, and the position of a wolf $w$ is updated based on $\alpha$, $\beta$, or $\delta$ using the following equations:
$$\vec{D} = \left|\vec{C}\cdot\vec{X}_p(t) - \vec{X}(t)\right|, \qquad \vec{X}(t+1) = \vec{X}_p(t) - \vec{A}\cdot\vec{D}$$
where $\vec{X}_p$ and $\vec{X}(t)$ are the prey and grey wolf position vectors, respectively; the vectors $\vec{A}$ and $\vec{C}$ are the coefficient vectors; $t$ is the current iteration.
The coefficient vectors $\vec{A}$ and $\vec{C}$ can be calculated as follows:
$$\vec{A} = 2\vec{a}\cdot\vec{r}_1 - \vec{a}, \qquad \vec{C} = 2\vec{r}_2$$
where $\vec{r}_1$ and $\vec{r}_2$ are vectors randomly distributed in $[0, 1]$; $\vec{a}$ is a convergence factor that drops linearly from 2 to 0 as the iterations increase.
During the iteration process, the best solution can be obtained by the head wolves $\alpha$, $\beta$, and $\delta$. A value of $|\vec{A}| > 1$ means that the candidate solution moves away from the prey, whereas $|\vec{A}| < 1$ indicates that the candidate solution approaches the prey. The flowchart is shown in Fig. 3. Several studies highlight the GWO method due to its simple structure, few parameters,
and easy implementation. Golafshani et al. [62] developed a model to estimate concrete UCS, demonstrating that the GWO-optimized model outperformed the original prediction model. Shariati et al. [63]
reported that integrating the GWO approach can greatly improve the model’s predictive capability.
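To make the update rule concrete, the following toy sketch performs a single GWO position update for one wolf; the positions and iteration counter are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, t, max_iter = 2, 10, 100
a = 2 - 2 * t / max_iter                  # convergence factor, decreasing from 2 to 0

X_p = np.array([1.0, 1.0])                # position of a head wolf (alpha, beta or delta)
X = np.array([0.2, -0.5])                 # current wolf position

A = 2 * a * rng.random(dim) - a           # A = 2*a*r1 - a
C = 2 * rng.random(dim)                   # C = 2*r2
D = np.abs(C * X_p - X)                   # D = |C*X_p - X|
X_new = X_p - A * D                       # X(t+1) = X_p - A*D
print(X_new)
```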
2.2.4 Whale Optimization Algorithm
WOA, a unique population intelligence optimization method mimicking whale-feeding behavior, was introduced by Mirjalili et al. [64]. The mathematical modeling process of WOA is comparable to that of
the GWO approach. However, a critical distinction between the two algorithms is that humpback whales complete their prey behaviors using either random whale individuals or the ideal individuals, as
well as the spiraling bubble-net mode. Zhou et al. [65] applied WOA to obtain optimal parameters for the SVM model for tunnel squeezing classification, achieving higher prediction accuracy. In
another study, Tien et al. [66] presented a model for predicting concrete UCS using various optimization techniques, with WOA-based optimization performing the best. Nguyen et al. [67] combined SVM
and WOA algorithms to create an intelligent model for predicting fly rock distance, demonstrating that the hybrid WOA-SVM model outperformed standalone models.
Rock samples collected from a deep lead-zinc ore mine in Yunnan Province, Southwest China, were used as the dataset for developing rock UCS prediction models. The current maximum mining depth of the
lead-zinc ore mine has exceeded 1,500 meters, and field rock sample collection operations were conducted at different sublevels, including lower and upper plates surrounding rock and ore, as shown in
Fig. 4. All drilling works were done using the KD-100 fully hydraulic pit drilling rig, a small and easy-to-operate machine propelled by compressed air.
After that, all the high-quality samples were made into standard specimens with dimensions of ∅50×100 mm following the instructions of ISRM and ASTM. Fig. 5 shows the well-processed standard
specimens. Then, the rock parameters ρ, Vp, Is50, and UCS were obtained through laboratory tests, as depicted in Fig. 6. The rock density was computed as ρ = m/V, and the P-wave velocity Vp was measured with the sonic parameter tester. Is50 was obtained through the irregular lump test, and the irregular blocks were collected from the same sublevels as the drilling, as shown in Fig. 7. The UCS test was
conducted on INSTRON 1346 equipment. Finally, 40 sets of physical and mechanical parameters of the rock were established from the deep mine, as shown in Table 1. In addition to the field dataset, 66
datasets provided in the study [68] were collected to expand the database, and thus, a total of 106 datasets were integrated as the final dataset to develop the models.
For regression prediction problems, correlation analysis between independent and dependent variables is always essential [69]. Fig. 8 shows the analysis results of the correlation between the
variables ρ, Vp, Is50, and UCS. There is a positive correlation between the input variables and the output parameter UCS, as can be seen in the last line of Fig. 8, where the values of the Pearson
correlation coefficient between ρ, Vp, Is50, and UCS are 0.53, 0.53, and 0.54, respectively. Moreover, the correlation between independent variables shows lower values, which indicates that the input
variables used in this paper for the prediction model development are reasonable and feasible. Additionally, the data distribution of each parameter is illustrated through violin plots, as shown in
Fig. 9, where the values of ρ, Vp, Is50, and UCS are evenly distributed.
Furthermore, to balance the quantity of training and testing datasets, half of the randomly selected field data and data acquired from the literature were used as training data (86 sets of data),
while the remaining field data were used as testing data (20 sets of data). The final ratio between the training and testing data is approximately 8:2.
Simultaneously, all data were normalized prior to model training due to the different magnitudes of the variables. The standardization process was as follows:
$$x^{*} = \frac{x - \mu}{\sigma}$$
where $x$ is the input variable; $\mu$ and $\sigma$ are the mean value and standard deviation of each variable.
The performance of all hybrid models was evaluated using four evaluation indices: root mean square error (RMSE), mean absolute error (MAE), R2, and A-20. Typically, lower values of RMSE and MAE
indicate a better model, suggesting that the model’s predictions are closer to the actual values. Conversely, a larger R2 value signifies a more robust model, with a maximum value of 1. The value of
A-20 equals the proportion of samples where the mean absolute percentage error between the predicted and actual values is less than 20 percent, as shown in Eq. (25). A larger A-20 value indicates
more accurate model predictions.
where yi, yi′, and y¯ are the target value, prediction result, and the average of all the target values, respectively; M is the number of the dataset. errors is the mean absolute percentage error
between the predicted value and the actual value.
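A short sketch of how these four metrics can be computed is given below; the arrays contain made-up numbers used only to show the calculation:

```python
import numpy as np

y_true = np.array([120.0, 85.0, 60.0, 95.0, 140.0])   # measured UCS (MPa)
y_pred = np.array([115.0, 90.0, 70.0, 93.0, 150.0])   # model predictions (MPa)

rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
mae = np.mean(np.abs(y_true - y_pred))
r2 = 1 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
a20 = np.mean(np.abs(y_true - y_pred) / y_true < 0.20)  # share of errors below 20%

print(rmse, mae, r2, a20)
```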
5 Development of the Prediction Models
This study used boosting-based ML algorithms, including AdaBoost, GBoost, and XGBoost, to predict rock UCS. In addition, four optimization approaches, BOA, ABC, GWO, and WOA, were employed to
determine the best parameters for all boosting models, ensuring prediction accuracy. The hybrid models were then tested on testing data, providing the optimal rock UCS prediction model. Fig. 10
depicts the flowchart for hybrid model construction.
5.1 Development of the AdaBoost Model
For the AdaBoost method, the default parameters for regression problems include the base estimator, the number of estimators, the learning rate, and the loss function. The number of estimators and
the learning rate were the optimization parameters. However, the base estimator and the loss function were set to their default values (CART decision tree and linear loss function). Additionally, the
population size values for the ABC, GWO, and WOA optimization algorithms ranged from 10 to 50, with 5-unit intervals, and the total number of iterations was 100.
To obtain the optimal hyperparameters of the AdaBoost model, the R2 and RMSE values of different AdaBoost hybrid models during training were summarized, as shown in Figs. 11 and 12. Fig. 11 shows the
R2 values for the four hybrid models trained under different population sizes. The AdaBoost-GWO and AdaBoost-WOA models achieved the highest R2 values at population sizes of 15 and 40, respectively.
The AdaBoost-ABC model performed equally well at population sizes of 40 and 45 but converged faster at 40. The AdaBoost-BOA model was also trained with the same number of iterations, as shown in Fig.
11d. Moreover, the RMSE values for all hybrid models were calculated, and the results are presented in Fig. 12. Based on these results, the optimal hyperparameters of the AdaBoost model using the
four optimization algorithms are listed in Table 2. The corresponding prediction performances of the four optimized hybrid models on training and testing datasets are displayed in Fig. 13. The bars
with solid color filling indicate the prediction capabilities of the hybrid models on training data, while the bars with slash-filling represent the model prediction performance on testing data. The
AdaBoost-ABC and AdaBoost-GWO models exhibited the most robust prediction abilities, with the highest R2 and A-20 values (0.60 and 0.85), and the lowest RMSE and MAE values (11.45 MPa and 10.30MPa,
respectively) on the testing datasets. Therefore, the optimal parameters of the AdaBoost method were determined based on the ABC and GWO optimization algorithms, with the best learning rate being
0.197 and the optimal number of estimators rounded to 6.
5.2 Development of the GBoost Model
Compared to the AdaBoost model, the GBoost model required fine-tuning of more hyperparameters, including the learning rate, the number of estimators, maximum decision tree depth, minimum sample split
node, and minimum sample leaf node. Similarly, the training process and the corresponding results of R2 and RMSE for all the hybrid GBoost models were obtained. From Figs. 14 and 15, it was evident
that the hybrid models GBoost-ABC, GBoost-GWO, and GBoost-WOA achieved satisfactory results in terms of R2 and RMSE at the population sizes of 45, 30, and 20, respectively. The GBoost-BOA model also
yielded better results with the increase in iterations. Subsequently, four different sets of hyperparameters for the GBoost model were obtained. The prediction capabilities of the four optimized
hybrid models were comprehensively analyzed, as shown in Fig. 16. It could be seen that the R2 results for all the optimized hybrid models on the training data were excellent, especially for the
hybrid models GBoost-GWO and GBoost-WOA, where the values of R2 were almost equal to 1. However, combined with the results of the other two indicators, RMSE and MAE, it was found that the hybrid
models GBoost-GWO and GBoost-WOA showed poor generalization performance on the testing data. The hybrid models GBoost-ABC and GBoost-BOA, on the contrary, performed well on the training datasets and
obtained comparable results on the testing datasets. Moreover, as shown in Fig. 16, the overall performance of the hybrid model GBoost-BOA was superior to that of GBoost-ABC. The RMSE and MAE results
for the hybrid model GBoost-BOA were lower than those for GBoost-ABC on the testing datasets. Therefore, the hybrid model GBoost-BOA was considered the best compared to the other GBoost hybrid
models. Table 3 shows the optimum hyperparameters for the GBoost model using the BOA method.
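To make the tuning problem concrete, the sketch below (Python/scikit-learn) defines the five GBoost hyperparameters listed above together with the cross-validated RMSE objective that any of the four optimizers (ABC, GWO, WOA, BOA) would minimize. The value ranges are illustrative assumptions rather than the bounds used in the paper, and X_train, y_train are placeholders.

# Illustrative GBoost search space and objective for a metaheuristic optimizer.
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

search_space = {                      # assumed ranges, for illustration only
    "learning_rate":     (0.01, 0.5),
    "n_estimators":      (10, 500),
    "max_depth":         (2, 10),
    "min_samples_split": (2, 20),
    "min_samples_leaf":  (1, 20),
}

def objective(params):
    # Cross-validated RMSE for one candidate hyperparameter vector (lower is better).
    model = GradientBoostingRegressor(
        learning_rate=params["learning_rate"],
        n_estimators=int(params["n_estimators"]),
        max_depth=int(params["max_depth"]),
        min_samples_split=int(params["min_samples_split"]),
        min_samples_leaf=int(params["min_samples_leaf"]),
        random_state=0,
    )
    scores = cross_val_score(model, X_train, y_train,
                             scoring="neg_root_mean_squared_error", cv=5)
    return -scores.mean()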
5.3 Development of the XGBoost Model
Finally, the hyperparameters of the XGBoost method, such as the number of estimators, maximum decision tree depth, maximum features, tree colsample, regression alpha, and subsample, were confirmed by
using the four optimization methods. Figs. 17 and 18 show the corresponding training results for the hybrid models. The hybrid models XGBoost-ABC, XGBoost-GWO, and XGBoost-WOA obtained the best
results at 35, 30, and 15 population sizes, respectively. The hybrid model XGBoost-BOA also performed robustly during the training process. Fig. 19 presents the prediction performance of each hybrid
model on the training and testing datasets. The hybrid model XGBoost-ABC exhibited the strongest robustness on both training and testing data, with the evaluation index R2 values of 0.98 and 0.93,
respectively, on the training and testing datasets. Meanwhile, the RMSE and MAE values were the lowest on the testing dataset. Thus, the hybrid model XGBoost-ABC was deemed the best prediction model,
with the optimal population size of the ABC optimization method being 35, as shown in Fig. 17a. Table 4 summarizes the hyperparameters of the XGBoost method obtained through training with the ABC
optimization algorithm.
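As a rough illustration of the search just described, the sketch below (Python) evaluates candidate XGBoost hyperparameter vectors with a population of 35 over 100 iterations, the values reported in the text. A plain random search stands in for the employed-bee, onlooker and scout phases of the real ABC algorithm; the parameter ranges are assumptions, and X_train, y_train are placeholders.

# Simplified stand-in for the ABC search over XGBoost hyperparameters.
import numpy as np
from xgboost import XGBRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
bounds = {                              # assumed ranges, for illustration only
    "n_estimators":     (50, 500),
    "max_depth":        (2, 10),
    "colsample_bytree": (0.5, 1.0),
    "reg_alpha":        (0.0, 1.0),
    "subsample":        (0.5, 1.0),
}

def rmse(params):
    model = XGBRegressor(
        n_estimators=int(params["n_estimators"]),
        max_depth=int(params["max_depth"]),
        colsample_bytree=params["colsample_bytree"],
        reg_alpha=params["reg_alpha"],
        subsample=params["subsample"],
    )
    scores = cross_val_score(model, X_train, y_train,
                             scoring="neg_root_mean_squared_error", cv=5)
    return -scores.mean()

best, best_score = None, float("inf")
for _ in range(100):                    # iteration budget from the text
    for _ in range(35):                 # population size from the text
        cand = {k: rng.uniform(lo, hi) for k, (lo, hi) in bounds.items()}
        score = rmse(cand)
        if score < best_score:
            best, best_score = cand, score
print(best, best_score)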
6 Model Prediction Performance Analysis and Discussion
Taylor diagrams [70] were employed to discuss the predictive model’s performance on the training and testing datasets. In Fig. 20, the markers indicate different models; the radial direction
represents the correlation coefficient; the X-axis indicates the standard deviation, unnormalized; and the green dotted curves reflect the centered RMSE. The reference point with a black Pentastar
indicates the actual UCS and other markers closer to this point denote better performance. The hybrid AdaBoost models show poor performance on both training and testing datasets. The hybrid GBoost
and XGBoost models achieve acceptable results on the training datasets. However, hybrid XGBoost models perform better in the testing stage than GBoost models. In summary, XGBoost-ABC performs best
compared to XGBoost-GWO, XGBoost-WOA, and XGBoost-BOA.
Fig. 21 shows the prediction performance comparison of the optimal hybrid model for each boosting method. The results of the three evaluation indices (R2, RMSE, MAE) for AdaBoost-ABC and GBoost-BOA are 0.60, 11.45 MPa, and 10.30 MPa, and 0.87, 6.59 MPa, and 5.25 MPa, respectively. The XGBoost-ABC hybrid model achieved the highest R2 = 0.93 and the smallest RMSE and MAE (4.78 MPa and 3.76 MPa). The UCS prediction results of
the hybrid model XGBoost-ABC on the testing dataset are shown in Fig. 22, where the red solid line represents the model prediction results, and the blue dotted line indicates the measured values.
Sensitivity analysis was employed to better understand the intrinsic relationships between the selected independent variables and rock UCS. The relevancy factor, a commonly used method to illustrate
the sensitivity scale [71,72], was applied in this paper to assess the effect of each variable on UCS. The greater the absolute value of the relevancy factor between the independent and dependent
variables, the stronger the influence. The calculation process of the sensitivity relevancy factor (SRF) is as follows:
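A common form of the relevancy factor, consistent with the notation defined below, is the correlation-type ratio
r_l = Σ_{i=1..n} (x_(l,i) − x̄_l)(y_i − ȳ) / sqrt[ Σ_{i=1..n} (x_(l,i) − x̄_l)² × Σ_{i=1..n} (y_i − ȳ)² ]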
where x̄_l is the mean value of all data for variable l (here l ranges over the input variables ρ, VP, and Is50); x_(l,i) is the i-th value of variable l; n is the number of data points; and y_i and ȳ are the i-th UCS value and the average of the prediction results, respectively.
The results showed that the most influential parameter on UCS is the point load strength index (Is50), with a relevancy factor of 1.0, followed by the P-wave velocity (VP) at 0.3 and the density (ρ) at 0.03. However, it
should be noted that this importance ranking of all input variables is only for the data used in this study and cannot be used as a general criterion.
On the other hand, other intelligent algorithms, including random forest and artificial neural network (ANN), were trained with the same datasets to verify the superiority of the XGBoost-ABC model.
The ANN model structure was 3-7-4-1, i.e., two hidden layers with 7 and 4 neurons, respectively. The prediction results of the two models are shown in Fig. 23, and their comparison with the hybrid model XGBoost-ABC is presented in Table 5. The results demonstrate that the hybrid model XGBoost-ABC proposed in this paper performs better.
Based on the findings mentioned above, the proposed model XGBoost-ABC achieved an acceptable UCS prediction result. Fig. 24 presents the developed Graphical User Interface (GUI), which engineers can
use as a portable tool to estimate the UCS of rock materials in deep mines. Nevertheless, it is essential to note that the developed model in this study is designed to address UCS prediction of rock
in deep mining environments with three parameters: rock density, P-wave velocity, and point load strength.
In this research, a total of 106 samples were employed to investigate the mechanical properties of rocks in underground mines. Among them, 40 sets of data were taken from a deep lead-zinc mine in Southwest China, which can be regarded as a valuable database for investigating the mechanical properties of rocks in deep underground engineering. Three boosting-based models and four optimization algorithms were implemented to develop intelligent models for rock UCS prediction based on the established dataset. Based on the comparison results, it was found that the proposed hybrid model XGBoost-ABC exhibited superior prediction performance compared to the other models, with the highest R2 values of 0.98 and 0.93, the smallest RMSE values of 3.11 MPa and 4.78 MPa, and the smallest MAE values of 2.23 MPa and 3.76 MPa on the training and testing datasets, respectively.
Overall, the proposed hybrid model achieves promising prediction accuracy on the data presented in this study. However, it is suggested that the model be fine-tuned on other datasets to ensure model
prediction accuracy. In addition, more real-world data can be supplemented to enhance the robustness of the model. Finally, other physical and mechanical parameters can also be considered to develop
rock strength prediction models in the future.
Acknowledgement: The authors wish to express their appreciation to the reviewers for their helpful suggestions, which greatly improved the presentation of this paper.
Funding Statement: This research is supported by the National Natural Science Foundation of China (Grant No. 52374153).
Author Contributions: Junjie Zhao: Writing–original draft, coding, model training. Diyuan Li: Supervision, writing review. Jingtai Jiang, Pingkuang Luo: Data collection and process. All authors
reviewed the results and approved the final version of the manuscript.
Availability of Data and Materials: The datasets are available from the corresponding author upon reasonable request. https://github.com/cs-heibao/UCS_Prediction_GUI.
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
1. ISRM. (1981). Rock characterization testing and monitoring. ISRM suggested methods. Oxford: Pergamon Press. [Google Scholar]
2. ASTM (1986). Standard test method of unconfined compressive strength of intact rock core specimens. ASTM Standard. 04.08 (D 2938). [Google Scholar]
3. Jamshidi, A. (2022). A comparative study of point load index test procedures in predicting the uniaxial compressive strength of sandstones. Rock Mechanics and Rock Engineering, 55(7), 4507–4516. [
Google Scholar]
4. Cao, J., Gao, J., Nikafshan Rad, H., Mohammed, A. S., Hasanipanah, M. et al. (2022). A novel systematic and evolved approach based on xgboost-firefly algorithm to predict young’s modulus and
unconfined compressive strength of rock. Engineering with Computers, 38(5), 3829–3845. [Google Scholar]
5. Azimian, A., Ajalloeian, R., Fatehi, L. (2014). An empirical correlation of uniaxial compressive strength with P-wave velocity and point load strength index on marly rocks using statistical
method. Geotechnical and Geological Engineering, 32, 205–214. [Google Scholar]
6. Barzegar, R., Sattarpour, M., Deo, R., Fijani, E., Adamowski, J. (2020). An ensemble tree-based machine learning model for predicting the uniaxial compressive strength of travertine rocks. Neural
Computing and Applications, 32, 9065–9080. [Google Scholar]
7. Yılmaz, I., Sendır, H. (2002). Correlation of schmidt hardness with unconfined compressive strength and Young’s modulus in gypsum from Sivas (Turkey). Engineering Geology, 66(3–4), 211–219. [
Google Scholar]
8. Kahraman, S. A. I. R., Gunaydin, O., Fener, M. (2005). The effect of porosity on the relation between uniaxial compressive strength and point load index. International Journal of Rock Mechanics
and Mining Sciences, 42(4), 584–589. [Google Scholar]
9. Fener, M. U. S. T. A. F. A., Kahraman, S. A. İ. R., Bilgil, A., Gunaydin, O. (2005). A comparative evaluation of indirect methods to estimate the compressive strength of rocks. Rock Mechanics and
Rock Engineering, 38, 329–343. [Google Scholar]
10. Yaşar, E., Erdoğan, Y. (2004). Estimation of rock physicomechanical properties using hardness methods. Engineering Geology, 71(3–4), 281–288. [Google Scholar]
11. Yilmaz, I. (2009). A new testing method for indirect determination of the unconfined compressive strength of rocks. International Journal of Rock Mechanics and Mining Sciences, 46(8), 1349–1357.
[Google Scholar]
12. Basu, A., Kamran, M. (2010). Point load test on schistose rocks and its applicability in predicting uniaxial compressive strength. International Journal of Rock Mechanics and Mining Sciences, 47
(5), 823–828. [Google Scholar]
13. Khandelwal, M. (2013). Correlating P-wave velocity with the physico-mechanical properties of different rocks. Pure and Applied Geophysics, 170, 507–514. [Google Scholar]
14. Amirkiyaei, V., Ghasemi, E., Faramarzi, L. (2021). Estimating uniaxial compressive strength of carbonate building stones based on some intact stone properties after deterioration by freeze–thaw.
Environmental Earth Sciences, 80(9), 352. [Google Scholar]
15. Yagiz, S. (2009). Predicting uniaxial compressive strength, modulus of elasticity and index properties of rocks using the Schmidt hammer. Bulletin of Engineering Geology and the Environment, 68,
55–63. [Google Scholar]
16. Nazir, R., Momeni, E., Armaghani, D. J., Amin, M. F. M. (2013). Prediction of unconfined compressive strength of limestone rock samples using L-type Schmidt hammer. Electronic Journal of
Geotechnical Engineering, 18, 1767–1775. [Google Scholar]
17. Wang, M., Wan, W. (2019). A new empirical formula for evaluating uniaxial compressive strength using the schmidt hammer test. International Journal of Rock Mechanics and Mining Sciences, 123,
104094. [Google Scholar]
18. Minaeian, B., Ahangari, K. (2013). Estimation of uniaxial compressive strength based on P-wave and Schmidt hammer rebound using statistical method. Arabian Journal of Geosciences, 6, 1925–1931. [
Google Scholar]
19. Farhadian, A., Ghasemi, E., Hoseinie, S. H., Bagherpour, R. (2022). Prediction of rock abrasivity index (RAI) and uniaxial compressive strength (UCS) of granite building stones using
nondestructive tests. Geotechnical and Geological Engineering, 40(6), 3343–3356. [Google Scholar]
20. Mishra, D. A., Basu, A. (2013). Estimation of uniaxial compressive strength of rock materials by index tests using regression analysis and fuzzy inference system. Engineering Geology, 160, 54–68.
[Google Scholar]
21. Zhu, J., Chang, X., Zhang, X., Su, Y., Long, X. (2022). A novel method for the reconstruction of road profiles from measured vehicle responses based on the Kalman filter method. Computer Modeling
in Engineering & Sciences, 130(3), 1719–1735. https://doi.org/10.32604/cmes.2022.019140 [Google Scholar] [CrossRef]
22. Chen, Q., Xu, C., Zou, B., Luo, Z., Xu, C. et al. (2023). Earth pressure of the trapdoor problem using three-dimensional discrete element method. Computer Modeling in Engineering & Sciences, 135
(2), 1503–1520. https://doi.org/10.32604/cmes.2022.022823 [Google Scholar] [CrossRef]
23. Ghasemi, E., Gholizadeh, H. (2019). Prediction of squeezing potential in tunneling projects using data mining-based techniques. Geotechnical and Geological Engineering, 37, 1523–1532. [Google Scholar]
24. Kadkhodaei, M. H., Ghasemi, E., Mahdavi, S. (2023). Modelling tunnel squeezing using gene expression programming: A case study. Proceedings of the Institution of Civil Engineers-Geotechnical
Engineering, 176(6), 567–581. [Google Scholar]
25. Liu, Z. D., Li, D. Y. (2023). Intelligent hybrid model to classify failure modes of overstressed rock masses in deep engineering. Journal of Central South University, 30(1), 156–174. [Google Scholar]
26. Li, D., Zhao, J., Liu, Z. (2022). A novel method of multitype hybrid rock lithology classification based on convolutional neural networks. Sensors, 22(4), 1574. [Google Scholar] [PubMed]
27. Li, D., Zhao, J., Ma, J. (2022). Experimental studies on rock thin-section image classification by deep learning-based approaches. Mathematics, 10(13), 2317. [Google Scholar]
28. Kadkhodaei, M. H., Ghasemi, E., Sari, M. (2022). Stochastic assessment of rockburst potential in underground spaces using Monte Carlo simulation. Environmental Earth Sciences, 81(18), 447. [
Google Scholar]
29. Kadkhodaei, M. H., Ghasemi, E. (2022). Development of a semi-quantitative framework to assess rockburst risk using risk matrix and logistic model tree. Geotechnical and Geological Engineering, 40
(7), 3669–3685. [Google Scholar]
30. Ghasemi, E., Gholizadeh, H., Adoko, A. C. (2020). Evaluation of rockburst occurrence and intensity in underground structures using decision tree approach. Engineering with Computers, 36, 213–225.
[Google Scholar]
31. Ghasemi, E., Kalhori, H., Bagherpour, R., Yagiz, S. (2018). Model tree approach for predicting uniaxial compressive strength and Young’s modulus of carbonate rocks. Bulletin of Engineering
Geology and the Environment, 77, 331–343. [Google Scholar]
32. Wang, M., Wan, W., Zhao, Y. (2020). Prediction of uniaxial compressive strength of rocks from simple index tests using random forest predictive model. Comptes Rendus Mecanique, 348(1), 3–32. [
Google Scholar]
33. Jin, X., Zhao, R., Ma, Y. (2022). Application of a hybrid machine learning model for the prediction of compressive strength and elastic modulus of rocks. Minerals, 12(12), 1506. [Google Scholar]
34. Saedi, B., Mohammadi, S. D., Shahbazi, H. (2019). Application of fuzzy inference system to predict uniaxial compressive strength and elastic modulus of migmatites. Environmental Earth Sciences,
78, 1–14. [Google Scholar]
35. Li, J., Li, C., Zhang, S. (2022). Application of six metaheuristic optimization algorithms and random forest in the uniaxial compressive strength of rock prediction. Applied Soft Computing, 131,
109729. [Google Scholar]
36. Mahmoodzadeh, A., Mohammadi, M., Ibrahim, H. H., Abdulhamid, S. N., Salim, S. G. et al. (2021). Artificial intelligence forecasting models of uniaxial compressive strength. Transportation
Geotechnics, 27, 100499. [Google Scholar]
37. Skentou, A. D., Bardhan, A., Mamou, A., Lemonis, M. E., Kumar, G. et al. (2023). Closed-form equation for estimating unconfined compressive strength of granite from three nondestructive tests
using soft computing models. Rock Mechanics and Rock Engineering, 56(1), 487–514. [Google Scholar]
38. Liu, Q., Wang, X., Huang, X., Yin, X. (2020). Prediction model of rock mass class using classification and regression tree integrated AdaBoost algorithm based on TBM driving data. Tunnelling and
Underground Space Technology, 106, 103595. [Google Scholar]
39. Wang, S. M., Zhou, J., Li, C. Q., Armaghani, D. J., Li, X. B. et al. (2021). Rockburst prediction in hard rock mines developing bagging and boosting tree-based ensemble techniques. Journal of
Central South University, 28(2), 527–542. [Google Scholar]
40. Liu, Z., Armaghani, D. J., Fakharian, P., Li, D., Ulrikh, D. V. et al. (2022). Rock strength estimation using several tree-based ML techniques. Computer Modeling in Engineering & Sciences, 133(3)
, 799–824. https://doi.org/10.32604/cmes.2022.021165 [Google Scholar] [CrossRef]
41. Liu, Z., Li, D., Liu, Y., Yang, B., Zhang, Z. X. (2023). Prediction of uniaxial compressive strength of rock based on lithology using stacking models. Rock Mechanics Bulletin, 2(4), 100081. [
Google Scholar]
42. Zhang, Q., Hu, W., Liu, Z., Tan, J. (2020). TBM performance prediction with Bayesian optimization and automated machine learning. Tunnelling and Underground Space Technology, 103, 103493. [Google Scholar]
43. Chen, T., Guestrin, C. (2016). Xgboost: A scalable tree boosting system. Proceedings of the 22nd ACM Sigkdd International Conference on Knowledge Discovery and Data Mining, pp. 785–794. San
Francisco, CA, USA. [Google Scholar]
44. Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W. et al. (2017). Lightgbm: A highly efficient gradient boosting decision tree. Proceedings of the Advances in Neural Information Processing Systems,
pp. 3147–3155. Long Beach, CA, USA. [Google Scholar]
45. Prokhorenkova, L., Gusev, G., Vorobev, A., Dorogush, A. V., Gulin, A. (2018). CatBoost: Unbiased boosting with categorical features. Proceedings of the 32nd International Conference on Neural
Information Processing Systems, vol. 31, pp. 6638–6648. Montréal, QC, Canada. [Google Scholar]
46. Chen, T., He, T., Benesty, M., Khotilovich, V., Tang, Y. et al. (2015). xgboost: eXtreme gradient boosting. R Package Version 0.4-2, 1(4), 1–4. [Google Scholar]
47. Chang, Y. C., Chang, K. H., Wu, G. J. (2018). Application of eXtreme gradient boosting trees in the construction of credit risk assessment models for financial institutions. Applied Soft
Computing, 73, 914–920. [Google Scholar]
48. Zhang, W., Wu, C., Zhong, H., Li, Y., Wang, L. (2021). Prediction of undrained shear strength using extreme gradient boosting and random forest based on Bayesian optimization. Geoscience
Frontiers, 12(1), 469–477. [Google Scholar]
49. Nguyen-Sy, T., Wakim, J., To, Q. D., Vu, M. N., Nguyen, T. D. et al. (2020). Predicting the compressive strength of concrete from its compositions and age using the extreme gradient boosting
method. Construction and Building Materials, 260, 119757. [Google Scholar]
50. Pelikan, M., Goldberg, D. E., Cantú-Paz, E. (1999). BOA: The Bayesian optimization algorithm. Proceedings of the Genetic and Evolutionary Computation Conference GECCO-99, vol. 1, pp. 525–532.
Orlando, FL, USA. [Google Scholar]
51. Díaz, E., Salamanca-Medina, E. L., Tomás, R. (2023). Assessment of compressive strength of jet grouting by machine learning. Journal of Rock Mechanics and Geotechnical Engineering, 16, 102–111. [
Google Scholar]
52. Lahmiri, S., Bekiros, S., Avdoulas, C. (2023). A comparative assessment of machine learning methods for predicting housing prices using Bayesian optimization. Decision Analytics Journal, 6,
100166. [Google Scholar]
53. Bo, Y., Huang, X., Pan, Y., Feng, Y., Deng, P. et al. (2023). Robust model for tunnel squeezing using Bayesian optimized classifiers with partially missing database. Underground Space, 10,
91–117. [Google Scholar]
54. Díaz, E., Spagnoli, G. (2023). Gradient boosting trees with Bayesian optimization to predict activity from other geotechnical parameters. Marine Georesources & Geotechnology, 1–11. [Google Scholar]
55. Greenhill, S., Rana, S., Gupta, S., Vellanki, P., Venkatesh, S. (2020). Bayesian optimization for adaptive experimental design: A review. IEEE Access, 8, 13937–13948. [Google Scholar]
56. Karaboga, D. (2005). An idea based on honey bee swarm for numerical optimization. In: Technical report-TR06, Erciyes University, Engineering Faculty, Computer Engineering Department. [Google Scholar]
57. Bharti, K. K., Singh, P. K. (2016). Chaotic gradient artificial bee colony for text clustering. Soft Computing, 20, 1113–1126. [Google Scholar]
58. Asteris, P. G., Nikoo, M. (2019). Artificial bee colony-based neural network for the prediction of the fundamental period of infilled frame structures. Neural Computing and Applications, 31(9),
4837–4847. [Google Scholar]
59. Parsajoo, M., Armaghani, D. J., Asteris, P. G. (2022). A precise neuro-fuzzy model enhanced by artificial bee colony techniques for assessment of rock brittleness index. Neural Computing and
Applications, 34, 3263–3281. [Google Scholar]
60. Zhou, J., Koopialipoor, M., Li, E., Armaghani, D. J. (2020). Prediction of rockburst risk in underground projects developing a neuro-bee intelligent system. Bulletin of Engineering Geology and
the Environment, 79, 4265–4279. [Google Scholar]
61. Mirjalili, S., Mirjalili, S. M., Lewis, A. (2014). Grey wolf optimizer. Advances in Engineering Software, 69, 46–61. [Google Scholar]
62. Golafshani, E. M., Behnood, A., Arashpour, M. (2020). Predicting the compressive strength of normal and high-performance concretes using ANN and ANFIS hybridized with grey wolf optimizer.
Construction and Building Materials, 232, 117266. [Google Scholar]
63. Shariati, M., Mafipour, M. S., Ghahremani, B., Azarhomayun, F., Ahmadi, M. et al. (2022). A novel hybrid extreme learning machine–grey wolf optimizer (ELM-GWO) model to predict compressive
strength of concrete with partial replacements for cement. Engineering with Computers, 38, 757–779. [Google Scholar]
64. Mirjalili, S., Lewis, A. (2016). The whale optimization algorithm. Advances in Engineering Software, 95, 51–67. [Google Scholar]
65. Zhou, J., Zhu, S., Qiu, Y., Armaghani, D. J., Zhou, A. et al. (2022). Predicting tunnel squeezing using support vector machine optimized by whale optimization algorithm. Acta Geotechnica, 17(4),
1343–1366. [Google Scholar]
66. Tien Bui, D., Abdullahi, M. A. M., Ghareh, S., Moayedi, H., Nguyen, H. (2021). Fine-tuning of neural computing using whale optimization algorithm for predicting compressive strength of concrete.
Engineering with Computers, 37, 701–712. [Google Scholar]
67. Nguyen, H., Bui, X. N., Choi, Y., Lee, C. W., Armaghani, D. J. (2021). A novel combination of whale optimization algorithm and support vector machine with different kernel functions for
prediction of blasting-induced fly-rock in quarry mines. Natural Resources Research, 30, 191–207. [Google Scholar]
68. Momeni, E., Armaghani, D. J., Hajihassani, M., Amin, M. F. M. (2015). Prediction of uniaxial compressive strength of rock samples using hybrid particle swarm optimization-based artificial neural
networks. Measurement, 60, 50–63. [Google Scholar]
69. Lei, Y., Zhou, S., Luo, X., Niu, S., Jiang, N. (2022). A comparative study of six hybrid prediction models for uniaxial compressive strength of rock based on swarm intelligence optimization
algorithms. Frontiers in Earth Science, 10, 930130. [Google Scholar]
70. Taylor, K. E. (2001). Summarizing multiple aspects of model performance in a single diagram. Journal of Geophysical Research: Atmospheres, 106, 7183–7192. [Google Scholar]
71. Chen, G., Fu, K., Liang, Z., Sema, T., Li, C. et al. (2014). The genetic algorithm based back propagation neural network for MMP prediction in CO2-EOR process. Fuel, 126, 202–212. [Google Scholar]
72. Bayat, P., Monjezi, M., Mehrdanesh, A., Khandelwal, M. (2021). Blasting pattern optimization using gene expression programming and grasshopper optimization algorithm to minimise blast-induced
ground vibrations. Engineering with Computers, 38, 3341–3350. [Google Scholar]
Cite This Article
APA Style
Zhao, J., Li, D., Jiang, J., Luo, P. (2024). Uniaxial compressive strength prediction for rock material in deep mine using boosting-based machine learning methods and optimization algorithms.
Computer Modeling in Engineering & Sciences, 140(1), 275-304. https://doi.org/10.32604/cmes.2024.046960
Vancouver Style
Zhao J, Li D, Jiang J, Luo P. Uniaxial compressive strength prediction for rock material in deep mine using boosting-based machine learning methods and optimization algorithms. Comput Model Eng Sci.
2024;140(1):275-304 https://doi.org/10.32604/cmes.2024.046960
IEEE Style
J. Zhao, D. Li, J. Jiang, and P. Luo, “Uniaxial Compressive Strength Prediction for Rock Material in Deep Mine Using Boosting-Based Machine Learning Methods and Optimization Algorithms,” Comput.
Model. Eng. Sci., vol. 140, no. 1, pp. 275-304, 2024. https://doi.org/10.32604/cmes.2024.046960
This work is licensed under a Creative
Commons Attribution 4.0 International License
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | {"url":"https://www.techscience.com/CMES/v140n1/56186/html","timestamp":"2024-11-10T20:24:36Z","content_type":"application/xhtml+xml","content_length":"228807","record_id":"<urn:uuid:c6de9837-2164-4444-9c22-e9e236636433>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00567.warc.gz"} |
Probability Foundation for Electrical Engineers
Probability Foundation for Electrical Engineers. Instructor: Prof. Krishna Jagannathan, Department of Electrical Engineering, IIT Madras. This is a graduate level class on probability theory, geared
towards students who are interested in a rigorous development of the subject. It is likely to be useful for students specializing in communications, networks, signal processing, stochastic control,
machine learning, and related areas. In general, the course is not so much about computing probabilities, expectations, densities etc. Instead, we will focus on the 'nuts and bolts' of probability
theory, and aim to develop a more intricate understanding of the subject. For example, emphasis will be placed on deriving and proving fundamental results, starting from the basic axioms.
Probability Foundation for Electrical Engineers
Instructor: Prof. Krishna Jagannathan, Department of Electrical Engineering, IIT Madras. This is a graduate level class on probability theory.
Probability Foundation for Electrical Engineers (Lecture Notes)
Module 0: Preliminaries. Module 1: Probability Measures. Module 2: Random Variables. Module 3: Integration and Expectation. Module 4: Transforms. Module 5: Limit Theorems. | {"url":"http://www.infocobuild.com/education/audio-video-courses/electronics/probability-foundation-for-ee-iit-madras.html","timestamp":"2024-11-06T21:12:42Z","content_type":"text/html","content_length":"16120","record_id":"<urn:uuid:c0a25781-5932-4a92-ae18-9a04744016f4>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00454.warc.gz"} |
CJ.TO Probability of Bankruptcy | Cardinal Energy Ltd (Alberta) (CJ.TO)
CJ.TO Probability of Bankruptcy
The Probability of Bankruptcy of Cardinal Energy Ltd (Alberta) (CJ.TO) is -% . This number represents the probability that CJ.TO will face financial distress in the next 24 months given its current
fundamentals and market conditions.
Multiple factors are taken into account when calculating CJ.TO's probability of bankruptcy : Altman Z-score, Beneish M-score, financial position, macro environments, academic research about distress
risk and more.
CJ.TO - ESG ratings
ESG ratings are directly linked to the cost of capital and CJ.TO's ability to raise funding, both of which can significantly affect the probability of Cardinal Energy Ltd (Alberta) going bankrupt.
ESG Score 20.02
Environment Score 14.92
Social Score 14.88
Governance Score 30.28 | {"url":"https://valueinvesting.io/CJ.TO/probability-of-bankruptcy","timestamp":"2024-11-03T19:41:35Z","content_type":"text/html","content_length":"104291","record_id":"<urn:uuid:22ad9e9d-a43d-49e5-aa2f-6254da3c2600>","cc-path":"CC-MAIN-2024-46/segments/1730477027782.40/warc/CC-MAIN-20241103181023-20241103211023-00715.warc.gz"} |
Double or Half?
Philip and Tellum both have online video channels.
Philip currently has many followers and the number of followers is increasing at the rate of 10% a day.
Tellum currently has a similar number of followers but the number is decreasing at the rate of 10% a day.
Will Philip's number of followers have doubled on the same day as Tellum's have halved?
Explain the details of this situation.
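A quick numerical check in Python (not part of the original starter; it assumes both channels start from the same number of followers and that the daily changes compound):

import math
double_days = math.log(2) / math.log(1.1)     # days for Philip's followers to double, about 7.27
halve_days = math.log(0.5) / math.log(0.9)    # days for Tellum's followers to halve, about 6.58
print(double_days, halve_days)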
More Mathematics Lesson Starters
How did you use this resource? Can you suggest how teachers could present, adapt or develop it? Do you have any comments? It is always useful to receive feedback and helps make this free resource
even more useful for Maths teachers anywhere in the world. Click here to enter your comments.
Your access to the majority of the Transum resources continues to be free but you can help support the continued growth of the website by doing your Amazon shopping using the links on this page.
Below is an Amazon link. As an Amazon Associate I earn a small amount from qualifying purchases which helps pay for the upkeep of this website.
Educational Technology on Amazon | {"url":"http://transum.info/Software/MathsMenu/Starter.asp?ID_Starter=68","timestamp":"2024-11-09T03:38:08Z","content_type":"text/html","content_length":"20383","record_id":"<urn:uuid:9d31829d-9f8d-4871-b5f5-69866842381c>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00129.warc.gz"} |
l_1 estimation
It has been known for almost 50 years that the discrete l_1 approximation problem can be solved effectively by linear programming. However, improved algorithms involve a step which can be interpreted
as a line search, and which is not part of the standard LP solution procedures. l_1 provides the simplest example of a class of … Read more | {"url":"https://optimization-online.org/tag/l_1-estimation/","timestamp":"2024-11-01T19:04:24Z","content_type":"text/html","content_length":"83414","record_id":"<urn:uuid:262c01e9-6152-4ef5-96a5-83b5cd001785>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00698.warc.gz"} |
What does 1# + 2# + 3# EPS blanks mean?
The numbers represent pounds per cubic foot. So, #2 EPS would be 2 pounds per cubic foot of styrofoam. The higher the number, the heavier and more dense.
#2 has twice the material fused into the same size as #1. Like said above, it makes a more dense material. #2-2.5 is pretty nice to work with. #3 or #4 will be pretty hard/dense. #1 the home depot
sheet stuff is a waste of time. You also have to be careful of the bead size too, that can make a big difference with tear out.
The numbers given for EPS weight are all pre-molded weight and are not the exact weight you would get from a blank cut from a block as the density of a block is greater at the outside and lower
towards the center of the block. This could vary the blank by a half-pound or so. Molded blanks such as the Marko product is way more exact and should be within a 1/10 of a pound. A heavier density
blank also has a smaller bead size even when the same “type” of bead is used. Block foam insulation is made out of “B” bead or even “A” bead which are quite large and will make larger pukas
(tear-outs) which will need to be filled before glassing.
If you are looking at EPS as a blank material, 2 lb. or heavier will make the best material and “C” size bead the best to shape.
13yrs in the EPS biz/backyarder for 30+
When the character # follows a number it is called a Pound Sign. It is read as pound or pounds (lb. or lbs.). In your example 2# is read as two pounds.
When # precedes a number it is called a Number Sign. For example: #2 is read as number two.
On your phone it is called the Pound Sign.
Unscramble ACTIVIZE
How Many Words are in ACTIVIZE Unscramble?
By unscrambling letters activize, our Word Unscrambler aka Scrabble Word Finder easily found 46 playable words in virtually every word scramble game!
Letter / Tile Values for ACTIVIZE
Below are the values for each of the letters/tiles in Scrabble. The letters in activize combine for a total of 22 points (not including bonus squares)
• A [1]
• C [3]
• T [1]
• I [1]
• V [4]
• I [1]
• Z [10]
• E [1]
What do the Letters activize Unscrambled Mean?
The unscrambled words with the most letters from ACTIVIZE word or letters are below along with the definitions.
• activable () - Sorry, we do not have a definition for this word | {"url":"https://www.scrabblewordfind.com/unscramble-activize","timestamp":"2024-11-13T09:32:41Z","content_type":"text/html","content_length":"47474","record_id":"<urn:uuid:eddd50ad-8f2d-48be-b296-ffc838a88b6d>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00394.warc.gz"} |
A-Level Maths - Pure/Core 1
Arc of a circle is a portion of its circumference. Thus, the arc length of a circle is a fraction of its circumference. If θ degrees is the central angle made by an arc of a circle, then the arc
length formula is θ/360 x 2πr. | {"url":"https://mcooke230774.netboard.me/alevelmaths/?tab=676552","timestamp":"2024-11-13T06:36:27Z","content_type":"text/html","content_length":"74487","record_id":"<urn:uuid:f9a224c5-b04b-4341-9e2a-62bfd43dd5f5>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00502.warc.gz"} |
Finding the Length of the Hypotenuse Using the Properties of the Medians of Right-Angled Triangles
Question Video: Finding the Length of the Hypotenuse Using the Properties of the Medians of Right-Angled Triangles Mathematics
In the figure, π β π π π = π β π π π = 90°, π is the midpoint of line segment π π , and π β π = 30°. Given that π π = 13 cm, find the length of line segment π π .
Video Transcript
In the figure below, the measure of angle π π π is equal to the measure of angle π π π , which is equal to 90 degrees. π is the midpoint of line segment π π , and the measure of
angle π is 30 degrees. Given that π π equals 13 centimeters, find the length of line segment π π .
Let's start by taking some information from the problem and labeling our figure. We know that π π π is 90 degrees. We also know that π π π is 90 degrees. π is the midpoint of line segment π π . And π π is the hypotenuse of triangle π π π . We have the measure of angle π is 30 degrees. That's already labeled here. And then, we know that π π is 13
centimeters. But because we have a midpoint, we can say that each of these midsegments is six and a half centimeters.
Because π is a midpoint of line segment π π and line segment π π falls on this right triangle as the hypotenuse, we can say that the line segment π π is a median of this right
triangle. Which should remind us of the property that in a right triangle, the length of the median from the vertex of the right angle is equal to half the length of the hypotenuse. The hypotenuse
was 13. Half that length is six and a half, which means that the line segment π π is equal to 6.5 centimeters.
Line segment π π is not the side we're trying to find. We want to know the length of π π . So now, we need to focus on what we know about triangle π π π . We know that one of the
angles is 30 degrees. One is 90, making the other 60. That means triangle π π π is a 30-60-90 triangle. And we should remember that for any 30-60-90 triangle, the side opposite the 30-degree
angle is half the hypotenuse. Or put another way, the ratio of side lengths for a 30-60-90-degree triangle occurs in one to square root of three to two, where the smallest side is opposite the
30-degree angle and the largest is the hypotenuse. The hypotenuse will be two times the side length of the smallest side in a 30-60-90 triangle.
π π is the hypotenuse of triangle π π π . So, we can say that the length of π π will be two times the length of π π because π π is the side opposite the 30-degree angle and
is, therefore, the smallest side length in this 30-60-90-degree triangle. That's two times 6.5, which is 13 centimeters. The hypotenuse of triangle π π π measures 13 centimeters.
Identifying Freight Market Shifts - Hidden Markov Models | Convoy
Identifying freight market turning points in real-time with Hidden Markov Models
Data Science • Published on March 14, 2022
The idea that freight markets (similar to so many other markets) move in periodic booms and busts is well established in the trucking industry, but any practitioner will acknowledge that identifying
market peaks and troughs in real time remains frustratingly more alchemy than science. Turning point predictions are frequently wrong and there is near-constant debate among industry watchers over
whether an inflexion is imminent.
This technical challenge is not unique to the freight industry. Economists have long grappled with the real-time identification of turning points in areas such as finance and macroeconomic policy.
Their record from decades of research suggests that there are no silver bullets, but also that there is scope for modest technical improvement from current freight industry practices.
In this blog post, we outline how Convoy Science & Analytics built upon research from the intersection of machine learning and empirical macroeconomics to help inform our team’s view of one of the
trickiest, most business-critical questions for the freight industry.
The problem: We are all hiking in a fog of uncertainty
Economic forecasting is sometimes likened to hiking in a dense fog: We know in intimate detail the terrain we have traveled, but have only an abstract sense of whether the next step will be into a
rock wall or off the edge of a cliff. As time unfolds, there is no definitive answer to whether a single data update is a blip to overlook or the opening salvo of an enduring trend. Head fakes are
frustratingly common.
The clear lesson is: While we can speculate about the forces we anticipate will shape markets on the visible horizon, there are no universal laws governing when the pendulum will swing in favor of
demand or supply. Peaks and troughs are obvious only in retrospect — in financial markets, in the broader economy, and as we have learned at Convoy, in the trucking market.
But decision makers must decide, even in the presence of pervasive uncertainty — and the stakes in these judgment calls are not trivial. Predicting the direction of the freight market — in
particular, spot market prices for trucking services — is essential for pricing the long-term freight contracts that large retailers and manufacturers rely on to move their products. Getting the
direction of prices (much less the price itself) wrong can have catastrophic consequences for a contract’s long-term viability.
The solution: A Hidden Markov model of freight business cycles
Generations of macroeconomists have dedicated careers to developing more robust tools to detect economic turning points than the finger-to-the-wind heuristics still common in the freight industry.
One such tool is the recession probability calculator for the U.S. economy developed by economists Marcelle Chauvet and Jeremy Piger (see, for example, here and here).
The Chauvet-Piger approach uses a Hidden Markov Model (HMM) — trained on nonfarm employment, industrial production, real personal income, and real manufacturing sales — to identify the odds that the
most recent data reflect a durable inflection point in the U.S. economy. (The official arbiter of these turning points is the group of economists who sit on the National Bureau of Economic Research’s
Business Cycle Dating Committee.) It is a creative and unconventional application of HMMs, which are widely applied in diverse domains ranging from speech recognition to genomics.
Below, we provide a simple description of how HMMs work, their appeal as a solution for the problem of identifying market turning points in real time, and how we built a HMM for the freight market.
A simple explanation of Hidden Markov Models
Three properties of HMMs make them particularly well-suited to the challenge of identifying market turning points:
First, the core assumptions of HMMs align with economic theory of how market prices move. HMMs model observed price changes as the random output of some underlying probability distribution. Modeling
price movements as a random sequence aligns with economic theory (and empirical evidence) of approximately efficient markets across a wide range of industries.
Second, the underlying probability distribution that observed price changes are drawn from is allowed to change over time. Fitting a model to the data requires selecting a finite number of possible
underlying distributions (typically called “states”) and then finding the best fit parameters (e.g., mean and variance) associated with each state. For questions about the business cycle, the number
of states to select is obvious: There are two, an expansionary state and a recession state.
Third, the model outcome is easily interpreted for non-technical audiences. A key benefit of the HMM for Convoy’s application is that it allows the rigorous quantification of the probability that the
most recent sequence of market data signifies a regime change (i.e., that the data shifted from one “state” to another, such as from expansion to recession or vice versa).
A simulated example
Consider the following example, built with simulated data.
In Figure 1, below, the red and white points in the top panel are drawn from one of two Gaussians. The red points are drawn from a Gaussian with a positive mean, which we call state 0, and the white
points are drawn from a Gaussian with a slightly negative mean, which we call state 1. If we interpreted these numbers as sequential changes, then the white points (state 1) would be associated with
a prolonged decline in the observed data. Points drawn as dots have been correctly classified by the HMM as being in the correct state, while points drawn as x’s have been incorrectly classified.
Figure 1. Example of the output of a specified Hidden Markov Model and the classification results of a best fit to this data. The white points represent a recession.
While there are a few minor misclassifications, the long stretch of white dots is classified correctly for every point when the model is trained on all the data at once, and with only one
misclassification when the model is trained on only the data observed prior to that period. The regime change is detected because the first white dot is negative enough that the probability of the
system being in state 1 exceeds 50 percent (shown in the bottom panel). In the middle of the “recession” there are several white points above zero that — even though they are technically more likely
to be produced by the state 0 distribution than the state 1 distribution — are nonetheless correctly classified as state 1.
How could this be? A final attribute of the HMM is that it understands that regimes tend to last for a certain amount of time. It captures this by assuming that there is a certain probability of
either transitioning between states from one point to the next, or else staying in the same state. In this example we’ve specified that there is a 10% chance of transitioning and a 90% chance of
staying in the same state. Therefore, an only slightly positive data point coming after a run of negative points does not offer good enough evidence for a regime change switch. You can see the
corresponding probability of being in state 1 dip for these points, but only once dipping below 50% in the sequentially fit data, and even then not getting very close to zero.
Taken together, this makes the HMM an excellent tool for real-time diagnosis of regime switching. In the example above, even the false “recession” classifications don’t have a probability very close
to 1. By setting appropriate thresholds, we can make a good recession detector.
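The post does not show its code, but the simulated example can be reproduced in a few lines with an off-the-shelf HMM library. In the Python sketch below (which assumes the hmmlearn package is available; the means, variance and seed are arbitrary choices, while the 0.9/0.1 transition probabilities follow the text), a two-state series is simulated and a two-state Gaussian HMM is then fitted to recover the state probabilities.

# Simulate a two-state regime-switching series and fit a 2-state Gaussian HMM to it.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(1)
means = np.array([0.5, -0.5])          # state 0: positive mean, state 1: slightly negative mean
trans = np.array([[0.9, 0.1],
                  [0.1, 0.9]])         # 90% chance of staying, 10% of switching, as in the example

states = [0]
for _ in range(199):
    states.append(rng.choice(2, p=trans[states[-1]]))
states = np.array(states)
obs = rng.normal(means[states], 1.0).reshape(-1, 1)

model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=200, random_state=1)
model.fit(obs)
posterior = model.predict_proba(obs)   # per-period probability of each hidden state
# Note: fitted component labels are arbitrary; identify the "recession" state by its fitted mean.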
Extending the model to the trucking market
Inspired by the work of Chauvet and Piger, we decided to build a HMM to identify turning points in the trucking market.
The first step was to replace the general economic indicators used by Chauvet and Piger with indicators that are more closely tied to the trucking industry and for which a reasonably long time series
is available. This is not a trivial choice. While there is extensive theoretical literature (and, in some cases, statutory guidance) about the metrics to track when identifying broader economic
fluctuations, the foundations underpinning freight-specific booms and busts stand on much shakier ground.
Our starting point was the intuition — derived from classical Keynesian economic theory — that fluctuations in the freight economy are driven by changes in the aggregate demand to move goods. This
led to three categories of training data:
• Models trained on data assuming that turning points are cleanly identified by exogenous demand shocks from sectors of the economy that rely on freight to move goods — for example, metrics such as
inflation-adjusted retail sales, factory output, imports, and construction starts.
• Models trained on data that incorporate industry aggregates such as the American Trucking Association’s Truck Tonnage Index and Cass Information Systems’ freight index — though these indexes are
typically vulnerable to idiosyncratic methodological limitations and known biases.
• Models that incorporate supply-side metrics — such as heavy truck and trailer production and sales, and trucking industry employment. Incorporating supply-side metrics muddies the Keynesian
causal link of strictly demand-side models, but may yield higher accuracy if (as is typically the case) reality is messier than the theory.
The outcome of the best-performing model — which included heavy truck and trailer production and domestic manufacturing output — is shown below. (For all of the models, we transformed the metrics to
ensure stationarity and normalized to train on a two-state HMM.) We then used the following two rules to designate high-confidence freight recessions based on the probabilities that the HMM produced:
1. If not currently in a recession, declare one when: Two subsequent months have probabilities exceeding 0.9, or if in any month the recession probability exceeds 0.95.
2. If currently in a recession, declare an expansion when: Two subsequent months have probabilities drop below 0.1, or if in any month the recession probability is below 0.05.
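Expressed as code, the two declaration rules above amount to a small hysteresis filter over the monthly probability series. The Python sketch below is an illustrative implementation (probs is assumed to be the list of monthly recession probabilities), not Convoy's production code.

def label_recessions(probs):
    # probs: sequence of monthly recession probabilities produced by the HMM
    in_recession = False
    labels = []
    for i, p in enumerate(probs):
        prev = probs[i - 1] if i > 0 else None
        if not in_recession:
            # declare a recession: two consecutive months above 0.9, or any month above 0.95
            if p > 0.95 or (prev is not None and prev > 0.9 and p > 0.9):
                in_recession = True
        else:
            # declare an expansion: two consecutive months below 0.1, or any month below 0.05
            if p < 0.05 or (prev is not None and prev < 0.1 and p < 0.1):
                in_recession = False
        labels.append(in_recession)
    return labels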
Figure 2, below, illustrates two distinct probabilities. High-probability freight recessions as defined above are indicated by the gray shaded areas.
One is the real-time probability, where at each point in time the model is trained only on data available at that moment. This is what we see in practice as the market unfolds. The second is a
“smoothed” probability, which uses all the data available — historical and future — and therefore has higher accuracy and is less noisy.
Figure 2. Trucking market recession probabilities
When we compare these probabilities to spot market prices for trucking services — the ultimate metric that motivated our initial interest in identifying freight market turning points — recessionary
periods reliably indicate periods of falling prices, and the end of a recession is always followed by an extended period of rising prices.
The horizon has not been defeated, but it has been quantified
The past half-decade — much less the past two years of pandemic-related supply chain disruptions — have shown us that unexpected economic shocks are not outliers; they are the norm. Determining
whether a single month represents a temporary blip or an enduring market reversal has enormous consequences for any business attempting to navigate the freight market. Modern marketplaces require
more precision than rules-of-thumb.
By building upon research at the intersection of machine learning and empirical macroeconomics to develop a Hidden Markov Model of freight recession probabilities, Convoy Science & Analytics has been
able to help guide better business-critical decisions. There is no single silver bullet and we do not pretend that our probabilities are an all-knowing solution to the devilishly tricky problem of
predicting the future. But when considered among a range of other indicators and model outputs, they can help free freight decision makers from the high-stakes guesswork that has historically
exacerbated market fragilities. | {"url":"http://taggto.com/index-250.html","timestamp":"2024-11-15T00:35:59Z","content_type":"text/html","content_length":"110338","record_id":"<urn:uuid:9d692f3f-17d1-4b47-8ded-f0a15ec177b4>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00110.warc.gz"} |
R | Matteo Lisi
A friend was working on a paper and found himself in the situation of having to defend the null hypothesis that a particular effect is absent (or not measurable) when tested under more controlled
conditions than those used in previous studies. He asked for some practical advice: “what would convince you as a reviewer of a null result?” No statistical test can “prove” a null result
(intended as the point-null hypothesis that an effect of interest is zero).
This took me some time to make it work, so I’ll write the details here for the benefit of my future self and anyone else facing similar issues. To run R in the Apocrita cluster (which runs CentOS 7)
first load the required modules with the commands module load R and module load gcc (gcc is required to compile the packages from source). Before starting you should make sure that you don’t have any previous installation of RStan
in your system.
In experimental psychology and neuroscience the classical approach when comparing different models that make quantitative predictions about the behavior of participants is to aggregate the predictive
ability of the model (e.g. as quantified by Akaike Information criterion) across participants, and then see which one provides on average the best performance. Although correct, this approach neglects
the possibility that different participants might use different strategies that are best described by alternative, competing models.
Photo ©Roxie and Lee Carroll, www.akidsphoto.com. In my previous lab I was known for promoting the use of multilevel, or mixed-effects, models among my colleagues. (The slides on the /misc section of
this website are part of this effort.) Multilevel models should be the standard approach in fields like experimental psychology and neuroscience, where the data is naturally grouped according to
“observational units”, i.e. individual participants. I agree with Richard McElreath when he writes that “multilevel regression deserves to be the default form of regression” (see here, section 1.
Generating random variables with given variance-covariance matrix can be useful for many purposes. For example it is useful for generating random intercepts and slopes with given correlations when
simulating a multilevel, or mixed-effects, model (e.g. see here). This can be achieved efficiently with the Choleski factorization. In linear algebra the factorization or decomposition of a matrix is
the factorization of a matrix into a product of matrices. More specifically, the Choleski factorization is a decomposition of a positive-definite, symmetric matrix into a product of a triangular
matrix and its conjugate transpose; in other words is a method to find the square root of a matrix.
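The post's own examples are in R, but the construction is the same in any numerical language; a NumPy sketch with an assumed variance-covariance matrix:

import numpy as np

rng = np.random.default_rng(0)
Sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])          # assumed variance-covariance matrix
L = np.linalg.cholesky(Sigma)           # lower-triangular factor, Sigma = L @ L.T
z = rng.standard_normal((10000, 2))     # independent standard normal draws
x = z @ L.T                             # correlated draws with covariance Sigma
print(np.cov(x, rowvar=False))          # should be close to Sigma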
In the study of human perception we often need to measure how sensitive is an observer to a stimulus variation, and how her/his sensitivity changes due to changes in the context or experimental
manipulations. In many applications this can be done by estimating the slope of the psychometric function1, a parameter that relates to the precision with which the observer can make judgements about
the stimulus. A psychometric function is generally characterized by 2-3 parameters: the slope, the threshold (or criterion), and an optional lapse parameter, which indicate the rate at which
attention lapses (i. | {"url":"https://mlisi.xyz/tags/r/","timestamp":"2024-11-01T23:39:58Z","content_type":"text/html","content_length":"17568","record_id":"<urn:uuid:df17dcdc-e4d9-42c8-a033-0f6f0910c3eb>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00327.warc.gz"} |
Transactions Online
Noboru TAKAGI, Kyoichi NAKASHIMA, "A Logical Model for Representing Ambiguous States in Multiple-Valued Logic Systems" in IEICE TRANSACTIONS on Information, vol. E82-D, no. 10, pp. 1344-1351, October
1999, doi: .
Abstract: In this paper, we focus on regularity and set-valued functions. Regularity was first introduced by S. C. Kleene in the propositional operations of his ternary logic. Then, M. Mukaidono
investigated some properties of ternary functions, which can be represented by regular operations. He called such ternary functions "regular ternary logic functions". Regular ternary logic functions
are useful for representing and analyzing ambiguities such as transient states or initial states in binary logic circuits that Boolean functions cannot cope with. Furthermore, they are also applied
to studies of fail-safe systems for binary logic circuits. In this paper, we will discuss an extension of regular ternary logic functions into r-valued set-valued functions, which are defined as
mappings on a set of nonempty subsets of the r-valued set {0, 1, . . . , r-1}. First, the paper will show a method by which operations on the r-valued set {0, 1, . . . , r-1} can be expanded into
operations on the set of nonempty subsets of {0, 1, . . . , r-1}. These operations will be called regular since this method is identical with the way that Kleene expanded operations of binary logic
into his ternary logic. Finally, explicit expressions of set-valued functions monotonic in subset will be presented.
URL: https://global.ieice.org/en_transactions/information/10.1587/e82-d_10_1344/_p
ER - | {"url":"https://global.ieice.org/en_transactions/information/10.1587/e82-d_10_1344/_p","timestamp":"2024-11-05T05:46:35Z","content_type":"text/html","content_length":"63315","record_id":"<urn:uuid:6dd2d12d-5b81-4a2a-b944-56b561bce268>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00876.warc.gz"} |
Applications of derivatives
Applications of derivatives: rate of change of bodies, increasing/decreasing functions, tangents and normals, use of derivatives in approximation, maxima and minima (the first derivative test motivated geometrically and the second derivative test given as a provable tool)
In practical problems such as engineering optimization, the greatest challenge is often to convert the word problem into a mathematical optimization problem by setting up the function that is to be maximized or minimized. Differentiation also lets us find the rate of change of a quantity, for example the position or speed of a moving body.
Recall that dy/dx is positive if y increases as x increases and negative if y decreases as x increases. Also, the slope of the tangent to a curve y = f(x) at a given point is found by evaluating the derivative f′(x) at that point. Because the normal is perpendicular to the tangent at that point, the gradient of the normal is −1/f′(x) (provided f′(x) ≠ 0).
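As a quick illustration (not from the original text; the curve x^2 + 1 and the point x = 2 are arbitrary choices, and SymPy is assumed to be available), the tangent and normal gradients can be computed symbolically:

import sympy as sp

x = sp.symbols('x')
f = x**2 + 1                       # an arbitrary example curve
fprime = sp.diff(f, x)             # derivative = gradient of the tangent

x0 = 2
m_tangent = fprime.subs(x, x0)     # tangent gradient at x = 2  -> 4
m_normal = -1 / m_tangent          # normal gradient            -> -1/4

y0 = f.subs(x, x0)
tangent = sp.expand(m_tangent * (x - x0) + y0)   # y = 4x - 3
normal = sp.expand(m_normal * (x - x0) + y0)     # y = -x/4 + 11/2
print(tangent, normal)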
Example: A ship can reach its top speed in 5 seconds. During that time its distance from the start can be calculated using the formula D = t + 50t^2, where t is the time in seconds and D is measured in metres. How fast is it accelerating?
Speed, v m/s, is the rate of change of distance with respect to time:
v = dD/dt = 1 + 100t, so at t = 5, v = 1 + 100 × 5 = 501 m/s.
Acceleration, a m/s^2, is the rate of change of speed with respect to time, i.e. the second derivative of distance with respect to time:
a = dv/dt = 100 m/s^2, so the ship accelerates at a constant 100 m/s^2.
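A short symbolic check of this calculation (a sketch; SymPy is assumed):

import sympy as sp

t = sp.symbols('t')
D = t + 50 * t**2          # distance in metres, t in seconds

v = sp.diff(D, t)          # speed: dD/dt = 1 + 100t
a = sp.diff(v, t)          # acceleration: dv/dt = 100

print(v.subs(t, 5))        # 501  (m/s at t = 5)
print(a)                   # 100  (m/s^2, constant)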
Now consider the graph of a function with the sign of its gradient (+, – or 0) marked at various points. In each case:
A function is strictly increasing in a region where f′(x) > 0. A function is strictly decreasing in a region where f′(x) < 0. A function is stationary where f′(x) = 0.
Example: f(x) = 2x^3 – 3x^2 – 12x + 1.
Identify where the function is (i) increasing (ii) decreasing (iii) stationary.
Solution: f′(x) = 6x^2 – 6x – 12 = 6(x + 1)(x – 2)
A sketch of the derivative (an upward-opening parabola with zeros at x = –1 and x = 2) shows us that
f′(x) < 0 for –1 < x < 2 … f(x) decreasing
f′(x) > 0 for x < –1 or x > 2 … f(x) increasing
f′(x) = 0 for x = –1 or x = 2 … f(x) stationary
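These intervals can be verified symbolically, for instance with SymPy (assumed to be installed):

import sympy as sp

x = sp.symbols('x', real=True)
f = 2*x**3 - 3*x**2 - 12*x + 1
fprime = sp.diff(f, x)

print(sp.factor(fprime))                              # 6*(x - 2)*(x + 1)
print(sp.solve(sp.Eq(fprime, 0), x))                  # [-1, 2]  (stationary points)
print(sp.solve_univariate_inequality(fprime > 0, x))  # x < -1 or x > 2  (increasing)
print(sp.solve_univariate_inequality(fprime < 0, x))  # -1 < x < 2       (decreasing)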
When a continuous function is defined on a closed interval, a ≤ x ≤ b, it must attain a maximum and a minimum value on that interval.
These values can be found either at
• a stationary point [where f´(x) = 0]
• an end-point of the closed interval. [f(a) and f(b)]
All you need do is find these values and pick out the greatest and least values.
Example: A manufacturer is making a can to hold 250 ml of juice. The cost of the can depends on its radius, x cm. For practical reasons, the radius must be between 2.5 cm and 4.5 cm. The cost can be calculated from the formula
C(x) = x^3 – 5x^2 + 3x + 15, 2.5 ≤ x ≤ 4.5.
Calculate the maximum and minimum values of the cost function.
C′(x) = 3x^2 – 10x + 3, which equals zero at stationary points.
• 3x^2 – 10x + 3 = 0
• (3x – 1)(x – 3) = 0
• x = 1/3 or x = 3
• x = 1/3 lies outside the interval 2.5 ≤ x ≤ 4.5 (C(1/3) ≈ 15.5 is therefore irrelevant), so the only stationary point to consider is x = 3.
• Working to 1 d.p.:
• C(3) = 6
• C(2.5) = 6.9
• C(4.5) = 18.4
• By inspection, C[max] = 18.4 (when x = 4.5) and C[min] = 6 (when x = 3).
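A small script to confirm these values (a sketch, not part of the original solution):

# Evaluate the cost at the in-range stationary point and at the end-points,
# then pick out the greatest and least values.
def C(x):
    return x**3 - 5*x**2 + 3*x + 15

candidates = [2.5, 3.0, 4.5]   # end-points plus the stationary point x = 3
values = {x: round(C(x), 1) for x in candidates}

print(values)                                        # {2.5: 6.9, 3.0: 6.0, 4.5: 18.4}
print(max(values.values()), min(values.values()))    # 18.4 6.0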
Let f : D → R, D ⊂ R, be a given function and let y = f(x). Let ∆x denote a small increment in x. Recall that the increment in y corresponding to this increment in x, denoted by ∆y, is given by ∆y = f(x + ∆x) – f(x). We define the following:
• The differential of x, denoted by dx, is defined by dx = ∆x.
• The differential of y, denoted by dy, is defined by dy = f′(x) dx.
In case dx = ∆x is relatively small compared with x, dy is a good approximation of ∆y, and we write dy ≈ ∆y.
Example: Use differentials to approximate $\sqrt{36.5}$.
Take $y= \sqrt{x}$, let x = 36 and $\Delta x = 0.5$.
Therefore, $\Delta y = \sqrt{x+\Delta x}-\sqrt{x}=\sqrt{36.5}-\sqrt{36}=\sqrt{36.5}-6$
Hence, $\sqrt{36.5}=6+\Delta y$.
Since $dy \approx \Delta y$, it is given by:
$dy=\left(\frac{dy}{dx}\right)\Delta x=\frac{1}{2\sqrt{x}}(0.5)=\frac{1}{2\sqrt{36}}(0.5)=\frac{1}{24}\approx 0.04$, since $y= \sqrt{x}$.
Therefore, the approximate value of $\sqrt{36.5}$ is 6 + 0.04 = 6.04.
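A quick numerical check of this approximation (not part of the original example):

import math

x, dx = 36.0, 0.5
dy = dx / (2 * math.sqrt(x))        # dy = f'(x) dx with f(x) = sqrt(x), i.e. 1/24

approx = math.sqrt(x) + dy          # 6 + 1/24 ≈ 6.0417
exact = math.sqrt(x + dx)           # 6.0415...

print(approx, exact, abs(approx - exact))   # the error is under 0.001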