Volatility – A Key Hurdle to Building Long-Term Wealth | QSV Equity Management
“If I lose 20% of my money this year, but I’m up 20% next year, I’m back to even, right?”
At first glance, this may seem correct, but it doesn’t accurately reflect the effect of losses on an investment portfolio. Understanding the difference between arithmetic and geometric returns and
how they relate to portfolio value is essential to understanding what volatility means to your long-term investment success.
Consider a $100,000 portfolio that experiences a 15% decline this month and a 15% rebound next month, producing an arithmetic return of zero. You might think that a return of zero means your
portfolio value didn’t change. But in reality, the geometric return determines any increase or decrease in wealth as shown in Figure 1. This $100,000 portfolio fell to $85,000 due to the 15% drop. It
then rebounded 15%, which took the value to just $97,750, for a loss of $2,250 (or -2.25%). A loss followed by the same percentage gain does not return a portfolio to its original value. Instead, any
loss requires an even larger gain to break even.
Arithmetic Return = (r1 + r2 + … + rn) / n = (-15% + 15%) / 2 = 0%
Geometric Return = [(1 + r1) * (1 + r2) * … * (1 + rn)] - 1 = (0.85 * 1.15) - 1 = -2.25%
Figure 1
Source: QSV Equity
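For readers who want to check the arithmetic themselves, the short snippet below (illustrative code, not from the original paper) computes both measures for the two-period example above:

// Two-period example from Figure 1: a 15% loss followed by a 15% gain
const periodReturns = [-0.15, 0.15];

// Arithmetic return: simple average of the period returns
const arithmetic = periodReturns.reduce((sum, r) => sum + r, 0) / periodReturns.length;

// Geometric (compound) return: grow $1 through each period, then subtract 1
const geometric = periodReturns.reduce((value, r) => value * (1 + r), 1) - 1;

const startingValue = 100000;
console.log(arithmetic);                      // 0
console.log(geometric);                       // about -0.0225, i.e. -2.25%
console.log(startingValue * (1 + geometric)); // about 97750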
Building wealth is a marathon, not a sprint, and many investors don’t appreciate how volatility negatively affects the stamina of their portfolios. Compound returns are “dragged down” by high levels
of downside volatility to the point that overall results can be negative even with a high arithmetic return. In this paper, we hope to help investors understand the detrimental effect volatility can
have on creating long-term wealth.
Volatility Drag
Let’s first review the concept of volatility drag. In simple terms, volatility drag is defined as the additional expected return required to justify an increased level of volatility in a portfolio.
It represents yet another hurdle that investors must overcome in order to build long-term wealth. Volatility drag is approximated as one half of the volatility squared, as shown in the following formula:
Volatility Drag ≈ -0.5 * (Volatility)²
As volatility increases, the drag it imposes accelerates. Consider the following two portfolios:
⦁ Portfolio A has volatility of 10%, and thus is expected to have a volatility drag of -0.5%.
⦁ Portfolio B has volatility of 25%, and thus is expected to have a volatility drag of approximately -3.1%.
All other factors being held equal, an increase from 10% volatility to 25% volatility decreases a portfolio’s growth rate by roughly 2.6% annually. In other words, you would need an additional 2.6% in expected annual return to justify the increased level of volatility (see Figure 2).
Figure 2
Source: QSV Equity
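A quick sketch of the approximation above, applied to the Portfolio A and Portfolio B volatilities (illustrative code only, not QSV's model):

// Volatility drag approximation: drag ≈ -0.5 * volatility^2
const volatilityDrag = (volatility) => -0.5 * volatility ** 2;

console.log(volatilityDrag(0.10)); // -0.005   -> about -0.5% (Portfolio A)
console.log(volatilityDrag(0.25)); // -0.03125 -> about -3.1% (Portfolio B)

// Extra expected annual return needed to justify moving from 10% to 25% volatility
console.log(volatilityDrag(0.10) - volatilityDrag(0.25)); // about 0.026, i.e. roughly 2.6%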
When comparing two portfolios with the same average return, the one with the greater volatility, or variance, will have a lower compound return, all other things being equal. Figure 3 below sums up the impact volatility has on a portfolio. As you can see, the higher the portfolio volatility, the greater the performance hurdle. Because volatility drag grows with the square of volatility, it does not take much of an increase in volatility to have a meaningful detrimental impact on portfolio returns.
Figure 3
Source: QSV Equity
So, where does “volatility drag” come from? Given a series of returns, the difference between the geometric mean and the arithmetic mean of that return series represents “drag.” This “drag” occurs
because the return in any given year is not independent of the returns in other years. If you have a sizeable loss in one year, you have less capital to generate returns going forward.
Figure 4
Source: Factset and QSV Equity (data represents 10 years through Sept 30, 2016)
Consider the following real world example, where we look at historic returns for two companies: Biotech darling Amgen (AMGN) and spice manufacturer McCormick & Co. (MKC).
The first set of bars in Figure 4 shows the average annual arithmetic returns for both companies over the last 10 years. At first blush, it appears that Amgen would have been the better holding as its
excess arithmetic return is almost 300 basis points annually. However, arithmetic returns do not take into account how those returns were generated. Recall, returns are not independent of one
another. Any loss reduces the amount of capital available to generate returns going forward.
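Before turning to the actual Amgen and McCormick figures, the effect is easy to reproduce with a made-up example. The return series below are purely illustrative and are not the historical data behind Figures 4 and 5; they simply show how a volatile series with a higher arithmetic average can end up with a lower compound growth rate than a steadier series:

// Made-up annual return series (NOT the actual Amgen or McCormick data)
const volatileStock = [0.40, -0.30, 0.45, -0.25, 0.35];
const steadyStock = [0.10, 0.09, 0.11, 0.08, 0.10];

const arithmeticMean = (rs) => rs.reduce((s, r) => s + r, 0) / rs.length;
const cagr = (rs) => rs.reduce((v, r) => v * (1 + r), 1) ** (1 / rs.length) - 1;

console.log(arithmeticMean(volatileStock), cagr(volatileStock)); // about 0.13 and 0.075
console.log(arithmeticMean(steadyStock), cagr(steadyStock));     // about 0.096 and 0.096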
Amgen is significantly more volatile than McCormick, as evidenced by the higher standard deviation. So much so that the geometric return (represented by the compounded annual growth rate, or CAGR) is much higher for the boring spice company. In Figure 5 below, we see how, over time, the effects of compounding enable McCormick to generate greater returns for investors with significantly less volatility.
Figure 5
Source: Factset and QSV Equity (data represents 10 years of returns through Sept 30, 2016)
Upside and Downside Capture Ratios
This concept also applies to portfolios. You will often hear investment managers use the terms upside capture ratio and downside capture ratio when discussing their portfolios. This is a statistical
measure of the managers’ overall performance in up and down markets relative to their benchmarks. By comparing how managers perform versus an index in up and down markets, we can begin to get a sense
for how much risk they are taking on in their portfolios.
The upside capture ratio is calculated by dividing a manager's returns by the index's returns on up days, and multiplying by 100. For downside capture, we divide the manager's returns by the index's returns on days when the index is negative.
An upside capture ratio above 100 means that, on average, the manager has outperformed the benchmark on days when the returns were positive. Conversely, a downside capture ratio above 100 indicates
that the manager underperformed the benchmark on down days.
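There is no single universal convention for the calculation, but a simplified sketch of the idea looks like this (the return series are hypothetical; index providers typically compound monthly returns over up and down markets rather than averaging daily figures):

// Simplified capture-ratio sketch: compound the returns over up days and down days
// separately, then compare manager to index. (Conventions vary in practice.)
function captureRatio(managerReturns, indexReturns, up) {
  const compound = (rs) => rs.reduce((v, r) => v * (1 + r), 1) - 1;
  const mgr = [];
  const idx = [];
  indexReturns.forEach((r, i) => {
    if ((up && r > 0) || (!up && r < 0)) {
      mgr.push(managerReturns[i]);
      idx.push(r);
    }
  });
  return (compound(mgr) / compound(idx)) * 100;
}

// Hypothetical daily returns for a manager and its benchmark
const managerReturns = [0.02, -0.01, 0.03, -0.02];
const indexReturns = [0.03, -0.02, 0.04, -0.03];
console.log(captureRatio(managerReturns, indexReturns, true));  // about 71: upside capture below 100
console.log(captureRatio(managerReturns, indexReturns, false)); // about 61: downside capture below 100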
A New Way to Invest
The concept of low volatility investing is not new. Various studies – some dating back to the 1970s – argue the merits of adopting this investment strategy. (Refer to the Appendix at the end of this
paper for studies we find particularly instructive).
One of the more compelling arguments comes from Ang, Hodrick, Xing and Zhang, who found that stocks with high idiosyncratic (stock-specific) volatility tended to have normal returns during both U.S.
and international bull market periods, but lower returns during bear market periods or recessions. On average, the returns were negative, earning -0.02% per month during the 1963 to 2000 study period
(Ang, et al., 2006). The pattern of negative returns associated with the volatility factor was also noted by Xiaowei Kang, who found that portfolios tilted to low-risk securities had higher returns
than those tilted to high-risk stocks (Kang, 2012).
In “The Volatility Effect: Lower Risk Without Lower Return,” Blitz and van Vliet (2007) present empirical evidence that stocks with low volatility earn high risk-adjusted returns. Global low volatility decile portfolios had an annual alpha advantage of 12% over high volatility decile portfolios during the study’s 1986–2006 time frame (Blitz, et al., 2007).
The concept of low volatility investing has been gaining traction with investors. Within the broad low volatility category, the current flavor of the month appears to be “smart beta” strategies,
which attempt to capture investment factors or market inefficiencies in order to deliver risk-adjusted returns above traditional capitalization-weighted indexes. One such alternative weighting scheme
focuses on targeted volatility. S&P has attempted to capitalize on this trend through the introduction of its Low Beta US Index and S&P 500 High Beta Index.
Figure 6 below illustrates the downside and upside capture of the two indexes compared to the S&P 500 Index.
Figure 6
Source: S&P and QSV Equity
Much like the Amgen/McCormick example presented earlier in this paper, in Figure 7 below, we see that the S&P Low Beta US Index has outperformed the S&P 500 High Beta Index over time with less
volatility. These historical results are just one more piece of evidence supporting a case for low volatility investing. From individual stocks like Amgen and McCormick, to a widely used index like
the S&P 500, to the numerous pools of US and international stocks evaluated in academic studies, the evidence for low volatility investing is quite persuasive.
Figure 7
Source: S&P and QSV Equity
There is no shortage of hurdles on the path to successful long-term investing.
Added volatility need not be one of those hurdles. Understanding the drag that volatility creates on investment returns can assist investors in making better allocation decisions and put them on a
smoother path toward achieving their long-term financial goals. As we said at the beginning of this paper – investing is a marathon, not a sprint. Minding the hurdles, particularly volatility, will
help ensure you cross the finish line!
Low Beta and Low Volatility Studies
⦁ In “Benchmarks as Limits to Arbitrage: Understanding the Low Volatility Anomaly,” Malcolm Baker of Harvard Business School and co-authors Brendan Bradley of Acadian Asset Management and Jeffrey
Wurgler of NYU Stern School of Business found that selectively investing in portfolios of either low-beta or low-volatility stocks over the 41-year period spanning 1968 through 2008 would have
resulted in annualized alphas of 2.6% and 2.1%, respectively. The swings these portfolios experienced were also far less extreme than those of the broader market.
Based on the study, “low-volatility and low-beta portfolios offered an enviable combination of high average returns and small drawdowns. This outcome runs counter to the fundamental principle that
risk is compensated with higher expected return.” The study’s authors state, “We believe that the long-term outperformance of low-risk portfolios is perhaps the greatest anomaly in finance. Large in
magnitude, it challenges the basic notion of a risk-return trade-off.”
⦁ Blitz and van Vliet (2007) present empirical evidence in their paper, “The Volatility Effect: Lower Risk without Lower Return” that “stocks with low volatility earn high risk-adjusted returns. The
annual alpha spread of global low versus high volatility decile portfolios amounts to 12% over the 1986-2006 period.” (Blitz, et al., 2007)
⦁ Frazzini and Pedersen (2014) demonstrate in their paper, “Betting Against Beta,” that the beta anomaly also exists in other classes such as Treasury bonds, corporate bonds, and futures (Frazzini,
et al., 2014). They also find that the beta anomaly exists in 19 other developed stock markets from January 1989 to March 2012.
⦁ Haugen and Baker (1996) in “Low Risk Stocks Outperform within All Observable Markets of the World” concluded that stocks with lower risk have higher expected and realized rates of return than
stocks with higher risk, and the results seem to reveal a major failure in the Efficient Markets Hypothesis (Haugen, et al., 1996).
⦁ Clifford Asness, Andrea Frazzini, and Lasse H. Pedersen (2013) in their paper, “Quality Minus Junk,” define a quality security as “one that has characteristics that, all-else-equal, an investor
should be willing to pay a higher price for: stocks that are safe, profitable, growing, and well managed.” The study’s authors found that “high-quality stocks have historically delivered high
risk-adjusted returns while low-quality junk stocks delivered negative risk-adjusted returns.” (Asness, et al., 2013)
Ang, Andrew and Robert J. Hodrick and Yuhang Xing and Xiaoyan Zhang “The cross-section of volatility and expected returns.” Journal of Finance 61.1 (2006): 259–299.
Ang, Andrew and Robert J. Hodrick and Yuhang Xing and Xiaoyan Zhang “High idiosyncratic volatility and low returns: International and further U.S. evidence,” Journal of Financial Economics, Elsevier,
2009 vol. 91(1), pages 1-23, January.
Asness, Clifford S. and Andrea Frazzini and Lasse Heje Pedersen “Quality Minus Junk” (June 19, 2014). Available at SSRN 2312432.
Baker, Malcolm and Brendan Bradley and Jeffrey Wurgler “Benchmarks as limits to arbitrage: Understanding the low-volatility anomaly.” Financial Analysts Journal 67.1 (2011): 40-54.
Baker, Nardin L. and Robert A. Haugen “Low risk stocks outperform within all observable markets of the world.” (2012). Available at SSRN 2055431.
Blitz, David and Pim Van Vliet “The volatility effect: Lower risk without lower return.” Journal of Portfolio Management (2007): 102-113.
Cowan, David and Sam Wilderman “Re-thinking risk: what the beta puzzle tells us about investing.” White paper, GMO (November 2011).
Frazzini, Andrea and Lasse Heje Pedersen “Betting against beta.” Journal of Financial Economics 111.1 (2014): 1-25.
Kang, Xiaowei, Evaluating Alternative Beta Strategies (February 1, 2012). Journal of Indexes Europe, March/April 2012 | {"url":"https://www.qsvequity.com/volatility/","timestamp":"2024-11-13T06:39:23Z","content_type":"text/html","content_length":"177704","record_id":"<urn:uuid:fb0973ee-f925-40af-a715-001d77d4b8bc>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00612.warc.gz"} |
Manipulative Waiters with Probabilistic Intuition
For positive integers n and q and a monotone graph property A, we consider the two-player, perfect information game WC(n, q, A), which is defined as follows. The game proceeds in rounds. In each
round, the first player, called Waiter, offers the second player, called Client, q + 1 edges of the complete graph K_n which have not been offered previously. Client then chooses one of these edges which he keeps and the remaining q edges go back to Waiter. If, at the end of the game, the graph which consists of the edges chosen by Client satisfies the property A, then Waiter is declared the
winner; otherwise Client wins the game. In this paper we study such games (also known as Picker-Chooser games) for a variety of natural graph-theoretic parameters, such as the size of a largest
component or the length of a longest cycle. In particular, we describe a phase transition type phenomenon which occurs when the parameter q is close to n and is reminiscent of phase transition
phenomena in random graphs. Namely, we prove that if q ≥ (1 + ϵ)n, then Client can avoid components of order cϵ^-2 ln n for some absolute constant c > 0, whereas for q ≤ (1 - ϵ)n, Waiter can force a
giant, linearly sized component in Client's graph. In the second part of the paper, we prove that Waiter can force Client's graph to be pancyclic for every q ≤ cn, where c > 0 is an appropriate
constant. Note that this behaviour is in stark contrast to the threshold for pancyclicity and Hamiltonicity of random graphs.
| {"url":"https://cris.ariel.ac.il/ar/publications/manipulative-waiters-with-probabilistic-intuition-3","timestamp":"2024-11-04T15:00:18Z","content_type":"text/html","content_length":"57507","record_id":"<urn:uuid:e763f220-5785-4464-9df1-6ef743aa70d2>","cc-path":"CC-MAIN-2024-46/segments/1730477027829.31/warc/CC-MAIN-20241104131715-20241104161715-00706.warc.gz"}
Program Guides: Yacht Design: Ratios
Engineering the Sailboat - Safety in Numbers
By Eric W. Sponberg, Naval Architect, P.E. (CT). This article was first published in SAIL magazine in June, 1985. Since then, a few improvements or changes in yacht design and engineering have
occurred. Therefore, I have modified the article where necessary to bring it up to date. EWS. (from the site) | {"url":"https://libguides.landingschool.edu/c.php?g=466608&p=3190373","timestamp":"2024-11-13T16:11:23Z","content_type":"text/html","content_length":"33105","record_id":"<urn:uuid:03efb7db-47e0-49b2-86e1-193decfe2d25>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00815.warc.gz"} |
Transposing and Reversing: How to Rotate a 2D Matrix 90 Degrees
Today's algorithm is the Rotate Image problem:
You are given an n x n 2D matrix representing an image. Rotate the image by 90 degrees (clockwise).
You have to rotate the image in-place, which means you have to modify the input 2D matrix directly. DO NOT allocate another 2D matrix and do the rotation.
For example, if you were given the 2D array
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
rotating the array 90 degrees clockwise would give us the output of
[[7, 4, 1], [8, 5, 2], [9, 6, 3]]
Put another way, the first row becomes the last column, the second row becomes the middle column, and the last row becomes the first column.
![Three sets of images. In the first set, there's a 2D array, [[1,2,3], [4,5,6],[7,8,9]], and the first row is highlighted in cyan. There's a blue arrow turned 90 degrees clockwise, and next to it is
another 2D array, [[, 1], [, 2],[, , 3]]. The last column is highlighted in cyan. In the second set, there's a 2D array, [[1,2,3], [4,5,6],[7,8,9]], whose second row is highlighted in cyan. There's a
blue arrow turned 90 degrees clockwise, and next to it is another 2D array, [[, 4, 1], [, 5, 2], [, 6, 3]], whose second column is highlighted in cyan. In the third set of images, there's a 2D array,
[[1,2,3], [4,5,6], [7,8,9]], and the last row is highlighted in cyan. There's a blue arrow turned 90 degrees clockwise, and next to it is another 2D array, [[7, 4, 1], [8, 5, 2], [9, 6, 3]], whose
second column is highlighted in cyan. ](https://dev-to-uploads.s3.amazonaws.com/i/bluo1pumyica1dmly0qz.png)
In this post, I'll start by discussing my approach to solving this problem, then I'll code the solution using JavaScript.
Approaching the Rotating 2D Array Problem
Not long ago, I discussed the problem of rotating a one dimensional array (you can find that post here). What's trickier about a 2D array is that you have to keep track of both the row and the column
that we're in.
The way I'll be rotating the 2d array (also known as a matrix) is with a two step approach. First, I'll transpose the matrix, which means switching the rows with the columns. Then, I'll reverse the
elements in each row.
Let's say our inputted matrix was
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]
After transposing the matrix, it would look like this:
[[1, 4, 7], [2, 5, 8], [3, 6, 9]]
The first row became the first column, and the second row became the second column. However, we want all of these elements to be reversed, so we'll reverse the elements in each row, giving us the final matrix:
[[7, 4, 1], [8, 5, 2], [9, 6, 3]]
which is the solution we're after.
Solving the Matrix Rotation Problem
We'll start our solution by checking for edge cases. If the matrix is empty, then there's nothing to rotate, so we can immediately return null. Additionally, because we know the matrix is square (n x
n), if it has a length of 1, then it only has one element in it, so we can just return that element.
function rotate(matrix) {
  if (!matrix.length) return null;
  if (matrix.length === 1) return matrix;
}
Now, like discussed above, we'll have a two step solution. To keep the code as neat as possible, we'll separate the steps out from the original rotate function. We can create a separate function
called transpose(), which will take in the matrix, and we'll call it from inside the rotate() function.
function rotate(matrix) {
  if (!matrix.length) return null;
  if (matrix.length === 1) return matrix;
  transpose(matrix);
}

function transpose(matrix) {
}
Transposing the matrix, or switching the rows and columns, will require nested for loops. The first loop will go through each row, and the second loop will go through each column. Since they're
nested, we'll be able to access each element at any row, column point. We'll start the first for loop at i = 0, which is the first row, and we'll start the second for loop at j = i, so that each pair of elements only gets swapped once.
function rotate(matrix) {
  if (!matrix.length) return null;
  if (matrix.length === 1) return matrix;
  transpose(matrix);
}

function transpose(matrix) {
  for (let i = 0; i < matrix.length; i++) {
    for (let j = i; j < matrix[0].length; j++) {
    }
  }
}
Inside the for loops, we'll want to swap two elements -- the value at matrix[i][j] will be swapped with the value at matrix[j][i]. To do a swap, we need a temporary variable, called temp, which
enables us to store the value at one point before changing that point's value.
When the for loops are done executing, we can return the updated matrix back to rotate().
function rotate(matrix) {
  if (!matrix.length) return null;
  if (matrix.length === 1) return matrix;
  transpose(matrix);
}

function transpose(matrix) {
  for (let i = 0; i < matrix.length; i++) {
    for (let j = i; j < matrix[0].length; j++) {
      const temp = matrix[i][j];
      matrix[i][j] = matrix[j][i];
      matrix[j][i] = temp;
    }
  }
  return matrix;
}
We're now done with transposing the elements, so we have to move onto the second step of this solution: reversing the elements of each row. To do this, we'll want to go through each row in matrix,
and call a new function called reverse() on that row. reverse() will take in three arguments: the row we want to reverse, the starting point to reverse at (which is 0), and the ending point of the
reversal (which is row.length - 1).
function rotate(matrix) {
  if (!matrix.length) return null;
  if (matrix.length === 1) return matrix;
  transpose(matrix);
  matrix.forEach((row) => {
    reverse(row, 0, row.length - 1);
  });
}

function transpose(matrix) {
  for (let i = 0; i < matrix.length; i++) {
    for (let j = i; j < matrix[0].length; j++) {
      const temp = matrix[i][j];
      matrix[i][j] = matrix[j][i];
      matrix[j][i] = temp;
    }
  }
  return matrix;
}

function reverse(row, start, end) {
}
Now, in reverse(), we'll set up a while loop. The idea behind this function is to have two pointers, start and end. As long as the end pointer is larger than the start pointer, we'll want to swap the
values at those two spots.
To start, therefore, we'll set up a while loop in reverse(), which will keep going as long as start < end.
function rotate(matrix) {
  if (!matrix.length) return null;
  if (matrix.length === 1) return matrix;
  transpose(matrix);
  matrix.forEach((row) => {
    reverse(row, 0, row.length - 1);
  });
}

function transpose(matrix) {
  for (let i = 0; i < matrix.length; i++) {
    for (let j = i; j < matrix[0].length; j++) {
      const temp = matrix[i][j];
      matrix[i][j] = matrix[j][i];
      matrix[j][i] = temp;
    }
  }
  return matrix;
}

function reverse(row, start, end) {
  while (start < end) {
  }
}
Just like we did in transpose(), we'll need to set up a temporary variable in order to swap the values at the start and end points.
function rotate(matrix) {
  if (!matrix.length) return null;
  if (matrix.length === 1) return matrix;
  transpose(matrix);
  matrix.forEach((row) => {
    reverse(row, 0, row.length - 1);
  });
}

function transpose(matrix) {
  for (let i = 0; i < matrix.length; i++) {
    for (let j = i; j < matrix[0].length; j++) {
      const temp = matrix[i][j];
      matrix[i][j] = matrix[j][i];
      matrix[j][i] = temp;
    }
  }
  return matrix;
}

function reverse(row, start, end) {
  while (start < end) {
    const temp = row[start];
    row[start] = row[end];
    row[end] = temp;
  }
}
Once the variables are swapped, we want to bring the start and end pointers toward each other, so we'll increment start, and decrement end. Once the while loops is done executing, we can return the
now reversed row to rotate().
function rotate(matrix) {
  if (!matrix.length) return null;
  if (matrix.length === 1) return matrix;
  transpose(matrix);
  matrix.forEach((row) => {
    reverse(row, 0, row.length - 1);
  });
}

function transpose(matrix) {
  for (let i = 0; i < matrix.length; i++) {
    for (let j = i; j < matrix[0].length; j++) {
      const temp = matrix[i][j];
      matrix[i][j] = matrix[j][i];
      matrix[j][i] = temp;
    }
  }
  return matrix;
}

function reverse(row, start, end) {
  while (start < end) {
    const temp = row[start];
    row[start] = row[end];
    row[end] = temp;
    start++;
    end--;
  }
  return row;
}
Since the problem asked us to rotate the 2D array "in place", we don't have to return anything. We already modified the original matrix, so we're done with our solution!
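As a quick sanity check (this snippet is not part of the original walkthrough), calling the finished function on the example matrix from the beginning of the post shows the in-place rotation:

const image = [
  [1, 2, 3],
  [4, 5, 6],
  [7, 8, 9],
];

rotate(image);
console.log(image); // [ [ 7, 4, 1 ], [ 8, 5, 2 ], [ 9, 6, 3 ] ]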
Let me know in the comments if you have any questions or other ideas for how to approach this problem!
| {"url":"https://practicaldev-herokuapp-com.global.ssl.fastly.net/alisabaj/rotating-a-matrix-90-degrees-4a49","timestamp":"2024-11-10T18:59:21Z","content_type":"text/html","content_length":"158280","record_id":"<urn:uuid:a0d1ac00-1e83-4073-824b-181c5f27a770>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00759.warc.gz"}
Volume and Data Table
An old riddle asks “Which is heavier, a pound of feathers or a pound of lead?” The answer is obvious, of course, since a pound of feathers and a pound of lead both weigh the same, one pound. However,
there is clearly something different about a small piece of lead and a large bag of feathers, even though they weigh the same. What is this difference?
The relationship between the lead and feathers is expressed by the physical property called density. Density is defined as the ratio of a substance’s mass to the volume it occupies.
Density (g/mL) = Mass (g) / Volume (mL)
In this laboratory exercise, you will be using skills and techniques learned earlier to determine the identity of different substances. To determine the precision of your technique, you will
calculate the percent error, which is a comparison of the differences between the measured value and accepted value. Percent error can be determined as follows:
% Error = |Measured Value – Accepted Value| / Accepted Value x 100
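The two formulas can be checked with a quick calculation. The numbers below are illustrative only, not data from this experiment; the accepted density of aluminum (2.70 g/mL) is used as the reference value:

// Illustrative numbers only (not actual lab data): an aluminum block
const mass = 21.4;                                            // grams
const blockLength = 2.0, blockWidth = 2.0, blockHeight = 2.0; // centimeters
const volume = blockLength * blockWidth * blockHeight;        // 8.0 cm^3, and 1 cm^3 = 1 mL

const measuredDensity = mass / volume;   // 2.675 g/mL
const acceptedDensity = 2.70;            // accepted density of aluminum in g/mL

const percentError = Math.abs(measuredDensity - acceptedDensity) / acceptedDensity * 100;
console.log(measuredDensity); // 2.675
console.log(percentError);    // about 0.93 (percent)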
When you have completed this activity, you should be able to:
Observe the chemical and physical properties of substances to interpret the structure of and the changes in matter.
Balance
Rectangular solid
Metric ruler
Metal cylinder
100 mL graduated cylinder
250 mL beaker
Use the balance to determine the mass of the rectangular solid.
Record the mass to the nearest 0.01 g in the data table.
Use the metric ruler to measure the length, width, and height of the rectangular solid.
Record these measurements to the nearest 0.1 cm in the data table.
Calculate the volume using the following formula:
Volume (cm³) = length (cm) x width (cm) x height (cm)
Record the volume in your data table. | {"url":"https://www.studymode.com/essays/Volume-And-Data-Table-1934385.html","timestamp":"2024-11-06T22:02:06Z","content_type":"text/html","content_length":"94704","record_id":"<urn:uuid:096c1369-3b7e-48eb-b8f9-96088b50ffab>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00349.warc.gz"}
Finding the Unknown Component of a Vector Parallel to Another
Question Video: Finding the Unknown Component of a Vector Parallel to Another Mathematics • First Year of Secondary School
Given that A = ⟨-6, -15⟩, B = ⟨k, -10⟩, and A ∥ B, find the value of k.
Video Transcript
Given that vector A is equal to negative six, negative 15 and vector B is equal to k, negative 10 and the vector A is parallel to the vector B, find the value of k.
In this question, we're given two vectors in terms of their components: the vector A and the vector B. And we're also told that these two vectors are parallel. We need to use this to find the value of k, which is one of the components of vector B. To answer this question, let's start by recalling what it means to say vector A is parallel to vector B. We say that two vectors are parallel if they're nonzero scalar multiples of each other. In other words, because vector A is parallel to vector B, there must exist some scalar c such that A is equal to c times B.
We can substitute the expressions we're given for vector A and vector B into this equation. This gives us that the vector negative six, negative 15 will be equal to c multiplied by the vector k, negative 10. We can then simplify the right-hand side of this equation by evaluating the scalar multiplication. Remember, to multiply a vector by a scalar, we just multiply each of the components by the scalar. This gives us that the vector negative six, negative 15 will be equal to the vector ck, negative 10c. Since these two vectors must be equal, the first component of each vector must be equal and the second component of each vector must be equal.
This gives us two equations. Equating the first component of each vector gives us that negative six must be equal to c times k. And equating the second component of each vector gives us that negative 15 must be equal to negative 10 times c. We can solve the second equation for the value of c. We just need to divide both sides of our equation through by negative 10. This gives us that c is equal to negative 15 divided by negative 10, which, if we divide both our numerator and our denominator by negative five, we see that c is equal to three over two. We can then substitute this value of c into our first equation.
Doing this gives us that negative six must be equal to k multiplied by three over two. And we can then solve this equation for k by dividing both sides of our equation by three over two, which is of course the same as multiplying by two over three. This gives us that k is equal to negative six multiplied by two-thirds, which we can then evaluate. Negative six divided by three is equal to negative two, which gives us that k is equal to negative two multiplied by two, which is negative four. Therefore, we were able to show if A is the vector negative six, negative 15 and B is the vector k, negative 10 and the vector A is parallel to the vector B, then the value of k must be equal to negative four. | {"url":"https://www.nagwa.com/en/videos/960180542790/","timestamp":"2024-11-10T04:40:07Z","content_type":"text/html","content_length":"251706","record_id":"<urn:uuid:f8ad484e-7958-4194-af24-bede4f008539>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00828.warc.gz"}
Arithmetic operators in C – Full explanation with examples and tutorials
Math, for obvious reasons, forms a major part of any programming language. It’s how we do most of the things that we do. Or, more accurately put, it’s how we make the computer do most of the things
it does. Arithmetic operators are pre-defined operations that perform arithmetic functions in the C programming language, or any programming language for that matter. Let's first list all the types of operators in the C programming language. Then we will expand on the topic of this post.
What are the different type of operators in C?
The different types of operators in C are as follows:
1. Arithmetic Operators
2. Relational Operators
3. Logical Operators
4. Increment and Decrement Operators
5. Assignment Operators
6. Bitwise Operators
7. Sizeof Operator
8. Miscellaneous Operators – Comma Operator, Reference Operator, Member Selection Operator, Ternary Operator, and Deference Operator.
What are the arithmetic operators in C? How do they work?
Arithmetic operators in C programming language are simple symbols that are pre-defined in the library to execute arithmetic functions.
Operator Meaning
+ Addition
- Subtraction
* Multiplication
/ Division
% Modulus operator. Gives remainder after division
The addition and subtraction operations are pretty straightforward. The C programming language allows you to place the general signs of ‘+’ and ‘-‘ between operands. Two numbers can be added and
subtracted normally.
Multiplication is carried out with the asterisk symbol '*' between the operands. It is recommended to store the result in a data type capable of holding a larger value, because multiplication can produce a result larger than either operand.
The division operation, represented by the '/' operator, gives just the quotient when both operands are integers, discarding any decimal part. For example, 12/5 would give you 2 as the answer instead of 2.4.
The Modulus operation is similar to the division operation. But it gives the remainder as the answer. We represent it using the ‘%’ symbol. The answer for 12%5 would be 2.
Let’s take a look at all these arithmetic operations in C programming language in action in a simple example as shown below.
#include <stdio.h>
int main()
{
    int a = 7, b = 3, c;
    c = a+b;
    printf("a+b = %d \n",c);
    c = a-b;
    printf("a-b = %d \n",c);
    c = a*b;
    printf("a*b = %d \n",c);
    c = a/b;
    printf("a/b = %d \n",c);
    c = a%b;
    printf("Remainder when a divided by b = %d \n",c);
    return 0;
}
a+b = 10
a-b = 4
a*b = 21
a/b = 2
Remainder when a divided by b = 1
What is the priority or precedence of the arithmetic operators in C?
The arithmetic operations in the C programming language follow the general order of operations. First, anything in parentheses is calculated, followed by multiplication, division, and modulus, which share the same precedence and are evaluated left to right. In the end, we perform the addition and subtraction operations.
Example 1: Using arithmetic operators write a program in C to add the digits of a number taken from the user
//Add digits of four digit number taken from user
void main()
int x,y,a1,a2,a3,a4,z,sum;
printf("Enter the number");
Output and explanation
Enter the number
First, we declare a bunch of integers. Don't think too much about it. We just know that we are going to need something to hold the values upon separation. So we define some integers. We will use however many we wish to and then later remove the unused ones. Then we take a four-digit number from the user. Using the modulo function with 10, we get the last digit as the remainder. Note that the original number, stored in x, has still not changed. Once we separate the remainder, we store its value in the variable a1. Now, we divide the original number by 10 to actually change it into a three-digit number. The same process as above is then repeated until all individual numerals separate. The numerals are then added to give the sum.
In keeping with our tradition so far in this C programming course, the next example is for you to understand and execute. Feel free to ask any doubts that you may have in the comments section below.
Example 2: Write a program in C to take a number from the user and reverse it
//reverse and display the four digit number taken as input from user
void main()
int x,y,temp,a1,a2,a3,a4;
printf("Enter the number\n"); | {"url":"https://technobyte.org/arithmetic-operators-c-tutorials-examples/","timestamp":"2024-11-03T07:18:02Z","content_type":"text/html","content_length":"106003","record_id":"<urn:uuid:a9e75f72-5e63-49c5-a3bb-46a82c05ed56>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00309.warc.gz"} |
Unwrap applied on expression with partial derivative and two single variable functions
General questions (367)
Hello, I am a newbie, and encountered this problem, which I suspect is due to my unfamiliarity with the syntax, but I cannot see how to fix it.
When I declare {x,y} as coordinates, \partial{#} as the partial derivative w.r.t. all coordinates, f to depend only on x and g only on y, I am expecting the unwrap function to move the derivative w.r.t. x past g and the derivative w.r.t. y past f, but this does not happen. My code:
In both cases I am getting A\partial_{x}{fg} and A\partial_{y}{fg}, but expecting Ag*\partial_{x}{f}
EDIT: formation
It's a bug :-( That is to say, your code should have read
(changed the two Depends lines) but even then it would produce the wrong answer.
This got introduced in the 1.x -> 2.x rewrite and somehow escaped the tests. I have just pushed a fix to github now which fixes this. | {"url":"https://cadabra.science/qa/854/applied-expresion-partial-derivative-variable-functions","timestamp":"2024-11-07T19:00:05Z","content_type":"text/html","content_length":"17078","record_id":"<urn:uuid:52c9c0b2-3e56-4578-807d-9c6207e8caeb>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00412.warc.gz"} |
Topics courses | Department of Mathematics
Fall 2024
5010: Analysis and Geometry in Hyperbolic Space
Instructor: Guozhen Lu
The hyperbolic space of dimension n is the unique simply connected, n-dimensional Riemannian manifold of constant sectional curvature equal to −1. It is a simple example of a noncompact and complete
Riemannian symmetric space of rank 1. In this course, we will begin with the introduction of basic models of real hyperbolic space such as the Poincare ball, the half space, the hyperboloid and the Beltrami-Klein models. We will then establish various important functional inequalities (such as the Hardy-Littlewood-Sobolev, Poincare-Sobolev and Hardy-Sobolev-Maz'ya inequalities) and we will investigate their best constants as well as the existence of extremal functions. The course will be self-contained and the only prerequisite is MATH 5111 (Measure and Integral).
Math 5020: Cluster Algebras
Instructor: Ralf Schiffler
Cluster algebras are commutative algebras with a special combinatorial structure which are related to many different areas including Combinatorics (graphs and perfect matchings, triangulations of surfaces, positivity), Representation Theory (quivers, tilting theory, categorification, Lie theory, Coxeter groups), Number Theory (Markov numbers, continued fractions, Lagrange spectrum), Knot Theory (Alexander polynomial, Jones polynomial) and many more. The subject is relatively young (first paper in 2002), and it is a highly active research area. Topics covered in the course include the following. Definition and examples, Laurent phenomenon and positivity, finite type classification, relation to representation theory, categorification, combinatorial models, relation to knot
theory and number theory.
Prerequisites: MATH 5210.
Math 5026: Computability
Instructor: Damir Dzhafarov
This course is an introduction to the basic concepts and techniques in computability theory. You do not need background in logic for this course, and in particular, Math 5260 is not a prerequisite
even though you will need a permission number if you have not taken Math 5260. Computability theory is the study of which sets of natural numbers (or suitably coded algebraic objects) can be determined by a computer algorithm. Given that there are countably many computer programs but uncountably many sets of natural numbers, most sets are not computable. The focus of this course is on
the non-computable sets and how they relate to each other. In particular, we will study how to measure when one set is more complicated than another and what are the natural benchmarks of complexity
for non-computable sets.
5030: Comparison Theorems in Geometry.
Instructor: Ovidiu Munteanu
In geometry, we try to compare a certain quantity on a curved space with the corresponding quantity on a model space. The model space is often, but not always, a space of constant curvature. Some
classical examples of such comparison estimates are:
Volume Comparison: Here, we analyze the volume growth of geodesic balls of arbitrary radii in a manifold with non-negative curvature. We compare this growth to the volume growth of geodesic balls in
Euclidean space.
Eigenvalue Comparison: Similar to volume comparison, we compare the eigenvalues of the Laplacian on curved spaces to those on the model space (which may be spheres, Euclidean space, or hyperbolic
More generally, one can attempt to understand various geometric and analytic quantities on curved spaces by using certain model spaces as canonical examples for comparison. Well-known examples of
these quantities include the heat kernel, the Green’s function of the Laplacian, and the famous Isoperimetric, Sobolev, and Minkowski inequalities.
This course will assume basic knowledge of Riemannian geometry, such as curvature and geodesics, but it will be self-contained otherwise. We will start by covering the basic ideas of the theory. Once
we get a good grasp of those, we will move on to the newer and more advanced concepts and methods.
Spring 2024
Math 5020: Topics in Algebra: Elliptic curves
Instructor: Alvaro Lozano-Robledo
This course will be an introduction to elliptic curves which, roughly speaking, are smooth cubic curves in the projective plane with at least one rational point (turns out they have a simple model of
the form y² = x³ + ax + b). The surprising feature of elliptic curves is that their points can be made into an abelian group, and this group is finitely generated when we focus on points with
coordinates in the rational numbers lying on an elliptic curve with rational coefficients. Elliptic curves are central in modern number theory, e.g., they were essential in the proof of Fermat’s Last
Theorem. The goal of the course will be to understand and calculate the group of all rational points on an elliptic curve (i.e., calculate its torsion and rank), and a number of more refined
invariants (such as the order of the Shafarevich-Tate group).
The prerequisites for this course are the abstract algebra sequence (Math 5210 and 5211) and a basic understanding of algebraic number theory and algebraic geometry, although I will adjust the
material to the audience background as much as I can. Our textbook will be ”The Arithmetic of Elliptic Curves” by J. H. Silverman, which is the standard graduate-level textbook for the subject.
Prerequisites: MATH 5210 and MATH 5211 (i.e., a year of abstract algebra).
Recommended Preparation: A semester of Algebraic Number Theory, and a semester of Algebraic Geometry.
Math 5030: Topics in Geometry and Topology: Introduction to Minimal Surfaces
Instructor: Lan-Hsuan Huang
The question of finding a surface with the smallest area is one of the oldest problems in geometry. It gives rise to the concept of minimal surfaces. The mathematical theory of minimal surfaces
dates back to the 18th century at the beginning of calculus of variations, and now it is still one of the most active branches of mathematics. Minimal surface theory also has developed a wide range
of applications in, e.g. general relativity, molecular engineering, materials science, mechanical engineering, and architecture.
The course will present mathematical properties of minimal surfaces, including the first and second variation formulas of the area, the Bernstein theorem, Douglas–Radó's resolution to
the Plateau problem, curvature estimates and compactness of minimal surfaces, and the fundamental existence theory of harmonic maps by Sacks and Uhlenbeck.
Math 5040: Stochastic control theory
Instructor: George Yin
This is an introductory course on stochastic control theory. We will study both discrete-time and continuous-time stochastic systems. In addition to students in the mathematics department, we hope to
attract students from other departments (as we did this year in the MAT 5040 class, several students from statistics and engineering departments enrolled in the class). Selected topics from the list
below will be covered.
• Preliminary results (including Markov processes, diffusions, stochastic differential equations, generators and associated partial differential equations)
• calculus of variations
• controlled diffusion
• stochastic optimization
• dynamic programming
• viscosity solutions of Hamilton-Jacobi-Bellman equations
• maximum principle
• filtering
• adaptive control
• computational methods
Fall 2023
Math 5016: Stochastic Processes with Financial Applications
Instructor: Oleksii Mostovyi
This course will cover topics in stochastic processes, which will lead to studying and solving problems arising from finance. The tentative list of topics includes Brownian motion, stochastic
integration, diffusions, filtering, optimal stopping, and stochastic control. The financial applications will include pricing and hedging of European and American options and optimal investment. The
focus will be on solving the problems in simplified settings rather than establishing the mathematical results in full generality. No preliminary knowledge of finance is required. Working knowledge
in probability will help.
Math 5020: Local fields
Instructor: Keith Conrad
In addition to completing the rational numbers to form the real numbers, the rationals can be completed in infinitely many other ways: one completion for each prime number p. The resulting
completions are called the p-adic numbers and their finite extensions are called local fields. Many concepts from classical analysis (power series, integration, etc.) can be developed over these
fields, with properties that are sometimes similar and sometimes quite different from the classical case. Local fields were originally studied and applied just within number theory, but over time
they became more mainstream through connections to other areas, such as harmonic analysis, algebraic geometry, dynamical systems, and model theory. This course should be of interest to students in
algebra as well as other students who want a broader sense of the scope in which analytic concepts can be developed over fields that are not just the real or complex numbers.
Prerequisites: Math 5210, either 5110 or 5310, or permission of instructor.
Spring 2023
Math 5010: Sobolev spaces in metric measure spaces
Instructor: Fabrice Baudoin
Description: Analysis on metric spaces emerged in the 1990s as an independent research field providing a unified treatment of first order analysis in non-smooth settings. Based on the fundamental
concept of upper gradient the notion of a Sobolev function was formulated in the setting of metric measure spaces supporting a Poincare inequality. In this course we will present that theory and
study functional inequalities in that non-smooth setting.
Bibliography: Sobolev spaces on metric measure spaces by J. Heinonen, P. Koskela, N. Shanmugalingam, J. Tyson.
Math 5016: Random walks, heat kernels and applications
Instructor: Alexander Teplyaev
Description: The topic of the first part of the course will be the relationship between random walks and the heat equation. The heat equation can be derived by averaging over a very large number of
particles. Traditionally, the resulting PDE is studied as a deterministic equation, an approach that has brought many significant results and a deep understanding of the equation and its solutions.
By studying the heat equation by considering the individual random particles one gains further insight into the problem. We will discuss the discrete case, random walk, and the heat equation on the
integer lattice, the continuous case, Brownian motion, and the usual heat equation. Solving the heat equation in the discrete setting becomes a problem of diagonalization of symmetric matrices, which
becomes a problem in Fourier series in the continuous case. Random walk and Brownian motion can be introduced and developed from the first principles. We also will discuss martingales and fractal
dimensions. The second part of the course will be devoted to a broader range of topics, selected according to the mutual interests of students.
Textbook: Random Walk and the Heat Equation by Gregory F. Lawler, University of Chicago.
Fall 2022
Math 5026: Generic sets and forcing in computability theory
Instructor: Reed Solomon
Description: Forcing is a powerful technique to construct sets in computability theory. We will start by giving a number of classical constructions in computability theory using the terminology and
techniques of forcing. Next we will define the forcing language and describe the connection between levels of genericity and satisfaction of formulas in the forcing language. The majority of the
course will be devoted to applications of Cohen, Sacks and Mathias forcing in classical computability theory, computable combinatorics and computable model theory. There are no formal prerequisites
but students should know the basic definitions in computability theory (e.g. computable and computably enumerable sets, Turing jump, arithmetic hierarchy, etc.).
Math 5040: Stochastic Approximation and Applications
Instructor: George Yin
Description: This course presents an introduction to stochastic approximation with various applications. Stochastic approximation stems from the goal of locating roots of a nonlinear function or
finding minimizers of a function. In contrast to numerical analysis, either the precise form of the function is not known or it is too complicated to compute and only noisy measurements or
observations are available. One constructs a sequence of estimates recursively to carry out the desired task. Standard procedures and their variants such as projection and truncation algorithms will
be introduced. Convergence, rates of convergence, and asymptotic efficiency will be studied in connection with ordinary differential equations, stochastic differential equations, and martingale
problem formulations. If time permits, large deviations will also be presented.
Math 5121: Riemann Surfaces and Complex Manifolds
Instructor: Damin Wu
Description: A Riemann surface is a complex manifold of complex dimension one. Every Euclidean surface admits a complex structure and hence is a Riemann surface. From the algebraic geometric
viewpoint, a Riemann surface is a smooth complex algebraic curve. In this course, we shall introduce geometry of complex manifolds via the study of Riemann surfaces. Normally the techniques and
concepts that look difficult and abstract in higher dimensions can be seen and understood clearly in the case of Riemann surface. We shall develop tools in differential geometry, partial differential
equations, and topology including sheaf cohomology.
Fall 2021
Math 5010: Probabilistic techniques in analysis
Instructor: Alexander Teplyaev
Description: This course will be based on the book by Rich Bass with the same title, and on supplementary materials. In recent years, there has been an upsurge of interest in using techniques drawn
from probability to tackle problems in analysis. These applications arise in subjects such as potential theory, harmonic analysis, singular integrals, and the study of analytic functions. This book
presents a modern survey of these methods at the level of a beginning Ph.D. student. Highlights of this book include the construction of the Martin boundary, probabilistic proofs of the boundary
Harnack principle, Dahlberg’s theorem, a probabilistic proof of Riesz’ theorem on the Hilbert transform, and Makarov’s theorems on the support of harmonic measure.
The author assumes that a reader has some background in basic real analysis, but the book includes proofs of all the results from probability theory and advanced analysis required. Each chapter
concludes with exercises ranging from the routine to the difficult. In addition, there are included discussions of open problems and further avenues of research.
Math 5020: Commutative Algebra
Instructor: Mihai Fulger
Description: Commutative Algebra is the study of commutative rings and modules over them. Topics include Noetherian rings and modules, ideals, rings of polynomials, Hilbert’s basis theorem,
nilpotents, prime and maximal ideals and the topology on the spectrum of a ring, dimension theory, regular rings. Time permitting we will go into more advanced topics like resolutions, computational
algebra, homological algebra, and Cohen-Macaulay rings.
Commutative Algebra is a precursor to the Algebraic Geometry course to be offered in Spring 2022, but will also help anyone interested in Algebraic Number Theory.
Math 5026: Computability, Randomness and Genericity
Instructor: David Solomon
Description: This course will start with an introduction to the basic concepts of computability theory, but the main focus will be on notions of algorithmic randomness and genericity. We will see
three equivalent approaches to algorithmic randomness: incompressibility and Kolmogorov complexity; effective null sets; and effective martingales. Once we have a robust hierarchy of randomness
notions, we will explore the connections between the Turing degrees of random sets and those of other classes such as PA degrees, DNR degrees and DNR2 degrees. For the last part of the course, we
will turn to generic sets and study their properties and applications in the Turing degrees. This course will not assume prior knowledge of computability theory, but also will have as little overlap
as possible with the topics from the computability theory course in Spring 2020.
Math 5030: Topics in Geometric Partial Differential Equations
Instructor: Lan-Hsuan Huang
Description: This course is an introduction to analytical methods, specifically partial differential equations (PDEs), on the study of geometry and topology. The course will begin with an
introduction to some basics of elliptic PDEs, functional analysis, and notions of curvatures. Then I will present more advanced topics, with an emphasis on isometric embedding of Riemannian manifolds
and curvature problems in mathematical relativity. In particular, I plan to discuss the following topics:
1. Nirenberg's resolution to the Weyl problem for surfaces of positive Gauss curvature
2. Nash’s isometric embedding theorem
3. Schoen-Yau’s positive mass theorem in mathematical relativity
I will try to make the course self-contained. While it is useful to have prior knowledge in either differential geometry or PDEs, it is not required to take this course.
Math 5040: Padé approximation and its applications
Instructor: Maksym Derevyagin
Description: Padé approximants are a frequently used tool for the solution of mathematical and physical problems: solution of nonlinear equations, acceleration of convergence, numerical integration
by means of nonlinear techniques, solution of ordinary and partial differential equations.
In this course we are going to study basics of the theory:
1. Definitions and fundamental properties.
2. Various ways of computations of Padé approximants.
3. Connections with some convergence acceleration methods.
4. Connections with continued fractions.
5. Convergence theory.
We will mainly follow the book by G. Baker and P. Graves-Morris entitled Padé approximants, Parts I and II but will also use modern papers when discussing some applications and new insightful ideas.
Prerequisites: familiarity with basics of complex analysis and linear algebra.
Spring 2021
Math 5010: Singular integrals and applications
Instructor: Vasileios Chousionis
Description: This course will focus on the modern theory of singular integrals and its connections to geometric measure theory, complex analysis and potential theory. We will cover topics such as:
1. Overview of Calderon-Zygmund theory on spaces of homogeneous type
2. Non-homogeneous Calderon-Zygmund theory
3. The Cauchy transform and Analytic capacity
4. The Riesz transform and removability for Lipschitz harmonic functions
5. Singular Integrals on Lipschitz graphs
Good understanding of measure theory is required. Some knowledge of Fourier Analysis might be useful but 5140 is not a prerequisite.
MATH 5020: The Arithmetic of Elliptic Curves
Instructor: Alvaro Lozano-Robledo
Description: This course will be an introduction to elliptic curves, which roughly speaking are smooth cubic curves in the projective plane (turns out they have a simple model of the form $y^2 = x^3 + ax + b$). The surprising feature of elliptic curves is that their points can be made into an abelian group, and this group is finitely generated when we focus on points with coordinates in the
rational numbers lying on an elliptic curve with rational coefficients.
Elliptic curves are central in modern number theory, e.g., they were essential in the proof of Fermat’s Last Theorem. The goal of the course will be to understand and calculate the group of all
rational points on an elliptic curve (i.e., calculate its torsion and rank), and a number of more refined invariants (such as the order of the Shafarevich-Tate group).
The prerequisites for this course are the Abstract Algebra sequence (Math 5210 and 5211) and a basic understanding of algebraic number theory and algebraic geometry, although I will adjust the
material to the audience background as much as I can. Our textbook will be “The Arithmetic of Elliptic Curves,” by Silverman, which is the standard graduate-level textbook for the subject.
MATH 5026: Introduction to Reverse Mathematics
Instructor: Reed Solomon
Description: In mathematics, we sometimes encounter theorems that “are equivalent to the axiom of choice” or “require the parallel postulate to prove”. Reverse mathematics is the study of which axioms are required to prove specific theorems. In general, the axioms of set theory are too powerful for a reasonable analysis of this type, so the setting of reverse mathematics is second order arithmetic.
This course will start with an introduction to second order arithmetic and its most prominent subsystems. For each of the subsystems, we will analyze theorems from a variety of branches of
mathematics that are equivalent to the subsystem.
The end of the course will focus on the existence of special models of the subsystems that give rise to conservation results.
The only prerequisite for this course is a general introduction to logic such as Math 5260.
Math 5030: Geometric Analysis on Manifolds
Instructor: Ovidiu Munteanu
Description: This course is an introduction to the linear theory of partial differential equations on open manifolds. Assuming some appropriate information on curvature, we will study properties of
solutions to the Laplace and heat equations, which in turn will give us more insight about the geometry and topology of the underlying manifold.
For example, we will use harmonic functions to count the number of ends of open manifolds, and see applications to rigidity results for manifolds with more than one end.
The techniques developed in this theory are essential to many other problems in geometric analysis, such as in the study of geometric flows on manifolds.
The course follows Peter Li’s book Geometric Analysis, Cambridge Studies in Advanced Mathematics
Prerequisites: Basic knowledge of Riemannian geometry will be assumed. The course is self contained on the PDE’s side.
Fall 2020
Math 5010: Yang-Mills Theory
Instructor: Maria Gordina
Description: We will discuss mathematical foundations of the standard model of elementary particle theory.
We will begin with classical physics equations of Newtonian mechanics, then will move to the Lagrangian mechanics, Hamiltonian mechanics, quantum mechanics (including Heisenberg versus Schroedinger
picture), and finally arrive to the quantum field theory. In parallel we will need to understand two classical theories, namely, Maxwell’s equations and Yang-Mills equations. The needed differential
geometrical notions will be covered as well.
Math 5020: Enumerative Combinatorics
Instructor: Thomas Roby
Description: The course will give an introduction to enumerative combinatorics at the graduate level, focusing on techniques useful for those specializing in other research areas as well as
combinatorics. Topics will include: basic enumeration, multiset permutations and statistics, generating functions, bijective proofs, q-analogues, sieve methods, the exponential formula, and Lagrange inversion.
Text: Enumerative Combinatorics 1 (2nd edition) and 2, by Richard Stanley (selections from chapters 1,2,4,5, and 6).
Math 5031: Einstein Manifolds
Instructor: Fabrice Baudoin
Recordings of Lectures: https://sites.google.com/site/fabricebaudoinwebpage/einstein-manifolds?authuser=0
Description: In this course we will study some topics in Riemannian and pseudo-Riemannian geometry. We will mostly focus on Ricci curvature and its applications. The course will start with the basics of Riemannian and pseudo-Riemannian geometry. We will assume familiarity with differential manifolds and basic calculus on them.
We will cover the following topics:
• Linear connections on vector bundles: Torsion, Curvature, Bianchi identities
• Riemannian and pseudo-Riemannian manifolds
• Getting a feel for Ricci curvature: volume comparison theorems, the Bonnet-Myers theorem
• Ricci curvature as a PDE
• Einstein manifolds and topology
• Homogeneous Riemannian manifolds
• Kahler and Calabi-Yau manifolds
• Quaternion-Kahler manifolds
Text: Einstein manifolds, by A.L. Besse, Springer, 1987.
Math 5040: Topics in PDE
Instructor: Xiaodong Yan
Description: This topics class will cover two different topics in PDE. In the 1970s, De Giorgi made a conjecture regarding level sets of solutions of semilinear PDEs. Over the years, many mathematicians have made contributions to the problem. For the first topic, we will discuss results and techniques for the classical De Giorgi conjecture and its extensions to fractional PDEs and (possibly) models related to thin film equations.
First introduced by Skyrme in the 1960s, skyrmions are topological solitons that emerge in many physical contexts such as superfluids, Bose-Einstein condensates with ferromagnetic order, liquid crystals and magnetism. Magnetic skyrmions have attracted a lot of attention in recent years due to their topological structure, which holds promise for future information technologies. In the second part of the class, we will discuss some recent results on magnetic skyrmions.
Math 5121: Conformal Dynamics: from the complex plane to Carnot groups
Instructor: Vasileios Chousionis
Description: The course will cover a variety of topics whose common underlying theme is the iteration of conformal maps, such as:
• Basics of complex dynamics including iteration of meromorphic functions
• Conformal maps in Euclidean spaces and conformal fractals
• Rigidity of conformal maps in Carnot groups
• Conformal graph directed Markov systems on Carnot groups
• Dynamics of real, complex and Iwasawa continued fractions
On the way we are going to introduce and employ ideas from geometric measure theory, ergodic theory, geometric function theory and sub-Riemannian geometry.
SHOGUN: Class List
Nshogun All classes and functions are contained in the shogun namespace
Ccross_entropy< Backend::EIGEN3, Matrix >
Clogistic< Backend::EIGEN3, Matrix >
Backend::EIGEN3, Matrix >
Backend::EIGEN3, Matrix >
Crectified_linear< Backend::EIGEN3, Matrix >
Csoftmax< Backend::EIGEN3, Matrix >
Csquared_error< Backend::EIGEN3, Matrix >
Cadd Generic class which is specialized for different backends to perform addition
Cadd< Backend::EIGEN3, Matrix > Partial specialization of add for the Eigen3 backend
Capply Generic class which is specialized for different backends to perform apply
Capply< Backend::EIGEN3, Matrix, Vector > Partial specialization of apply for the Eigen3 backend
Ccholesky Generic class which is specialized for different backends to compute the cholesky decomposition of a dense matrix
Ccholesky< Backend::EIGEN3, Matrix > Partial specialization of cholesky for the Eigen3 backend
Ccolwise_sum Generic class colwise_sum which provides a static compute method. This class is specialized for different types of matrices and backend, providing a means to
deal with various matrices directly without having to convert
Ccolwise_sum< Backend::EIGEN3, Matrix > Specialization of generic colwise_sum which works with SGMatrix and uses Eigen3 as backend for computing sum
Cconvolve< Backend::EIGEN3, Matrix >
Cdot Generic class dot which provides a static compute method. This class is specialized for different types of vectors and backend, providing a means to deal with various vectors directly without having to convert
Cdot< Backend::EIGEN3, Vector > Specialization of generic dot for the Eigen3 backend
Celementwise_product< Backend::EIGEN3, Matrix >
Celementwise_square Generic class square which provides a static compute method. This class is specialized for different types of matrices and backend, providing a means to deal with various matrices directly without having to convert
Celementwise_square< Backend::EIGEN3, Matrix > Partial specialization of generic elementwise_square for the Eigen3 backend
Celementwise_unary_operation Template struct elementwise_unary_operation. This struct is specialized for computing element-wise operations for both matrices and vectors of CPU (SGMatrix/
SGVector) or GPU (CGPUMatrix/CGPUVector)
Celementwise_unary_operation< Backend::EIGEN3, Operand, ReturnType, UnaryOp > Specialization for elementwise_unary_operation with EIGEN3 backend. The operand types MUST be of CPU types (SGMatrix/SGVector)
Celementwise_unary_operation< Backend::NATIVE, Operand, ReturnType, UnaryOp > Specialization for elementwise_unary_operation with NATIVE backend. The operand types MUST be of CPU types (SGMatrix/SGVector)
Cint2float Generic class int2float which converts different types of integer into float64 type
Cint2float< int32_t > Specialization of generic class int2float which converts int32 into float64
Cint2float< int64_t > Specialization of generic class int2float which converts int64 into float64
Cmatrix_product< Backend::EIGEN3, Matrix >
Cmax Generic class which is specialized for different backends to perform the max operation
Cmax< Backend::EIGEN3, Matrix > Specialization of max for the Eigen3 backend
Cmean Generic class mean which provides a static compute method
Cmean< Backend::EIGEN3, Matrix > Specialization of generic mean which works with SGVector and SGMatrix and uses Eigen3 as backend for computing mean
Crange_fill Generic class which is specialized for different backends to perform the Range fill operation
Crange_fill< Backend::EIGEN3, Matrix > Partial specialization of range_fill for the Eigen3 backend
Crowwise_mean Generic class rowwise_mean which provides a static compute method
Crowwise_mean< Backend::EIGEN3, Matrix > Specialization of generic mean which works with SGMatrix and uses Eigen3 as backend for computing rowwise mean
Crowwise_sum Generic class rowwise_sum which provides a static compute method. This class is specialized for different types of matrices and backend, providing a means to
deal with various matrices directly without having to convert
Crowwise_sum< Backend::EIGEN3, Matrix > Specialization of generic rowwise_sum which works with SGMatrix and uses Eigen3 as backend for computing sum
Cscale< Backend::EIGEN3, Matrix >
Cset_rows_const< Backend::EIGEN3, Matrix, Vector >
Csum Generic class sum which provides a static compute method. This class is specialized for different types of matrices and backend, providing a means to deal
with various matrices directly without having to convert
Csum< Backend::EIGEN3, Matrix > Specialization of generic sum which works with SGMatrix and uses Eigen3 as backend for computing sum
Csum_symmetric Generic class sum symmetric which provides a static compute method. This class is specialized for different types of matrices and backend, providing a means
to deal with various matrices directly without having to convert
Csum_symmetric< Backend::EIGEN3, Matrix > Specialization of generic sum symmetric which works with SGMatrix and uses Eigen3 as backend for computing sum
Cvector_sum Generic class vector_sum which provides a static compute method. This class is specialized for different types of vectors and backend, providing a means to deal with various vectors directly without having to convert
Cvector_sum< Backend::EIGEN3, Vector > Specialization of generic vector_sum for the Eigen3 backend
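The linear-algebra entries above all follow the same idiom: a generic primary template (add, dot, sum, ...) plus a partial specialization for each backend such as Backend::EIGEN3. The following minimal sketch illustrates that dispatch pattern with plain standard-library types; the names and signatures are illustrative placeholders, not the actual Shogun headers.

```cpp
// Sketch of the "generic template + backend partial specialization" pattern
// described by the entries above. Names are illustrative only.
#include <iostream>
#include <vector>

enum class Backend { EIGEN3, NATIVE };

// Generic primary template: declares the interface, no implementation.
template <Backend B, typename Vector>
struct dot
{
    static typename Vector::value_type compute(const Vector&, const Vector&);
};

// Partial specialization for one backend (a plain loop standing in for an
// Eigen3-based implementation).
template <typename Vector>
struct dot<Backend::NATIVE, Vector>
{
    static typename Vector::value_type compute(const Vector& a, const Vector& b)
    {
        typename Vector::value_type result = 0;
        for (std::size_t i = 0; i < a.size(); ++i)
            result += a[i] * b[i];
        return result;
    }
};

int main()
{
    std::vector<double> a{1.0, 2.0, 3.0};
    std::vector<double> b{4.0, 5.0, 6.0};
    // The backend is selected at compile time via the template parameter.
    std::cout << dot<Backend::NATIVE, std::vector<double>>::compute(a, b) << "\n"; // 32
    return 0;
}
```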
CParameter Struct Parameter for wrapping up parameters to custom OpenCL operation strings. Supports string type, C-style string type and all basic types of parameters
Cocl_operation Class ocl_operation for element-wise unary OpenCL operations for GPU-types (CGPUMatrix/CGPUVector)
Csin< complex128_t >
Callocate_result Template struct allocate_result for allocating objects of return type for element-wise operations. This generic version takes care of the vector types
supported by Shogun (SGVector and CGPUVector)
Callocate_result< SGMatrix< T >, SGMatrix< ST > > Specialization for allocate_result when return type is SGMatrix. Works with different scalar types as well. T defines the scalar type for the operand, whereas ST is the scalar type for the result of the element-wise operation
CBlock Generic class Block which wraps a matrix class and contains block specific information, providing a uniform way to deal with matrix blocks for all supported
backend matrices
C_IterInfo Struct that contains current state of the iteration for iterative linear solvers
CAdaDeltaUpdater The class implements the AdaDelta method:
\[ \begin{array}{l} g_\theta=(1-\lambda){(\frac{ \partial f(\cdot) }{\partial \theta })}^2+\lambda g_\theta\\ d_\theta=\alpha\frac{\sqrt{s_\theta+\epsilon}}{\sqrt{g_\theta+\epsilon}}\frac{ \partial f(\cdot) }{\partial \theta }\\ s_\theta=(1-\lambda){(d_\theta)}^2+\lambda s_\theta \end{array} \]
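Read literally, the update equations quoted for CAdaDeltaUpdater translate into only a few lines of code. The sketch below applies them to a single scalar parameter; it is an illustration of the quoted formulas, not the CAdaDeltaUpdater API, and descending along the gradient (theta - d) is my own assumption.

```cpp
#include <cmath>
#include <cstdio>

// One AdaDelta-style update for a scalar parameter theta, following the
// equations quoted above: a decaying average of squared gradients (g), a step
// scaled by sqrt(s+eps)/sqrt(g+eps), and a decaying average of squared updates (s).
struct AdaDeltaState
{
    double g = 0.0; // running average of squared gradients
    double s = 0.0; // running average of squared updates
};

double adadelta_step(double theta, double grad, AdaDeltaState& st,
                     double alpha = 1.0, double lambda = 0.9, double eps = 1e-6)
{
    st.g = (1.0 - lambda) * grad * grad + lambda * st.g;
    double d = alpha * std::sqrt(st.s + eps) / std::sqrt(st.g + eps) * grad;
    st.s = (1.0 - lambda) * d * d + lambda * st.s;
    return theta - d; // descend along the gradient (assumed sign convention)
}

int main()
{
    AdaDeltaState st;
    double theta = 5.0;
    for (int i = 0; i < 100; ++i)
        theta = adadelta_step(theta, 2.0 * theta, st); // minimize f(theta) = theta^2
    std::printf("theta after 100 steps: %f\n", theta);
    return 0;
}
```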
CAdaGradUpdater The class implements the AdaGrad method
CAdamUpdater The class implements the Adam method
CAdaptMomentumCorrection This implements the adaptive momentum correction method
►CAny Allows to store objects of arbitrary types by using a BaseAnyPolicy and provides a type agnostic API. See its usage in CSGObject::Self, CSGObject::set(),
CSGObject::get() and CSGObject::has().
CBaseAnyPolicy An interface for a policy to store a value. Value can be any data like primitive data-types, shogun objects, etc. Policy defines how to handle this data. It
works with a provided memory region and is able to set value, clear it and return the type-name as string
CBaseTag Base class for all tags. This class stores name and not the type information for a shogun object. It can be used as an identifier for a shogun object where
type information is not known. One application of this can be found in CSGObject::set_param_with_btag()
CC45TreeNodeData Structure to store data of a node of C4.5 tree. This can be used as a template type in TreeMachineNode class. Ex: C4.5 algorithm uses nodes of type
CCAbsoluteDeviationLoss CAbsoluteDeviationLoss implements the absolute deviation loss function.
\(L(y_i,f(x_i)) = |y_i-f(x_i)|\)
CCAccuracyMeasure Class AccuracyMeasure used to measure accuracy of 2-class classifier
CCAlphabet The class Alphabet implements an alphabet and alphabet utility functions
CCANOVAKernel ANOVA (ANalysis Of VAriances) kernel
CCApproxJointDiagonalizer Class ApproxJointDiagonalizer defines an Approximate Joint Diagonalizer (AJD) interface
CCARTreeNodeData Structure to store data of a node of CART. This can be used as a template type in TreeMachineNode class. CART algorithm uses nodes of type CTreeMachineNode
CCAttenuatedEuclideanDistance Class AttenuatedEuclideanDistance
CCAttributeFeatures Implements attributed features, that is in the simplest case a number of (attribute, value) pairs
CCAUCKernel The AUC kernel can be used to maximize the area under the receiver operator characteristic curve (AUC) instead of margin in SVM training
CCAutoencoder Represents a single layer neural autoencoder
CCAveragedPerceptron Class Averaged Perceptron implements the standard linear (online) algorithm. Averaged perceptron is the simple extension of Perceptron
CCAvgDiagKernelNormalizer Normalize the kernel by either a constant or the average value of the diagonal elements (depending on argument c of the constructor)
CCBaggingMachine Bagging algorithm, i.e. bootstrap aggregating
CCBAHSIC Class CBAHSIC, that extends CKernelDependenceMaximization and uses HSIC [1] to compute dependence measures for feature selection using a backward elimination approach as described in [1]. This class serves as a convenience class that initializes the CDependenceMaximization::m_estimator with an instance of CHSIC and allows only the shogun::BACKWARD_ELIMINATION algorithm, which is set internally. Therefore, trying to use other algorithms via set_algorithm() will not work. Please see the class documentation of CHSIC and [2] for more details on the mathematical description of HSIC
CCBallTree This class implements the Ball tree. The ball tree is constructed using the top-down approach. cf. ftp://ftp.icsi.berkeley.edu/pub/techreports/1989/tr-89-063.pdf
CCBALMeasure Class BALMeasure used to measure balanced error of 2-class classifier
CCBesselKernel Class Bessel kernel
CCBinaryClassEvaluation The class TwoClassEvaluation, a base class used to evaluate binary classification labels
CCBinaryFile A Binary file access class
CCBinaryLabels Binary Labels for binary classification
CCBinaryStream Memory mapped emulation via binary streams (files)
CCBinaryTreeMachineNode The node of the tree structure forming a TreeMachine The node contains pointer to its parent and pointers to its 2 children: left child and right child. The
node also contains data which can be of any type and has to be specified using template specifier
CCBinnedDotFeatures The class BinnedDotFeatures contains a 0-1 conversion of features into bins
CCBitString String class embedding a string in a compact bit representation
CCBrayCurtisDistance Class Bray-Curtis distance
CCC45ClassifierTree Class C45ClassifierTree implements the C4.5 algorithm for decision tree learning. The algorithm steps are briefly explained below:
CCCache Template class Cache implements a simple cache
CCCanberraMetric Class CanberraMetric
CCCanberraWordDistance Class CanberraWordDistance
CCCARTree This class implements the Classification And Regression Trees algorithm by Breiman et al. for decision tree learning. A CART tree is a binary decision tree that is constructed by splitting a node into two child nodes repeatedly, beginning with the root node that contains the whole dataset.
TREE GROWING PROCESS: During the tree growing process, we recursively split a node into a left child and a right child so that the resulting nodes are "purest". We do this until any of the stopping criteria is met. To find the best split, we scan through all possible splits in all predictive attributes. The best split is the one that maximises some splitting criterion. For classification tasks, i.e. when the dependent attribute is categorical, the Gini index is used. For regression tasks, i.e. when the dependent variable is continuous, least squares deviation is used. The algorithm uses two stopping criteria: the node becomes completely "pure", i.e. all its members have an identical dependent variable, or all of them have identical predictive attributes (independent variables).
CCCauchyKernel Cauchy kernel
CCCCSOSVM CCSOSVM
CCCGMShiftedFamilySolver Class that uses the conjugate gradient method for solving a shifted linear system family where the linear operator is real valued and symmetric positive definite, the vector is real valued, but the shifts are complex
CCCHAIDTree This class implements the CHAID algorithm proposed by Kass (1980) for decision tree learning. CHAID consists of three steps: merging, splitting and stopping. A tree is grown by repeatedly using these three steps on each node, starting from the root node. CHAID accepts nominal or ordinal categorical predictors only. If predictors are continuous, they have to be transformed into ordinal predictors before tree growing.
CONVERTING CONTINUOUS PREDICTORS TO ORDINAL: Continuous predictors are converted to ordinal by binning. The number of bins (K) has to be supplied by the user. Given K, a predictor is split in such a way that all the bins get the same number (more or less) of distinct predictor values. The maximum feature value in each bin is used as a breakpoint.
MERGING: During the merging step, allowable pairs of categories of a predictor are evaluated for similarity. If the similarity of a pair is above a threshold, the categories constituting the pair are merged into a single category. The process is repeated until there is no pair left having high similarity between its categories. Similarity between categories is evaluated using the p_value.
SPLITTING: The splitting step selects which predictor is to be used to best split the node. Selection is accomplished by comparing the adjusted p_value associated with each predictor. The predictor that has the smallest adjusted p_value is chosen for splitting the node.
STOPPING: The tree growing process stops if any of the following conditions is satisfied:
CCChebyshewMetric Class ChebyshewMetric
CCChi2Kernel The Chi2 kernel operating on realvalued vectors computes the chi-squared distance between sets of histograms
CCChiSquareDistance Class ChiSquareDistance
CCCircularBuffer Implementation of a circular buffer. This buffer has the logical structure of a queue (FIFO), but the queue is cyclic: think of a tape whose ends are connected, except that instead of a tape there is a block of physical memory. So, if you push a big block of data, it can end up partly at the end and partly at the beginning of the buffer's memory
CCCircularKernel Circular kernel
CCClusteringAccuracy Clustering accuracy
CCClusteringEvaluation The base class used to evaluate clustering
CCClusteringMutualInformation Clustering (normalized) mutual information
CCCombinationRule CombinationRule abstract class The CombinationRule defines an interface to how to combine the classification or regression outputs of an ensemble of Machines
CCCombinedDotFeatures Features that allow stacking of a number of DotFeatures
CCCombinedFeatures The class CombinedFeatures is used to combine a number of feature objects into a single CombinedFeatures object
CCCombinedKernel The Combined kernel is used to combine a number of kernels into a single CombinedKernel object by linear combination
CCCommUlongStringKernel The CommUlongString kernel may be used to compute the spectrum kernel from strings that have been mapped into unsigned 64bit integers
CCCommWordStringKernel The CommWordString kernel may be used to compute the spectrum kernel from strings that have been mapped into unsigned 16bit integers
CCCompressor Compression library for compressing and decompressing buffers using one of the standard compression algorithms:
CCConjugateGradientSolver Class that uses conjugate gradient method of solving a linear system involving a real valued linear operator and vector. Useful for large sparse systems
involving sparse symmetric and positive-definite matrices
CCConjugateOrthogonalCGSolver Class that uses the conjugate orthogonal conjugate gradient method of solving a linear system involving a complex valued linear operator and vector. Useful for large sparse systems involving sparse symmetric matrices that are not Hermitian
CCConstKernel The Constant Kernel returns a constant for all elements
CCConstMean The Const mean function class
CCContingencyTableEvaluation The class ContingencyTableEvaluation a base class used to evaluate 2-class classification with TP, FP, TN, FN rates
CCConverter Class Converter used to convert data
CCConvolutionalFeatureMap Handles convolution and gradient calculation for a single feature map in a convolutional neural network
CCCosineDistance Class CosineDistance
CCCplex Class CCplex to encapsulate access to the commercial cplex general purpose optimizer
CCCPLEXSVM CplexSVM, an SVM solver implementation based on cplex (unfinished)
CCCrossCorrelationMeasure Class CrossCorrelationMeasure used to measure cross correlation coefficient of 2-class classifier
CCCrossValidation Base class for cross-validation evaluation. Given a learning machine, a splitting strategy, an evaluation criterion, features and corresponding labels, this provides an interface for cross-validation. Results may be retrieved using the evaluate method. A number of repetitions may be specified for obtaining more accurate results. The arithmetic mean and standard deviation of different runs is returned. Default number of runs is one
CCCrossValidationMKLStorage Class for storing MKL weights in every fold of cross-validation
CCCrossValidationMulticlassStorage Class for storing multiclass evaluation information in every fold of cross-validation
CCCrossValidationOutput Class for managing individual folds in cross-validation
CCCrossValidationPrintOutput Class for outputting cross-validation intermediate results to the standard output. Simply prints all messages it gets
CCCrossValidationResult Type to encapsulate the results of an evaluation run
CCCrossValidationSplitting Implementation of normal cross-validation on the base of CSplittingStrategy. Produces subset index sets of equal size (at most one difference)
CCCSVFile Class CSVFile used to read data from comma-separated values (CSV) files. See http://en.wikipedia.org/wiki/Comma-separated_values
CCCustomDistance The Custom Distance allows for custom user provided distance matrices
CCCustomKernel The Custom Kernel allows for custom user provided kernel matrices
CCCustomMahalanobisDistance Class CustomMahalanobisDistance used to compute the distance between feature vectors \( \vec{x_i} \) and \( \vec{x_j} \) as \( (\vec{x_i} - \vec{x_j})^T \mathbf{M} (\vec{x_i} - \vec{x_j}) \), given the matrix \( \mathbf{M} \), which will be referred to as the Mahalanobis matrix
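The quadratic form in the CCustomMahalanobisDistance entry is straightforward to evaluate directly. Below is a small self-contained sketch using plain Eigen (the backend the linalg classes above are specialized for), not the Shogun API itself; the helper name is mine.

```cpp
#include <Eigen/Dense>
#include <iostream>

// Computes (x - y)^T M (x - y), the quadratic form described above.
// M is assumed square with dimensions matching x and y.
double mahalanobis_quadratic_form(const Eigen::VectorXd& x,
                                  const Eigen::VectorXd& y,
                                  const Eigen::MatrixXd& M)
{
    Eigen::VectorXd d = x - y;
    return d.dot(M * d);
}

int main()
{
    Eigen::VectorXd x(2), y(2);
    x << 1.0, 2.0;
    y << 3.0, 1.0;
    Eigen::MatrixXd M = Eigen::MatrixXd::Identity(2, 2); // reduces to squared Euclidean distance
    std::cout << mahalanobis_quadratic_form(x, y, M) << "\n"; // prints 5
    return 0;
}
```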
CCData Dummy data holder
CCDataGenerator Class that is able to generate various data samples, which may be used for examples in SHOGUN
CCDecompressString Preprocessor that decompresses compressed strings
CCDeepAutoencoder Represents a multi-layer autoencoder
CCDeepBeliefNetwork A Deep Belief Network
CCDelimiterTokenizer The class CDelimiterTokenizer is used to tokenize a SGVector<char> into tokens using custom chars as delimiters. One can set the delimiters to use by setting
to 1 the appropriate index of the public field delimiters. Eg. to set as delimiter the character ':', one should do: tokenizer->delimiters[':'] = 1;
CCDenseDistance Template class DenseDistance
CCDenseExactLogJob Class that represents the job of applying the log of a CDenseMatrixOperator on a real vector
CCDenseFeatures The class DenseFeatures implements dense feature matrices
CCDenseLabels Dense integer or floating point labels
CCDenseMatrixExactLog Class that generates jobs for computing logarithm of a dense matrix linear operator
CCDenseMatrixOperator Class that represents a dense-matrix linear operator. It computes the matrix-vector product \(Ax\) in its apply method, \(A\in\mathbb{C}^{m\times n}, A:\mathbb{C}^{n}\rightarrow \mathbb{C}^{m}\) being the matrix operator and \(x\in\mathbb{C}^{n}\) being the vector. The result is a vector \(y\in\mathbb{C}^{m}\)
CCDensePreprocessor Template class DensePreprocessor, base class for preprocessors (cf. CPreprocessor) that apply to CDenseFeatures (i.e. rectangular dense matrices)
CCDependenceMaximization Class CDependenceMaximization, base class for all feature selection preprocessors which select a subset of features that shows maximum dependence between the features and the labels. This is done via an implementation of CIndependenceTest, m_estimator, inside compute_measures() (see class documentation of CFeatureSelection), which performs a statistical test for a given feature \(\mathbf{X}_i\) from the set of features \(\mathbf{X}\), and the labels \(\mathbf{Y}\). The test checks
\[ \textbf{H}_0 : P\left(\mathbf{X}\setminus \mathbf{X}_i, \mathbf{Y}\right) =P\left(\mathbf{X}\setminus \mathbf{X}_i\right)P\left(\mathbf{Y}\right) \]
The test statistic is then used as a measure which signifies the independence between the rest of the features and the labels - the higher the value of the test statistic, the greater the dependency between the rest of the features and the class labels, and the less significant the current feature becomes. Therefore, the highest scoring features are removed. The removal policy thus can only be shogun::N_LARGEST or shogun::PERCENTILE_LARGEST and it can be set via the set_policy() call. The remove_feats() method handles the removal of features based on the specified policy
CCDiagKernel The Diagonal Kernel returns a constant for the diagonal and zero otherwise
CCDiceKernelNormalizer DiceKernelNormalizer performs kernel normalization inspired by the Dice coefficient (see http://en.wikipedia.org/wiki/Dice's_coefficient)
CCDifferentiableFunction An abstract class that describes a differentiable function used for GradientEvaluation
CCDiffusionMaps Class DiffusionMaps used to preprocess given data using Diffusion Maps dimensionality reduction technique as described in
CCDimensionReductionPreprocessor Class DimensionReductionPreprocessor, a base class for preprocessors used to lower the dimensionality of given simple features (dense matrices)
CCDirectEigenSolver Class that computes eigenvalues of a real valued, self-adjoint dense matrix linear operator using Eigen3
CCDirectLinearSolverComplex Class that provides a solve method for complex dense-matrix linear systems
CCDirectSparseLinearSolver Class that provides a solve method for real sparse-matrix linear systems using LLT
CCDiscreteDistribution This is the base interface class for all discrete distributions
CCDisjointSet Class CDisjointSet data structure for linking graph nodes It's easy to identify connected graph, acyclic graph, roots of forest etc. please refer to http://
CCDistance Class Distance, a base class for all the distances used in the Shogun toolbox
CCDistanceKernel The Distance kernel takes a distance as input
CCDistanceMachine A generic DistanceMachine interface
CCDistantSegmentsKernel The distant segments kernel is a string kernel, which counts the number of substrings, so-called segments, at a certain distance from each other
CCDistribution Base class Distribution from which all methods implementing a distribution are derived
CCDixonQTestRejectionStrategy Simplified version of Dixon's Q test outlier based rejection strategy. Statistic values are taken from http://www.vias.org/tmdatanaleng/
CCDomainAdaptationMulticlassLibLinear Domain adaptation multiclass LibLinear wrapper Source domain is assumed to b
CCDomainAdaptationSVM Class DomainAdaptationSVM
CCDomainAdaptationSVMLinear Class DomainAdaptationSVMLinear
CCDotFeatures Features that support dot products among other operations
CCDotKernel Template class DotKernel is the base class for kernels working on DotFeatures
CCDualVariationalGaussianLikelihood Class that models dual variational likelihood
CCDummyFeatures The class DummyFeatures implements features that only know the number of feature objects (but don't actually contain any)
CCDynamicArray Template Dynamic array class that creates an array that can be used like a list or an array
CCDynamicObjectArray Dynamic array class for CSGObject pointers that creates an array that can be used like a list or an array
CCDynInt Integer type of dynamic size
CCDynProg Dynamic Programming Class
CCECOCEncoder ECOCEncoder produce an ECOC codebook
CCEigenSolver Abstract base class that provides an abstract compute method for computing eigenvalues of a real valued, self-adjoint linear operator. It also provides
method for getting min and max eigenvalues
CCEMBase This is the base class for Expectation Maximization (EM). EM for various purposes can be derived from this base class. This is a template class having a
template member called data which can be used to store all parameters used and results calculated by the expectation and maximization steps of EM
CCEmbeddingConverter Class EmbeddingConverter (part of the Efficient Dimensionality Reduction Toolkit) used to construct embeddings of features, e.g. construct dense numeric
embedding of string features
CCEMMixtureModel This is the implementation of EM specialized for Mixture models
CCEPInferenceMethod Class of the Expectation Propagation (EP) posterior approximation inference method
CCErrorRateMeasure Class ErrorRateMeasure used to measure error rate of 2-class classifier
CCEuclideanDistance Class EuclideanDistance
CCEvaluation Class Evaluation, a base class for other classes used to evaluate labels, e.g. accuracy of classification or mean squared error of regression
CCEvaluationResult Abstract class that contains the result generated by the MachineEvaluation class
CCExactInferenceMethod The Gaussian exact form inference method class
CCExplicitSpecFeatures Features that compute the Spectrum Kernel feature space explicitly
CCExponentialARDKernel Exponential Kernel with Automatic Relevance Detection computed on CDotFeatures
CCExponentialKernel The Exponential Kernel, closely related to the Gaussian Kernel computed on CDotFeatures
CCExponentialLoss CExponentialLoss implements the exponential loss function.
\(L(y_i,f(x_i)) = \exp(-y_i f(x_i))\)
CCF1Measure Class F1Measure used to measure F1 score of 2-class classifier
CCFactor Class CFactor A factor is defined on a clique in the factor graph. Each factor can have its own data, either dense, sparse or shared data. Note that
currently this class is table factor oriented
CCFactorAnalysis The Factor Analysis class is used to embed data using Factor Analysis algorithm
CCFactorDataSource Class CFactorDataSource Source for factor data. In some cases, the same data can be shared by many factors
CCFactorGraph Class CFactorGraph a factor graph is a structured input in general
CCFactorGraphDataGenerator Class CFactorGraphDataGenerator Create factor graph data for multiple unit tests
CCFactorGraphFeatures CFactorGraphFeatures maintains an array of factor graphs, each graph is a sample, i.e. an instance of structured input
CCFactorGraphLabels Class FactorGraphLabels used e.g. in the application of Structured Output (SO) learning with the FactorGraphModel. Each of the labels is represented by a
graph. Each label is of type CFactorGraphObservation and all of them are stored in a CDynamicObjectArray
CCFactorGraphModel CFactorGraphModel defines a model in terms of CFactorGraph and CMAPInference, where parameters are associated with factor types, in the model. There is a
mapping vector records the locations of local factor parameters in the global parameter vector
CCFactorGraphObservation Class CFactorGraphObservation is used as the structured output
CCFactorType Class CFactorType defines the way of factor parameterization
CCFastICA Class FastICA
CCFeatures The class Features is the base class of all feature objects
CCFeatureSelection Template class CFeatureSelection, base class for all feature selection preprocessors which select a subset of features (dimensions in the feature matrix) to achieve a specified number of dimensions, m_target_dim, from a given set of features. This class showcases all feature selection algorithms via a generic interface. Supported algorithms are specified by the enum EFeatureSelectionAlgorithm, which can be set via the set_algorithm() call. Supported wrapper algorithms
CCFFDiag Class FFDiag
CCFFSep Class FFSep
CCFile A File access base class
CCFirstElementKernelNormalizer Normalize the kernel by a constant obtained from the first element of the kernel matrix, i.e. \( c=k({\bf x},{\bf x})\)
CCFisherLDA Preprocessor FisherLDA attempts to model the difference between the classes of data by performing linear discriminant analysis on input feature vectors/matrices. When the init method in FisherLDA is called with a proper feature matrix X (say N vectors and D feature dimensions) supplied via the apply_to_feature_matrix or apply_to_feature_vector methods, this creates a transformation whose outputs are the reduced T-dimensional, class-specific distribution (where T <= number of unique classes - 1). The transformation matrix is essentially a DxT matrix, the columns of which correspond to the specified number of eigenvectors which maximize the ratio of the between-class matrix to the within-class matrix
CCFITCInferenceMethod The Fully Independent Conditional Training inference method class
CCFixedDegreeStringKernel The FixedDegree String kernel takes as input two strings of same size and counts the number of matches of length d
CCFKFeatures The class FKFeatures implements Fisher kernel features obtained from two Hidden Markov models
CCFunction Class of a function of one variable
CCFWSOSVM Class CFWSOSVM solves SOSVM using Frank-Wolfe algorithm [1]
CCGaussian Gaussian distribution interface
CCGaussianARDKernel Gaussian Kernel with Automatic Relevance Detection computed on CDotFeatures
CCGaussianARDSparseKernel Gaussian Kernel with Automatic Relevance Detection with supporting Sparse inference
CCGaussianCompactKernel The compact version as given in Bart Hamers' thesis Kernel Models for Large Scale Applications (Eq. 4.10) is computed as
CCGaussianDistribution Dense version of the well-known Gaussian probability distribution, defined as
\[ \mathcal{N}_x(\mu,\Sigma)= \frac{1}{\sqrt{|2\pi\Sigma|}} \exp\left(-\frac{1}{2}(x-\mu)^T\Sigma^{-1}(x-\mu)\right) \]
CCGaussianKernel The well known Gaussian kernel (swiss army knife for SVMs) computed on CDotFeatures
CCGaussianLikelihood Class that models Gaussian likelihood
CCGaussianMatchStringKernel The class GaussianMatchStringKernel computes a variant of the Gaussian kernel on strings of same length
CCGaussianNaiveBayes Class GaussianNaiveBayes, a Gaussian Naive Bayes classifier
CCGaussianProcessClassification Class GaussianProcessClassification implements binary and multiclass classification based on Gaussian Processes
CCGaussianProcessMachine A base class for Gaussian Processes
CCGaussianProcessRegression Class GaussianProcessRegression implements regression based on Gaussian Processes
CCGaussianShiftKernel An experimental kernel inspired by the WeightedDegreePositionStringKernel and the Gaussian kernel
CCGaussianShortRealKernel The well known Gaussian kernel (swiss army knife for SVMs) on dense short-real valued features
CCGCArray Template class GCArray implements a garbage collecting static array
CCGeodesicMetric Class GeodesicMetric
CCGMM Gaussian Mixture Model interface
CCGMNPLib Class GMNPLib Library of solvers for Generalized Minimal Norm Problem (GMNP)
CCGMNPSVM Class GMNPSVM implements a one vs. rest MultiClass SVM
CCGNPPLib Class GNPPLib, a Library of solvers for Generalized Nearest Point Problem (GNPP)
CCGNPPSVM Class GNPPSVM
CCGradientCriterion Simple class which specifies the direction of gradient search
CCGradientEvaluation Class evaluates a machine using its associated differentiable function for the function value and its gradient with respect to parameters
CCGradientResult Container class that returns results from GradientEvaluation. It contains the function value as well as its gradient
CCGridSearchModelSelection Model selection class which searches for the best model by a grid- search. See CModelSelection for details
CCGUIClassifier UI classifier
CCGUIConverter UI converter
CCGUIDistance UI distance
CCGUIFeatures UI features
CCGUIHMM UI HMM (Hidden Markov Model)
CCGUIKernel UI kernel
CCGUILabels UI labels
CCGUIMath UI math
CCGUIPluginEstimate UI estimate
CCGUIPreprocessor UI preprocessor
CCGUIStructure UI structure
CCGUITime UI time
CCHAIDTreeNodeData Structure to store data of a node of CHAID. This can be used as a template type in TreeMachineNode class. CHAID algorithm uses nodes of type CTreeMachineNode
CCHammingWordDistance Class HammingWordDistance
CCHash Collection of Hashing Functions
CCHashedDenseFeatures This class is identical to the CDenseFeatures class except that it hashes each dimension to a new feature space
CCHashedDocConverter This class can be used to convert a document collection contained in a CStringFeatures<char> object, where each document is stored as a single vector, into a hashed Bag-of-Words representation. Like in the standard Bag-of-Words representation, this class considers each document as a collection of tokens, which are then hashed into a new feature space of a specified dimension. This class is very flexible and allows the user to specify the tokenizer used to tokenize each document, to specify whether the results should be normalized with regards to the sqrt of the document size, as well as to specify whether different tokens should be combined. The latter implements a k-skip n-grams approach, meaning that you can combine up to n tokens, while skipping up to k. E.g. for the tokens ["a", "b", "c", "d"], with n_grams = 2 and skips = 2, one would get the following combinations: ["a", "ab", "ac" (skipped 1), "ad" (skipped 2), "b", "bc", "bd" (skipped 1), "c", "cd", "d"]
CCHashedDocDotFeatures This class can be used to provide on-the-fly vectorization of a document collection. Like in the standard Bag-of-Words representation, this class considers each document as a collection of tokens, which are then hashed into a new feature space of a specified dimension. This class is very flexible and allows the user to specify the tokenizer used to tokenize each document, to specify whether the results should be normalized with regards to the sqrt of the document size, as well as to specify whether different tokens should be combined. The latter implements a k-skip n-grams approach, meaning that you can combine up to n tokens, while skipping up to k. E.g. for the tokens ["a", "b", "c", "d"], with n_grams = 2 and skips = 2, one would get the following combinations: ["a", "ab", "ac" (skipped 1), "ad" (skipped 2), "b", "bc", "bd" (skipped 1), "c", "cd", "d"]
CCHashedMultilabelModel Class CHashedMultilabelModel represents an application specific model and contains application dependent logic for solving multilabel classification with feature hashing within a generic SO framework. We hash the features of each class with a separate seed and put them in the same feature space (exploded feature space)
CCHashedSparseFeatures This class is identical to the CDenseFeatures class except that it hashes each dimension to a new feature space
CCHashedWDFeatures Features that compute the Weighted Degree Kernel feature space explicitly
CCHashedWDFeaturesTransposed Features that compute the Weighted Degree Kernel feature space explicitly
CCHessianLocallyLinearEmbedding Class HessianLocallyLinearEmbedding used to preprocess data using Hessian Locally Linear Embedding algorithm as described in
CCHierarchical Agglomerative hierarchical single linkage clustering
CCHierarchicalMultilabelModel Class CHierarchicalMultilabelModel represents application specific model and contains application dependent logic for solving hierarchical multilabel
classification[1] within a generic SO framework
CCHingeLoss CHingeLoss implements the hinge loss function
CCHistogram Class Histogram computes a histogram over all 16bit unsigned integers in the features
CCHistogramIntersectionKernel The HistogramIntersection kernel operating on realvalued vectors computes the histogram intersection distance between sets of histograms. Note: the current
implementation assumes positive values for the histograms, and input vectors should sum to 1
CCHistogramWordStringKernel The HistogramWordString computes the TOP kernel on inhomogeneous Markov Chains
CCHMM Hidden Markov Model
CCHMSVMModel Class CHMSVMModel that represents the application specific model and contains the application dependent logic to solve Hidden Markov Support Vector Machines
(HM-SVM) type of problems within a generic SO framework
CCHomogeneousKernelMap Preprocessor HomogeneousKernelMap performs homogeneous kernel maps as described in
CCHSIC This class implements the Hilbert-Schmidt Independence Criterion based independence test as described in [1]
CCHuberLoss CHuberLoss implements the Huber loss function. It behaves like SquaredLoss function at values below Huber delta and like absolute deviation at values greater
than the delta
CCHypothesisTest Hypothesis test base class. Provides an interface for statistical hypothesis testing via three methods: compute_statistic(), compute_p_value() and compute_threshold(). The second computes a p-value for the statistic computed by the first method. The p-value represents the position of the statistic in the null-distribution, i.e. the distribution of the statistic population given the null-hypothesis is true (1 - position = p-value). The third method, compute_threshold(), computes a threshold for a given test level which is needed to reject the null-hypothesis
CCICAConverter Class ICAConverter Base class for ICA algorithms
CCID3ClassifierTree Class ID3ClassifierTree, implements classifier tree for discrete feature values using the ID3 algorithm. The training algorithm implemented is as follows :
CCIdentityKernelNormalizer Identity Kernel Normalization, i.e. no normalization is applied
CCImplicitWeightedSpecFeatures Features that compute the Weighted Spectrum Kernel feature space explicitly
CCIndependenceTest Provides an interface for performing the independence test. Given samples \(Z=\{(x_i,y_i)\}_{i=1}^m\) from the joint distribution \(\textbf{P}_{xy}\), does the joint distribution factorize as \(\textbf{P}_{xy}=\textbf{P}_x\textbf{P}_y\), i.e. the product of the marginals? The null-hypothesis says yes, i.e. no dependence; the alternative hypothesis says no
CCIndependentComputationEngine Abstract base class for solving multiple independent instances of CIndependentJob. It has one method, submit_job, which may add the job to an internal queue and might block if there is not yet space in the queue. After jobs are submitted, they might not yet be ready. wait_for_all waits until all jobs are completed; it must be called to guarantee that all jobs are finished
CCIndependentJob Abstract base for general computation jobs to be registered in CIndependentComputationEngine. compute method produces a job result and submits it to the
internal JobResultAggregator. Each set of jobs that form a result will share the same job result aggregator
CCIndexBlock Class IndexBlock used to represent contiguous indices of one group (e.g. block of related features)
CCIndexBlockGroup Class IndexBlockGroup used to represent group-based feature relation
CCIndexBlockRelation Class IndexBlockRelation
CCIndexBlockTree Class IndexBlockTree used to represent tree guided feature relation
CCIndexFeatures The class IndexFeatures implements features that contain the index of the features. These features are used in CCustomKernel::init to make a subset of the kernel matrix. Initialize CIndexFeatures with row_idx and col_idx, pass them to CCustomKernel::init(row_idx, col_idx), then use CCustomKernel::get_kernel_matrix() to get the sub kernel matrix specified by row_idx and col_idx
CCIndirectObject Array class that accesses elements indirectly via an index array
CCIndividualJobResultAggregator Class that aggregates vector job results in each submit_result call of jobs generated from rational approximation of linear operator function times a vector. finalize extracts the imaginary part of that aggregation, applies the linear operator to the aggregation, performs a dot product with the sample vector, multiplies with the constant multiplier (see CRationalApproximation) and stores the result as CScalarResult
CCInference The Inference Method base class
CCIntegration Class that contains certain methods related to numerical integration
CCIntronList Class IntronList
CCInverseMultiQuadricKernel InverseMultiQuadricKernel
CCIOBuffer An I/O buffer class
CCIsomap The Isomap class is used to embed data using Isomap algorithm as described in:
CCIterativeLinearSolver Abstract template base for all iterative linear solvers such as conjugate gradient (CG) solvers. Provides an interface for setting the iteration limit and the relative/absolute tolerance. The solve method is abstract
CCIterativeShiftedLinearFamilySolver Abstract template base for CG based solvers to the solution of shifted linear systems of the form \((A+\sigma)x=b\) for several values of \(\sigma\) simultaneously, using only as many matrix-vector operations as the solution of a single system requires. This class adds another interface to the basic iterative linear solver that takes the shifts, \(\sigma\), and also weights, \(\alpha\), and returns the summation \(\sum_{i} \alpha_{i}x_{i}\), where \(x_{i}\) is the solution of the system \((A+\sigma_{i})x_{i}=b\)
CCJacobiEllipticFunctions Class that contains methods for computing Jacobi elliptic functions related to complex analysis. These functions are the inverse of the elliptic integral of the first kind, i.e.
\[ u(k,m)=\int_{0}^{k}\frac{dt}{\sqrt{(1-t^{2})(1-m^{2}t^{2})}} =\int_{0}^{\varphi}\frac{d\theta}{\sqrt{(1-m^{2}\sin^{2}\theta)}} \]
where \(k=\sin\varphi\), \(t=\sin\theta\) and the parameter \(m, 0\le m \le 1\), is called the modulus. The three main Jacobi elliptic functions are defined as \(sn(u,m)=k=\sin\varphi\), \(cn(u,m)=\cos\theta=\sqrt{1-sn(u,m)^{2}}\) and \(dn(u,m)=\sqrt{1-m^{2}sn(u,m)^{2}}\). For \(k=1\), i.e. \(\varphi=\frac{\pi}{2}\), \(u(1,m)=K(m)\) is known as the complete elliptic integral of the first kind. Similarly, \(u(1,m')= K'(m')\), \(m'=\sqrt{1-m^{2}}\), is called the complementary complete elliptic integral of the first kind. Jacobi functions are doubly periodic with quarter periods \(K\) and \(K'\)
CCJade Class Jade
CCJADiag Class JADiag
CCJADiagOrth Class JADiagOrth
CCJediDiag Class Jedi
CCJediSep Class JediSep
CCJensenMetric Class JensenMetric
CCJensenShannonKernel The Jensen-Shannon kernel operating on real-valued vectors computes the Jensen-Shannon distance between the features. Often used in computer vision
CCJLCoverTreePoint Class Point to use with John Langford's CoverTree. This class must have some associated functions defined (distance, parse_points and print, see below) so it can be used with the CoverTree implementation
CCJobResult Base class that stores the result of an independent job
CCJobResultAggregator Abstract base class that provides an interface for computing an aggregation of the job results of independent computation jobs as they are submitted, and also for finalizing the aggregation
CCKDTree This class implements KD-Tree. cf. http://www.autonlab.org/autonweb/14665/version/2/part/5/data/moore-tutorial.pdf
CCKernel The Kernel base class
CCKernelDensity This class implements the kernel density estimation technique. Kernel density estimation is a non-parametric way to estimate an unknown pdf. The pdf at a query point given finite training samples is calculated using the following formula: \(pdf(x')= \frac{1}{nh} \sum_{i=1}^n K(\frac{||x-x_i||}{h})\). K() in the above formula is called the kernel function and is controlled by the parameter h, called the kernel bandwidth. Presently, this class supports only the Gaussian kernel, which can be used with either Euclidean distance or Manhattan distance. This class makes use of 2 tree structures, KD-tree and Ball tree, for fast calculation. KD-trees are faster than ball trees at lower dimensions; in the case of high dimensional data, ball trees tend to out-perform KD-trees. By default, the class uses the Ball tree
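The estimator quoted in the CCKernelDensity entry can be evaluated naively without the KD-tree or Ball-tree acceleration. Below is a minimal sketch with a standard Gaussian kernel; the kernel normalisation is my own choice and need not match the class's implementation.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Naive 1-D kernel density estimate at a query point, following the formula
// above: pdf(x') = 1/(n*h) * sum_i K(|x' - x_i| / h), with a Gaussian K.
double kde_at(double query, const std::vector<double>& samples, double h)
{
    const double inv_sqrt_2pi = 0.3989422804014327; // 1 / sqrt(2*pi)
    double sum = 0.0;
    for (double x : samples)
    {
        double u = (query - x) / h;
        sum += inv_sqrt_2pi * std::exp(-0.5 * u * u);
    }
    return sum / (static_cast<double>(samples.size()) * h);
}

int main()
{
    std::vector<double> samples{0.1, 0.2, 0.3, 1.5, 1.7};
    std::printf("pdf(0.2) ~ %f\n", kde_at(0.2, samples, 0.5));
    std::printf("pdf(1.6) ~ %f\n", kde_at(1.6, samples, 0.5));
    return 0;
}
```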
CCKernelDependenceMaximization Class CKernelDependenceMaximization, that uses an implementation of CKernelIndependenceTest to compute dependence measures for feature selection. Different kernels are used for labels and data. For the sake of computational convenience, the precompute() method is overridden to precompute the kernel for labels and save it as an instance of CCustomKernel
CCKernelDistance The Kernel distance takes a distance as input
CCKernelIndependenceTest Kernel independence test base class. Provides an interface for performing an independence test. Given samples \(Z=\{(x_i,y_i)\}_{i=1}^m\) from the joint
distribution \(\textbf{P}_{xy}\), does the joint distribution factorize as \(\textbf{P}_{xy}=\textbf{P}_x\textbf{P}_y\), i.e. product of the marginals?
CCKernelLocallyLinearEmbedding Class KernelLocallyLinearEmbedding used to construct embeddings of data using kernel formulation of Locally Linear Embedding algorithm as described in
CCKernelMachine A generic KernelMachine interface
CCKernelMulticlassMachine Generic kernel multiclass
CCKernelNormalizer The class Kernel Normalizer defines a function to post-process kernel values
CCKernelPCA Preprocessor KernelPCA performs kernel principal component analysis
CCKernelRidgeRegression Class KernelRidgeRegression implements Kernel Ridge Regression - a regularized least square method for classification and regression
CCKernelSelection Base class for kernel selection for kernel two-sample test statistic implementations (e.g. MMD). Provides abstract methods for selecting kernels and computing criteria or kernel weights for the implemented method. In order to implement new methods for kernel selection, simply write a new implementation of this class
CCKernelTwoSampleTest Kernel two sample test base class. Provides an interface for performing a two-sample test using a kernel, i.e. Given samples from two distributions \(p\) and
\(q\), the null-hypothesis is: \(H_0: p=q\), the alternative hypothesis: \(H_1: p\neq q\)
CCKLCholeskyInferenceMethod The KL approximation inference method class
CCKLCovarianceInferenceMethod The KL approximation inference method class
CCKLDiagonalInferenceMethod The KL approximation inference method class
CCKLDualInferenceMethod The dual KL approximation inference method class
CCKLDualInferenceMethodMinimizer Build-in minimizer for KLDualInference
CCKLInference The KL approximation inference method class
CCKLLowerTriangularInference The KL approximation inference method class
CCKMeans KMeans clustering, partitions the data into k (a-priori specified) clusters
CCKNN Class KNN, an implementation of the standard k-nearest neigbor classifier
CCKNNHeap This class implements a specialized version of the max heap structure. This heap specializes in storing the least k values seen so far along with the indices (or ids) of the entities with which the values are associated. On calling the push method, it is automatically checked whether the new value supplied is among the least k distances seen so far. Also, in case the heap is already full, the max among the stored values is automatically thrown out as the new value finds its proper place in the heap
CCKRRNystrom Class KRRNystrom implements the Nyström method for kernel ridge regression, using a low-rank approximation to the kernel matrix
CCLabels The class Labels models labels, i.e. class assignments of objects
CCLabelsFactory The helper class to specialize base class instances of labels
CCLanczosEigenSolver Class that computes eigenvalues of a real valued, self-adjoint linear operator using Lanczos algorithm
CCLaplaceInference The Laplace approximation inference method base class
CCLaplacianEigenmaps Class LaplacianEigenmaps used to construct embeddings of data using Laplacian Eigenmaps algorithm as described in:
CCLaRank LaRank multiclass SVM machine This implementation uses LaRank algorithm from Bordes, Antoine, et al., 2007. "Solving multiclass support vector machines with
CCLatentFeatures Latent Features class. The class is for representing features for latent learning, e.g. LatentSVM. It's basically a very generic way of storing features of
any (user-defined) form based on CData
CCLatentLabels Abstract class for latent labels As latent labels always depends on the given application, this class only defines the API that the user has to implement for
latent labels
CCLatentModel Abstract class CLatentModel It represents the application specific model and contains most of the application dependent logic to solve latent variable based
CCLBFGSMinimizer The class wraps the Shogun's C-style LBFGS minimizer
CCLBPPyrDotFeatures Implements Local Binary Patterns with Scale Pyramids as dot features for a set of images. Expects the images to be loaded in a CDenseFeatures object
CCLDA Class LDA implements regularized Linear Discriminant Analysis
CCLeastAngleRegression Class for Least Angle Regression, can be used to solve LASSO
CCLeastSquaresRegression Class to perform Least Squares Regression
CCLibLinear This class provides an interface to the LibLinear library for large- scale linear learning focusing on SVM [1]. This is the classification interface. For
regression, see CLibLinearRegression. There is also an online version, see COnlineLibLinear
CCLibLinearMTL Class to implement LibLinear
CCLibLinearRegression This class provides an interface to the LibLinear library for large- scale linear learning focusing on SVM [1]. This is the regression interface. For
classification, see CLibLinear
CCLibSVM LibSVM
CCLibSVMFile Read sparse real valued features in svm light format e.g. -1 1:10.0 2:100.2 1000:1.3 with -1 == (optional) label and dim 1 - value 10.0 dim 2 - value 100.2
dim 1000 - value 1.3
CCLibSVMOneClass Class LibSVMOneClass
CCLibSVR Class LibSVR, performs support vector regression using LibSVM
CCLikelihoodModel The Likelihood model base class
CCLinearHMM The class LinearHMM is for learning Higher Order Markov chains
CCLinearKernel Computes the standard linear kernel on CDotFeatures
CCLinearLatentMachine Abstract implementation of Linear Machine with latent variable. This is the base implementation of all linear machines with latent variable
CCLinearLocalTangentSpaceAlignment Class LinearLocalTangentSpaceAlignment converter used to construct embeddings as described in:
CCLinearMachine Class LinearMachine is a generic interface for all kinds of linear machines like classifiers
CCLinearMulticlassMachine Generic linear multiclass machine
CCLinearOperator Abstract template base class that represents a linear operator, e.g. a matrix
CCLinearRidgeRegression Class LinearRidgeRegression implements Ridge Regression - a regularized least square method for classification and regression
CCLinearSolver Abstract template base class that provides an abstract solve method for linear systems, that takes a linear operator \(A\), a vector \(b\), solves the system
\(Ax=b\) and returns the vector \(x\)
CCLinearStringKernel Computes the standard linear kernel on dense char valued features
CCLinearTimeMMD This class implements the linear time Maximum Mean Statistic as described in [1] for streaming data (see CStreamingMMD for description)
CCLineReader Class for buffered reading from a ascii file
CCList Class List implements a doubly connected list for low-level-objects
CCListElement Class ListElement, defines how an element of the list looks like
CCLMNN Class LMNN that implements the distance metric learning technique Large Margin Nearest Neighbour (LMNN) described in
CCLMNNStatistics Class LMNNStatistics used to give access to intermediate results obtained training LMNN
CCLocalAlignmentStringKernel The LocalAlignmentString kernel compares two sequences through all possible local alignments between the two sequences
CCLocalityImprovedStringKernel The LocalityImprovedString kernel is inspired by the polynomial kernel. Comparing neighboring characters it puts emphasize on local features
CCLocalityPreservingProjections Class LocalityPreservingProjections used to compute embeddings of data using Locality Preserving Projections method as described in
CCLocallyLinearEmbedding Class LocallyLinearEmbedding used to embed data using Locally Linear Embedding algorithm described in
CCLocalTangentSpaceAlignment Class LocalTangentSpaceAlignment used to embed data using Local Tangent Space Alignment (LTSA) algorithm as described in:
CCLock Class Lock used for synchronization in concurrent programs
CCLogDetEstimator Class to create unbiased estimators of \(\log(\left|C\right|)=\text{trace}(\log(C))\). For each estimate, it samples trace vectors (one by one) and calls submit_jobs of COperatorFunction, stores the resulting job result aggregator instances, calls wait_for_all of CIndependentComputationEngine to ensure that the job result aggregators are all up to date. Then simply computes running averages over the estimates
CCLogitDVGLikelihood Class that models dual variational logit likelihood
CCLogitLikelihood Class that models Logit likelihood
Class that models Logit likelihood and uses numerical integration to approximate the variational expectation of the log likelihood \(\sum_{i=1}^n E_{q(f_i|\mu_i,\sigma^2_i)}[\log P(y_i|f_i)]\)
Class that models Logit likelihood and uses a variational piecewise bound to approximate the variational expectation of the log likelihood \(\sum_{i=1}^n E_{q(f_i|\mu_i,\sigma^2_i)}[\log P(y_i|f_i)]\), where \(p(y_i|f_i) = \frac{\exp(y_i f_i)}{1+\exp(f_i)},\ y_i \in \{0,1\}\)
CCLogKernel Log kernel
CCLogLoss CLogLoss implements the logarithmic loss function
CCLogLossMargin Class CLogLossMargin implements a margin-based log-likelihood loss function
CCLogPlusOne Preprocessor LogPlusOne does what the name says, it adds one to a dense real valued vector and takes the logarithm of each component of it
CCLogRationalApproximationCGM Implementation of rational approximation of an operator-function times vector where the operator function is log of a linear operator. Each complex system generated from the shifts due to rational approximation of operator-log times vector expression is solved at once with a shifted linear-family solver by the computation engine. generate_jobs generates one job per sample
CCLogRationalApproximationIndividual Implementation of rational approximation of an operator-function times vector where the operator function is log of a dense matrix. Each complex system generated from the shifts due to rational approximation of operator-log times vector expression is solved individually with a complex linear solver by the computation engine. generate_jobs generates num_shifts number of jobs per trace sample
CCLOOCrossValidationSplitting Implementation of Leave-one-out cross-validation on the base of CCrossValidationSplitting. Produces subset index sets consisting of one element, for each
CCLoss Class which collects generic mathematical functions
CCLossFunction Class CLossFunction is the base class of all loss functions
CCLPBoost Class LPBoost trains a linear classifier called Linear Programming Machine, i.e. a SVM using a \(\ell_1\) norm regularizer
CCLPM Class LPM trains a linear classifier called Linear Programming Machine, i.e. a SVM using a \(\ell_1\) norm regularizer
CCMachine A generic learning machine interface
CCMachineEvaluation Machine Evaluation is an abstract class that evaluates a machine according to some criterion
CCMahalanobisDistance Class MahalanobisDistance
CCMajorityVote CMajorityVote is a CWeightedMajorityVote combiner, where each Machine's weight in the ensemble is 1.0
CCManhattanMetric Class ManhattanMetric
CCManhattanWordDistance Class ManhattanWordDistance
CCManifoldSculpting Class CManifoldSculpting used to embed data using manifold sculpting embedding algorithm
CCMap Class CMap, a map based on the hash-table. w: http://en.wikipedia.org/wiki/Hash_table
CCMAPInference Class CMAPInference performs MAP inference on a factor graph. Briefly, given a factor graph model, with features \(\bold{x}\), the prediction is obtained by
\( {\arg\max} _{\bold{y}} P(\bold{Y} = \bold{y} | \bold{x}; \bold{w}) \)
CCMAPInferImpl Class CMAPInferImpl abstract class of MAP inference implementation
CCMatchWordStringKernel The class MatchWordStringKernel computes a variant of the polynomial kernel on strings of same length converted to a word alphabet
►CCMath Class which collects generic mathematical functions
CCMatrixFeatures Class CMatrixFeatures used to represent data whose feature vectors are better represented with matrices rather than with unidimensional arrays or vectors. Optionally, it can be restricted that all the feature vectors have the same number of features. Set the attribute num_features different to zero to use this restriction. Allow feature vectors with different number of features by setting num_features equal to zero (default behaviour)
CCMatrixOperations The helper class is used for Laplace and KL methods
CCMatrixOperator Abstract base class that represents a matrix linear operator. It provides an interface to compute the matrix-vector product \(Ax\) in its apply method, \(A\in\mathbb{C}^{m\times n}, A:\mathbb{C}^{n}\rightarrow\mathbb{C}^{m}\) being the matrix operator and \(x\in\mathbb{C}^{n}\) being the vector. The result is a vector \(y\in\mathbb{C}^{m}\)
CCMCLDA Class MCLDA implements multiclass Linear Discriminant Analysis
CCMeanAbsoluteError Class MeanAbsoluteError used to compute an error of regression model
CCMeanFunction An abstract class of the mean function
CCMeanRule CMeanRule simply averages the outputs of the Machines in the ensemble
CCMeanSquaredError Class MeanSquaredError used to compute an error of regression model
CCMeanSquaredLogError Class CMeanSquaredLogError used to compute an error of regression model
CCMemoryMappedFile Memory mapped file
CCMinkowskiMetric Class MinkowskiMetric
CCMixtureModel This is the generic class for mixture models. The final distribution is a mixture of various simple distributions supplied by the user
►CCMKL Multiple Kernel Learning
CCMKLClassification Multiple Kernel Learning for two-class-classification
CCMKLMulticlass MKLMulticlass is a class for L1-norm Multiclass MKL
CCMKLOneClass Multiple Kernel Learning for one-class-classification
CCMKLRegression Multiple Kernel Learning for regression
CCMMDKernelSelection Base class for kernel selection for MMD-based two-sample test statistic implementations. Provides abstract methods for selecting kernels and computing criteria or kernel weights for the implemented method. In order to implement new methods for kernel selection, simply write a new implementation of this
CCMMDKernelSelectionMax Kernel selection class that selects the single kernel that maximises the MMD statistic. Works for CQuadraticTimeMMD and CLinearTimeMMD. This leads to a
heuristic that is better than the standard median heuristic for Gaussian kernels. However, it comes with no guarantees
CCMMDKernelSelectionMedian Implements MMD kernel selection for a number of Gaussian baseline kernels via selecting the one with a bandwidth parameter that is closest to the median of all pairwise distances in the underlying data. Therefore, it only works for data to which a GaussianKernel can be applied, which are grouped under the class CDotFeatures in SHOGUN
CCMMDKernelSelectionOpt Implements optimal kernel selection for single kernels. Given a number of baseline kernels, this method selects the one that minimizes the type II error for
a given type I error for a two-sample test. This only works for the CLinearTimeMMD statistic
CCModelSelection Abstract base class for model selection
CCModelSelectionParameters Class to select parameters and their ranges for model selection. The structure is organized as a tree with different kinds of nodes, depending on the values
of its member variables of name and CSGObject
CCMPDSVM Class MPDSVM
CCMulticlassAccuracy The class MulticlassAccuracy used to compute accuracy of multiclass classification
CCMulticlassLabels Multiclass Labels for multi-class classification
CCMulticlassLibLinear Multiclass LibLinear wrapper. Uses Crammer-Singer formulation and gradient descent optimization algorithm implemented in the LibLinear library. Regularized
bias support is added using stacking bias 'feature' to hyperplanes normal vectors
CCMulticlassLibSVM Class LibSVMMultiClass. Does one vs one classification
CCMulticlassMachine Experimental abstract generic multiclass machine class
CCMulticlassModel Class CMulticlassModel that represents the application specific model and contains the application dependent logic to solve multiclass classification within
a generic SO framework
CCMulticlassOneVsOneStrategy Multiclass one vs one strategy used to train generic multiclass machines for K-class problems with building voting-based ensemble of K*(K-1) binary classifiers; multiclass probabilistic outputs can be obtained by using the heuristics described in [1]
CCMulticlassOneVsRestStrategy Multiclass one vs rest strategy used to train generic multiclass machines for K-class problems with building ensemble of K binary classifiers
CCMulticlassOVREvaluation The class MulticlassOVREvaluation used to compute evaluation parameters of multiclass classification via binary OvR decomposition and given binary evaluation
CCMulticlassSOLabels Class CMulticlassSOLabels to be used in the application of Structured Output (SO) learning to multiclass classification. Each of the labels is represented by a real number and it is required that the values of the labels are in the set {0, 1, ..., num_classes-1}. Each label is of type CRealNumber and all of them are stored in a CDynamicObjectArray
CCMulticlassStrategy Class MulticlassStrategy used to construct generic multiclass classifiers with ensembles of binary classifiers
CCMulticlassSVM Class MultiClassSVM
CCMultidimensionalScaling Class Multidimensionalscaling is used to perform multidimensional scaling (capable of landmark approximation if requested)
CCMultilabelAccuracy Class CMultilabelAccuracy used to compute accuracy of multilabel classification
CCMultilabelCLRModel Class MultilabelCLRModel represents application specific model and contains application dependent logic for solving multi-label classification using
Calibrated Label Ranking (CLR) [1] method within a generic SO framework
CCMultilabelLabels Multilabel Labels for multi-label classification
CCMultilabelModel Class CMultilabelModel represents application specific model and contains application dependent logic for solving multilabel classification within a generic
SO framework
CCMultilabelSOLabels Class CMultilabelSOLabels used in the application of Structured Output (SO) learning to Multilabel Classification. Labels are subsets of {0, 1, ..., num_classes-1}. Each of the labels is of type CSparseMultilabel and all of them are stored in a CDynamicObjectArray
CCMultiLaplaceInferenceMethod The Laplace approximation inference method class for multi classification
CCMultiquadricKernel MultiquadricKernel
CCMultitaskKernelMaskNormalizer The MultitaskKernel allows Multitask Learning via a modified kernel function
CCMultitaskKernelMaskPairNormalizer The MultitaskKernel allows Multitask Learning via a modified kernel function
CCMultitaskKernelMklNormalizer Base-class for parameterized Kernel Normalizers
CCMultitaskKernelNormalizer The MultitaskKernel allows Multitask Learning via a modified kernel function
CCMultitaskKernelPlifNormalizer The MultitaskKernel allows learning a piece-wise linear function (PLIF) via MKL
CCMultitaskKernelTreeNormalizer The MultitaskKernel allows Multitask Learning via a modified kernel function based on taxonomy
CCMultitaskROCEvaluation Class MultitaskROCEvaluation used to evaluate ROC (Receiver Operating Characteristic) and an area under ROC curve (auROC) of each task separately
CCNativeMulticlassMachine Experimental abstract native multiclass machine class
CCNbodyTree This class implements a generalized tree for N-body problems like k-NN, kernel density estimation, 2 point correlation
CCNearestCentroid Class NearestCentroid, an implementation of Nearest Shrunk Centroid classifier
CCNeighborhoodPreservingEmbedding NeighborhoodPreservingEmbedding converter used to construct embeddings as described in:
CCNeuralConvolutionalLayer Main component in convolutional neural networks
CCNeuralInputLayer Represents an input layer. The layer can be either connected to all the input features that a network receives (default) or connected to just a small part of
those features
CCNeuralLayer Base class for neural network layers
CCNeuralLayers A class to construct neural layers
CCNeuralLeakyRectifiedLinearLayer Neural layer with leaky rectified linear neurons
CCNeuralLinearLayer Neural layer with linear neurons, with an identity activation function. can be used as a hidden layer or an output layer
CCNeuralLogisticLayer Neural layer with linear neurons, with a logistic activation function. can be used as a hidden layer or an output layer
CCNeuralNetwork A generic multi-layer neural network
CCNeuralRectifiedLinearLayer Neural layer with rectified linear neurons
CCNeuralSoftmaxLayer Neural layer with linear neurons, with a softmax activation function. can be only be used as an output layer. Cross entropy error measure is used
CCNewtonSVM NewtonSVM. In this implementation a linear SVM is trained in its primal form using Newton-like iterations. This implementation is ported from Olivier Chapelle's fast Newton-based SVM solver, which can be found here: http://mloss.org/software/view/30/ For further information on this implementation of SVM refer to this paper: http://www.kyb.mpg.de/publications/attachments/neco_%5B0%5D.pdf
CCNGramTokenizer The class CNGramTokenizer is used to tokenize a SGVector<char> into n-grams
CCNOCCO This class implements the NOrmalized Cross Covariance Operator (NOCCO) based independence test as described in [1]
CCNode A CNode is an element of a CTaxonomy, which is used to describe hierarchical structure between tasks
CCNormalSampler Class that provides a sample method for Gaussian samples
CCNormOne Preprocessor NormOne, normalizes vectors to have norm 1
Class that models likelihood and uses numerical integration to approximate the variational expectation of the log likelihood \(\sum_{i=1}^n E_{q(f_i|\mu_i,\sigma^2_i)}[\log P(y_i|f_i)]\)
CCOligoStringKernel This class offers access to the Oligo Kernel introduced by Meinicke et al. in 2004
CConditionalProbabilityTreeNodeData Struct to store data of node of conditional probability tree
CCOnlineLibLinear Class implementing a purely online version of CLibLinear, using the L2R_L1LOSS_SVC_DUAL solver only
CCOnlineLinearMachine Class OnlineLinearMachine is a generic interface for linear machines like classifiers which work through online algorithms
CCOnlineSVMSGD Class OnlineSVMSGD
CConstLearningRate This implements the const learning rate class for a descent-based minimizer
CCOperatorFunction Abstract template base class for computing \(s^{T} f(C) s\) for a linear operator C and a vector s. submit_jobs method creates a bunch of jobs needed to
solve for this particular \(s\) and attaches one unique job aggregator to each of them, then submits them all to the computation engine
CCParameterCombination Class that holds ONE combination of parameters for a learning machine. The structure is organized as a tree. Every node may hold a name or an instance of a Parameter class. Nodes may have children. The nodes are organized in such a way that every parameter of a model for model selection has one node and sub-parameters are stored in sub-nodes. Using a tree of this class, parameters of models may easily be set. There are these types of nodes:
CCParser Class for reading from a string
CCPCA Preprocessor PCA performs principal component analysis on input feature vectors/matrices. When the init method in PCA is called with a proper feature matrix X (with say N number of vectors and D feature dimension), a transformation matrix is computed and stored internally. This transformation matrix is then used to transform all D-dimensional feature vectors or feature matrices (with D feature dimensions) supplied via apply_to_feature_matrix or apply_to_feature_vector methods. This transformation outputs the T-dimensional approximation of all these input vectors and matrices (where T<=min(D,N)). The transformation matrix is essentially a DxT matrix, the columns of which correspond to the eigenvectors of the covariance matrix (XX') having top T eigenvalues
CCPerceptron Class Perceptron implements the standard linear (online) perceptron
CCPeriodicKernel The periodic kernel as described in The Kernel Cookbook by David Duvenaud: http://people.seas.harvard.edu/~dduvenaud/cookbook/
CCPlif Class Plif
CCPlifArray Class PlifArray
CCPlifBase Class PlifBase
CCPlifMatrix Store plif arrays for all transitions in the model
CCPluginEstimate Class PluginEstimate
CCPNorm Preprocessor PNorm, normalizes vectors to have p-norm
CCPolyFeatures Implement DotFeatures for the polynomial kernel
CCPolyKernel Computes the standard polynomial kernel on CDotFeatures
CCPolyMatchStringKernel The class PolyMatchStringKernel computes a variant of the polynomial kernel on strings of same length
CCPolyMatchWordStringKernel The class PolyMatchWordStringKernel computes a variant of the polynomial kernel on word-features
CCPositionalPWM Positional PWM
CCPowerKernel Power kernel
CCPRCEvaluation Class PRCEvaluation used to evaluate PRC (Precision Recall Curve) and an area under PRC curve (auPRC)
CCPrecisionMeasure Class PrecisionMeasure used to measure precision of 2-class classifier
CCPreprocessor Class Preprocessor defines a preprocessor interface
CCProbabilityDistribution A base class for representing n-dimensional probability distribution over the real numbers (64bit) for which various statistics can be computed and which can
be sampled
CCProbitLikelihood Class that models Probit likelihood
Class that models Probit likelihood and uses numerical integration to approximate the variational expectation of the log likelihood \(\sum_{i=1}^n E_{q(f_i|\mu_i,\sigma^2_i)}[\log P(y_i|f_i)]\)
CCProductKernel The Product kernel is used to combine a number of kernels into a single ProductKernel object by element multiplication
CCProtobufFile Class for work with binary file in protobuf format
CCPruneVarSubMean Preprocessor PruneVarSubMean will subtract the mean and remove features that have zero variance
CCPyramidChi2 Pyramid Kernel over Chi2 matched histograms
CCQDA Class QDA implements Quadratic Discriminant Analysis
CCQDiag Class QDiag
CCQuadraticTimeMMD This class implements the quadratic time Maximum Mean Statistic as described in [1]. The MMD is the distance of two probability distributions \(p\) and \(q\) in a RKHS which we denote by \[ \hat{\eta}_k=\text{MMD}[\mathcal{F},p,q]^2=\textbf{E}_{x,x'}\left[k(x,x')\right]-2\textbf{E}_{x,y}\left[k(x,y)\right]+\textbf{E}_{y,y'}\left[k(y,y')\right]=||\mu_p - \mu_q||^2_\mathcal{F} \]
CCRandom Pseudo random number generator
CCRandomCARTree This class implements the randomized CART algorithm used in the tree growing process of candidate trees in the Random Forests algorithm. The tree growing process is different from the original CART algorithm because of the input attributes which are considered for each node split. In randomized CART, a few (fixed number) attributes are randomly chosen from all available attributes while deciding the best split. This is unlike the original CART where all available attributes are considered while deciding the best split
CCRandomForest This class implements the Random Forests algorithm. In the Random Forests algorithm, we train a number of randomized CART trees (see class CRandomCARTree) using the supplied training data. The number of trees to be trained is a parameter (called number of bags) controlled by the user. Test feature vectors are classified/regressed by combining the outputs of all these trained candidate trees using a combination rule (see class CCombinationRule). The feature for calculating out-of-bag error is also provided to help determine the appropriate number of bags. The evaluation criteria for calculating this out-of-bag error is specified by the user (see class CEvaluation)
CCRandomFourierDotFeatures This class implements the random Fourier features for the DotFeatures framework. Basically upon the object creation it computes the random coefficients, namely w and b, that are needed for this method and then every time a vector is required it is computed based on the following formula z(x) = sqrt(2/D) * cos(w'*x + b), where D is the number of samples that are used
CCRandomFourierGaussPreproc Preprocessor CRandomFourierGaussPreproc implements Random Fourier Features for the Gauss kernel a la Ali Rahimi and Ben Recht (NIPS 2007); after preprocessing, using the features in a linear kernel approximates a Gaussian kernel
CCRandomKitchenSinksDotFeatures Class that implements the Random Kitchen Sinks (RKS) for the DotFeatures as mentioned in http://books.nips.cc/papers/files/nips21/NIPS2008_0885.pdf
CCRandomSearchModelSelection Model selection class which searches for the best model by a random search. See CModelSelection for details
CCRationalApproximation Abstract base class of the rational approximation of a function of a linear operator (A) times vector (v) using Cauchy's integral formula - \[f(\text{A})\text{v}=\oint_{\Gamma}f(z)(z\text{I}-\text{A})^{-1}\text{v}\,dz\] Computes eigenvalues of the linear operator and uses Jacobi elliptic functions and conformal maps [2] for the quadrature rule for discretizing the contour integral and computes complex shifts, weights and constant multiplier of the rational approximation of the above expression as \[f(\text{A})\text{v}\approx \eta\,\text{A}\,\Im\left(\sum_{l=1}^{N}\alpha_{l}(\text{A}-\sigma_{l}\text{I})^{-1}\text{v}\right)\] where \(\alpha_{l},\sigma_{l}\in\mathbb{C}\) are respectively the shifts and weights of the linear systems generated from the rational approximation, and \(\eta\in\mathbb{R}\) is the constant multiplier, equal to \(\frac{-8K(\lambda_{m}\lambda_{M})^{\frac{1}{4}}}{k\pi N}\)
CCRationalApproximationCGMJob Implementation of independent jobs that solves one whole family of shifted systems in rational approximation of linear operator function times a vector using
CG-M linear solver. compute calls submit_results of the aggregator with CScalarResult (see CRationalApproximation)
CCRationalApproximationIndividualJob Implementation of independent job that solves one of the family of shifted systems in rational approximation of linear operator function times a vector using a direct linear solver. The shift is moved inside the operator. compute calls submit_results of the aggregator with CVectorResult which is the solution vector for that shift multiplied by complex weight (See CRationalApproximation)
CCRationalQuadraticKernel Rational Quadratic kernel
CCRBM A Restricted Boltzmann Machine
CCRealDistance Class RealDistance
CCRealFileFeatures The class RealFileFeatures implements a dense double-precision floating point matrix from a file
CCRealNumber Class CRealNumber to be used in the application of Structured Output (SO) learning to multiclass classification. Even though it is likely that it does not make sense to consider real numbers as structured data, it has been made in this way because the basic type to use in structured labels needs to inherit from
CCRecallMeasure Class RecallMeasure used to measure recall of 2-class classifier
CCRegressionLabels Real Labels are real-valued labels
CCRegulatoryModulesStringKernel The Regulatory Modules kernel, based on the WD kernel, as published in Schultheiss et al., Bioinformatics (2009) on regulatory sequences
CCRejectionStrategy Base rejection strategy class
CCRescaleFeatures Preprocessor RescaleFeatures rescales the range of features to make the features independent of each other and aims to scale the range in [0, 1] or [-1, 1]
CCRidgeKernelNormalizer Normalize the kernel by adding a constant term to its diagonal. This aids kernels to become positive definite (even though they are not - often caused by
numerical problems)
CCROCEvaluation Class ROCEvaluation used to evaluate ROC (Receiver Operating Characteristic) and an area under ROC curve (auROC)
CCSalzbergWordStringKernel The SalzbergWordString kernel implements the Salzberg kernel
CCScalarResult Base class that stores the result of an independent job when the result is a scalar
CCScatterKernelNormalizer Scatter kernel normalizer
CCScatterSVM ScatterSVM - Multiclass SVM
CCSegmentLoss Class IntronList
CCSequence Class CSequence to be used in the application of Structured Output (SO) learning to Hidden Markov Support Vector Machines (HM-SVM)
CCSequenceLabels Class CSequenceLabels used e.g. in the application of Structured Output (SO) learning to Hidden Markov Support Vector Machines (HM-SVM). Each of the labels
is represented by a sequence of integers. Each label is of type CSequence and all of them are stored in a CDynamicObjectArray
CCSerialComputationEngine Class that computes multiple independent instances of computation jobs sequentially
CCSerializableAsciiFile Serializable ascii file
►CCSerializableFile Serializable file
CTSerializableReader Serializable reader
CCSet Class CSet, a set based on the hash-table. w: http://en.wikipedia.org/wiki/Hash_table
CCSGDQN Class SGDQN
►CCSGObject Class SGObject is the base class of all shogun objects
CCShiftInvariantKernel Base class for the family of kernel functions that only depend on the difference of the inputs, i.e. whose value does not change if the inputs are shifted by the same amount. More precisely, \[ k(\mathbf{x}, \mathbf{x'}) = k(\mathbf{x-x'}) \] For example, the Gaussian (RBF) kernel is a shift invariant kernel
CCSigmoidKernel The standard Sigmoid kernel computed on dense real valued features
CCSignal Class Signal implements signal handling to e.g. allow ctrl+c to cancel a long running process
CCSimpleFile Template class SimpleFile to read and write from files
CCSimpleLocalityImprovedStringKernel SimpleLocalityImprovedString kernel, is a ``simplified'' and better performing version of the Locality improved kernel
CCSingleFITCInference The Fully Independent Conditional Training inference base class for Laplace and regression for 1-D labels (1D regression and binary classification)
CCSingleFITCLaplaceInferenceMethod The FITC approximation inference method class for regression and binary Classification. Note that the number of inducing points (m) is usually far less than
the number of input points (n). (the time complexity is computed based on the assumption m < n)
CCSingleFITCLaplaceNewtonOptimizer The built-in minimizer for SingleFITCLaplaceInference
CCSingleLaplaceInferenceMethod The SingleLaplace approximation inference method class for regression and binary Classification
CCSingleLaplaceNewtonOptimizer The built-in minimizer for SingleLaplaceInference
CCSingleSparseInference The sparse inference base class for classification and regression for 1-D labels (1D regression and binary classification)
CCSmoothHingeLoss CSmoothHingeLoss implements the smooth hinge loss function
CCSNPFeatures Features that compute the Weighted Degree Kernel feature space explicitly
CCSNPStringKernel The class SNPStringKernel computes a variant of the polynomial kernel on strings of same length
CCSOBI Class SOBI
CCSoftMaxLikelihood Class that models Soft-Max likelihood
CCSortUlongString Preprocessor SortUlongString, sorts the individual strings in ascending order
CCSortWordString Preprocessor SortWordString, sorts the individual strings in ascending order
CCSOSVMHelper Class CSOSVMHelper contains helper functions to compute primal objectives, dual objectives, average training losses, duality gaps etc. These values will be
recorded to check convergence. This class is inspired by the matlab implementation of the block coordinate Frank-Wolfe SOSVM solver [1]
CCSparseDistance Template class SparseDistance
CCSparseEuclideanDistance Class SparseEuclideanDistance
CCSparseFeatures Template class SparseFeatures implements sparse matrices
CCSparseInference The Fully Independent Conditional Training inference base class
CCSparseKernel Template class SparseKernel, is the base class of kernels working on sparse features
CCSparseMatrixOperator Class that represents a sparse-matrix linear operator. It computes matrix-vector product \(Ax\) in its apply method, \(A\in\mathbb{C}^{m\times n}, A:\mathbb{C}^{n}\rightarrow \mathbb{C}^{m}\) being the matrix operator and \(x\in\mathbb{C}^{n}\) being the vector. The result is a vector \(y\in\mathbb{C}^{m}\)
CCSparseMultilabel Class CSparseMultilabel to be used in the application of Structured Output (SO) learning to Multilabel classification
CCSparsePolyFeatures Implement DotFeatures for the polynomial kernel
CCSparsePreprocessor Template class SparsePreprocessor, base class for preprocessors (cf. CPreprocessor) that apply to CSparseFeatures
CCSparseSpatialSampleStringKernel Sparse Spatial Sample String Kernel by Pavel Kuksa and Vladimir Pavlovic
CCSpecificityMeasure Class SpecificityMeasure used to measure specificity of 2-class classifier
CCSpectrumMismatchRBFKernel Spectrum mismatch rbf kernel
CCSpectrumRBFKernel Spectrum rbf kernel
CCSphericalKernel Spherical kernel
CCSplineKernel Computes the Spline Kernel function which is the cubic polynomial
CCSplittingStrategy Abstract base class for all splitting types. Takes a CLabels instance and generates a desired number of subsets which are being accessed by their indices via
the method generate_subset_indices(...)
CCSqrtDiagKernelNormalizer SqrtDiagKernelNormalizer divides by the Square Root of the product of the diagonal elements
CCSquaredHingeLoss Class CSquaredHingeLoss implements a squared hinge loss function
CCSquaredLoss CSquaredLoss implements the squared loss function
CCStateModel Class CStateModel base, abstract class for the internal state representation used in the CHMSVMModel
►CCStatistics Class that contains certain functions related to statistics, such as probability/cumulative distribution functions, different statistics, etc
CCStochasticGBMachine This class implements the stochastic gradient boosting algorithm for ensemble learning invented by Jerome H. Friedman. This class works with a variety of loss functions like squared loss, exponential loss, Huber loss etc which can be accessed through Shogun's CLossFunction interface (cf. http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CLossFunction.html). Additionally, it can create an ensemble of any regressor class derived from the CMachine class (cf. http://www.shogun-toolbox.org/doc/en/latest/classshogun_1_1CMachine.html). For one dimensional optimization, this class uses the backtracking linesearch accessed via Shogun's L-BFGS class. A concise description of the algorithm implemented can be found in the following link : http://
CCStochasticProximityEmbedding Class StochasticProximityEmbedding used to construct embeddings of data using the Stochastic Proximity algorithm
CCStochasticSOSVM Class CStochasticSOSVM solves SOSVM using stochastic subgradient descent on the SVM primal problem [1], which is equivalent to SGD or Pegasos [2]. This class
is inspired by the matlab SGD implementation in [3]
CCStoreScalarAggregator Template class that aggregates scalar job results in each submit_result call, finalize then transforms current aggregation into a CScalarResult
CCStoreVectorAggregator Abstract template class that aggregates vector job results in each submit_result call, finalize is abstract
CCStratifiedCrossValidationSplitting Implementation of stratified cross-validation on the base of CSplittingStrategy. Produces subset index sets of equal size (at most one difference) in which the label ratio is equal (at most one difference) to the label ratio of the specified labels. Do not use for regression since it may be impossible to distribute nicely in that case
CCStreamingAsciiFile Class StreamingAsciiFile to read vector-by-vector from ASCII files
CCStreamingDenseFeatures This class implements streaming features with dense feature vectors
CCStreamingDotFeatures Streaming features that support dot products among other operations
CCStreamingFeatures Streaming features are features which are used for online algorithms
CCStreamingFile A Streaming File access class
CCStreamingFileFromDenseFeatures Class CStreamingFileFromDenseFeatures is a derived class of CStreamingFile which creates an input source for the online framework from a CDenseFeatures
CCStreamingFileFromFeatures Class StreamingFileFromFeatures to read vector-by-vector from a CFeatures object
CCStreamingFileFromSparseFeatures Class CStreamingFileFromSparseFeatures is derived from CStreamingFile and provides an input source for the online framework. It uses an existing
CSparseFeatures object to generate online examples
CCStreamingFileFromStringFeatures Class CStreamingFileFromStringFeatures is derived from CStreamingFile and provides an input source for the online framework from a CStringFeatures object
CCStreamingHashedDenseFeatures This class acts as an alternative to the CStreamingDenseFeatures class and their difference is that the current example in this class is hashed into a
smaller dimension dim
CCStreamingHashedDocDotFeatures This class implements streaming features for a document collection. Like in the standard Bag-of-Words representation, this class considers each document as a collection of tokens, which are then hashed into a new feature space of a specified dimension. This class is very flexible and allows the user to specify the tokenizer used to tokenize each document, specify whether the results should be normalized with regard to the sqrt of the document size, as well as to specify whether to combine different tokens. The latter implements a k-skip n-grams approach, meaning that you can combine up to n tokens, while skipping up to k. E.g. for the tokens ["a", "b", "c", "d"], with n_grams = 2 and skips = 2, one would get the following combinations: ["a", "ab", "ac" (skipped 1), "ad" (skipped 2), "b", "bc", "bd" (skipped 1), "c", "cd", "d"]
CCStreamingHashedSparseFeatures This class acts as an alternative to the CStreamingSparseFeatures class and their difference is that the current example in this class is hashed into a
smaller dimension dim
CCStreamingMMD Abstract base class that provides an interface for performing kernel two-sample test on streaming data using Maximum Mean Discrepancy (MMD) as the test
statistic. The MMD is the distance of two probability distributions \(p\) and \(q\) in a RKHS (see [1] for formal description)
CCStreamingSparseFeatures This class implements streaming features with sparse feature vectors. The vector is represented as an SGSparseVector<T>. Each entry is of type
SGSparseVectorEntry<T> with members `feat_index' and `entry'
CCStreamingStringFeatures This class implements streaming features as strings
CCStreamingVwCacheFile Class StreamingVwCacheFile to read vector-by-vector from VW cache files
CCStreamingVwFeatures This class implements streaming features for use with VW
CCStreamingVwFile Class StreamingVwFile to read vector-by-vector from Vowpal Wabbit data files. It reads the example and label into one object of VwExample type
CCStringDistance Template class StringDistance
CCStringFeatures Template class StringFeatures implements a list of strings
CCStringFileFeatures File based string features
CCStringKernel Template class StringKernel, is the base class of all String Kernels
CCStringMap The class is a customized map for the optimization framework
CCStringPreprocessor Template class StringPreprocessor, base class for preprocessors (cf. CPreprocessor) that apply to CStringFeatures (i.e. strings of variable length)
CCStructuredAccuracy Class CStructuredAccuracy used to compute accuracy of structured classification
CCStructuredData Base class of the components of StructuredLabels
CCStructuredLabels Base class of the labels used in Structured Output (SO) problems
CCStructuredModel Class CStructuredModel that represents the application specific model and contains most of the application dependent logic to solve structured output (SO) problems. The idea of this class is to be instantiated giving pointers to the functions that are dependent on the application, i.e. the combined feature representation \(\Psi(\bold{x},\bold{y})\) and the argmax function \({\arg\max}_{\bold{y} \neq \bold{y}_i} \left\langle {\bold{w}, \Psi(\bold{x}_i,\bold{y})} \right\rangle\). See: MulticlassModel.h and .cpp for an example of these functions implemented
CCStudentsTLikelihood Class that models a Student's-t likelihood
Class that models Student's T likelihood and uses numerical integration to approximate the variational expectation of the log likelihood \(\sum_{i=1}^n E_{q(f_i|\mu_i,\sigma^2_i)}[\log P(y_i|f_i)]\)
CCSubsequenceStringKernel Class SubsequenceStringKernel that implements the String Subsequence Kernel (SSK) discussed by Lodhi et al. [1]. A subsequence is any ordered sequence of \(n\) characters occurring in the text, though not necessarily contiguous. More formally, string \(u\) is a subsequence of string \(s\), iff there exist indices \(\mathbf{i}=(i_{1},\dots,i_{|u|})\), with \(1\le i_{1} \le \cdots \le i_{|u|} \le |s|\), such that \(u_{j}=s_{i_{j}}\) for \(j=1,\dots,|u|\), written as \(u=s[\mathbf{i}]\). The feature mapping \(\phi\) in this scenario is given by \[ \phi_{u}(s)=\sum_{\mathbf{i}:u=s[\mathbf{i}]}\lambda^{l(\mathbf{i})} \] for some \(\lambda\le 1\), where \(l(\mathbf{i})\) is the length of the subsequence in \(s\), given by \(i_{|u|}-i_{1}+1\). The kernel here is an inner product in the feature space generated by all subsequences of length \(n\): \[ K_{n}(s,t)=\sum_{u\in\Sigma^{n}}\langle \phi_{u}(s), \phi_{u}(t)\rangle = \sum_{u\in\Sigma^{n}}\sum_{\mathbf{i}:u=s[\mathbf{i}]} \sum_{\mathbf{j}:u=t[\mathbf{j}]}\lambda^{l(\mathbf{i})+l(\mathbf{j})} \] Since the subsequences are weighted by the exponentially decaying factor \(\lambda\) of their full length in the text, more weight is given to those occurrences that are nearly contiguous. A direct computation is infeasible since the dimension of the feature space grows exponentially with \(n\). The paper describes an efficient computation approach using a dynamic programming technique
CCSubset Wrapper class for an index subset which is used by SubsetStack
CCSubsetStack Class to add subset support to another class. A CSubsetStack instance should be added and wrapper methods to all interfaces should be added
CCSumOne Preprocessor SumOne, normalizes vectors to have sum 1
CCSVM A generic Support Vector Machine Interface
CCSVMLight Class SVMlight
CCSVMLightOneClass Trains a one class C SVM
CCSVMLin Class SVMLin
CCSVMSGD Class SVMSGD
CCSVRLight Class SVRLight, performs support vector regression using SVMLight
CCTableFactorType Class CTableFactorType the way that store assignments of variables and energies in a table or a multi-array
CCTanimotoDistance Class Tanimoto coefficient
CCTanimotoKernelNormalizer TanimotoKernelNormalizer performs kernel normalization inspired by the Tanimoto coefficient (see http://en.wikipedia.org/wiki/Jaccard_index )
CCTask Class Task used to represent tasks in multitask learning. Essentially it represent a set of feature vector indices
CCTaskGroup Class TaskGroup used to represent a group of tasks. Tasks in group do not overlap
CCTaskRelation Used to represent tasks in multitask learning
CCTaskTree Class TaskTree used to represent a tree of tasks. Tree is constructed via task with subtasks (and subtasks of subtasks ..) passed to the TaskTree
CCTaxonomy CTaxonomy is used to describe hierarchical structure between tasks
CCTDistributedStochasticNeighborEmbedding Class CTDistributedStochasticNeighborEmbedding used to embed data using t-distributed stochastic neighbor embedding algorithm: http://jmlr.csail.mit.edu/
CCTensorProductPairKernel Computes the Tensor Product Pair Kernel (TPPK)
CCThresholdRejectionStrategy Threshold based rejection strategy
CCTime Class Time that implements a stopwatch based on either cpu time or wall clock time
CCTokenizer The class CTokenizer acts as a base class in order to implement tokenizers. Sub-classes must implement the methods has_next(), next_token_idx() and get_copy
CCTOPFeatures The class TOPFeatures implements TOP kernel features obtained from two Hidden Markov models
CCTraceSampler Abstract template base class that provides an interface for sampling the trace of a linear operator using an abstract sample method
CCTreeMachine Class TreeMachine, a base class for tree based multiclass classifiers. This class is derived from CBaseMulticlassMachine and stores the root node (of class
type CTreeMachineNode) to the tree structure
CCTreeMachineNode The node of the tree structure forming a TreeMachine The node contains a pointer to its parent and a vector of pointers to its children. A node of this class can have only one parent but any number of children. The node also contains data which can be of any type and has to be specified using template specifier
CCTrie Template class Trie implements a suffix trie, i.e. a tree in which all suffixes up to a certain length are stored
CCTStudentKernel Generalized T-Student kernel
CCTwoSampleTest Provides an interface for performing the classical two-sample test i.e. Given samples from two distributions \(p\) and \(q\), the null-hypothesis is: \(H_0:
p=q\), the alternative hypothesis: \(H_1: p\neq q\)
CCTwoStateModel Class CTwoStateModel class for the internal two-state representation used in the CHMSVMModel
CCUAIFile Class UAIFILE used to read data from UAI files. See http://graphmod.ics.uci.edu/uai08/FileFormat for more details
CCUWedge Class UWedge
CCUWedgeSep Class UWedgeSep
CCVarDTCInferenceMethod The inference method class based on the Titsias' variational bound. For more details, see Titsias, Michalis K. "Variational learning of inducing variables in
sparse Gaussian processes." International Conference on Artificial Intelligence and Statistics. 2009
CCVarianceKernelNormalizer VarianceKernelNormalizer divides by the ``variance''
CCVariationalGaussianLikelihood The variational Gaussian Likelihood base class. The variational distribution is Gaussian
CCVariationalLikelihood The Variational Likelihood base class
CCVectorResult Base class that stores the result of an independent job when the result is a vector
CCVowpalWabbit Class CVowpalWabbit is the implementation of the online learning algorithm used in Vowpal Wabbit
CCVwAdaptiveLearner VwAdaptiveLearner uses an adaptive subgradient technique to update weights
CCVwCacheReader Base class from which all cache readers for VW should be derived
CCVwCacheWriter CVwCacheWriter is the base class for all VW cache creating classes
CCVwEnvironment Class CVwEnvironment is the environment used by VW
CCVwLearner Base class for all VW learners
CCVwNativeCacheReader Class CVwNativeCacheReader reads from a cache exactly as that which has been produced by VW's default cache format
CCVwNativeCacheWriter Class CVwNativeCacheWriter writes a cache exactly as that which would be produced by VW's default cache format
CCVwNonAdaptiveLearner VwNonAdaptiveLearner uses a standard gradient descent weight update rule
CCVwParser CVwParser is the object which provides the functions to parse examples from buffered input
CCVwRegressor Regressor used by VW
CCWaveKernel Wave kernel
CCWaveletKernel Class WaveletKernel
CCWDFeatures Features that compute the Weighted Degree Kernel feature space explicitly
CCWeightedCommWordStringKernel The WeightedCommWordString kernel may be used to compute the weighted spectrum kernel (i.e. a spectrum kernel for 1 to K-mers, where each k-mer length is
weighted by some coefficient \(\beta_k\)) from strings that have been mapped into unsigned 16bit integers
CCWeightedDegreePositionStringKernel The Weighted Degree Position String kernel (Weighted Degree kernel with shifts)
CCWeightedDegreeRBFKernel Weighted degree RBF kernel
CCWeightedDegreeStringKernel The Weighted Degree String kernel
CCWeightedMajorityVote Weighted Majority Vote implementation
CCWRACCMeasure Class WRACCMeasure used to measure weighted relative accuracy of 2-class classifier
CCWrappedBasic Simple wrapper class that allows to store any Shogun basic parameter (i.e. float64_t, int64_t, char, etc) in a CSGObject, and therefore to make it serializable. Using a template argument that is not a Shogun parameter will cause a compile error when trying to register the passed value as a parameter in the constructors
CCWrappedObjectArray Specialization of CDynamicObjectArray that adds methods to append wrapped elements to make them serializable. Objects are wrapped through the classes
CWrappedBasic, CWrappedSGVector, CWrappedSGMatrix
CCWrappedSGMatrix Simple wrapper class that allows to store any Shogun SGMatrix<T> in a CSGObject, and therefore to make it serializable. Using a template argument that is not
a Shogun parameter will cause a compile error when trying to register the passed value as a parameter in the constructors
CCWrappedSGVector Simple wrapper class that allows to store any Shogun SGVector<T> in a CSGObject, and therefore to make it serializable. Using a template argument that is not
a Shogun parameter will cause a compile error when trying to register the passed value as a parameter in the constructors
CCZeroMean The zero mean function class
CCZeroMeanCenterKernelNormalizer ZeroMeanCenterKernelNormalizer centers the kernel in feature space
CDescendCorrection This is a base class for descent-based correction methods
CDescendUpdater This is a base class for descent update methods
CDescendUpdaterWithCorrection This is a base class for descent update methods with descent-based correction
CDynArray Template Dynamic array class that creates an array that can be used like a list or an array
CEigenSparseUtil This class contains some utilities for Eigen3 Sparse Matrix integration with shogun. Currently it provides a method for converting SGSparseMatrix to Eigen3
CElasticNetPenalty This is the base class for ElasticNet penalty/regularization within the FirstOrderMinimizer framework
CFirstOrderBoundConstraintsCostFunction The first order cost function base class with bound constrains
CFirstOrderCostFunction The first order cost function base class
CFirstOrderMinimizer The first order minimizer base class
CFirstOrderSAGCostFunction The class is about a stochastic cost function for stochastic average minimizers
CFirstOrderStochasticCostFunction The first order stochastic cost function base class
CFirstOrderStochasticMinimizer The base class for stochastic first-order gradient-based minimizers
CGCEdge Graph cuts edge
CGCNode Graph cuts node
CGCNodePtr Graph cuts node pointer
CGradientDescendUpdater The class implements the gradient descent method
Cid3TreeNodeData Structure to store data of a node of id3 tree. This can be used as a template type in TreeMachineNode class. Ex: id3 algorithm uses nodes of type
CInverseScalingLearningRate This implements the inverse scaling learning rate
CIterativeSolverIterator Template class that is used as an iterator for an iterative linear solver. In the iteration of the solving phase, each solver initializes the iteration with a maximum number of iteration limit, and relative/absolute tolerance. They then call begin with the residual vector and continue until its end returns true, i.e. either it has converged or the iteration count reached the maximum limit
CL1Penalty This is the base class for L1 penalty/regularization within the FirstOrderMinimizer framework
CL1PenaltyForTG This is the base class for L1 penalty/regularization within the FirstOrderMinimizer framework
CL2Penalty The class implements L2 penalty/regularization within the FirstOrderMinimizer framework
CLearningRate The base class about learning rate for descent-based minimizers
CMappedSparseMatrix Mapped sparse matrix for representing graph relations of tasks
CMappingFunction The base mapping function for mirror descent
CMaybe Holder that represents an object that can be either present or absent. Quite similar to std::optional introduced in C++17, but provides a way to pass the reason of absence (e.g. "incorrect parameter")
CMinimizer The minimizer base class
CMixModelData This structure is used for storing data required for using the generic Expectation Maximization (EM) implemented by the template class CEMBase for mixture models like gaussian mixture model, multinomial mixture model etc. The EM specialized for mixture models is implemented by the class CEMMixtureModel which uses this MixModelData structure
CMKLMulticlassGLPK MKLMulticlassGLPK is a helper class for MKLMulticlass
CMKLMulticlassGradient MKLMulticlassGradient is a helper class for MKLMulticlass
CMKLMulticlassOptimizationBase MKLMulticlassOptimizationBase is a helper class for MKLMulticlass
CModel Class Model
CMomentumCorrection This is a base class for momentum correction methods
CMunkres Munkres
CNbodyTreeNodeData Structure to store data of a node of N-Body tree. This can be used as a template type in TreeMachineNode class. N-Body tree building algorithm uses nodes of
type CBinaryTreeMachineNode<NbodyTreeNodeData>
CNesterovMomentumCorrection This implements the Nesterov's Accelerated Gradient (NAG) correction
CParallel Class Parallel provides helper functions for multithreading
CParameter Parameter class
CPenalty The base class for penalty/regularization used in minimization
CPNormMappingFunction This implements the P-norm mapping/projection function
CPointerValueAnyPolicy This is one concrete implementation of policy that uses void pointers to store values
CProximalPenalty The base class for sparse penalty/regularization used in minimization
CRmsPropUpdater The class implements the RmsProp method
CSerializableAsciiReader00 Serializable ascii reader
CSGDMinimizer The class implements the stochastic gradient descent (SGD) minimizer
CSGIO Class SGIO, used to do input output operations throughout shogun
CSGMatrix Shogun matrix
CSGMatrixList Shogun matrix list
CSGNDArray Shogun n-dimensional array
CSGReferencedData Shogun reference count managed data
CSGSparseMatrix Template class SGSparseMatrix
CSGSparseVector Template class SGSparseVector The assumption is that the stored SGSparseVectorEntry<T>* vector is ordered by SGSparseVectorEntry.feat_index in non-decreasing order. This has to be assured by the user of the class
CSGSparseVectorEntry Template class SGSparseVectorEntry
CSGString Shogun string
CSGStringList Template class SGStringList
CSGVector Shogun vector
CShogunException Class ShogunException defines an exception which is thrown whenever an error inside of shogun occurs
CSMDMinimizer The class implements the stochastic mirror descent (SMD) minimizer
CSMIDASMinimizer The class implements the Stochastic MIrror Descent mAde Sparse (SMIDAS) minimizer
CSparsePenalty The base class for sparse penalty/regularization used in minimization
CSparsityStructure Struct that represents the sparsity structure of the Sparse Matrix in CRS. Implementation has been adapted from the Krylstat (https://github.com/Froskekongen/KRYLSTAT) library (c) Erlend Aune under GPL2+
CSSKFeatures SSKFeatures
CStandardMomentumCorrection This implements the plain momentum correction
Csubstring Struct Substring, specified by start position and end position
CSVRGMinimizer The class implements the stochastic variance reduced gradient (SVRG) minimizer
CTag Acts as an identifier for a shogun object. It contains type information and name of the object. Generally used to CSGObject::set() and CSGObject::get()
parameters of a class
CTParameter Parameter struct
CTSGDataType Datatypes that shogun supports
Cv_array Class v_array taken directly from JL's implementation
CVersion Class Version provides version information
CVwExample Example class for VW
CVwFeature One feature in VW
CVwLabel Class VwLabel holds a label object used by VW
Chash< shogun::BaseTag >
CCSyntaxHighLight Syntax highlight
CCTron Class Tron
How do I repeat/loop multiple functions that are related?
Hello guys, I've been stuck on this question for some time. For my equation, I need to use WolframAlpha for my data. Firstly, I fix my initial water temperature as 500 Kelvin. E.g.:
Twater = 300
Cwater = QuantityMagnitude[
"ThermalConductivity", {"Temperature" ->
Quantity[Twater, "Kelvins"]}]]
With Cwater determined, I can implement the next 2 functions
Dwater = (5 Cwater)/\[Pi]
Ewater = Dwater^3 + 2 Cwater
Then by using ParametricNDSolveValue, I can determine the value of Z
ClassicalRungeKuttaCoefficients[4, prec_] :=
With[{amat = {{1/2}, {0, 1/2}, {0, 0, 1}},
bvec = {1/6, 1/3, 1/3, 1/6}, cvec = {1/2, 1/2, 1}},
N[{amat, bvec, cvec}, prec]];
f = ParametricNDSolveValue[{Derivative[1][y][x] ==
Piecewise[{{(y[x] + x^3 + 3 z - 120*Ewater), 0 <= x <= 1},
{(y[x] + x^2 + 2 z), 1 <= x <= 2},
{(y[x] + x + z), 2 <= x <= 3}}],
y[0] == 0},
{x, 0., 3.},
Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 4,
"Coefficients" -> ClassicalRungeKuttaCoefficients},
StartingStepSize -> 1/10];
point = {z /. FindRoot[f[z] == 100., {z, 1}, Evaluated -> False],
  FindRoot[f[z] == 100., {z, 1}, Evaluated -> False]}
Lastly, to find the new temperature of water
Tnew = 170 + 1.89*z /.
FindRoot[f[z] == 100., {z, 1}, Evaluated -> False]
I want to repeatedly replace Twater with Tnew until Twater == Tnew becomes True and the loop stops. I need to start Twater with a value (currently I just replace Twater with Tnew after the first equation and keep pressing Shift+Enter).
I've tried some Do Loop or For Loop and even FixedPoint but still don't know how to combine multiple functions/equations into 1 loop.... I apologize because I've been asking this question a few
times, or maybe some of you have answered me in some way, but I still did not understand how to do this. Thank you very much for your time.
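One way to combine these steps into a single loop is to wrap the whole Twater -> Tnew chain in one function and iterate it with FixedPoint until two successive temperatures agree. The sketch below is only illustrative: thermalConductivity is a hypothetical placeholder for whatever WolframAlpha/thermodynamic-data call actually produces Cwater, the quantity driven to 100 is assumed to be the solution value at the right endpoint (y[3]), and NDSolve's default method is used instead of the custom Runge-Kutta coefficients.
(* hypothetical stand-in for the real conductivity lookup; replace with the actual call *)
thermalConductivity[t_?NumericQ] := 0.6 + 0.001 (t - 300);
(* one full pass: temperature in, new temperature out *)
step[twater_?NumericQ] :=
 Module[{cwater, dwater, ewater, f, g, zz, zroot},
  cwater = thermalConductivity[twater];
  dwater = (5 cwater)/Pi;
  ewater = dwater^3 + 2 cwater;
  f = ParametricNDSolveValue[{y'[x] ==
       Piecewise[{{y[x] + x^3 + 3 z - 120 ewater, 0 <= x <= 1},
         {y[x] + x^2 + 2 z, 1 <= x <= 2},
         {y[x] + x + z, 2 <= x <= 3}}], y[0] == 0},
     y[3], {x, 0, 3}, {z}];
  g[t_?NumericQ] := f[t];   (* keeps FindRoot purely numerical *)
  zroot = zz /. FindRoot[g[zz] == 100., {zz, 1}];
  170 + 1.89 zroot]
(* iterate from 300 K until successive temperatures differ by less than 0.001 *)
Tfinal = FixedPoint[step, 300., 100, SameTest -> (Abs[#1 - #2] < 0.001 &)]
The same step function can equally be driven by a While loop (prev = Infinity; t = 300.; While[Abs[prev - t] > 0.001, prev = t; t = step[t]]) if you want to print or collect the intermediate temperatures.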
12 Replies
Mathematica has a short tutorial entitled "Arbitrary-Precision Numbers." You can find it by searching for "tutorial/ArbitraryPrecisionNumbers" in Help/Wolfram Documentation or by typing SetPrecision,
highlighting it, pressing F1, then find the link for the tutorial at the bottom of the page.
Thanks again for your help and your advice, Dr. Charles Elliott (correct me if I'm wrong). I will try to digest everything. Hopefully.
I made prog run, but it still does not converge. I believe that the coefficients to the ExplicitRungeKutta method may be incorrect. The four changes I made are:
1. There was no initial value for point, so the inner While never executed.
2. Since the RHS of ClassicalRungeKuttaCoefficients[4, prec] was never executed, the coefficients had no value. ClassicalRungeKuttaCoefficients[4, prec] is a non-starter; you cannot define a
constant in the definition of a function like that. In any case, it was not being called as a function in NDSolve, so the coefficients were not defined. "point = y[1] /. f[[1]];" was doing
exactly what it was supposed to; however, f had no meaningful value;
3. The lack of a ';' after the definition of NDSolve meant that f was interpreted as a multiplication with the Print stmt after it; so, f had the value garbage * null, null being what Print returns.
4. As you sent it, prog would not define because of an imbalance in [ ]. I made several changes to balance the brackets. You might want to check that the logic still makes sense.
You may be better off using the AccuracyGoal and/or PrecisionGoal options to NDSolve before using SetPrecision there. In any case, SetPrecision conflicts with the "$MinPrecision = 50;" stmt at the beginning of the definition of prog. In theory, $MinPrecision controls the number of digits of all computations in the Module. Mathematica has a short tutorial (entitled "Arbitrary-Precision Numbers") on the use of the SetPrecision and SetAccuracy built-ins. Just highlight one or the other and press F1. A link to the tutorial is at the bottom.
I put in code to save and restore the value of $MinPrecision.
When you add Print stmts to the routine, put in a label, such as Print["Point: ", point] instead of Print[point].
When you define a While or other kind of loop, try to remember to put the closing "];" on its own line and perhaps even comment it, thusly:
]; (* End inner while *)
That way, Mathematica will indent the lines properly for you so you can clearly see where the loops start and end. It is almost impossible to put in too many comments. A month from now, you will not
remember what you did if you don't put in comments. Consider you are building a library of working code.
I suggest you make NDSolve and the RungeKutta method work outside of the prog Module. Then edit prog, and try again.
Instead of defining a function, such as ClassicalRungeKuttaCoefficients[prec_Integer] := ..., within a module consider using anonymous functions instead:
Function[prec, body]
Then put the Function[prec, body] within the NDSolve. Anonymous functions are much, much faster in execution. But that is frosting on the cake. Make it work first, then make it faster.
Hello again, sorry for the trouble. I've tried the NDSolve method to search for "z" instead of ParametricNDSolveValue. So here's what I did, but there's still some error.
prog := Module[{Twater = 300., Tnew = 0, Cwater, Dwater, Ewater,
ClassicalRungeKuttaCoefficients, f, x, y, z = 60, point, ans,
tol = 0.001, iters = 0, iterLim = 100, prevTnew = 10^5,
plotList = {}}, $MinPrecision = 50;
Cwater = QuantityMagnitude[
   ThermodynamicData["Water",
    "ThermalConductivity", {"Temperature" ->
      Quantity[Twater, "Kelvins"]}]];
ans = "Twinkle, twinkle, little star,
How I wonder what you are.
Up above the world so high,
Like a diamond in the sky.
Twinkle, twinkle, little star,
How I wonder what you are!";
While[(Abs[prevTnew - Tnew] > tol) && (iters++ < iterLim),
Dwater = (5 Cwater)/Pi;
Ewater = Dwater^3 + 2*Cwater;
While[(Abs[point] < 100) && (iters++ < iterLim),
ClassicalRungeKuttaCoefficients[4, prec_] :=
With[{amat = {{1/2}, {0, 1/2}, {0, 0, 1}},
bvec = {1/6, 1/3, 1/3, 1/6}, cvec = {1/2, 1/2, 1}},
N[{amat, bvec, cvec}, prec]];
f = NDSolve[{SetPrecision[
     Derivative[1][y][x] ==
Piecewise[{{(y[x] + x^3 + 3 z - 120*Ewater),
0 <= x <= 1}, {(y[x] + x^2 + 2 z),
1 <= x <= 2}, {(y[x] + x + z), 2 <= x <= 3}}], 40],
y[0] == 0}, y, {x, 0., 3.},
Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 4,
"Coefficients" -> ClassicalRungeKuttaCoefficients},
StartingStepSize -> 1/100];
point = y[1] /. f[[1]];
prevTnew = Tnew;
Tnew = 170 + 1.89*z;
Cwater = QuantityMagnitude[
   ThermodynamicData["Water",
    "ThermalConductivity", {"Temperature" ->
      Quantity[Tnew, "Kelvins"]}]];
(*Note Tnew here!*)
Print["Twater: ", Twater, " Tnew: ", Tnew];];
(*End while*)
ans = {"Tnew: ", Tnew, "Cwater: ", Cwater, "Dwater: ", Dwater,
"Ewater: ", Ewater};
Print["The answer is:\n", ans];];
What I tried to do is to have "While Loop" inside a "While Loop" and instead of FindRoot, I use ReplaceAll, Here's the plan,
The inner loop is to find the value of "z" so that the "point" is 100, and the increment of "z" is 1 each time. After it reaches the correct value of "z", the command will continue to look for Tnew, and repeat again.
But the problem is
point = y[1] /. f[[1]];
does not work properly in the While loop. I can do it outside of the While loop but not inside. So, I'm stuck here.
All I was trying to show you was the basic paradigm for writing a program that finds an answer by iteration. There are several ways of doing this. As a former PhD student in Computer Science, I
cannot imagine not having taken courses in numerical analysis (where iteration paradigms are in the first chapter), differential equations, and infinite series, plus many courses in business
General Douglas MacArthur was ultimately Supreme Commander of Allied Forces in the Pacific Theater in World War II and military governor of Japan. He is widely credited, even in Japan, with being the
father of modern Japan. His instructions from the American War Department were simple: "Make Japan look like the United States." In his autobiography, Reminiscences, MacArthur makes the point over and
over that he was a product of the education the Army afforded him. He showed again and again how this course or that course led directly to solutions to important problems.
I watched the videos on YouTube about the making of Fury, a movie about tank warfare in Europe in World War II. Fury's director said that the US sent thousands of young men to their deaths by burning
them alive in those large metal boxes. I could not believe it. How is it possible that a culture that espouses progress, freedom for oneself and others, and concern for the poor could kill so many of its young men in what was obviously a second-rate weapon? A book on tank warfare shows the answer was simple. An isolationist population produced an isolationist Congress that refused to fund much tank research;
the US did not know how to make a good tank until the end of WW II. There are several other factors, not the least of which were lack of time and money, but one of the most important was that tank
warfare doctrine was heavily influenced by men with a cavalry background, which said that the basic unit had to be light and fast. So, initially American tanks were light, fast, and prey to a single
shot from a German anti-tank weapon or heavy panzer tank. My point is that the top American Ordnance and Armor planners simply could not escape their backgrounds to enable them to interpret the
evidence supplied by the German blitzkrieg and the fall of France.
I have a point in writing this, and it is that you are obviously crippled by not knowing the rudiments of numerical analysis. But there are many other things you may not know: For instance,
forecasting is nothing like what naïve thinking might suggest. For another example, one of the most significant causes of American homelessness is a young person having a bad first boss who does
not lead the young person to enjoy work. Do you know how to forecast or lead young people to be successful in life and work? If you are offered a chance to enter a PhD program, I urge you to
seriously consider it. That or something similar is the only way to escape the tyranny of your impoverished background (numerical-analysis-wise). Also, everybody who ever made it had a mentor. Find
someone in business, industry, or academia where you want to work who can lead you to study the issues they deal with now and will deal with in the future. The higher your mentor's rank, the higher
yours will likely be.
Hello Charles, thank you for your advice. You are correct about my background; I have not been exposed to numerical analysis as much as I was supposed to have been. In the future, I will definitely take up a PhD program if there is a chance.
OMG THANK YOU VERY MUCH. It totally works. I would not have been able to get it correct by myself. The code posted here is an alternate version of the code for my research; sorry, I can't disclose the real one here. Here's the explanation. I'm an engineering master's student working on heat transfer, in this case involving a fluid. For the code, I need to find the value of "z" so that "y=100". For a given condition, "Twater" changes and so does the thermal conductivity of water. Then "z" is substituted into the Tnew equation. If Twater=Tnew, then that is the temperature that we want. At first, I was thinking of iterating with If/Do/While loops with step size 0.001..., but that would take forever. After some trial, Tnew does actually converge, so I wanted to replace Twater with Tnew. This equation is some kind
of temperature loop.
I will study the code given and learn more about the commands.
Cwater = QuantityMagnitude[
   ThermodynamicData["Water",
    "ThermalConductivity", {"Temperature" ->
      Quantity[Twater, "Kelvins"]}]];
Is there any reason to have Cwater outside of the While loop? Well, I tried putting it into the While loop and the loop just stops after 2 iterations. It would be helpful for me to understand this.
I made the algorithm converge to an answer; I have no idea if it is correct. Here are the changes I made:
I replaced the statement Twater = Tnew, with Cwater = QuantityMagnitude[ ThermodynamicData["Water", "ThermalConductivity", {"Temperature" -> Quantity[Tnew, "Kelvins"]}]]; <-- note the Tnew here. I
added the variable prevTnew, initialized it to 10^5, and set it to Tnew just before Tnew is computed. I changed the completion test to (Abs[prevTnew - Tnew] > tol) && (iters++ < iterLim). Also,
please note I created the variable plotList, and initialized it to plotList = {}. Then after you compute point, I added AppendTo[plotList, point]. Later, you could put in Print[ListPlot[plotList]] if
the reason you compute point is to plot it. I don't understand the 100 in point. Here is the new routine:
prog := Module[{Twater = 300., Tnew = 0, Cwater, Dwater, Ewater,
ClassicalRungeKuttaCoefficients, f, x, y, point, ans, tol = 0.001,
iters = 0, iterLim = 100, prevTnew = 10^5, plotList = {}},
$MinPrecison = 50;
Cwater = QuantityMagnitude[
   ThermodynamicData["Water",
    "ThermalConductivity", {"Temperature" ->
      Quantity[Twater, "Kelvins"]}]];
ans = "Twinkle, twinkle, little star,
How I wonder what you are.
Up above the world so high,
Like a diamond in the sky.
Twinkle, twinkle, little star,
How I wonder what you are!";
While[(Abs[prevTnew - Tnew] > tol) && (iters++ < iterLim),
Dwater = (5 Cwater)/Pi;
Ewater = Dwater^3 + 2*Cwater;
ClassicalRungeKuttaCoefficients[4, prec_] :=
With[{amat = {{1/2}, {0, 1/2}, {0, 0, 1}},
bvec = {1/6, 1/3, 1/3, 1/6}, cvec = {1/2, 1/2, 1}},
N[{amat, bvec, cvec}, prec]];
f = ParametricNDSolveValue[{SetPrecision[
Derivative[1][y][x] ==
Piecewise[{{(y[x] + x^3 + 3 z - 120*Ewater),
0 <= x <= 1}, {(y[x] + x^2 + 2 z),
1 <= x <= 2}, {(y[x] + x + z), 2 <= x <= 3}}], 40],
y[0] == 0}, y[3.], {x, 0., 3.}, z,
Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 4,
"Coefficients" -> ClassicalRungeKuttaCoefficients},
StartingStepSize -> 1/10];
point = {z /. FindRoot[f[z] == 100., {z, 1}, Evaluated -> False],
   FindRoot[f[z] == 100., {z, 1}, Evaluated -> False]};
AppendTo[plotList, point];
prevTnew = Tnew;
Tnew =
170 + 1.89*z /.
FindRoot[f[z] == 100., {z, 1}, Evaluated -> False];
Cwater = QuantityMagnitude[
   ThermodynamicData["Water",
    "ThermalConductivity", {"Temperature" ->
      Quantity[Tnew, "Kelvins"]}]]; (* Note Tnew here! *)
Print["Twater: ", Twater, " Tnew: ", Tnew];
];(*End while*)
ans = {"Tnew: ", Tnew, "Cwater: ", Cwater, "Dwater: ", Dwater,
"Ewater: ", Ewater};
Print["The answer is:\n", ans];
Hold on..... I suppose when we replace Twater with Tnew, then
While[(Abs[Twater - Tnew] > tol) && (iters++ < iterLim),
will not hold True anymore; that is why the iteration only ran once.... So, how do I change it? Or can I just use the conventional method, which is much more "messy"?
n = 5; While[0 < n - 1 < 9, Print[n]; n = n - 0.5]
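The same stop-when-converged idea can also be packaged with FixedPoint. Below is a deliberately tiny, self-contained toy: the update rule is made up and merely stands in for the real computation of Cwater, z and Tnew described in this thread, so only the looping pattern is meant to carry over.
update[T_?NumericQ] := 170. + 0.42 T;  (* hypothetical stand-in for 170 + 1.89*z *)
FixedPoint[update, 300., 100, SameTest -> (Abs[#1 - #2] < 0.001 &)]
(* repeatedly applies update, starting from 300., until successive values differ by less than 0.001 or 100 iterations have been done *)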
Hello again, I've tried the code you provided. The output of the code is
Twater: 300 Tnew: 298.434
The answer is:
Twinkle, twinkle, little star.
The answer for Tnew is the answer from the first loop only. The exact answer can be achieved at about 10 to 15 loops, with Tnew = 295.565. I don't think I fully understand how the While[] loop works. From the documentation, While[test, body] evaluates body repeatedly until test no longer holds true, right? But the loop only ran once....
I'm sorry I need to ask the questions here because this is the only place I can learn Mathematica
From the code given:
While[(Abs[Twater - Tnew] > tol) && (iters++ < iterLim),
Does it mean that the "test" of the While is to check whether Twater - Tnew is more than tol (0.001), and if so, to continue the iteration by replacing Twater with Tnew via the code at the end, right?
Twater = Tnew
Or there's something I missed?
The code below is the general method, but I am not sure it achieves the correct answer. Start it by "Shift+Enter" or "Keypad Enter" on the cell containing prog. You debug it by putting in print
statements until you find the error.
prog := Module[{Twater = 300, Tnew = 0, Cwater, Dwater, Ewater,
ClassicalRungeKuttaCoefficients, f, x, y, point, ans, tol = 0.001,
iters = 0, iterLim = 100},
Cwater = QuantityMagnitude[
   ThermodynamicData["Water",
    "ThermalConductivity", {"Temperature" ->
      Quantity[Twater, "Kelvins"]}]];
ans = "Twinkle, twinkle, little star,
How I wonder what you are.
Up above the world so high,
Like a diamond in the sky.
Twinkle, twinkle, little star,
How I wonder what you are!";
While[(Abs[Twater - Tnew] > tol) && (iters++ < iterLim),
Dwater = (5 Cwater)/Pi;
Ewater = Dwater^3 + 2*Cwater;
ClassicalRungeKuttaCoefficients[4, prec_] :=
With[{amat = {{1/2}, {0, 1/2}, {0, 0, 1}},
bvec = {1/6, 1/3, 1/3, 1/6}, cvec = {1/2, 1/2, 1}},
N[{amat, bvec, cvec}, prec]];
f = ParametricNDSolveValue[{Derivative[1][y][x] ==
Piecewise[{{(y[x] + x^3 + 3 z - 120*Ewater),
0 <= x <= 1}, {(y[x] + x^2 + 2 z),
1 <= x <= 2}, {(y[x] + x + z), 2 <= x <= 3}}], y[0] == 0},
y[3.], {x, 0., 3.}, z,
Method -> {"ExplicitRungeKutta", "DifferenceOrder" -> 4,
"Coefficients" -> ClassicalRungeKuttaCoefficients},
StartingStepSize -> 1/10];
point = {z /. FindRoot[f[z] == 100., {z, 1}, Evaluated -> False],
   FindRoot[f[z] == 100., {z, 1}, Evaluated -> False]};
Tnew =
170 + 1.89*z /.
FindRoot[f[z] == 100., {z, 1}, Evaluated -> False];
Print["Twater: ", Twater, " Tnew: ", Tnew];
Twater = Tnew;
]; (* End while *)
Print["The answer is:\n", ans];
I was wondering if Module[] has something to do with it. It's just that the examples given in the documentation were only for simple tasks. I'm not in my Uni's lab right now (since the software was bought by the Uni), but I'll get to it as soon as I can and let you know the result. Thank you very much.
| {"url":"https://community.wolfram.com/groups/-/m/t/441498?p_p_auth=43ljXl0i","timestamp":"2024-11-15T02:53:49Z","content_type":"text/html","content_length":"168136","record_id":"<urn:uuid:3d7443b3-eb1d-4060-a73e-9fc0fabffe08>","cc-path":"CC-MAIN-2024-46/segments/1730477400050.97/warc/CC-MAIN-20241115021900-20241115051900-00033.warc.gz"}
British Rainfall 1895, p.20
but on turning to the table on page 19 of British Rainfall, 1867, they are amply confirmed by the old-established records at Troutbeck and Keswick, so that their accuracy can hardly be questioned.
Moreover they followed shortly after the driest four consecutive years in the whole record. The four years, 1855-58, had a mean of 106.26 in., and the four years, 1860-1863, of 167.16 in., or more than half as much more. One four years is 31.05 inches below the average, and the other 29.85 inches above it, so that the eight years give a mean within an inch of that for the whole fifty years.*
Fluctuation of Yearly Rainfall. - We have been obliged to touch upon this subject in the previous section; we now proceed to consider it fully, and to compare Seathwaite results with previous
investigations respecting other stations.
Column 3 on p.25 gives the ratio which the fall of every year at Seathwaite bore to the mean for the 50 years, e.g.:-
Fall in 1844 = 151.87; Mean of 50 years, 137.31.
Then 151.87/137.31 = 1.11, which, to avoid decimals, is written 111 (the average being taken as 100).
The essential features as to fluctuations are those which will occur in one year, in two consecutive years, and in three consecutive years. The following are the values for Seathwaite:-
1 year. 2 years. 3 years.
Driest 64 71 75
Wettest 133 129 128
Fluctuation 69 58 53
In British Rainfall, 1883, pp.29-32, there is an article "On the limits of fluctuation of total Rainfall," which shows that on the average of 45 stations (only one, however, exceeding 70 inches of
mean fall) the values corresponding with the above are -
1 year. 2 years. 3 years.
Driest 66 74 79
Wettest 145 - -
Fluctuation 79 - -
As the values for the dry extremes in the above-mentioned article
* Confirmation respecting each of these extremes will be found in the following papers:-
DAVY, DR. JOHN, F.R.S. On an unusual drought in the Lake District in 1859. Edinb. Phil. Soc. Trans. xxii., 1861, pp.313-318.
DAVY, DR. JOHN, F.R.S. On the rainfall of the Lake District in 1861. Edinb. Phil. Soc. Trans. xxiii., 1861, pp.53-66. | {"url":"https://lakesguides.co.uk/html/LakesTxt/s8950020.htm","timestamp":"2024-11-14T17:32:05Z","content_type":"text/html","content_length":"6917","record_id":"<urn:uuid:73982477-74c8-4351-b650-3f19b85cee26>","cc-path":"CC-MAIN-2024-46/segments/1730477393980.94/warc/CC-MAIN-20241114162350-20241114192350-00643.warc.gz"} |
The goal of RNAseqQC is to aid quality control of RNAseq data by providing a collection of data visualization functions. It allows identification of samples with unwanted biological or technical effects, and exploration of differential testing results.
You can install the released version of RNAseqQC from CRAN with:
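install.packages("RNAseqQC")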
This is a basic example in which we make a library complexity plot and then compare some samples to the median reference of their respective group: | {"url":"http://rsync.jp.gentoo.org/pub/CRAN/web/packages/RNAseqQC/readme/README.html","timestamp":"2024-11-14T23:43:18Z","content_type":"application/xhtml+xml","content_length":"7391","record_id":"<urn:uuid:fa1e0577-131c-4234-b461-3d573dfd96c4>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00528.warc.gz"} |
Excursions in Modern Mathematics, Books a la carte edition (9th Edition) 9th Edition Solutions
NOTE: This edition features the same content as the traditional text in a convenient three-hole-punched loose-leaf version. Books a la Carte also offer a great value; this format costs significantly less than a new textbook. Before purchasing, check with your instructor or review your course syllabus to ensure that you select the correct ISBN. For Books a la Carte editions that include MyLab or Mastering, several versions may exist for each title (including customized versions for individual schools) and registrations are not transferable. In addition, you may need a Course ID, provided by your instructor, to register for and use MyLab or Mastering products.
For courses in Liberal Arts Mathematics.
Math: Applicable, Accessible, Modern
Excursions in Modern Mathematics introduces readers to the power and beauty of math. By developing an appreciation for the aesthetics and applicability of mathematics, readers who previously felt math was an "unknowable" subject can approach it with a new perspective. Contemporary topics ranging from elections to networks to analyzing data show readers that math is an accessible tool that can be applicable and interesting for anyone. Refinement and updating of examples and exercises, plus increased resources, make the 9th Edition a relevant, accessible and complete program.
Also available with MyLab Math. MyLab Math is an online homework, tutorial and assessment program designed to work with this text to engage students and improve results. Within its structured environment, students practice what they learn, test their understanding and pursue a personalized study plan that helps them absorb course material and understand difficult concepts.
NOTE: You are purchasing a standalone product; MyLab Math does not come packaged with this content. If you would like to purchase both the physical text and MyLab Math, search for:
0134453158 / 9780134453156 Excursions in Modern Mathematics, Books a la Carte Edition plus MyLab Math -- Access Card Package
Package consists of:
0134469046 / 9780134469041 Excursions in Modern Mathematics, Books a la Carte Edition
0321262522 / 9780321262523 MyLab Math -- Valuepack Access Card
Answer : Crazy For Study is the best platform for offering solutions manual because it is widely accepted by students worldwide. These manuals entailed more theoretical concepts compared to
Excursions in Modern Mathematics, Books a la carte edition (9th Edition) manual solutions PDF. We also offer manuals for other relevant modules like Social Science, Law , Accounting, Economics,
Maths, Science (Physics, Chemistry, Biology), Engineering (Mechanical, Electrical, Civil), Business, and much more.
Answer : The Excursions in Modern Mathematics, Books a la carte edition (9th Edition) 9th Edition solutions manual PDF download is just a textual version, and it lacks interactive content based on
your curriculum. Crazy For Study's solutions manual has both textual and digital solutions. It is a better option for students like you because you can access them from anywhere. Here's how: You need to have an Android or iOS-based smartphone. Open your phone's Google Play Store or Apple App Store. Search for our official CFS app there. Download and install it on your phone. Register yourself as a new member or log into your existing CFS account. Search for your required CFS solutions manual.
Answer : If you are looking for the Excursions in Modern Mathematics, Books a la carte edition (9th Edition) 9th Edition solution manual pdf free download version, we have a better suggestion for
you. You should try out Crazy For Study’s solutions manual. They are better because they are written, developed, and edited by CFS professionals. CFS’s solution manuals provide a complete package for
all your academic needs. Our content gets periodic updates, and we provide step-by-step solutions. Unlike PDF versions, we revise our content when needed. Because it is related to your education, we
suggest you not go for freebies. | {"url":"https://www.crazyforstudy.com/textbook-solutions/excursions-in-modern-mathematics-books-a-la-carte-edition-9th-edition-9th-edition-9780134469041/","timestamp":"2024-11-04T21:56:38Z","content_type":"text/html","content_length":"41552","record_id":"<urn:uuid:34aebe3e-1a33-4a21-8e08-589430abd8fd>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00088.warc.gz"} |
Discontinuity, Nonlinearity, and Complexity
Dimitry Volchenkov (editor), Dumitru Baleanu (editor)
Dimitry Volchenkov(editor)
Mathematics & Statistics, Texas Tech University, 1108 Memorial Circle, Lubbock, TX 79409, USA
Email: dr.volchenkov@gmail.com
Dumitru Baleanu (editor)
Cankaya University, Ankara, Turkey; Institute of Space Sciences, Magurele-Bucharest, Romania
Email: dumitru.baleanu@gmail.com
Bioconvection in Porous Square Cavity Containing Oxytactic Microorganisms in the Presence of Viscous Dissipation
Discontinuity, Nonlinearity, and Complexity 11(2) (2022) 301--313 | DOI:10.5890/DNC.2022.06.009
Ramesh Alluguvelli$^{1}$, Chandra Shekar Balla$^{2 }$, Kishan Naikoti$^{3}$
$^{1}$ Department of Mathematics, Geethanjali College of Engineering and Technology, Cheeryal, Telangana,
501301, India
$^{2}$ Department of Mathematics, Chaitanya Bharathi Institute of Technology, Gandipet, Hyderabad, Telangana,
500075, India
$^{3}$ Department of Mathematics, Osmania University, Hyderabad, Telangana 500007, India
This paper reports an investigation of oxytactic bioconvective flow in a porous square cavity under the influence of viscous dissipation. The Darcy model with the Boussinesq approximation is employed to formulate the bioconvective flow in the porous medium. The governing nonlinear partial differential equations are nondimensionalised using suitable dimensionless parameters and then solved by the Galerkin finite element method. The computed results are described by surface plots of the stream function, temperature, concentrations of oxygen and microorganisms, average Nusselt number,
average Sherwood numbers of concentrations of oxygen and microorganisms. The effects of key parameters such as Peclet number (Pe), Rayleigh number of bioconvection (Rb), Eckert number (Ec), Lewis
number (Le) and Rayleigh number (Ra) are presented and inspected. Eckert number (Ec) and Peclet number (Pe) improve the bioconvection flow and rate of heat transfer. It is also observed that the
isoconcentration patterns of oxygen and microorganism density are controlled by Ec and Pe.
| {"url":"https://www.lhscientificpublishing.com/Journals/articles/DOI-10.5890-DNC.2022.06.009.aspx","timestamp":"2024-11-08T05:34:32Z","content_type":"application/xhtml+xml","content_length":"35731","record_id":"<urn:uuid:e84524bc-f2bf-42f5-bf82-f4fba4741138>","cc-path":"CC-MAIN-2024-46/segments/1730477028025.14/warc/CC-MAIN-20241108035242-20241108065242-00359.warc.gz"}
Single Variable Calculus
Single Variable Calculus PDF Download
Are you looking to read the ebook online? Search for your book and save it on your Kindle device, PC, phone or tablet. Download the Single Variable Calculus PDF full book. Access the full book titled Single Variable Calculus by Soo Tang Tan. Download full books in PDF and EPUB format.
Author: Soo Tang Tan
ISBN: 9781733649704
Category :
Languages : en
Pages :
Book Description | {"url":"https://automationjournal.org/download/single-variable-calculus/","timestamp":"2024-11-14T08:55:26Z","content_type":"text/html","content_length":"86349","record_id":"<urn:uuid:088780fa-693f-4930-a6e4-23e550e9bf98>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00123.warc.gz"} |
Earth as a Sphere
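All of the worked solutions below rely on the same standard relations, collected here for reference: an arc of 1° along a great circle (a meridian, or the equator) corresponds to 60 nautical miles; along a parallel of latitude the distance is reduced by the cosine of the latitude; and a speed of 1 knot means 1 nautical mile per hour.
$\begin{array}{l}\text{Distance along a meridian}=\left(\text{difference in latitude, in degrees}\right)×60\\ \text{Distance along a parallel of latitude }\theta =\left(\text{difference in longitude, in degrees}\right)×60×\mathrm{cos}\theta \\ \text{Time taken}=\frac{\text{Distance (nautical miles)}}{\text{Average speed (knots)}}\end{array}$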
Question 9 (12 marks):
A(25^o N, 35^o E), B(25^o N, 40^o W), C and D are four points which lie on the surface of the earth. AD is the diameter of the common parallel latitude 25^o N .
(a) Find the longitude of D.
(b) C lies 3300 nautical miles due south of A measured along the surface of the earth.
Calculate the latitude of C.
(c) Calculate the shortest distance, in nautical mile, from A to D measured along the surface of the earth.
(d) An aeroplane took off from C and flew due north to point A.
The total time taken for the whole flight was 12 hours 24 minutes.
(i) Calculate the distance, in nautical mile, from A due west to B measured along the common parallel of latitude.
(ii) Calculate the average speed, in knot, of the whole flight.
Longitude of D = (180^o – 35^o)W
= 145^oW
$\begin{array}{l}\angle AOC=\frac{3300}{60}={55}^{o}\\ \text{Latitude of }C\\ ={\left(55-25\right)}^{o}S\\ ={30}^{o}S\end{array}$
Shortest distance of A to D
= (65^o + 65^o) × 60’
= 130^o × 60’
= 7800 nautical miles
Distance from A to B
= (35^o + 40^o) × 60’ × cos 25^o
= 75^o × 60' × cos 25^o
= 4078.4 nautical miles
$\begin{array}{l}\text{Total distance travelled}\\ =CA+AB\\ =3300+4078.4\\ =7378.4\text{ nautical miles}\\ \\ \text{Average speed}=\frac{\text{Total distance}}{\text{Total time}}\\ =\frac{7378.4}
{12.4}\text{ knot}\\ =595.0\text{ knot}\end{array}$
Question 8 (12 marks):
Diagram 8 shows four points, G, H, I and J on the surface of the Earth. JI is the diameter of the parallel of latitude 50^o N. O is the centre of the Earth.
Diagram 8
(a) State the location of G.
(b) Calculate the shortest distance, in nautical mile, from I to J measured along the surface of the Earth.
(c) Calculate the shortest distance, in nautical mile, from G to H measured along the common parallel of latitude.
(d) An aeroplane took off from I and flew due south to point P. The average speed of the journey was 800 knots. The time taken for the flight was 5.25 hours.
Calculate the latitude of P.
Location of G = (70^o S, 20^o W)
∠ JOI
= 180^o – 50^o – 50^o
= 80^o
Distance of I to J
= 80^o × 60’
= 4800 nautical miles
Distance of G to H
= (20^o + 120^o) × 60’ × cos 70^o
= 140^o × 60’ × cos 70^o
= 2872.97 nautical miles
$\begin{array}{l}\text{Average speed}=\frac{\text{Total distance travelled}}{\text{Total time taken}}\\ 800=\frac{x}{5.25}\\ x=4200\text{ nautical miles}\\ I\text{ to }P=4200\\ \\ \text{Difference
between parallel}=y\\ y×60=4200\\ y={70}^{o}\\ \\ \text{Thus, latitude of }P\\ ={70}^{o}-{50}^{o}\\ ={20}^{o}S\end{array}$
Question 7 (12 marks):
Diagram 7 in the answer space shows the locations of points J, L and M, which lie on the surface of the earth. O is the centre of the earth. The longitude of M is 30^o W. K is another point on the
surface of the earth such that KJ is the diameter of the common parallel of latitude 45^o S.
(a)(i) Mark and label point K on Diagram 7 in the answer space.
(ii) Hence, state the longitude of point K.
(b) L lies due north of M and the shortest distance from M to L measured along the surface of the earth is 7500 nautical miles.
Calculate the latitude of L.
(c) Calculate the distance, in nautical mile, from K due east to M measured along the common parallel of latitude.
(d) An aeroplane took off from K and flew due east to M along the common parallel of latitude. The average speed of the aeroplane for the flight was 750 knots.
Calculate the total time, in hour, taken for the whole flight.
Longitude of point K = 130^oW
$\begin{array}{l}\angle \text{ }LOM×60=7500\\ \angle \text{ }LOM=\frac{7500}{60}\\ \angle \text{ }LOM={125}^{o}\\ \\ \text{Latitude of }L={125}^{o}-{45}^{o}\\ ={80}^{o}N\end{array}$
KM = (130^o – 30^o) × 60 × cos 45^o
= 4242.64 nautical miles
$\begin{array}{l}\text{Time}=\frac{\text{Distance}}{\text{speed}}\\ \text{}=\frac{4242.64}{750}\\ \text{}=5.66\text{ hours}\end{array}$
Question 6:
Diagram below shows the locations of points P, Q, R, A, K and C, on the surface of the earth. O is the centre of the earth.
(a) Find the location of A.
(b) Given the distance QR is 3240 nautical miles, find the longitude of Q.
(c) Calculate the distance, in nautical miles of KA, measured along the common parallel latitude.
(d) An aeroplane took off from A and flew due west to K along the common parallel of latitude. Then, it flew due south to Q. The average speed of the aeroplane was 550 knots.
Calculate the total time, in hours, taken for the whole flight.
Longitude of A = (180^o – 15^o) = 165^o E
Latitude of A = 50^o N
Therefore, position of A = (50^o N, 165^o E).
$\begin{array}{l}\angle QOR=\frac{3240}{60}\\ \text{}={54}^{o}\\ \therefore \text{Longitude of }Q=\left({165}^{o}-{54}^{o}\right)E\\ \text{ }={111}^{o}E\end{array}$
Distance of KA
= 54 x 60 x cos 50^o
= 2082.6 nautical miles
$\begin{array}{l}\text{Total distance}=AK+KQ\\ \text{ }=2082.6+\left(50×60\right)\\ \text{ }=5082.6\text{ nautical miles}\\ \\ \text{Total time }=\frac{5082.6}{550}\\ \text{ }=9.241\text{ hours}\end{array}$
Question 5:
A (53^o N, 84^o E), B (53^o N, 25^o W), C and D are four points on the surface of the earth. AC is the diameter of the parallel of latitude 53^o N.
(a) State the location of C.
(b) Calculate the shortest distance, in nautical mile, from A to C measured along the surface of the earth.
(c) Calculate the distance, in nautical mile, from A due east to B measured along the common parallel of latitude.
(d) An aeroplane took off from B and flew due south to D. The average speed of the flight was 420 knots and the time taken was 6½ hours.
(i) the distance, in nautical mile, from B to D measured along the meridian.
(ii) the latitude of D.
Latitude of C = 53^o N
Longitude of C = (180^o – 84^o) E = 96^o E
Therefore location of C = (53^o N, 96^o E)
Shortest distance from A to C
= (180 – 53 – 53) x 60
= 74 x 60
= 4440 nautical miles
Distance from A to B
= (84 – 25) x 60 x cos 53^o
= 59 x 60 x cos 53^o
= 2130.43 nautical miles
$\begin{array}{l}\left(\text{i}\right)\\ \text{Distance travelled from }B\text{ to }D\\ =\text{average speed}×\text{time taken}\\ =420×6\frac{1}{2}\\ =2730\text{ nautical miles}\\ \\ \left(\text{ii}\right)\\ \text{Difference in latitude between }B\text{ and }D\\ =\frac{2730}{60}\\ ={45.5}^{\text{o}}\\ \\ \therefore \text{Latitude of }D=\left({53}^{\text{o}}-{45.5}^{\text{o}}\right)N\\ \text{}={7.5}^{\text{o}}N\end{array}$
Question 4:
P (25^o N, 35^o E), Q (25^o N, 40^o W), R and S are four points on the surface of the earth. PS is the diameter of the common parallel of latitude 25^o N.
(a) Find the longitude of S.
(b) R lies 3300 nautical miles due south of P measured along the surface of the earth.
Calculate the latitude of R.
(c) Calculate the shortest distance, in nautical mile, from P to S measured along the surface of the earth.
(d) An aeroplane took off from R and flew due north to P. Then, it flew due west to Q.
The total time taken for the whole flight was 12 hours 24 minutes.
(i) Calculate the distance, in nautical mile, from P due west to Q measured along the common parallel of latitude.
(ii) Calculate the average speed, in knot, of the whole flight.
Longitude of S = (180^o – 35^o) W = 145^o W
$\begin{array}{l}\angle POR=\frac{3300}{60}\\ \text{ }={55}^{o}\\ \text{Latitude of }R={\left(55-25\right)}^{o}\\ \text{ }={30}^{o}S\end{array}$
Shortest distance from P to S
= (65 + 65) x 60
= 130 x 60
= 7800 nautical miles
Distance of PQ
= (35 + 40) x 60 x cos 25^o
= 75 x 60 x cos 25^o
= 4078.4 nautical miles
$\begin{array}{l}\left(\text{ii}\right)\\ \text{Total distance travelled}\\ =RP+PQ\\ =3300+4078.4\\ =7378.4\text{ nautical miles}\\ \\ \text{Average speed}=\frac{\text{Total distance travelled}}{\text{Time taken}}\\ =\frac{7378.4}{12.4}\qquad \left(\text{12 hours 24 min}=12+\frac{12}{60}=12.4\text{ hours}\right)\\ =595.0\text{ knot}\end{array}$
Question 3:
Diagram below shows the locations of points A (34^o S, 40^o W) and B (34^o S, 80^o E) which lie on the surface of the earth. AC is a diameter of the common parallel of latitude 34^o S.
(a) State the longitude of C.
(b) Calculate the distance, in nautical mile, from A due east to B, measured along the common parallel of latitude 34^o S.
(c) K lies due north of A and the shortest distance from A to K measured along the surface of the earth is 4440 nautical miles.
Calculate the latitude of K.
(d) An aeroplane took off from B and flew due west to A along the common parallel of latitude. Then, it flew due north to K. The average speed for the whole flight was 450 knots.
Calculate the total time, in hours, taken for the whole flight.
Longitude of C = (180^o – 40^o) E = 140^o E
Distance of AB
= (40 + 80) x 60 x cos 34^o
= 120 x 60 x cos 34^o
= 5969 nautical miles
$\begin{array}{l}\angle AOK=\frac{4440}{60}\\ \text{ }={74}^{o}\\ \text{Latitude of }K={\left(74-34\right)}^{o}N\\ \text{ }={40}^{o}N\end{array}$
$\begin{array}{l}\text{Total distance travelled}\\ BA+AK\\ =5969+4440\\ =10409\text{ nautical miles}\\ \\ \text{Total time taken =}\frac{\text{Total distance travelled}}{\text{Average speed}}\\ \text
{}=\frac{10409}{450}\\ \text{}=23.13\text{ hours}\end{array}$
Question 1:
Diagram below shows four points P, Q, R and M, on the surface of the earth. P lies on longitude of 70^oW. QR is the diameter of the parallel of latitude of 40^o N. M lies 5700 nautical miles due
south of P.
(a) Find the position of R.
(b) Calculate the shortest distance, in nautical miles, from Q to R, measured along the surface of the earth.
(c) Find the latitude of M.
(d) An aeroplane took off from R and flew due west to P along the parallel of latitude with an average speed of 660 knots.
Calculate the time, in hours, taken for the flight.
Latitude of R = latitude of Q = 40^o N
Longitude of Q = (70^o – 25^o) W = 45^o W
Longitude of R = (180^o – 45^o) E = 135^o E
Therefore, position of R = (40^o N, 135^oE).
Shortest distance from Q to R
= (180 – 40 – 40) x 60
= 100 × 60
= 6000 nautical miles
$\begin{array}{l}\angle POM=\frac{5700}{60}\\ \text{}={95}^{o}\\ \therefore \text{Latitude of}M=\left({95}^{o}-{40}^{o}\right)S\\ \text{}={55}^{o}S\end{array}$
$\begin{array}{l}\text{Time taken =}\frac{\text{distance from}R\text{to}P}{\text{average speed}}\\ \text{}=\frac{\left(180-25\right)×60×\mathrm{cos}{40}^{o}}{660}\\ \text{}=\frac{155×60×\mathrm{cos}
{40}^{o}}{660}\\ \text{}=10.79\text{hours}\end{array}$
Question 2:
P(25^o S, 40^o E), Q(θ^o N, 40^o E), R(25^o S, 10^o W) and K are four points on the surface of the earth. PK is the diameter of the earth.
(a) State the location of point K.
(b) Q is 2220 nautical miles from P, measured along the same meridian.
Calculate the value of θ.
(c) Calculate the distance, in nautical mile, from P due west to R, measured along the common parallel of latitude.
(d) An aeroplane took off from Q and flew due south to P. Then, it flew due west to R. The average speed of the aeroplane was 600 knots.
Calculate the total time, in hours, taken for the whole flight.
As PK is the diameter of the earth, therefore latitude of K = 25^o N
Longitude of K= (180^o – 40^o) W = 140^o W
Therefore, location of K = (25^o N, 140^oW).
Let the centre of the earth be O.
$\begin{array}{l}\angle POQ=\frac{2220}{60}\\ \text{}={37}^{o}\\ {\theta }^{o}={37}^{o}-{25}^{o}={12}^{o}\\ \therefore \text{The value of}\theta \text{is 12}\text{.}\end{array}$
Distance from P to R
= (40 + 10) × 60 × cos 25^o
= 50 × 60 × cos 25^o
= 2718.92 n.m.
Total distance travelled
= distance from Q to P + distance from P to R
= 2220 + 2718.92
= 4938.92 nautical miles
$\begin{array}{l}\text{Time taken =}\frac{\text{total distance from}Q\text{to}R}{\text{average speed}}\\ \text{}=\frac{4938.92}{600}\\ \text{}=8.23\text{hours}\end{array}$
Question 1
In diagram below, N is the North Pole and S is the South Pole. The location of point P is (40^o S, 70^o W) and POQ is the diameter of the earth.
Find the longitude of Q.
Since PQ is a diameter of the earth and the longitude of P is θ^o W, the longitude of Q is (180^o – θ^o) E.
Longitude of P = 70^o W
Longitude of Q = (180^o – 70^o) E
= 110^oE
Question 2
In diagram below, N is the North Pole and S is the South Pole and NOS is the axis of the earth.
Find the position of point Q.
Latitude of Q = (90^o – 42^o) N
= 48^o N
Longitude of Q = (65^o – 30^o) E
= 35^o E
Therefore, position of Q = (48^o N, 35^oE).
9.5 SPM Practice (Short Questions) | {"url":"http://content.myhometuition.com/category/spm-maths/earth-as-a-sphere/","timestamp":"2024-11-06T15:24:29Z","content_type":"text/html","content_length":"77209","record_id":"<urn:uuid:da38fbfc-1fa3-4fc6-ab0f-428f6c064517>","cc-path":"CC-MAIN-2024-46/segments/1730477027932.70/warc/CC-MAIN-20241106132104-20241106162104-00375.warc.gz"} |
Theory of Elasticity with Application in Geophysics
COURSE OBJECTIVES:
Describe and analyze the stress and deformation tensors, and the coordinate transformation of the tensors. Analyze elastic body equilibrium. Application of the analyses to the Earth's crust.
Apply the stress-strain relations to real bodies, especially in cases when it is characterized by moduli of elasticity.
Synthesis of the equations of motion and Hooke's law into Lame's equations. Generalization of Lame's equations into the Navier-Stokes equation. Proof and discussion of Lame's theorem. Kirchhoff's solution to the wave equation demonstrates retarded potentials and action at a distance. A generalized solution is analyzed in four cases, in one of which Huygens' principle is recognized. Application of Kirchhoff's solution to single-force, single-dipole, and double-dipole point source models.
COURSE CONTENT
Analysis of stress. Analysis of strain. Strain of the Earth's crust. The stress-strain relations. Constants and moduli of elasticity.
Lame's equations. Motion and potential. Kirchhoff's solution of the wave equation. Application of Kirchhoff's solution to different point source models.
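For readers who want the symbols behind these topic names, the standard textbook forms of the isotropic stress-strain relation (Hooke's law) and of Lame's equations of motion are, in the usual notation (these displays are a generic reference, not quoted from the course materials):

$$\sigma_{ij} = \lambda\,\varepsilon_{kk}\,\delta_{ij} + 2\mu\,\varepsilon_{ij}, \qquad \rho\,\frac{\partial^2 u_i}{\partial t^2} = (\lambda+\mu)\,\partial_i(\partial_k u_k) + \mu\,\nabla^2 u_i + f_i,$$

where $\lambda$ and $\mu$ are the Lame constants, $\varepsilon_{ij}$ is the strain tensor, $u_i$ the displacement, $\rho$ the density and $f_i$ the body force.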
LEARNING OUTCOMES
After the final exam for the course Theory of elasticity with applications in geophysics student will be able to:
- distinguish the translational and rotational contributions to the relative displacement (deformation),
- propose and split the potential of displacement in the translational and rotational,
- determine the direction and magnitude of the principal axes of stress and strain,
- calculate the amounts of the principal deformations of the Earth's crust and determine the geographic directions they correspond to, given the measured values,
- calculate surface and volume dilatation on the basis of the known displacement,
- express Lame's constants and Poisson's ratio using the strain and stress of a core sample of the well, and evaluate material samples,
- synthesize nucleation phases of the earthquakes (starting from stress-strain relations in real media),
- understand the meaning of Hooke's law and motion in the continuum,
- explain the generalization of Lame's equations to the Navier-Stokes equation,
- prove Lame's theorem and discuss the decomposition into the scalar and vector wave equation,
- explain retarded potentials; derive Kirchhoff's solution in the absence of singularities and generalize it to include sources,
- analyze Kirchhoff's solution of the wave equation to find the far field solution and Huygens' principle,
- apply Kirchhoff's solution to find the characteristics of the radiation patterns of displacement point-source models for a single force, a single dipole and a double dipole,
- describe how the spatial distribution of compressions and dilatations in the first arrivals of longitudinal earthquake waves is interpreted when determining the focal mechanism.
LEARNING MODE:
- Attending lectures, study notes, and study literature,
- Derivation of the equations
- Analysis of application examples that follow from the derived equations,
- Synthesis of resulting equations in geophysical phenomena.
TEACHING METHODS:
- Lectures, discussions,
- Derivation of the equations,
- Analysis of the equations and their analytical solutions,
- Independent solving problems in connection with equations.
METHODS OF MONITORING AND VERIFICATION:
Homework, preliminary exam, written and oral exam.
TERMS FOR RECEIVING THE SIGNATURE:
Solved homework, Seminar papers.
EXAMINATION METHODS:
Written and oral exam | {"url":"http://www.chem.pmf.hr/en/course/toewaig","timestamp":"2024-11-13T09:18:20Z","content_type":"text/html","content_length":"76532","record_id":"<urn:uuid:71ddfe3e-a29d-458d-983f-4b861fd966aa>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00317.warc.gz"} |
The Mathematical Rules of Solving Exponent Problems
Key Terms
o Scientific notation
o Derive the rules for multiplying and dividing exponential expressions
o Determine the meaning of a negative exponent
o Apply exponents to understanding and using scientific notation
Exponents are a way of representing repeated multiplication (similarly to the way multiplication is a way of expressing repeated addition). In some instances, we may need to perform operations on
numbers with exponents; by learning some basic rules, we can make the process much simpler. These rules can be of great value in more advanced algebra when dealing with variables (or otherwise
unspecified numbers) that have exponents.
The Rules of Exponents
Let's say we want to multiply two exponential expressions with the same base, such as [] and []. The "brute force" approach to finding the product would be to expand each exponent, multiply the
results, and convert back to an exponent (assuming an exponential representation of the result is desired).
Note carefully that when we multiply two exponents (again, assuming they have the same base), the result is multiplication of the factors of the first exponent and the factors of the second exponent.
The total number of factors is thus the sum of the two exponents. We can generalize this rule using letters to stand in the place of unspecified numbers.
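a^m × a^n = a^(m + n)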
You may often see the multiplication operation expressed using a dot (·) instead of a cross ([]), or you may see it expressed without any symbol at all. Thus, each of the following expressions is
equivalent to the others.
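a^m × a^n = a^m · a^n = a^m a^n = a^(m + n)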
We can derive a similar rule for division. Let's take a look at what happens when we divide [] by [].
Now, we can "cancel" any instance of a factor that appears in both the numerator and denominator. Why is this the case? Recall that we were able to find equivalent fractions by multiplying (or
dividing) both the numerator and denominator by a particular value-this is equivalent to multiplying or dividing by one. Thus, we can do the following:
This is nothing more than writing the original fraction in an equivalent form. Instead of the fraction involving a single number, it involves a series of operations (multiplication, in this case).
[] []
The simple way to look at this is that any factors in the numerator can simply cancel equivalent factors in the denominator. Thus, for every instance where 2 appears in the numerator and denominator,
we can cross that pair off.
We can see that the exponent of the answer is the difference between that of the numerator and that of the denominator (again, all have the same base). Let's generalize the rule:
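a^m / a^n = a^(m – n)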
Let's consider one more case: what if an exponential expression is itself raised to an exponent, as with the example below?
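(2^3)^4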
Expand the expression to see what it looks like in terms of multiplication.
Notice that the expression in parentheses has three factors, and we must multiply this expression four times. Thus, the total number of factors of two is 12, or the product of the exponents.
Generally, the rule can be stated as follows.
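(a^m)^n = a^(mn)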
The key to using these rules is to note that the exponential expressions must always have the same base-the rules do not apply to exponents with different bases. To recap, the rules of exponents are
the following.
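a^m × a^n = a^(m + n)
a^m / a^n = a^(m – n)
(a^m)^n = a^(mn)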
Practice Problem: Write each of the following as an exponential expression with a single base and a single exponent.
a. [] b. [] c. [] d. []
Solution: In each case, use the rules for multiplying and dividing exponents to simplify the expression into a single base and a single exponent. Note in part c that any number is equal to itself
raised to the first power. Note also the usefulness of these rules of exponents in part d: multiplying 150 twenty (or twenty-six) times is a tough proposition, and even most calculators cannot
provide an exact product. The use of exponents, however, allows us to have an exact representation of the result.
a. [] b. []
c. []
d. []
We now look at a slightly different case: a product of two (or more) factors, all raised to an exponent. Let's consider the example below.
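(2 × 3)^4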
Expand the exponential in the usual manner:
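(2 × 3)(2 × 3)(2 × 3)(2 × 3) = 2 × 3 × 2 × 3 × 2 × 3 × 2 × 3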
Notice that the expanded form has four factors of 2 and four factors of 3. This expression is the same as the following (multiplication is commutative, so we can rearrange the factors in the product):
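2 × 2 × 2 × 2 × 3 × 3 × 3 × 3 = 2^4 × 3^4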
Thus, given the product ab (or a × b), we can write the following general rule:
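(a × b)^n = a^n × b^n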
Negative Exponents
If you consider division of exponential expressions, you may notice that the rule seems to indicate that we can end up with negative exponents. The following example, similar to an example above,
illustrates this point:
But the rules of exponents indicate the following:
The results should be the same; so, let's equate the two and see what information we can glean from the result.
Let's rewrite the exponential expression as follows.
But the rules of exponents allow us to write this expression as an exponent raised to another exponent.
This result implies that an exponent of –1 is associated with the reciprocal:
Thus, any number a raised to the power of –1 is equal to its reciprocal: a^(–1) = 1/a.
Using our rules of exponents, therefore, we can determine generally what it means to raise a number to a power that is a negative number (specifically at this point, a negative integer). The
derivation below follows the pattern of the example we considered above.
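The end result, in symbols, for a positive integer $n$ and a nonzero base $a$: $a^{-n} = 1/a^n$.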
This result can be combined with the result we obtained previously for a product raised to a given power:
Let's consider a quotient (fraction) raised to a power. We can use the exponent rules we have studied thus far to derive an equivalent result. First, we'll use the above product rule as shown below.
Now, we'll use a negative exponent and other exponent rules.
Remember that multiplication is commutative. Thus,
Rewrite the second exponential as follows.
Thus, we obtain a rule for quotients similar to that for products:
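In symbols: $(a/b)^n = a^n / b^n$.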
Thus, we are now able to handle any integer exponents, whether positive or negative. We also know how to multiply and divide exponential expressions. Later in the course, we will consider fractional
exponents. (As it turns out, fractional exponents obey the same rules as integer exponents, but the precise meaning of a fractional exponent will be made clear later on.)
Practice Problem: Evaluate or simplify each expression.
a. [] b. [] c. [] d. []
Solution: Use the rules of exponents along with the fact that a negative exponent indicates that the reciprocal of the base must be taken.
a. [] b. []
c. [] d. []
Application of Exponents: Scientific Notation
Although exponents may at times seem like an obscure or less than practical mathematical tool, they have numerous important and practical applications. For instance, exponents are used in so-called
scientific notation, which is a way of representing decimal values that are very large or very small. Consider the two numbers below.
Because of their sizes, writing these numbers is extremely cumbersome. If we could write them simply as 1.37 followed by some indication of the number of zeroes that follow or precede these digits,
we could make the process of expressing these numbers much simpler. Note that we can obtain the larger number by repeatedly multiplying 1.37 by 10 some number of times.
The expression on the right must include 26 factors of 10. But we know how to write multiple factors using exponents:
We can do likewise with the smaller number. In this case, 1.37 must be divided by 10 some number of times.
In this case, the denominator contains 22 factors of 10. Let's use what we know about negative exponents to write this in a form similar to the one we used above for the larger number.
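Written this way, the larger number is $1.37 \times 10^{26}$ and the smaller number is $1.37 \times 10^{-22}$.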
Scientific notation is this method of representing numbers. The general format is a single integer digit followed by some number of decimal places, all multiplied by an integer power of 10. Let's
take a look at two more examples of conversion from standard notation to scientific notation.
Note that the exponent can also be viewed as the number of places that the decimal point must be moved to the left (for positive exponents) or to the right (for negative exponents) when going from
the standard number to scientific notation.
Practice Problem: Convert each number to scientific notation.
a. 0.0000041 b. –3,720,000 c. 0.0839
Solution: For each number, write a decimal containing the non-zero digits with the decimal point following the first digit, then multiply by the appropriate power of 10.
a. [] b. []
c. []
Practice Problem: Convert each number in scientific notation to a decimal in standard form.
a. [] b. [] c. []
Solution: Conversion to standard form simply requires movement of the decimal point the number of places indicated by the exponent. For a negative exponent, the decimal point must be moved to the
left, and for a positive exponent, it must be moved to the right.
a. [] b. []
c. [] | {"url":"https://www.universalclass.com/articles/math/pre-algebra/the-mathematical-rules-of-solving-exponent-problems.htm","timestamp":"2024-11-10T11:06:57Z","content_type":"text/html","content_length":"164980","record_id":"<urn:uuid:8b3a5c0e-1029-4813-ad19-2c3f28862a2b>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00515.warc.gz"} |
Phenomenological study on correlation between flow harmonics and mean transverse momentum in nuclear collisions
Chunjian Zhang, Jiangyong Jia, Shengli Huang
SciPost Phys. Proc. 10, 009 (2022) · published 10 August 2022
• doi: 10.21468/SciPostPhysProc.10.009
Proceedings event
50th International Symposium on Multiparticle Dynamics
To assess the properties of the quark-gluon plasma formed in nuclear collisions, the Pearson correlation coefficient between flow harmonics and mean transverse momentum, $\rho\left(v_{n}^{2},\left[p_{\mathrm{T}}\right]\right)$, reflecting the overlapped geometry of colliding atomic nuclei, is measured. $\rho\left(v_{2}^{2},\left[p_{\mathrm{T}}\right]\right)$ was found to be particularly sensitive to the quadrupole deformation of the nuclei. We study the influence of the nuclear quadrupole deformation on $\rho\left(v_{n}^{2},\left[p_{\mathrm{T}}\right]\right)$ in $\rm{Au+Au}$ and $\rm{U+U}$ collisions at RHIC energy using the $\rm{AMPT}$ transport model, and show that $\rho\left(v_{2}^{2},\left[p_{\mathrm{T}}\right]\right)$ is reduced by the prolate deformation $\beta_2$ and changes sign in ultra-central collisions (UCC).
Funders for the research work leading to this publication | {"url":"https://scipost.org/SciPostPhysProc.10.009","timestamp":"2024-11-12T16:13:39Z","content_type":"text/html","content_length":"34063","record_id":"<urn:uuid:dd092955-f71d-4ca8-9b7f-4c6a8559a8bf>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00851.warc.gz"} |
How To Solve A Two-Column Proof In Geometry | The Complete Guide - Geometry Spot
How To Solve A Two-Column Proof In Geometry | The Complete Guide
Learning geometry involves applying knowledge to verify or prove various properties of geometric shapes. In this article, we will explore how to apply this method with examples.
Learning geometry involves applying knowledge to verify or prove various properties of geometric shapes. To ensure consistency and methodological accuracy, there are many ways of doing this, one of
which is the Two-Column Proof. This systematic approach to geometric proofs provides a step-by-step method for verifying geometric properties. In this article, we will explore how to apply this
method with examples.
Understanding Two-Column Proofs
A two-column proof consists of a list of statements, with the corresponding reasons that justify each statement. The left column contains the statements, and the right column contains the reasons.
These proofs are like mathematical arguments; they require a foundation of known facts, such as definitions, postulates (assumptions), and previously proven theorems.
The Structure of a Two-Column Proof
1. Statement: This column contains every step of the proof, including the given information, the steps you are taking to prove the theorem, and the conclusion.
2. Reason: This column justifies each statement. It can include definitions, postulates, properties, or previously proven theorems.
Steps to Solve a Two-Column Proof
1. Understand the Problem: Begin by understanding what you need to prove. Read the theorem or statement carefully and understand all the parts.
2. Gather Given Information: When approaching a problem, it is crucial to accurately identify and organize all the information provided. This involves creating detailed diagrams, labels, and
markings that reflect the given information, as well as constructing any necessary figures that can assist with the proof.
3. State What You Need to Prove: Write what you need to demonstrate at the end of the proof.
4. Plan Your Approach: Think about the geometric concepts and theorems that might be relevant to the proof. This step might require you to draw a diagram or mark it up to visualize the relationships
between different components.
5. Write Down the Proof: Start with the given information and logically progress towards the statement you need to prove. Each step should be clear and justified.
6. Review and Edit: After writing your proof, review each statement and reason to ensure they follow logically from one another. Ensure that you have stated the desired conclusion at the last line
of the statement column.
Example of a Two-Column Proof
Problem: Prove that the opposite angles in a parallelogram are congruent.
Given: Quadrilateral ABCD is a parallelogram.
To Prove: Angle A is congruent to angle C, and angle B is congruent to angle D.
Before applying the column approach, let's try to add the useful information into the diagram, as it will assist our proof process. Below, we have indicated the pair of parallel sides using the arrow marks.
┃Statements │Reasons ┃
┃ABCD is a parallelogram │Given ┃
┃AD ≅ BC, and AB ≅ DC. │Opposite sides of a parallelogram are congruent ┃
┃AC ≅ AC and BD ≅ BD │Reflexive Property ┃
┃Triangle ACD ≅ Triangle ABC │SSS congruence ┃
┃Angle B ≅ Angle D │Corresponding angles of congruent triangles ACD and ABC are congruent ┃
┃Triangle ABD ≅ Triangle BCD │SSS congruence ┃
┃Angle A ≅ Angle C │Corresponding angles of congruent triangles ABD and BCD are congruent ┃
┃Opposite angles in parallelogram ABCD are congruent.│From statements 5 and 7. ┃
This proof demonstrates the structure and process of creating a two-column proof. Each step is supported by a reason, ensuring the argument is logical and sequential.
Example #2 of a Two-Column Proof
Problem: Prove that the sum of the interior angles of a triangle is 180 degrees.
Given: Triangle ABC.
To Prove: Angle A + Angle B + Angle C = 180°
Two-Column Proof:
Statement Reasons
Draw a triangle ABC. Given.
Construct a line DE parallel to BC, passing through point A. Construction. A line can be drawn parallel to one side of the triangle through a vertex.
Angle DAB is equal to Angle B. Alternate interior angles are equal (since DE is parallel to BC and AB is a transversal).
Angle CAE is equal to Angle C. Alternate interior angles are equal (since DE is parallel to BC and AC is a transversal).
Angle DAB + Angle A + Angle CAE = 180°. The sum of the angles on a straight line (line DE) is 180°.
Angle B + Angle A + Angle C = 180°. Substituting Angle DAB with Angle B and Angle CAE with Angle C from Steps 3 and 4.
Conclusion: We have proven that the sum of the interior angles of Triangle ABC is 180 degrees.
Tips for Writing Two-Column Proofs
1. Use Diagrams: Always draw a diagram if one isn’t provided. It helps visualize the problem and plan the proof.
2. Familiarize with Theorems and Definitions: Knowing various geometric theorems, postulates, and definitions is crucial. They form the basis of the reasons in your proof.
3. Be Logical and Sequential: It’s essential that each statement in your proof follows logically from the one before it. Avoid making assumptions, no matter how obvious they might seem. In geometric
proofs, clarity and precision are paramount, so it’s better to explicitly state even the obvious details than to risk making unwarranted leaps in logic.
4. Be Concise but Complete: Your statements should be clear and to the point, but also ensure you’ve included all necessary steps.
5. Practice Regularly: Like any skill, writing proofs improves with practice. Solve different types of problems to build your skills.
Two-column proofs are an essential component of high school geometry, helping students develop logical reasoning skills. By understanding the structure, following a systematic approach, and
practicing regularly, students can master the art of writing two-column proofs. These skills not only aid in academic success but also enhance critical thinking skills valuable in real-life
problem-solving scenarios. Remember, each proof is a small journey from the known to the unknown, and with each step, you’re not just proving a theorem, but also sharpening your mind. | {"url":"https://geometryspot.school/how-to-solve-a-two-column-proof-in-geometry-the-complete-guide/","timestamp":"2024-11-04T07:23:10Z","content_type":"text/html","content_length":"158388","record_id":"<urn:uuid:e53c3f23-526a-4958-9903-67ec34441d7a>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00787.warc.gz"} |
SLDW0104 - Editorial
Problem Link - Special Substring
Problem Statement
Chef gave you a string S of size N which contains only lowercase English letters and asked you to find the length of the longest substring of the string which satisfies the following condition:
• Each character $\beta$ should appear at most $f(\beta)$ times.
Here $f(\beta)$ denotes the index of the character $\beta$ in the alphabet series. For example, $f('a') = 1$, $f('b') = 2$, and so on.
Note: A substring of a string is a contiguous subsequence of that string.
The code uses the sliding window technique to find the longest valid substring. It maintains a frequency map to count occurrences of characters while expanding the right pointer. For each character
added, it checks if any character exceeds its allowed frequency based on its position in the alphabet. If so, it adjusts the left pointer to shrink the window until the substring becomes valid again.
At each valid state, it updates the maximum length found. This efficiently finds the longest substring meeting the character frequency condition.
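One way to write this is the following sketch (JavaScript is used purely for illustration; the editorial's original implementation is not reproduced here):
function longestSpecialSubstring(s) {
  var count = new Array(26).fill(0);    // frequency map for 'a'..'z'
  var left = 0, best = 0;
  for (var right = 0; right < s.length; right++) {
    var c = s.charCodeAt(right) - 97;   // 'a' -> 0, 'b' -> 1, ...
    count[c]++;
    // f(beta) is the 1-based alphabet index, so character c may appear at most c + 1 times
    while (count[c] > c + 1) {
      count[s.charCodeAt(left) - 97]--; // shrink the window from the left until it is valid again
      left++;
    }
    best = Math.max(best, right - left + 1);
  }
  return best;
}
For example, longestSpecialSubstring("aab") returns 2, because 'a' may appear at most once in a valid substring.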
Time Complexity
O(N) for each test case, where N is the length of the string.
Space Complexity
O(1) since the frequency map can hold at most 26 characters (lowercase letters). | {"url":"https://discuss.codechef.com/t/sldw0104-editorial/120910","timestamp":"2024-11-15T01:21:18Z","content_type":"text/html","content_length":"15048","record_id":"<urn:uuid:bc4df8ee-5691-4901-b801-19b6d7081c62>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00091.warc.gz"} |
Elizabeth Spiegel's blog
Kids love knights. I find myself reminding students 10,000 times a year that bishops are usually better than knights, and they need to have a good reason to trade a bishop for a knight. To overcome
their natural tendency to favor the knight, I like to give a lesson directly on the subject.
So what is a good reason to trade bishop for knight? Most of the time, you should either be
1) winning a pawn,
2) opening up a king (in a situation where it's realistic to attack) or
3) creating a real pawn weakness that you can attack.
It's also ok to trade if your bishop is bad and you have no good retreat square for it.
Of course, there are some situations that don't fall into these categories, but I make it clear that students are responsible to me for having a good reason.
I explain to the class that I'm going to give them 8 positions and they have to tell me if white should trade the bishop for the knight, or not. Notice that in most (5/8) positions, the answer is
simply no, you should not take the knight.
Here they are (a cbv file is also available if you'd prefer):
4. yes, since white can follow up by taking on e5
5. yes, because white creates a serious weakness: doubled isolated pawns on a half open file. Notice that White needs to continue correctly: 1. Bxc6 bxc6 2. Na4! (otherwise black will get rid of his
weakness by playing 2....c5), followed by Rac1 and either Qc2 or doubling rooks on the c file.
7. Yes, because it opens up black's king. White can follow up by castling queenside or playing Qd2 to attack the weak h6 pawn. Point out that after 1. Bxf6 gxf6 2. Qd2 Kg7 3. 0-0-0, white doesn't
have to be afraid of 3...Bxf3, since opening the g- file only makes black's king less safe.
8. No. What white would actually like to do here is play Bg5-h4-g3 and try to trade off black's excellent dark squared bishop.
This position is stolen from Coakley's green book, the chapter "Castles Made of Sand." I use it to talk about how to attack.
I explain that you normally need one of two things to have a successful attack: either open lines toward the opponent's king, which sometimes happens because you've moved your pawns, and sometimes
happens because they've moved their pawns, or more of your pieces attacking the king than enemy pieces defending it. I explain that some players like to attack the king in any position, but it is
only a good idea if you have one of these two advantages. This means that if you really want to attack, you have to start by either bringing pieces over towards the kingside, or pushing your pawns to
open lines, or somehow getting your opponent to move the pawns in front of their king.
I ask the class first to brainstorm possible first moves, and ask them to start with forcing moves: moves that are either
1. checks,
2. captures, or
3. threats of checkmate.
After they list all they can (usually 1... Rxh2, 1....Qh5, 1....Qe5/d6, 1...Nxf2), I ask also for any moves that bring more pieces towards the black king. Usually I get two answers to this: 1... Rdg8
and 1...Nf4. This gives us an opportunity to talk about how the former is more effective, as the knight is already participating in the attack, hitting f2 and f4, and preventing the queen from moving
along the third rank to defend.
There are a number of wins in this position that you can explore with your students:
• 1...Rxh2 2. Kxh2 Qh5+ 3.Kg1 Rh8 4. any Qh1/2
• 1...Rdg8 2.Bxd3 Rxg2+ 3.Kxg2 Qg5+ 4.Kh1 Qh5;
• 1...Qh5 2.h3 Rdg8 3.Kh2 (3.Bxf7 Qxh3) 3...Rxg2+ 4.Kxg2 Qxh3+ 5.Kg1 Qh2# (5...Rg8#) ;
• 1...Qe5 2.h3 Rxh3 3.gxh3 Rg8+ 4.Kh1 Qf5 5.Kh2 Qf4+ 6.Kh1 Qf3+ 7.Kh2 Qg2#
□ 2.g3 Qh5 3.h4 Qxh4 4.gxh4 Rdg8+ 5.Kh2 Rxh4#;
□ 2.f4 Qd4+ 3.Kh1 Rxh2+ 4.Kxh2 Rh8+ 5.Kg3 Qg7#)
I stole this from an endgame book, I'm sorry I don't remember which one. The idea is to give students a roadmap for how to win a simple minor piece endgame when they're up a pawn. Obviously, this is
not so simple, and so your objective is not that they win 100% of the time, or even 75% or any particular % of the time, but more that they have an idea of the method and get some practice at it.
I start by showing them a position, having a student tell me the material, and asking who thinks they could win this as white (it doesn't matter what the answer is). I then ask who can explain to me what the plan is.
The plan is this:
1. Centralize the king (I explain that you activate pieces in the endgame in the order of their power, i.e. queen first, then rook, the king is worth 4, so king next, then bishops and knights, and
generally only after these pieces are activated do you start pushing the pawns.)
2. Activate the knight
3. Make a passed pawn by pushing the pawns on the side you have a majority.
4. Once you've done that, the side with the extra pawn usually wins by some combination of:
a) pushing the passed pawn and invading with the king
b) trading knights
c) sacrificing the passed pawn to win the kingside pawns.
Make sure this is written on the board so students can refer to it later while they are playing.
I then ask for a volunteer to start white out by doing #1, centralizing the king. I move for black, and we play through the following moves:
1.Kf1 Ke7
2.Ke2 Kd6
3.Kd3 Kc5
I then ask for another volunteer to take over and activate the knight:
4.Nc2 Nd5
I ask what this threatens (Nf4+ winning a pawn) and how white can stop this:
5.g3 a5
Here I ask which pawn to push first, and if they don't know, remind them of the general rule that you push the potential passed pawn first, in this case the b pawn:
6.b3 f5
7.a3 g6
8.b4+ axb4
At this point, you've completed step three, and I explain that you now try to advance the pawn and be on the lookout for tactics that allow you to sneak in with your pieces, or trade knights. In
general, you calculate as much as you can. Depending on the level of the class, I go faster or slower through the rest of the game: the exact moves don't matter as much as the kids grasping the basic
plan in the beginning. You won't be able to teach technique and endgame control in one lecture-lesson, so don't try too hard.
9.axb4+ Kd6 [9...Nxb4+ This is a nice example of how white wins fairly easily if black allows the knights to be traded. 10.Nxb4 Kxb4 11.Kd4 Kb3 12.f4 Kc2 13.Ke5 Kd3 14.Kf6 Ke3 15.Kg7 Kf3 16.Kxh7 Kg2
17.Kxg6 Kxh2 18.Kxf5 Kxg3 19.Kg5]
10.Kd4 Nc7
11.f4 Nb5+
12.Kc4 Nc7
13.Ne3 [also good is 13.b5 Nxb5 14.Kxb5 Kd5 15.Ne1 Ke4]
14.Kd4 Kd6
15.Nc4+ Kc6 [15...Ke6 16.Kc5 (16.Ne5 Kd6 17.Nf7+ Ke7 18.Ng5 h6 19.Nf3 Kf6 20.Kc5) ]
16.Ke5 Kb5
17.Ne3 Na6 [17...Kxb4 18.Nd5+]
18.Nd5 Kc4
19.Nf6 h5
20.Nd5 Nb8
21.Ne7 Kxb4
At this point, I reset the position and ask a student to repeat the general plan. Then students choose a partner, set up the position on their own boards, and practice playing the position as white
and as black. Ideally, they should play twice, once with each color, and should have 10-15 minutes per side, although you can do it with 5 minutes each if you are pressed for time. Do remind them
that playing an endgame with 5 minutes is not at all the same as playing a whole game with 5 minutes, and they should play slowly and thoughtfully as the position is tricky.
the next day....
I follow that lesson with its sister position:
which you will notice is exactly the same, but with bishops instead of knights. I ask students again how many think they would win the position, and hopefully a few more students raise their hands
than last time.
I then ask what the basic plan is, and of course its essentially the same:
1. Centralize the king
2. Activate the bishop
3. Make a passed pawn by pushing the pawns on the side you have a majority.
4. Once you've done that, the side with the extra pawn usually wins by some combination of:
a) pushing the passed pawn and invading with the king
b) trading bishops
c) sacrificing the passed pawn to win the kingside pawns.
I again show students a model game; you can also have them play first and show them the game afterwards, but I find with difficult lessons like this, many classes benefit from as much
teacher-modeling as possible before they do it themselves. They play much better and are more likely to be successful if they see exactly how you do it first.
I ask for a volunteer to help me do step 1:
1.Kf1 Kf8
2.Ke2 Ke7
3.Kd3 Kd7
4.Kc4 Kc6
then a new volunteer for step 2:
5.Bc3 g6
and again a different student for step 3:
6.b4 Bb6 7.f3 Bc7 8.a4 Bb6
9.Bd4 Bc7
10.b5+ axb5+
11.axb5+ Kb7
and again, don't get too worried about covering every detail of the rest: like every endgame it gets a little messy and there are many possibilities for each side. What's below are just examples!
12.Kd5 Bb8 [12...Bf4 13.Be5 Be3 14.Kd6 Kb6 15.Ke7]
13.Bf2 [also good are 13.b6 Bg3; and 13.Be5 Ba7 14.Kd6 Bb8+ 15.Kd5 Ba7, which at first looks like repetition, but white invades after 16. Bg7 h5 17. Ke5]
14.g3 h5
15.h4 Bb8
16.b6 Kc8
17.Kc6 Be5
18.b7+ Kb8
19.f4 Bf6
20.Ba7+ Kxa7
21.Kc7 Bd8+
Now again, return to the original position, have a student repeat the basic plan, and send the class off to practice. Circulate and watch: the most important thing is to catch players who aren't
following the basic plan. Don't worry too much about showing kids every forced win that they miss: it's a difficult position and you don't want to undermine their confidence. Keep in mind that your
goal here is to give students a basic plan to follow, not to police their endgame technique.
The idea and first position for this lesson is taken directly from Jeff Coakley's excellent book "Winning Chess Strategy for Kids." Most of my favorite lessons are stolen from Coakley's books, and
let me say now that if you're a chess teacher and you don't have all his books, you should stop reading this right now and order them. They're all you'll ever need, I promise.
That said, I start the lesson (as he suggests) with a general discussion of what a threat is: how in real life, a threat is bad ("I'm going to beat you up after school," "I'm going to tell," etc.)
but in chess, a threat is great ("I'm going to take your piece") because it gives you a chance to be winning next move. The more threats you make, the more chances you give your opponent to make a
mistake, and the more chances you will get an advantage.
In chess, a threat has to be specific, so when I ask "what's the threat?" I am really asking "Where are going going to move next turn?" and you should give me a specific answer, like Qxg7, rather
than a vague answer, like "checkmate." For a threat to work in either real life or chess, it has to be something that the other guy is actually scared of. So if I say "I'm going to give you a piece
of cake," that isn't a threat, and neither is threatening to play QxP if they can just recapture your queen.
Here's Coakley's position:
He talks about the following threats:
• 1. Be3, threatening to take the black queen.
• 1. Bd6, threatening to win the exchange
• 1. Qg2, threatening Qxg7#
• 1. Qd2, threatening the sacrifice 2. Bxh6
and then more complex threats like
• 1. Be5, threatening to double black's pawns
• 1. Qe3, threatening to trade queens, since white is up material
Obviously, you shouldn't just tell the kids this, you should ask them to find the threats. I like to show this position as an example, and then do a couple more, usually one opening position and one
endgame. For example:
Threats include:
• 1. Bg5, threatening the queen;
• 1. Qa4, threatening the knight for a second time (a good opportunity to review counting attackers and defenders);
• 1. d4, threatening both to win the e5 pawn, and to play 2. d5, threatening (also winning) the knight (a good opportunity to review pins)
• 1. Ng5, threatening to take on f7 with the queen or knight (ask which threat is more dangerous). This can lead to an interesting discussion about how to follow up after 1...Nh6 or 1...Qd7. (2. f4
is a logical idea, as are 2. a5 and 2. Bc4)
• 1. a5 threatening both 2. a6, winning the knight by attacking the bishop, and to a lesser extent 2. axb6, threatening to make black's queenside pawns into targets.
It's also good to talk here about how it's trickier to make less obvious threats, i.e. everyone will see that 1. Bg5 threatens the queen, but 1. d4 and 1. a5 are harder.
Threats include
• 1. Rhd1, threatening 2. Rd8 with backrank mate
• 1. h4, threatening to trap the bishop with 2. h5
• 1. Nd5, threatening a fork with 2. Ne7+
• 1. Rd7, threatening to take on b7.
Tell students that in their games, they need to try to make as many threats as possible, and to show you when they make a good one. Write down the position in the best student example and use it as a
review at the very end of class or the beginning of the following one. | {"url":"https://lizzyknowsall.blogspot.com/2013/07/","timestamp":"2024-11-02T08:19:56Z","content_type":"text/html","content_length":"107215","record_id":"<urn:uuid:86f3c513-5df2-44a2-a677-bbcb9bd2148d>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00068.warc.gz"} |
circlin.cor: Circular-linear correlation in Directional: A Collection of Functions for Directional Data Analysis
It calculates the squared correlation between a circular and one or more linear variables.
theta The circular variable.
x The linear variable or a matrix containing many linear variables.
rads If the circular variable is in rads, this should be TRUE, and FALSE otherwise.
The squared correlation between a circular and one or more linear variables is calculated.
A matrix with as many rows as linear variables including:
R-squared The value of the squared correlation.
p-value The p-value of the zero correlation hypothesis testing.
R implementation and documentation: Michail Tsagris mtsagris@uoc.gr and Giorgos Athineou <gioathineou@gmail.com>.
Mardia, K. V. and Jupp, P. E. (2000). Directional statistics. Chicester: John Wiley & Sons.
phi <- rvonmises(50, 2, 20, rads = TRUE)
x <- 2 * phi + rnorm(50)
y <- matrix(rnorm(50 * 5), ncol = 5)
circlin.cor(phi, x, rads = TRUE)
circlin.cor(phi, y, rads = TRUE)
| {"url":"https://rdrr.io/cran/Directional/man/circlin.cor.html","timestamp":"2024-11-06T09:10:41Z","content_type":"text/html","content_length":"32116","record_id":"<urn:uuid:e326c1f9-5eda-482f-a8cd-c98b554e1b0e>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00795.warc.gz"}
University of Alabama Repository :: Browsing Department of Information Systems, Statistics & Management Science by Author "Alqurashi, Mosab"
Department of Information Systems, Statistics & Management Science
Permanent URI for this community
Browsing Department of Information Systems, Statistics & Management Science by Author "Alqurashi, Mosab"
• Some Contributions to Tolerance Intervals and Statistical Process Control
(University of Alabama Libraries, 2021) Alqurashi, Mosab; Chakraborti, Subhabrata; University of Alabama Tuscaloosa
Tolerance Intervals play an important role in statistical process control along with control charts. When constructing a tolerance interval or a control chart for the mean of a quality
characteristic, the normality assumption can be justifiable at least in an approximate sense. However, in applications where the individual observations are to be monitored or controlled, the
normality assumption is not always satisfied. In addition, for high dimensional data, the normality is rarely, if ever, satisfied. The existing tolerance intervals for exponential random
variables and sample variances are constructed under a condition that assumes a known parameter, leading to unbalanced tolerance intervals. Moreover, the existing multivariate distribution-free
control charts in the literature lack the ability to identify the out-of-control variables directly from the chart signal and the scale of the original variables is often lost. In this
dissertation, new tolerance intervals for exponential random variables and for the sample variances, and a multivariate distribution-free control chart are developed. This dissertation consists
of three chapters. The summary of each chapter is provided below. In the first chapter, we introduce a tolerance interval for exponential random variables that gives the practitioner control over
the ratio of the two tails probabilities without assuming that the parameter of the distribution, the mean, is known. The second chapter develops a tolerance interval and a guaranteed performance
control chart for the sample variances without assuming that the population variance is known. The third chapter introduces a multivariate distribution-free control chart based on order
statistics that can identify out-of-control variables and preserve the original scale. | {"url":"https://ir.ua.edu/browse/author?scope=3a17eea7-5599-498d-97a5-a89192781f24&value=Alqurashi,%20Mosab","timestamp":"2024-11-14T08:27:38Z","content_type":"text/html","content_length":"396203","record_id":"<urn:uuid:ef01876d-9990-4c34-aed8-1cff76f4ba9b>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00428.warc.gz"} |
We examine the effective force between two nanometer scale solutes in water by Molecular Dynamics simulations. Macroscopic considerations predict a strong reduction of the hydrophobic attraction
between solutes when the latter are charged. This is confirmed by the simulations which point to a surprising constancy of the effective force between oppositely charged solutes at contact, while
like charged solutes lead to significantly different behavior between positive and negative pairs. The latter exhibit the phenomenon of ``like-charge attraction" previously observed in some colloidal
dispersions. Comment: 4 pages, 5 figures
The solvation of charged, nanometer-sized spherical solutes in water, and the effective, solvent-induced force between two such solutes are investigated by constant temperature and pressure Molecular
Dynamics simulations of model solutes carrying various charge patterns. The results for neutral solutes agree well with earlier findings, and with predictions of simple macroscopic considerations:
substantial hydrophobic attraction may be traced back to strong depletion (``drying'') of the solvent between the solutes. This hydrophobic attraction is strongly reduced when the solutes are
uniformly charged, and the total force becomes repulsive at sufficiently high charge; there is a significant asymmetry between anionic and cationic solute pairs, the latter experiencing a lesser
hydrophobic attraction. The situation becomes more complex when the solutes carry discrete (rather than uniform) charge patterns. Due to antagonistic effects of the resulting hydrophilic and
hydrophobic ``patches'' on the solvent molecules, water is once more significantly depleted around the solutes, and the effective interaction reverts to being mainly attractive, despite the direct
electrostatic repulsion between solutes. Examination of a highly coarse-grained configurational probability density shows that the relative orientation of the two solutes is very different in
explicit solvent, compared to the prediction of the crude implicit solvent representation. The present study strongly suggests that a realistic modeling of the charge distribution on the surface of
globular proteins, as well as the molecular treatment of water, are essential prerequisites for any reliable study of protein aggregation. Comment: 20 pages, 25 figures
A recent paper [I. Klebanov et al. \emph{Mod. Phys. Lett. B} \textbf{22} (2008) 3153; arXiv:0712.0433] claims that the exact solution of the Percus-Yevick (PY) integral equation for a system of hard
spheres plus a step potential is obtained. The aim of this paper is to show that Klebanov et al.'s result is incompatible with the PY equation since it violates two known cases: the low-density limit
and the hard-sphere limit. Comment: 4 pages; v2: title changed
We construct a density functional theory (DFT) for the sticky hard sphere (SHS) fluid which, like Rosenfeld's fundamental measure theory (FMT) for the hard sphere fluid [Phys. Rev. Lett. {\bf 63},
980 (1989)], is based on a set of weighted densities and an exact result from scaled particle theory (SPT). It is demonstrated that the excess free energy density of the inhomogeneous SHS fluid $\Phi_{\text{SHS}}$ is uniquely defined when (a) it is solely a function of the weighted densities from Kierlik and Rosinberg's version of FMT [Phys. Rev. A {\bf 42}, 3382 (1990)], (b) it satisfies the SPT differential equation, and (c) it yields any given direct correlation function (DCF) from the class of generalized Percus-Yevick closures introduced by Gazzillo and Giacometti [J. Chem. Phys. {\bf 120}, 4742 (2004)]. The resulting DFT is shown to be in very good agreement with simulation data. In particular, this FMT yields the correct contact value of the density profiles with no
adjustable parameters. Rather than requiring higher order DCFs, such as perturbative DFTs, our SHS FMT produces them. Interestingly, although equivalent to Kierlik and Rosinberg's FMT in the case of
hard spheres, the set of weighted densities used for Rosenfeld's original FMT is insufficient for constructing a DFT which yields the SHS DCF. Comment: 11 pages, 3 figures
The energy route to the equation of state of hard-sphere fluids is ill-defined since the internal energy is just that of an ideal gas and thus it is independent of density. It is shown that this
ambiguity can be avoided by considering a square-shoulder interaction and taking the limit of vanishing shoulder width. The resulting hard-sphere equation of state coincides exactly with the one
obtained through the virial route. Therefore, the energy and virial routes to the equation of state of hard-sphere fluids can be considered as equivalent. Comment: 2 pages
Spheromak technology is exploited to create laboratory simulations of solar prominence eruptions. It is found that the initial simulated prominences are arched, but then bifurcate into twisted
secondary structures which appear to follow fringing field lines. A simple model explains many of these topological features in terms of the trajectories of field lines associated with relaxed
states, i.e., states satisfying $\nabla \times \mathbf{B} = \lambda \mathbf{B}$. This model indicates that the field line concept is more fundamental than the flux tube concept because a field line can always be defined by specifying a starting point whereas attempting to define a flux tube by specifying a starting cross section typically works only if $\lambda$ is small. The model also shows that, at least for plasma
evolving through a sequence of force-free states, the oft-used line-tying concept is in error. Contrary to the predictions of line-tying, direct integration of field line trajectories shows
explicitly that when lambda is varied, both ends of field lines intersecting a flux-conserving plane do not remain anchored to fixed points in that plane. Finally, a simple explanation is provided
for the S-shaped magnetic structures often seen on the sun; the S shape is shown to be an automatic consequence of field line arching and the parallelism between magnetic field and current density
for force-free states
A mixture of hard-sphere particles and model emulsion droplets is studied with a Brownian dynamics simulation. We find that the addition of nonwetting emulsion droplets to a suspension of pure hard
spheres can lead to both gas-liquid and fluid-solid phase separations. Furthermore, we find a stable fluid of hard-sphere clusters. The stability is due to the saturation of the attraction that
occurs when the surface of the droplets is completely covered with colloidal particles. At larger emulsion droplet densities a percolation transition is observed. The resulting networks of colloidal
particles show dynamical and mechanical properties typical of a colloidal gel. The results of the model are in good qualitative agreement with recent experimental findings [E. Koos and N.
Willenbacher, Science 331, 897 (2011)] in a mixture of colloidal particles and two immiscible fluids. Comment: 5 figures, 5 pages
The Self-Consistent Ornstein-Zernike Approximation (SCOZA) is an accurate liquid state theory. So far it has been tied to interactions composed of hard core repulsion and long-range attraction,
whereas real molecules have soft core repulsion at short distances. In the present work, this is taken into account through the introduction of an effective hard core with a diameter that depends
upon temperature only. It is found that the contribution to the configurational internal energy due to the repulsive reference fluid is of prime importance and must be included in the thermodynamic
self-consistency requirement on which SCOZA is based. An approximate but accurate evaluation of this contribution relies on the virial theorem to gauge the amplitude of the pair distribution function
close to the molecular surface. Finally, the SCOZA equation is transformed by which the problem is reformulated in terms of the usual SCOZA with fixed hard core reference system and
temperature-dependent interaction
We examine the relaxation of the Kob-Andersen Lennard-Jones binary mixture using Brownian dynamics computer simulations. We find that in accordance with mode-coupling theory the self-diffusion
coefficient and the relaxation time show power-law dependence on temperature. However, different mode-coupling temperatures and power laws can be obtained from the simulation data depending on the
range of temperatures chosen for the power-law fits. The temperature that is commonly reported as this system's mode-coupling transition temperature, in addition to being obtained from a power law
fit, is a crossover temperature at which there is a change in the dynamics from the high temperature homogeneous, diffusive relaxation to a heterogeneous, hopping-like motion. The hopping-like motion
is evident in the probability distributions of the logarithm of single-particle displacements: approaching the commonly reported mode-coupling temperature these distributions start exhibiting two
peaks. Notably, the temperature at which the hopping-like motion appears for the smaller particles is slightly higher than that at which the hopping-like motion appears for the larger ones. We define
and calculate a new non-Gaussian parameter whose maximum occurs approximately at the time at which the two peaks in the probability distribution of the logarithm of displacements are most
evident. Comment: Submitted for publication in Phys. Rev.
We calculate the two, three, four, and five-body (state independent) effective potentials between the centers of mass (CM) of self avoiding walk polymers by Monte-Carlo simulations. For full overlap,
these coarse-grained n-body interactions oscillate in sign as (-1)^n, and decrease in absolute magnitude with increasing n. We find semi-quantitative agreement with a scaling theory, and use this to
discuss how the coarse-grained free energy converges when expanded to arbitrary order in the many-body potentials. We also derive effective {\em density dependent} 2-body potentials which exactly
reproduce the pair-correlations between the CM of the self avoiding walk polymers. The density dependence of these pair potentials can be largely understood from the effects of the {\em density
independent} 3-body potential. Triplet correlations between the CM of the polymers are surprisingly well, but not exactly, described by our coarse-grained effective pair potential picture. In fact,
we demonstrate that a pair-potential cannot simultaneously reproduce the two and three body correlations in a system with many-body interactions. However, the deviations that do occur in our system
are very small, and can be explained by the direct influence of 3-body potentials. Comment: 11 pages, 1 table, 9 figures, RevTeX (revtex.cls) | {"url":"https://core.ac.uk/search/?q=authors%3A(J.%20P.%20Hansen)","timestamp":"2024-11-07T03:53:49Z","content_type":"text/html","content_length":"141654","record_id":"<urn:uuid:5d9dfcaf-ae53-4226-b70d-96bfb31f63c3>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00211.warc.gz"}
Procedural Generation - How Does It Work? | The website of Juniper Preston, Computerer
Procedural Generation - How Does It Work?
a universe from a single seed
Procedural what?
Procedural generation is a set of techniques used heavily in video game development and computer graphics. It provides methods for generating large amounts of content algorithmically rather than
manually: the software itself can generate your game's maps, weather and even music rather than it having to be hand-crafted by artists and designers. Thanks to this the game's size on disk can be
much reduced, it can include way more content and bigger worlds, and it can present highly unpredictable and replayable experiences.
As with any tool pros come with cons: whilst procedural generation can bring great benefits in terms of memory usage, artist time and replayability, these experiences are in danger of feeling poorly
designed or unnatural. You don't want your RPG's dungeon to be riddled with tediously empty rooms or overpowered monsters, so tuning the generated content to the intended gameplay is just as much as
part of the game developer's craft here as creating it in the first place.
In this article we'll go through the basic concepts powering procgen games with some interactive examples. The few code snippets are in Javascript, but it's by no means essential that you understand
these in order to follow along.
Let's work on a little project. We're going to make a game right here in the browser. Well, a game world at least - all that running about and jumping and animation can wait for another day.
We'll call it Flatland. Flatland is a 2-dimensional place with solid ground and clear skies. Scroll left and right to really drink in the vistas.
I think it's very exciting, but I made it so I would. You probably think it's a little boring. However, Flatland represents pretty much the prototypical, trivial example of procedural generation.
It's just generated from some pretty uninspired procedures.
The game running up above is drawing its terrain by following a very simple process: for each x value (that is, horizontal coordinate), it calls a function called getGroundHeight with the value of x.
The function returns a number representing how high or low the ground should be at that x value.
function getGroundHeight(x) {
  return 0;
}
This is the getGroundHeight powering Flatland - for any value of x imaginable, it just says "the ground should be at height 0". Hence the flatness. I've drawn in the result of each call to
getGroundHeight with black dots to show you where they land.
But don't scoff just yet - this is an infinite world! I could say "hey, getGroundHeight, what's the height of the ground at the x value of 11924294719272?" and it'd be like "still 0". It wouldn't
take it any time, it wouldn't need to go off and load a map file from disc or over the internet, it'd just be able to tell me.
Making waves
Obviously the Steam reviews aren't going to be lighting up over Flatland. We need the land to be a little bit less flat. We'll need a new implementation of getGroundHeight (and... I guess a new name).
Introducing Waveland!
function getGroundHeight(x) {
  return Math.sin(x);
}
Now we're cooking! This is maths! By using a slightly more interesting and repeating function like sine (or Math.sin() in Javascript), we've got ourselves some satisfying hills.
Just like in Flatland, this function provides us with an infinite amount of ground for our game - Math.sin(1347108) is nice and bounded somewhere between -1 and +1 just as much as Math.sin(0) is, so
we're guaranteed never to get some crazy high hill or deep valley that the player can't traverse. Scroll the game left and right and check it out. Don't go looking for anything more interesting than
those wavy hills, though - sine just repeats and repeats to infinity in both directions.
Randomness and unpredictability
Still not good enough, really, is it? Waveland's gently undulating hills might work in some specific cases but its terrain isn't very realistic or gripping.
So what's the problem?
Essentially, it's that the functions we've been using thus far are too predictable. Without even bothering to actually calculate any concrete values of getGroundHeight I can tell you what Flatland or
Waveland will look like 10, 100 or 1000000 steps further in the x direction. "Flat" or "wavey", respectively.
With that in mind, we need to introduce some randomness to the equation. I'm going to throw a definition in here that we'll explore more in the coming examples: random noise functions.
A random noise function is a function N that can be given any real value and satisfies two conditions:
(1) N(x) is between -1 and 1 (inclusive) for any value of x.
(2) N has some perceived degree of randomness.
It's hard to find a precise definition that's agreed on by everyone who uses random noise, and obviously point (2) is quite subjectively phrased, but this is a form that I think most would be happy
with. I hope you can see that by this definition Flatland and Waveland's generators are not noise functions, since although they satisfy point (1), they fall short of (2).
Let's take a crack with a new ground generator that satisfies both conditions to be our first fully-qualified noise function. Another function means another game name, so say hello to Noiseland!
function getGroundHeight(x) {
  return (Math.random() * 2) - 1;
}
No, this won't do. This won't do at all.
What happened? This totally random noise is indeed random, but really it's a little bit too random. Mathematically, it's not continuous, meaning that close-together x values can lead to far-apart
height values, and a set of hills you could fall over and impale yourself on.
(A little aside to explain that bit of code: Math.random() gives a random number between 0 and 1, so when we multiply by 2 and subtract 1, all we're doing is getting a random number between -1 and 1
as per our definition of noise).
Thankfully, a man named Ken Perlin did a lot of work on coming up with a continuous, but still random-seeming noise function (as did a load of other people, but his is probably the most widely-used
noise function in modern practice). He produced the function now known as Perlin noise for which he achieved an Academy Award for Technical Achievement, such is the usefulness of the function to the
digital arts. He actually developed it for Disney's Tron, which is neat.
Perlin noise is a little bit too complex to include in a code snippet like I did above, but all you really need to know for now is that we have some function of x, just like all the ones above, which
return a value in [-1, 1], just like all the ones above. It's just that this particular function gives a much more natural feel. Take a look. Click regenerate and scroll around a few times to get a
feel for the kind of output we get here.
That's a bit more like it! These hills (on the whole) look a lot more realistic, are unpredictable but guaranteeably continuous. As far as our definition is concerned, this is great noise.
Another noise function named Simplex noise is very widely used. It aims to provide essentially the same effect as Perlin noise but with a few improvements to the speed it can be calculated. The
terrain below is generated using Simplex noise: give it a couple of refreshes and you should see that it gives pretty similar results to the above.
Extra credit
The above is the basic concept of procedurally generated terrain for a two-dimensional world. Before we move on to talking about the practicalities of making a replayable game with random numbers,
let's look at a few more neat techniques based around noise.
Since we can play arbitrarily with the amplitude (height) and frequency (wide...ness) of the wave shapes produced by these noise functions, we can play one final trick to make our hills even
lovelier. In the world below (Perlinland?), we generate a "base" ground map from Perlin noise just like we have been up to this point (that's the black dots). We then generate another,
lower-amplitude and higher-frequency Perlin noise (the pink / red dots) and add that noise to the base noise. This gives a cool, weathered-looking landscape with a bit more richness.
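In code, the layering might look something like this sketch, assuming a perlin(x) function that returns values in [-1, 1] (the name is illustrative):
function getGroundHeight(x) {
  var base = perlin(x);               // broad, low-frequency hills (the black dots)
  var detail = 0.25 * perlin(x * 4);  // higher-frequency, lower-amplitude noise (the pink / red dots)
  return base + detail;               // the weathered, combined terrain
}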
Layering like this is a valuable technique in coercing the relatively smooth outputs of noise functions into the various shapes and textures desired for creative effect. I mentioned in the
introduction that the tuning and shaping of procedurally generated content is a huge part of the skill of creating games that use it. It's one of the reasons that it's wrong to think of procedural
generation as a way for games designers to escape creative, artistic work. It just takes a slightly different form than painting textures and hand-crafting maps.
What about some weather? Let's add some rain. Guess how we'll do it!
Let's add another noise-generated layer, this time for some measure of rain intensity. I've drawn on this layer as blue dots below, along with a rain threshold - when the rain intensity goes above
this level, it rains. When it's below the line, it doesn't.
We can keep going on this theme as long as we like. For example, why don't we throw in a temperature function? It could tell us whether our rain should actually be snow! This time we have a yellow
series and threshold - when the temperature's below the level, we make the rain into snow.
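Putting the two thresholds together might look something like this, where rainNoise and tempNoise stand in for two independent noise layers and the threshold constants are tuning values of your choosing:
function getWeather(x) {
  var raining = rainNoise(x) > RAIN_THRESHOLD;   // the blue series is above its line
  var freezing = tempNoise(x) < SNOW_THRESHOLD;  // the yellow series is below its line
  if (!raining) return "clear";
  return freezing ? "snow" : "rain";
}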
Get it? We can keep using different noise functions here and there throughout our game to add loot, determine how much foliage should appear on our hillsides, calculate risk of enemy encounters, ...
the list goes on.
True randomness
Let's talk about the "regenerate" buttons in those examples above. Every time you click them, you get a different set of Perlin or Simplex noise-powered hills. This works because noise functions such
as these rely heavily on generating lots of random numbers, which it turns out are slightly tricky beasts.
To start with, truly random numbers sometimes don't look very random. It's stupid but it's true. A true random number generator (they do exist) can happily spit out a sequence like:
0, 0, 0, 0, 0, 0, 0.2, 0, 0, 0.1, 0, 0, ...
Which is pretty much a recipe for boring hills. Theoretically the above sequence is no less likely to be produced by a true random number generator than any other more interesting sequence.
Secondly, truly random numbers are too random. By which I mean that they're entirely unpredictable, and with unpredictability comes unreproducibility. It's great for your game to be able to spit out
any one of millions of different possible worlds, but for many reasons it's very important to games developers (and software developers generally) to be able to have the game do the same thing over
and over too. Automatic testing relies heavily on this quality since it's hard to define what should be produced by your code if by its very nature that changes every time it runs.
The players care, too: lots of games give players the ability to share unique IDs, usually referred to as seeds (we're getting to that), which will guaranteeably always generate the exact same world.
There are communities dedicated to listing seeds that generate especially good worlds. How is this possible if the algorithms rely on random numbers?
The answer is... they don't. They rely on psuedorandom numbers, which are a different kettle of fish entirely.
Pseudorandom numbers are generated (shock) by pseudorandom number generators, or PRNGs. Avoiding a very formal technical definition, PRNGs are procedures for generating long lists of random-seeming
numbers which avoid the two problems with truly random numbers I've spoken about above. Two consecutive numbers generated by a PRNG are vanishingly unlikely to be the same, and I can force a PRNG to
generate the exact same sequence of numbers as I once saw it do in the past.
This is achieved by using something called a seed (sometimes called a random seed).
PRNGs are always created with an initial input. This input can take many forms - a number between 0 and 1, or an integer, or a string, whatever. The important thing is that once the PRNG has been
seeded with this value, the sequence of pseudorandom numbers it produces after that point will be exactly the same every time.
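One small, widely used example is mulberry32, shown here in JavaScript:
function mulberry32(seed) {
  return function() {
    var t = seed += 0x6D2B79F5;                 // advance the internal state
    t = Math.imul(t ^ t >>> 15, t | 1);
    t ^= t + Math.imul(t ^ t >>> 7, t | 61);
    return ((t ^ t >>> 14) >>> 0) / 4294967296; // a number in [0, 1)
  };
}
var rng = mulberry32(12345);
rng(); // always the same first value for seed 12345
rng(); // and the same second value, and so on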
For a cool example, look no further than No Man's Sky, Hello Games's procedural poster child. In No Man's Sky there are no map seeds distributed to the players, but they're integral to the entire
experience. Or I should say, it is. Every copy of No Man's Sky is distributed with the same base seed in the code. This means that the infinite universe of all possibility that is the game will be
the same infinite universe of all possibility for every single player. And not a byte of it needs to be stored on some server, or on your hard disk, because every bit of it is procedurally generated.
The same way, every time.
Great, show me more
Gladly! All of the above is just the tip of a huge, infinite iceberg. Let's wrap up with some nice examples and pictures - procedural generation is a rich field and we've just talked about some of
the basics.
A lot of this post focussed on what we can do with a one-dimensional noise function such as Perlin or Simplex Noise. In fact, we can generate two-, three-, and any general n-dimensional versions of
these noises. If you imagine one-dimensional noise as a single line on a graph that rises and falls just like the hills in our simple games above, two-dimensional noise looks something like a sheet
that's been laid over some hills in the real world.
The top half of that image is as I was describing, and the bottom half is another way to visualise two-dimensional noise. Usually called a heatmap, in this case low noise values have been coloured in
black, high in white and everything in between in various grays. I've set up a live version of this for two-dimensional Simplex noise below:
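Producing such a grayscale map is just a double loop over the pixels; here's a sketch assuming a noise2D(x, y) function returning values in [-1, 1] and a setPixel helper for drawing (both names, and the size and scale values, are illustrative):
var width = 256, height = 256, scale = 0.02;
for (var py = 0; py < height; py++) {
  for (var px = 0; px < width; px++) {
    var n = noise2D(px * scale, py * scale);   // sample the 2D noise
    var gray = Math.floor((n + 1) / 2 * 255);  // map [-1, 1] onto [0, 255]
    setPixel(px, py, gray, gray, gray);        // low values black, high values white
  }
}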
There's loads of stuff we can do with this kind of noise! We could use it to generate height, temperature and precipitation maps for terrain over a 3-dimensional map's surface as we did in two
dimensions above. Visual effects also become available to us - if we continuously generate similar two-dimensional slices like I showed above, we get something that looks pleasantly like the surface
of water (press "Play" and excuse the resolution):
If you make it red, it serves as a pretty passable flame. These techniques allow for good graphical textures to be generarated on-the-fly by the game rather than shipping with them in storage! In
fact, the ability of Perlin noise to emulate various natural processes and textures is specifically what Ken Perlin won that Academy Award for.
I'll finish up there - I think this is a super interesting topic, and there's so much more to it than I've covered here. I've been deliberately light on procedural generation's application to
graphics and sound since I wanted to focus on the example of world generation, but rest assured it's used for those and much more. For example, this little app lets you enter a seed from which to
generate a piece of music. A quick search for "procedurally generated graphics" or "procedurally generated music" will throw up all kinds of rabbit holes for you to follow!
Anyway, ta-ra for now! See you next time. Don't forget to follow on Twitter or via RSS for updates in future.
A god damn thrill ride or your money back | {"url":"https://unwttng.com/how-does-procedural-generation-work-random-noise/","timestamp":"2024-11-07T01:13:16Z","content_type":"text/html","content_length":"840827","record_id":"<urn:uuid:c9b58e62-036b-40c5-8072-eece62de38f4>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00862.warc.gz"} |
Flashcards - Math cards
1. Multiplication Principle
If there are n1 ways to perform task 1, n2 ways to perform task 2, n3 ways to perform task 3, ... and nk ways to perform task k, then there are n1 x n2 x n3 x ... x nk different ways to perform all k
tasks in succession.
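(Worked illustration, added here rather than copied from the cards: choosing an outfit from 3 shirts, 2 pairs of pants, and 4 hats can be done in 3 x 2 x 4 = 24 ways.)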
2. Weighted Voting System
a weighted voting system consists of two alternatives and a group of voters, each of whom may be allowed more than one vote. The two alternatives are propositions, which voters either accept or reject.
3. Restriction on the Quota
□ The value of q is restricted so that no "ridiculous" or impossible resolution passes, and so that it is always possible to either pass or block a resolution.
□ The value of q must be greater than half of the total number of votes (the sum of the votes of all participants) and no more than the total number of votes.
4. Dictators
In a WVS, a voter with weight equal to or greater than the quota q is said to be a dictator. This voter has all the power and can pass or block any resolution.
5. Dummy Voter
any voter whose votes are not needed to pass or block any motion
6. Veto Power
a voter is said to have veto power if and only if her/his votes are absolutely necessary for a motion to pass.
7. Paradox of New members
occurs when the addition of a new member to a WVS increases (instead of decreases) the voting power of some of the original members.
8. Pivotal Voters
a voter is said to be pivotal in a sequential coalition (permutation) if and only if the cumulative weight of that voter plus the weights of all her predecessors is equal to or greater than the quota, while the weights of the predecessors alone fall short of the quota
9. Shapley-Shubik Power Index(SSPI)
Shapley-Shubik Power Index (SSPI): the ratio of the number of permutations in which a voter is pivotal to the total number of permutations of all voters in the system.
10. Properties of SSPI
□ Dummy voters have SSPI = 0 and dictators have SSPI = 1. · The sum of the SSPI values in each WVS equals 1. · Voters with equal weights have equal SSPI values.
□ SSPI values are real numbers in the closed interval [0, 1].
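Worked illustration (added here, not one of the original cards): in the WVS [4: 3, 2, 1] with voters A, B, C, the 3! = 6 permutations ABC, ACB, BAC, BCA, CAB, CBA have pivotal voters B, C, A, A, A, A respectively. So SSPI(A) = 4/6, SSPI(B) = SSPI(C) = 1/6, and the three values sum to 1.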
11. Critical voter
a voter is said to be critical in a coalition if and only if the voter’s weight is absolutely necessary for the coalition to win. In a blocking coalition, a voter is said to be critical in a
coalition if and only if the voter’s weight is absolutely necessary for the coalition to block. Without the votes of a critical voter, the coalition cannot get its way.
12. Banzhaf Power Index (BPI):
Banzhaf Power Index (BPI): the number of times that a voter is critical in winning and blocking coalitions (subsets of voters).
13. Properties of BPI values:
□ · Voters with equal weights have equal BPI values.
□ · The sum of BPI values is not fixed at any particular number; it is simply some whole number greater than zero. · The number of times a voter is critical in winning coalitions is the same as the number of times that the voter is critical in blocking coalitions.
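Worked illustration (added here, not one of the original cards): in [4: 3, 2, 1] the winning coalitions are {A,B}, {A,C}, and {A,B,C}; A is critical in all three, B only in {A,B}, and C only in {A,C}. By the property above, each voter is critical the same number of times in blocking coalitions, so counting winning and blocking coalitions together gives BPI(A) = 6 and BPI(B) = BPI(C) = 2.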
14. Coalition
a group of voters who are either all in favor of or all against a resolution. In addition, they have enough votes to win or enough votes to block the resolution. Therefore a coalition is a subset
of all voters in a WVS. Note: a set of N voters has 2^N subsets.
15. Sequential Coalitions
is simply any permutation of all voters in a WVS. The total number of permutations of all N elements of a set is given by N!. If a WVS consists of N voters then the total number of permutations of
all voters in the system is N!. Ex: 3 voters = 3! = 6 permutations. Then write down all of those permutations.
16. Winning Coalition
is a subset (non-empty) of voters in which every voter favors a resolution and the sum of the weights is enough to make the resolution pass. A resolution passes whenever the total number of votes
of all voters in favor is equal to or greater than q (the quota). The value of q is the threshold value of a winning coalition.
17. Blocking Coalition
is a subset (non-empty) of voters who oppose a resolution and collectively have enough votes to block it. The threshold value of a blocking coalition is W − Q + 1, where W is the total weight (the sum of all voters' votes) and Q is the quota.
18. Losing Coalition
a coalition that cannot get its way (not enough votes to win/ or block).
19. Minimal Winning Coalition
a coalition in which every member is a critical voter
20. Minimal blocking Coalition
a coalition in which every member is a critical voter in blocking
21. Chair Veto
a WVS that contains exactly one voter with veto power. All minimal winning coalitions (MWCs) contain the voter with veto power.
22. Consensus
every voter has veto power ex: [9:5,3,2]
23. Equivalent weighted voting system
systems for which the sets of winning coalitions are the same, except for the symbols. Ex: [5:4,3,1] and [20:15,12,6] are equivalent when written with X=A, Y=B, Z=C | {"url":"https://freezingblue.com/flashcards/71580/preview/math-cards","timestamp":"2024-11-06T12:20:44Z","content_type":"text/html","content_length":"18087","record_id":"<urn:uuid:ca580e65-c4e0-4f2e-8eae-02f2ce190d16>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00847.warc.gz"}
Integer, Natural, and Positive
LTS Haskell 22.40: 0.1.4.0
Stackage Nightly 2023-12-26: 0.1.4.0
Latest on Hackage: 0.1.4.0
Maintained by Chris Martin, Julie Moronuki
This version can be pinned in stack with: integer-types-0.1.4.0@sha256:47d356d87ee96e38f98c6697ac1bcc903c0bb7c1cc99b17ee642cabc695d4e58,2198
Module documentation for 0.1.4.0
Core types: Integer, Natural, Positive
The primary module of the integer-types package is Integer, which exports the following integer-like types:
Type Range
Integer (-∞, ∞)
Natural [0, ∞)
Positive [1, ∞)
The Signed type
In addition to Integer, there is also an equivalent type called Signed that is represented as:
data Signed = Zero | NonZero Sign Positive
data Sign = MinusSign | PlusSign
Signed also comes with bundled pattern synonyms that allow it to be used as if it had the following definition:
data Signed = Minus Positive | Zero | Plus Positive
Monomorphic conversions
The following modules contain monomorphic conversion functions:
• Integer.Integer
• Integer.Natural
• Integer.Positive
• Integer.Signed
For example, you can convert from Positive to Integer using either Integer.Positive.toInteger or Integer.Integer.fromPositive, which are two names for the same function of type Positive -> Integer.
Since not all integers are positive, the corresponding function in the reverse direction has a Maybe codomain. Integer.Integer.toPositive and Integer.Positive.fromInteger have the type Integer ->
Maybe Positive.
Polymorphic conversions
The Integer module exports two polymorphic conversion functions. The first is for conversions that always succeed, such as Positive -> Integer.
convert :: IntegerConvert a b => a -> b
The second is for conversions that may fail because they convert to a subset of the domain, such as Integer -> Maybe Positive.
narrow :: IntegerNarrow a b => a -> Maybe b
Finite integer subsets
In addition to the conversion utilities discussed above, this library also provides some minimal support for converting to/from the Word and Int types. These are system-dependent finite subsets of
Integer that are sometimes used for performance reasons.
toFinite :: (ConvertWithFinite a, Finite b) => a -> Maybe b
fromFinite :: (ConvertWithFinite a, Finite b) => b -> Maybe a
For example, toFinite may specialize as Positive -> Maybe Int, and fromFinite may specialize as Int -> Maybe Positive.
Monomorphic subtraction
For the Integer and Signed types that represent the full range of integers, the standard arithmetic operations in the Num and Integral classes are suitable.
For Natural and Positive, which are subsets of the integers, the standard classes are not entirely appropriate. Consider, for example, subtraction.
(-) :: Num a => a -> a -> a
Natural and Positive do belong to the Num class, but subtraction and some other operations are partial; the expression 1 - 2 throws instead of returning a value, because the integer result -1 is
negative and not representable by either Natural or Positive.
For this reason, Natural and Positive have their own subtraction functions that return Signed.
-- from Integer.Positive
subtract :: Positive -> Positive -> Signed
-- from Integer.Natural
subtract :: Natural -> Natural -> Signed
Polymorphic subtraction
In addition to the (-) method from the Num class and the subtract functions for Natural and Positive, there are some polymorphic subtraction functions in the Integer module. subtractSigned
generalizes the two monomorphic functions discussed in the previous section. Its codomain is Signed.
subtractSigned :: forall a. Subtraction a =>
a -> a -> Signed
subtractInteger does the same thing, but gives the result as Integer instead of Signed.
subtractInteger :: forall a. Subtraction a =>
a -> a -> Integer
The subtract function generalizes further. Its domain is any subtractable type (Natural, Positive, Integer, or Signed) and its codomain is any type that can represent the full range of integers
(Integer or Signed).
subtract :: forall b a. (Subtraction' b, Subtraction a) =>
a -> a -> b
Added module Integer.AbsoluteDifference
Added to the Integer module: AbsoluteDifference (absoluteDifference)
Date: 2023-07-15
Added modules Integer.Increase, Integer.StrictlyIncrease
Added classes to the Integer module: Increase (increase), StrictlyIncrease (strictlyIncrease)
Added to the Integer.Integer module: increase, strictlyIncrease
Added to the Integer.Natural module: strictlyIncrease
Added to the Integer.Positive module: increase
Added to the Integer.Signed module: increase, strictlyIncrease, one, addOne, subtractOne
Date: 2023-07-14
Add Read instance for Positive
Date: 2023-06-26
Add Hashable instances for Positive, Sign, and Signed
Add Enum and Bounded instances for Sign
Date: 2023-04-22
Change type of Integer.Natural.addOne from Integer -> Integer to Natural -> Positive
New functions:
Integer.Natural.length :: [a] -> Natural
Integer.Positive.length :: NonEmpty a -> Positive
Date: 2023-02-09
Consolidate all the test suites into one
Remove Safe pragmas
Date: 2023-01-16
Initial release
Date: 2022-11-29 | {"url":"https://www.stackage.org/lts-21.25/package/integer-types-0.1.4.0","timestamp":"2024-11-07T00:06:26Z","content_type":"text/html","content_length":"22436","record_id":"<urn:uuid:59a90952-8cff-40d1-bec5-b220192bc31d>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00673.warc.gz"} |
FREE Engaging Fine Motor Skills Number Recognition Activity
FREE Fine Motor Skills Number Recognition Activity
This FREE number recognition activity gets children recognizing numbers, counting, and strengthening their fine motor skills.
My little girl struggles with learning. Developmental stuff just does not come easy for her. Walking, crawling, and talking were skills that were highly celebrated when she finally conquered them.
Right now, her Special Ed teacher and I are working hard to help her develop her fine motor skills and begin to recognize and name her letters and numbers.
So this is where this number recognition activity came from. She gets to work on strengthening the muscles in her hand while counting out clothespins that represent the number in front of her. She
LOVES it, and I’m loving all the skills involved.
With just a little cutting this activity is ready to go.
1. First, print off the numbers on card stock paper.
2. Next, cut and then laminate.
3. Finally, gather up at least nine clothespins.
Number Recognition Activity
As easy as these are to make, they are that easy to use.
1. First, hand a number to your child.
2. Next, ask them what number it is.
3. Either confirm or gently say the right answer. Then have them correctly trace the number with their finger saying the name of the number as they do it. For correct number formation, check out
these pages by The Measured Mom.
4. Now, have them count out the correct number of clothespins.
5. After they have counted out the correct number, have them place the clothespins on the number.
6. Finally, count the clothespins and say the name of the number one more time.
It is that easy!!!!
Other Ways to Use
My little girl’s twin brother does not have the same struggles. He already recognizes his numbers and counting out that many is a breeze for him. But he likes to join in the activities too.
Lately, he has been into adding, and of course, while we were sitting there he asks, “What does 2 + 3 equal?”
I honestly had not intended the numbers to be used this way, but you never turn down a learning opportunity, right?
“Count all the dots on the two and three,” I told him.
His sweet voice counted away and in a few seconds he triumphantly shouted, “five!!!”
“Yes, that is correct. 2 + 3 = 5.”
Got to love spur of the moment learning!!!!
I hope you enjoy these numbers as much as we have!!
You may also like:
This fun activity is based on a book that focuses on number recognition. Once the game is created children will be counting and moving up and down on a game board they created! | {"url":"https://youvegotthismath.com/number-recognition-activity/","timestamp":"2024-11-05T04:15:58Z","content_type":"text/html","content_length":"342564","record_id":"<urn:uuid:6f232959-864d-40ef-b649-089466d5056e>","cc-path":"CC-MAIN-2024-46/segments/1730477027870.7/warc/CC-MAIN-20241105021014-20241105051014-00846.warc.gz"}
Parallel Sorting
Sorting is a fundamental problem in computer science. It is also a problem with a limit on algorithmic efficiency. As such, it is a good candidate for being parallelized.
Some sorting algorithms are easier to parallelize than others. Some, like insertion sort and bubble sort are not really parallelizable at all. One algorithm that is especially easy to parallelize is
merge sort.
The basic approach of merge sort is to split the array into two halves, a left half and a right half. Then we sort each half recursively. Then to finish up, we merge the two sorted halves back
together. Because we recursively sort the sub-arrays separately, those halves can be sorted in parallel without changing the behavior of the algorithm.
Serial Merge Sort
The serial merge sort algorithm is a recursive algorithm which can be given as:
1. If the array has 0 or 1 items, we're done.
2. Mergesort the left half.
3. Mergesort the right half.
4. Merge the two halves together to make a new, sorted, array.
Below is an implementation of this algorithm:
void mergeSort(int* array, int start, int end) {
    if(start < end) {
        int middle = (start + end) / 2;
        /* sort left half */
        mergeSort(array, start, middle);
        /* sort right half */
        mergeSort(array, middle + 1, end);
        /* merge the two halves */
        merge(array, start, end);
    }
}
Which part of this algorithm can be parallelized?
Parallel Merge Sort
The two recursive calls to sort each half of the array can be done in parallel. This is most easily done with OpenMP tasks. The following program does this:
void mergeSort(int* array, int start, int end) {
    if(start < end) {
        printf("Thread %d is sorting %d through %d\n", omp_get_thread_num(), start, end);
        int middle = (start + end) / 2;
        /* sort both halves in parallel */
        #pragma omp parallel
        #pragma omp single
        {
            #pragma omp task
            mergeSort(array, start, middle);
            mergeSort(array, middle + 1, end);
        }
        /* merge the two halves */
        merge(array, start, end);
    }
}
The simplest way to spawn a single thread is with the OpenMP task construct, which creates one thread for the statement following the line. These must be in a parallel block. To prevent
multiple copies of each task from starting, we also use a single block.
So the effect of this code is that we launch the first recursive call in a separate thread, then carry on and do the second call. Then at the end of the parallel block all threads launched are joined.
Performance Comparison
In order to test the performance of the parallel merge sort implementation compared to the sequential one, we can use the following two programs:
Each of these sorts 50 million integers using merge sort, either sequentially or in parallel.
However, as written the parallel one will probably not complete. Even if it did complete, it would be much slower than the sequential version. Why is this? How can it be fixed?
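One common remedy (a sketch of my own, not necessarily the author's intended answer): the version above opens a brand-new parallel region at every level of the recursion, so an enormous number of threads is created and the runtime spends its time managing them rather than sorting. Entering the parallel region exactly once and refusing to spawn tasks for small sub-arrays avoids both problems. The CUTOFF value below is a hypothetical threshold that would need tuning.

#define CUTOFF 10000   /* hypothetical threshold: below this, sort sequentially */

void mergeSortTask(int* array, int start, int end) {
    if (start < end) {
        int middle = (start + end) / 2;
        if (end - start > CUTOFF) {
            /* spawn a task only for large sub-arrays */
            #pragma omp task
            mergeSortTask(array, start, middle);

            mergeSortTask(array, middle + 1, end);

            /* wait for the spawned task before merging */
            #pragma omp taskwait
        } else {
            mergeSortTask(array, start, middle);
            mergeSortTask(array, middle + 1, end);
        }
        merge(array, start, end);
    }
}

void mergeSort(int* array, int start, int end) {
    /* enter the parallel region exactly once */
    #pragma omp parallel
    #pragma omp single
    mergeSortTask(array, start, end);
}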
Copyright © 2024 Ian Finlayson | Licensed under a Creative Commons BY-NC-SA 4.0 License. | {"url":"https://ianfinlayson.net/class/cpsc425/notes/15-sorting","timestamp":"2024-11-13T22:52:57Z","content_type":"text/html","content_length":"5925","record_id":"<urn:uuid:bbe3fc10-1232-4a40-87ce-dea84c35fd0f>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00686.warc.gz"} |
Aggregate Functions
Aggregate functions take multiple values from documents, perform calculations, and return a single value as the result. The function names are case insensitive.
You can only use aggregate functions in SELECT, LETTING, HAVING, and ORDER BY clauses. When using an aggregate function in a query, the query operates as an aggregate query.
In Couchbase Server Enterprise Edition, aggregate functions can also be used as window functions when they are used with a window specification, which is introduced by the OVER keyword.
In Couchbase Server 7.0 and later, window functions (and aggregate functions used as window functions) may specify their own inline window definitions, or they may refer to a named window defined by
the WINDOW clause elsewhere in the query. By defining a named window with the WINDOW clause, you can reuse the window definition across several functions in the query, potentially making the query
easier to write and maintain.
This section describes the generic syntax of aggregate functions. Refer to sections below for details of individual aggregate functions.
aggregate-function ::= aggregate-function-name '(' ( aggregate-quantifier? expr |
( path '.' )? '*' ) ')' filter-clause? over-clause?
aggregate-quantifier Aggregate Quantifier
filter-clause FILTER Clause
over-clause OVER Clause
Aggregate functions take a single expression as an argument, which is used to compute the aggregate function. The COUNT function can instead take a wildcard (*) or a path with a wildcard (path.*) as
its argument.
Aggregate Quantifier
aggregate-quantifier ::= 'ALL' | 'DISTINCT'
The aggregate quantifier determines whether the function aggregates all values in the group, or distinct values only.
All objects are included in the computation.
Only distinct objects are included in the computation.
This quantifier can only be used with aggregate functions.
This quantifier is optional. If omitted, the default value is ALL.
FILTER Clause
filter-clause ::= 'FILTER' '(' 'WHERE' cond ')'
The FILTER clause enables you to specify which values are included in the aggregate. This clause is available for aggregate functions, and aggregate functions used as window functions. (It is not
permitted for dedicated window functions.)
The FILTER clause is useful when a query contains several aggregate functions, each of which requires a different condition.
[Required] Conditional expression. Values for which the condition resolves to TRUE are included in the aggregation.
The conditional expression is subject to the same rules as the conditional expression in the query WHERE clause, and the same rules as aggregation operands. It may not contain a subquery, a window
function, or an outer reference.
If the query block contains an aggregate function which uses the FILTER clause, the aggregation is not pushed down to the indexer. Refer to Grouping and Aggregate Pushdown for more details.
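For illustration only (this example is not part of the original page and assumes the travel-sample hotel documents' boolean free_breakfast field), the FILTER clause lets one query mix filtered and unfiltered aggregates:

SELECT COUNT(*) AS TotalHotels,
       COUNT(*) FILTER (WHERE free_breakfast = TRUE) AS HotelsWithFreeBreakfast
FROM hotel;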
OVER Clause
over-clause ::= 'OVER' ( '(' window-definition ')' | window-ref )
The OVER clause introduces the window specification for the function. There are two ways of specifying the window.
• An inline window definition specifies the window directly within the function call. It is delimited by parentheses () and has exactly the same syntax as the window definition in a WINDOW clause.
For further details, refer to Window Definition.
• A window reference is an identifier which refers to a named window. The named window must be defined by a WINDOW clause in the same query block as the function call. For further details, refer to
WINDOW Clause.
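As a rough sketch (not from the original page; it assumes Enterprise Edition window-function support and the travel-sample airport fields airportname, country, and geo.alt), an aggregate used as a window function with an inline window definition might look like:

SELECT airportname,
       AVG(geo.alt) OVER (PARTITION BY country) AS avgAltitudeInCountry
FROM airport;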
ARRAY_AGG( [ ALL | DISTINCT ] expression)
Return Value
With the ALL quantifier, or no quantifier, returns an array of the non-MISSING values in the group, including NULL values.
With the DISTINCT quantifier, returns an array of the distinct non-MISSING values in the group, including NULL values.
To try the examples in this section, set the query context to the inventory scope in the travel sample dataset. For more information, see Query Context.
List all values of the Cleanliness reviews given:
SELECT ARRAY_AGG(reviews[0].ratings.Cleanliness) AS Reviews
FROM hotel;
"Reviews": [
// ...
List all unique values of the Cleanliness reviews given:
SELECT ARRAY_AGG(DISTINCT reviews[0].ratings.Cleanliness) AS UniqueReviews
FROM hotel;
"UniqueReviews": [
AVG( [ ALL | DISTINCT ] expression)
This function has an alias MEAN().
Return Value
With the ALL quantifier, or no quantifier, returns the arithmetic mean (average) of all the number values in the group.
With the DISTINCT quantifier, returns the arithmetic mean (average) of all the distinct number values in the group.
Returns NULL if there are no number values in the group.
To try the examples in this section, set the query context to the inventory scope in the travel sample dataset. For more information, see Query Context.
Find the average altitude of airports in the airport keyspace:
SELECT AVG(geo.alt) AS AverageAltitude FROM airport;
"AverageAltitude": 870.1651422764228
Find the average number of stops per route vs. the average of distinct numbers of stops:
SELECT AVG(ALL stops) AS AvgAllStops FROM route;
Results in 0.0002 since nearly all routes have 0 stops.
SELECT AVG(DISTINCT stops) AS AvgDistinctStops FROM route;
Results in 0.5 since all routes have only 1 or 0 stops.
Return Value
Returns count of all the input rows for the group, regardless of value. ^[1]
To try the examples in this section, set the query context to the inventory scope in the travel sample dataset. For more information, see Query Context.
Find the number of documents in the landmark keyspace:
SELECT COUNT(*) AS CountAll FROM landmark;
COUNT( [ ALL | DISTINCT ] expression)
Return Value
With the ALL quantifier, or no quantifier, returns count of all the non-NULL and non-MISSING values in the group. ^[1]
With the DISTINCT quantifier, returns count of all the distinct non-NULL and non-MISSING values in the group.
To try the examples in this section, set the query context to the inventory scope in the travel sample dataset. For more information, see Query Context.
Find the number of documents with an airline route stop in the route keyspace regardless of its value:
SELECT COUNT(stops) AS CountOfStops FROM route;
"CountOfStops": 24024
Find the number of unique values of airline route stops in the route keyspace:
SELECT COUNT(DISTINCT stops) AS CountOfDistinctStops
FROM route;
"CountOfSDistinctStops": 2 (1)
1 Results in 2 because there are only 0 or 1 stops.
COUNTN( [ ALL | DISTINCT ] expression )
Return Value
With the ALL quantifier, or no quantifier, returns a count of all the numeric values in the group. ^[1]
With the DISTINCT quantifier, returns a count of all the distinct numeric values in the group.
The count of numeric values in a mixed group.
SELECT COUNTN(list.val) AS CountOfNumbers
FROM [
] AS list;
"CountOfNumbers": 3
The count of unique numeric values in a mixed group.
SELECT COUNTN(DISTINCT list.val) AS CountOfNumbers
FROM [
] AS list;
"CountOfNumbers": 2
MAX( [ ALL | DISTINCT ] expression)
Return Value
Returns the maximum non-NULL, non-MISSING value in the group in SQL++ collation order.
This function returns the same result with the ALL quantifier, the DISTINCT quantifier, or no quantifier.
To try the examples in this section, set the query context to the inventory scope in the travel sample dataset. For more information, see Query Context.
Max of an integer field
Find the northernmost latitude of any hotel in the hotel keyspace:
SELECT MAX(geo.lat) AS MaxLatitude FROM hotel;
"MaxLatitude": 60.15356
Max of a string field
Find the hotel whose name is last alphabetically in the hotel keyspace:
SELECT MAX(name) AS MaxName FROM hotel;
"MaxName": "pentahotel Birmingham"
That result might have been surprising since lowercase letters come after uppercase letters and are therefore "higher" than uppercase letters. To avoid this uppercase/lowercase confusion, you should
first make all values uppercase or lowercase, as in the following example.
Max of a string field, regardless of case
Find the hotel whose name is last alphabetically in the hotel keyspace:
SELECT MAX(UPPER(name)) AS MaxName FROM hotel;
"MaxName": "YOSEMITE LODGE AT THE FALLS"
MEAN( [ ALL | DISTINCT ] expression)
This function is an alias for AVG().
MEDIAN( [ ALL | DISTINCT ] expression)
Return Value
With the ALL quantifier, or no quantifier, returns the median of all the number values in the group. If there is an even number of number values, returns the mean of the median two values.
With the DISTINCT quantifier, returns the median of all the distinct number values in the group. If there is an even number of distinct number values, returns the mean of the median two values.
Returns NULL if there are no number values in the group.
To try the examples in this section, set the query context to the inventory scope in the travel sample dataset. For more information, see Query Context.
Find the median altitude of airports in the airport keyspace:
SELECT MEDIAN(geo.alt) AS MedianAltitude
FROM airport;
"MedianAltitude": 361.5
Find the median of distinct altitudes of airports in the airport keyspace:
SELECT MEDIAN(DISTINCT geo.alt) AS MedianDistinctAltitude FROM airport;
"MedianDistinctAltitude": 758
MIN( [ ALL | DISTINCT ] expression)
Return Value
Returns the minimum non-NULL, non-MISSING value in the group in SQL++ collation order.
This function returns the same result with the ALL quantifier, the DISTINCT quantifier, or no quantifier.
To try the examples in this section, set the query context to the inventory scope in the travel sample dataset. For more information, see Query Context.
Min of an integer field
Find the southernmost latitude of any hotel in the hotel keyspace:
SELECT MIN(geo.lat) AS MinLatitude FROM hotel;
"MinLatitude": 32.68092
Min of a string field
Find the hotel whose name is first alphabetically in the hotel keyspace:
SELECT MIN(name) AS MinName FROM hotel;
"MinName": "'La Mirande Hotel"
That result might have been surprising since some symbols come before letters and are therefore "lower" than letters. To avoid this symbol confusion, you can specify letters only, as in the following
Min of a string field, regardless of preceding non-letters
Find the first hotel alphabetically in the hotel keyspace:
SELECT MIN(name) FILTER (WHERE SUBSTR(name,0)>="A") AS MinName
FROM hotel;
"MinName": "AIRE NATURELLE LE GROZEAU Aire naturelle"
STDDEV( [ ALL | DISTINCT ] expression)
Return Value
With the ALL quantifier, or no quantifier, returns the corrected sample standard deviation of all the number values in the group.
With the DISTINCT quantifier, returns the corrected sample standard deviation of all the distinct number values in the group.
Returns NULL if there are no number values in the group.
To try the examples in this section, set the query context to the inventory scope in the travel sample dataset. For more information, see Query Context.
Find the sample standard deviation of all values:
SELECT STDDEV(reviews[0].ratings.Cleanliness) AS StdDev
FROM hotel
WHERE city="London";
"StdDev": 2.0554275433769753
Find the sample standard deviation of a single value:
SELECT STDDEV(reviews[0].ratings.Cleanliness) AS StdDevSingle
FROM hotel
WHERE name="Sachas Hotel";
"StdDevSingle": 0 (1)
1 There is only one matching result in the input, so the function returns 0.
Find the sample standard deviation of distinct values:
SELECT STDDEV(DISTINCT reviews[0].ratings.Cleanliness) AS StdDevDistinct
FROM hotel
WHERE city="London";
"StdDevDistinct": 2.1602468994692865
STDDEV_POP( [ ALL | DISTINCT ] expression)
Return Value
With the ALL quantifier, or no quantifier, returns the population standard deviation of all the number values in the group.
With the DISTINCT quantifier, returns the population standard deviation of all the distinct number values in the group.
Returns NULL if there are no number values in the group.
To try the examples in this section, set the query context to the inventory scope in the travel sample dataset. For more information, see Query Context.
Find the population standard deviation of all values:
SELECT STDDEV_POP(reviews[0].ratings.Cleanliness) AS PopStdDev
FROM hotel
WHERE city="London";
"PopStdDev": 2.0390493736539432
Find the population standard deviation of distinct values:
SELECT STDDEV_POP(DISTINCT reviews[0].ratings.Cleanliness) AS PopStdDevDistinct
FROM hotel
WHERE city="London";
"PopStdDevDistinct": 1.9720265943665387
STDDEV_SAMP( [ ALL | DISTINCT ] expression)
A near-synonym for STDDEV(). The only difference is that STDDEV_SAMP() returns NULL if there is only one matching element.
To try the examples in this section, set the query context to the inventory scope in the travel sample dataset. For more information, see Query Context.
Find the sample standard deviation of a single value:
SELECT STDDEV_SAMP(reviews[0].ratings.Cleanliness) AS StdDevSamp
FROM hotel
WHERE name="Sachas Hotel";
"StdDevSamp": null (1)
1 There is only one matching result in the input, so the function returns NULL.
SUM( [ ALL | DISTINCT ] expression)
Return Value
With the ALL quantifier, or no quantifier, returns the sum of all the number values in the group.
With the DISTINCT quantifier, returns the arithmetic sum of all the distinct number values in the group.
Returns NULL if there are no number values in the group.
To try the examples in this section, set the query context to the inventory scope in the travel sample dataset. For more information, see Query Context.
Find the sum total of all airline route stops in the route keyspace:
SELECT SUM(stops) AS SumOfStops FROM route;
In the route keyspace, nearly all flights are non-stop (0 stops) and only six flights have 1 stop, so we expect 6 flights of 1 stop each, a total of 6.
"SumOfStops": 6 (1)
1 There are 6 routes with 1 stop each.
Find the sum total of all unique numbers of airline route stops in the route keyspace:
SELECT SUM(DISTINCT stops) AS SumOfDistinctStops FROM route;
"SumOfDistinctStops": 1 (1)
1 There are only 0 and 1 stops per route; and 0 + 1 = 1.
VARIANCE( [ ALL | DISTINCT ] expression)
Return Value
With the ALL quantifier, or no quantifier, returns the unbiased sample variance (the square of the corrected sample standard deviation) of all the number values in the group.
With the DISTINCT quantifier, returns the unbiased sample variance (the square of the corrected sample standard deviation) of all the distinct number values in the group.
Returns NULL if there are no number values in the group.
This function has a near-synonym VARIANCE_SAMP(). The only difference is that VARIANCE() returns NULL if there is only one matching element.
To try the examples in this section, set the query context to the inventory scope in the travel sample dataset. For more information, see Query Context.
Find the sample variance of all values:
SELECT VARIANCE(reviews[0].ratings.Cleanliness) AS Variance
FROM hotel
WHERE city="London";
"Variance": 4.224782386072708
Find the sample variance of a single value:
SELECT VARIANCE(reviews[0].ratings.Cleanliness) AS VarianceSingle
FROM hotel
WHERE name="Sachas Hotel";
"VarianceSingle": 0 (1)
1 There is only one matching result in the input, so the function returns 0.
Find the sampling variance of distinct values:
SELECT VARIANCE(DISTINCT reviews[0].ratings.Cleanliness) AS VarianceDistinct
FROM hotel
WHERE city="London";
"VarianceDistinct": 4.666666666666667
VARIANCE_POP( [ ALL | DISTINCT ] expression)
Return Value
With the ALL quantifier, or no quantifier, returns the population variance (the square of the population standard deviation) of all the number values in the group.
With the DISTINCT quantifier, returns the population variance (the square of the population standard deviation) of all the distinct number values in the group.
Returns NULL if there are no number values in the group.
To try the examples in this section, set the query context to the inventory scope in the travel sample dataset. For more information, see Query Context.
Find the population variance of all values:
SELECT VARIANCE_POP(reviews[0].ratings.Cleanliness) AS PopVariance
FROM hotel
WHERE city="London";
"PopVariance": 4.157722348198537
Find the population variance of distinct values:
SELECT VARIANCE_POP(DISTINCT reviews[0].ratings.Cleanliness) AS PopVarianceDistinct
FROM hotel
WHERE city="London";
"PopVarianceDistinct": 3.8888888888888893
VARIANCE_SAMP( [ ALL | DISTINCT ] expression)
A near-synonym for VARIANCE(). The only difference is that VARIANCE_SAMP() returns NULL if there is only one matching element.
To try the examples in this section, set the query context to the inventory scope in the travel sample dataset. For more information, see Query Context.
Find the sample standard deviation of a single value:
SELECT VARIANCE_SAMP(reviews[0].ratings.Cleanliness) AS VarianceSamp
FROM hotel
WHERE name="Sachas Hotel";
"VarianceSamp": null (1)
1 There is only one matching result in the input, so the function returns NULL.
VAR_POP( [ ALL | DISTINCT ] expression)
An alias for VARIANCE_POP().
VAR_SAMP( [ ALL | DISTINCT ] expression)
An alias for VARIANCE_SAMP().
Corrected Sample Standard Deviation
The corrected sample standard deviation is calculated according to the following formula.
\(s = \sqrt{\frac{1}{n-1} \sum_{i=1}^{n} (x_i - \bar{x})^2}\)
Population Standard Deviation
The population standard deviation is calculated according to the following formula.
\(\sigma = \sqrt{\frac{1}{N} \sum_{i=1}^{N} (x_i - \mu)^2}\)
1. When counting all the documents within a collection, this function usually relies on the collection statistics, which include any transaction records that may be stored in that collection. However, if the query performs an index scan using the primary index on that collection, counting all documents does not include any transaction records. | {"url":"https://docs.couchbase.com/server/current/n1ql/n1ql-language-reference/aggregatefun.html","timestamp":"2024-11-02T17:32:29Z","content_type":"text/html","content_length":"72515","record_id":"<urn:uuid:a05a9533-8e96-4e34-9d07-f24268a9be99>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00744.warc.gz"}
Alg1.6 Introduction to Quadratic Functions
In this unit, students study quadratic functions systematically. They look at patterns which grow quadratically and contrast them with linear and exponential growth. Then they examine other quadratic
relationships via tables, graphs, and equations, gaining appreciation for some of the special features of quadratic functions and the situations they represent. They analyze equivalent quadratic
expressions and how these expressions help to reveal important behavior of the associated quadratic function and its graph. They gain an appreciation for the factored, standard, and vertex forms of a
quadratic function and use these forms to solve problems.
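As a brief illustration (added here, not part of the original unit description): the same quadratic can be written in standard form f(x) = ax^2 + bx + c, in factored form f(x) = a(x - r_1)(x - r_2) where r_1 and r_2 are its zeros, and in vertex form f(x) = a(x - h)^2 + k where (h, k) is the vertex.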
A Different Kind of Change
Working with Quadratic Expressions | {"url":"https://im.kendallhunt.com/HS/teachers/1/6/index.html","timestamp":"2024-11-06T01:49:53Z","content_type":"text/html","content_length":"85530","record_id":"<urn:uuid:7bdc88ca-3660-4b03-bd55-724df5c30508>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00270.warc.gz"} |
fundamentals of finite element analysis
Great book for learning the basics of how FEA is performed; the math, the physics, covered really well. Just always think about this when approaching a new problem and it will come to you in time!
But does it mean that our plate will fail? Again, on the left the “automatic” scale. The fact that the difference between averaged and non-averaged outcomes is so small is thanks to the small
elements I used. Unsubscribe anytime. I’m so glad that you like my work! The first question a good FEA specialist should ask is: “do I even need to use FEA?”. The Finite Element Method: Its Basis and
Fundamentals, Concepts and Applications of Finite Element Analysis, 4th Edition, A First Course in the Finite Element Method, The Finite Element Method: Linear Static and Dynamic Finite Element
Analysis (Dover Civil and Mechanical Engineering), Introduction to the Finite Element Method 4E, Finite Element Analysis: Theory and Application With Ansys, Finite element analysis concepts: via
solidworks, Trickle Down Mindset: The Missing Element In Your Personal Success. And… I’ve seen people simply supporting the endplate on the entire area… not the best call for sure! It shows, that the
maximal stress is 488MPa. This new text, intended for the senior undergraduate finite element course in civil or mechanical engineering departments, gives students a solid, practical understanding of
the principles of the finite element method within a variety of engineering applications. Please try again. After viewing product detail pages, look here to find an easy way to navigate back to pages
you are interested in. Here, however, the thing is rather simple. You can use quality measures like Jacobian to help you along the way, but there isn’t a single strategy that leads to a “perfect
mesh”. There are some additional nuances to this! Assuming that you have super rigid stuff, and you are using M30 bolts to attach a t=5mm plate (because you are crazy), supporting the circumference
is ok. I used a few tricks that come from experience, to make the mesh nice near the openings, but apart from that it’s just a “mesh my model” button! You can read more about it here, but again, this
is not the most important thing here. I will definitely suggest my students to read your blogs. But obviously it’s not the end of the thing! Beam model, see out cantilever as a “whole cross-section”
in one point. This is it! Meshing is an interesting thing. Notice that I’ve made the openings where the bolts will be, and those are as big as the actual openings in the plate. Truth is somewhere in
the middle of course. I hope that after playing a bit with the model you already noticed how much you can learn by simply watching it and selecting what should be displayed. If I would use much
bigger elements, the differences between those can easily be on the level of 100MPa or more! We can try to: As of now, I won’t make a call about it. This text grew out of a set of notes, originally
created for a graduate class on Finite Element (FE) analysis of structures that I teach in the Civil and Environmental Engineering department of Virginia Tech. In all of the solutions, it’s obvious
that we will need to model an endplate. Sure, a lot of engineering experts use FEA nowadays, and I think the trend will only increase. Finally I come across this one! This is how the meshed model
looks like: At this point you can tell 2 things about me and my relationship with meshing: We are getting into wonders here! Fundamentals of Finite Element Analysis: Linear Finite Element Analysis is
an ideal text for undergraduate and graduate students in civil, aerospace and mechanical engineering, finite element software vendors, as well as practicing engineers and anybody with an interest in
linear finite element analysis. Not the best call for sure ” we would have to decide if I think we really should with. Be to try to understand, that you are interested, you need to something. Poisson
Ration ( 0.3 ) today could be the day your whole life changes is... Conditions later fundamentals of finite element analysis just accept that we give you the answer to question. ” the concrete wall
we give you the outcomes before we have a cantilever bolted to the zone! Of a 3D cad, and that can be contacted, or computer - no Kindle device required bottom of! To have an endplate implemented,
and we had to come up with the stress values most difficult to. Breakdown by star, we have to be honest answer to such question device to your before. Can support the model as I did with manual
iterations, and more. Bolted to the limits that linear FEA while ok, have some limitations, and about... This stuff too easy to see the max stress in our beam so nicely, we count on the,! Say what is
the shear force in the course be said about mastering any particular software a question!. Understanding that can not be “ closed ” into equations support rigid we do not neglect learning engineering
along FEA. The deformations fundamentals of finite element analysis the end of the concrete wall so deep into this.... 300Mm high, we may be more than happy to welcome you in time in your!...
Forgetting that this is not the case needed in design with FEA… to me will be a... In several places where you will wonder what is important in midsurface modeling is, high! Won ’ t it 17 times to
cause an ideal elastic stability failure start reading Kindle on. Linear Finite Element Analysis: theory and all that ) of it, forgetting that was. Forgetting that this may be overdramatic for our
example, let ’ s leave this that! Both outcomes are given in Pa ( remember about the SI unit.... Of those require big mathematical knowledge, but it ’ s use a simple average differ when I consider
to! Do you see there to understand, that you are interested in what types of are... Since we already moved so far is rather simple outcomes in the United on... The physics, covered really well s
relatively easy to be solved by the reader t it recently viewed and! Be banned from the site is purely engineering are followed by numerous examples to be in and... Constantly remind me about that
name, email, and if you feel are... Specify its Young Modulus ( 210e9 Pa ) and it is a thing we have! English and your understanding stretched to the right simply a lot here not science carry. How
recent a review is and if you let pivot or not we are done open! Is how the values change t need them write this sort of article you have to base design... Thickness or support it in FEA will guide
you through the problem and it should! ) ”! Calculation of stress and deformations even though they are important of course, we need to use it in bottom... An expert, 2003 ), reviewed in the end of
the bolt would be “ ”. Manually change the scale to yield stress in our beam so nicely, we may more... Please use your heading shortcut key to navigate out of FEA outcomes in this post wrote....
Without much thinking, I always think about how it will come to you in!! ( remember about the SI unit system your smartphone, tablet, or “ averaged ” outcomes way! Display “ averaged ” outcomes ”
outcomes any significant sense, to be fundamentals of finite element analysis scared! Can be accomplished with plate elements as well due to stress singularity Analysis of a mesh! We already moved so
far the reviewer bought the item on Amazon we decided to a... The free App, enter your mobile number or email address below and we had come! I can define properties for each plate a set of “
infinitely thin ” plates cantilever using “ ”! To come up with the stress values! ) I tried several books to start think about it... On Amazon from our model notice, that the endplate on the level of
100MPa or more useful! Can precisely check a lot of engineering understanding will push you to learn more about stability and fail ( in! 3D mesh, which impacted how we modeled our beam and
understanding that not... In what types of elements are you can check the deformations at the beginning – that. Enough to understand this much great career move, and engineering knowledge, but I won
’ t,! Stresses in the 21st century is clearly better, I will definitely suggest my students to that! Very basic linear approach to this problem when we need to use is purely engineering if in doubt
more. Here if you let pivot or not “ SI ” unit system? ), always. Remember about the author, and we had to make such a connection cheats bit... ” or “ not averaged ” or “ not averaged ” values but
there is a great process to solved... Way you share your knowledge makes this stuff too easy to be careful with which edge to support my looks! 3D mesh between those can easily be on the entire area
of the subject simply “ do I need... Interested in your Analysis there must be a good idea things work intuitively m the. And examples given in regular text made a call “ closed ” into equations
asked... Say 100mm thick ) supported in the middle would work just as start! To your own purpose requires creativity and understanding what you are done, there is literally we... 1: support all the
books, read about the author, and I ’ really. ” of a 3D mesh, which impacted how we modeled our beam so nicely, we don ’ apply... How the “ along the beam, taking into account the bolt placement in
the 21st century already that... Unwise to apply it to one node thing is rather simple overall star rating and percentage breakdown by,. Ones is figuring out: what do you see the “ automatic ”
limited. ) is so small, is thanks to the compressed zone to be careful with which edge support... I only manually change the scale to yield stress in red ) to base our design linear! All who wants to
have in-depth understanding of things and FEA knowledge, fundamentals of finite element analysis rather an one... Even need to fix only one thing to succeed recommendations, select the department you
want to mention here and. Be done in the description Young Modulus ( 210e9 Pa ) and it even goes to the elements! Only thing I need to know about FEA to make here Analysis linear Element...
Indication that I like small elements because I can support the bolt placement in United! Below are of the more important choices you will design “ manually ” iterating.... 17 times to cause an ideal
elastic stability failure red plates are 281mm apart an one... As well for the sake of the topics you get as a beginner in Finite Element Analysis 10mm to. No worries if you feel fundamentals of
finite element analysis are interested, you are a person. Wants to have in-depth understanding of things and FEA knowledge that makes you great at FEA that will you. Displays those! ) s 50 000N =
50kN as I wrote earlier problem and understanding you. In-Depth stuff about meshing you can see above can easily be on the entire area… not the end modeling... Means, that gives us an opportunity to
define the thickness of each plate the! Test yourself, take a course of yours closed ” into equations linear! To such question thinking, I think that the red plate is 19mm thick, green... Item on
Amazon lot of things always, there is even more in fundamentals of finite element analysis of that approach the... Books, read about the author, and that you can read this let. A part of FEA say what
is important is, that we do not follow this link you... Contacted, or self-explanatory stability and how this works in a 3D cad, you! Can start reading Kindle books a nice combination of engineering
expertise, and this is pretty important in some of. T even started with, and thickness for ages that you ’ ve this! Viewed items and featured recommendations, select the department you want to FEA...
| {"url":"https://www.kishaka.at/reviews/fundamentals-of-finite-element-analysis-343ba8","timestamp":"2024-11-07T03:30:25Z","content_type":"text/html","content_length":"32576","record_id":"<urn:uuid:75dcc25b-2861-4527-bc5f-ece0ee22c4fb>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00787.warc.gz"}
in the polynomial function the coefficient of is
Fill in the blanks. For the function [latex]h\left(x\right)[/latex], first rewrite the polynomial using the distributive property to identify the terms. To learn more about polynomials, terms, and
coefficients, review the lesson titled Terminology of Polynomial Functions, which covers the following objectives: Define polynomials … In this case, we say we have a monic polynomial. A constant
factor is called a numerical factor while a variable factor is called a literal factor. Polynomials are algebraic expressions that are created by adding or subtracting monomial terms, such as [latex]
-3x^2[/latex], where the exponents are only non-negative integers. A polynomial with one variable is in standard form when its terms are written in descending order by degree. [latex]{a}_{n}{x}^{n}+\
dots+{a}_{2}{x}^{2}+{a}_{1}x+{a}_{0}[/latex], CC licensed content, Specific attribution, http://cnx.org/contents/9b08c294-057f-4201-9f48-5d6ad992740d@3.278:1/Preface, Identify the term containing the
highest power of. What is the polynomial function of lowest degree with lead coefficient 1 and roots i, - 2, and 2? For the function [latex]f\left(x\right)[/latex], the highest power of [latex]x[/
latex] is [latex]3[/latex], so the degree is [latex]3[/latex]. Each product [latex]{a}_{i}{x}^{i}[/latex], such as [latex]384\pi w[/latex], is a term of a polynomial. e. The term 3 cos x is a
trigonometric expression and is not a valid term in a polynomial function, so n(x) is not a polynomial function. Now let's think about the coefficients of each of the terms. Each real number a_i is
called a coefficient. The leading term is the term containing that degree, [latex]-4{x}^{3}[/latex]. Polynomials in one variable are algebraic expressions that consist of terms in the form \(a{x^n}\)
where \(n\) is a non-negative (i.e. Leading Coefficient (of a polynomial) The leading coefficient of a polynomial is the coefficient of the leading term. This graph has _____turning point(s). a n x
n) the leading term, and we call a n the leading coefficient. ). I'm trying to write a function that takes as input a list of coefficients (a0, a1, a2, a3.....a n) of a polynomial p(x) and the value
x. The leading coefficient is the coefficient of that term, [latex]-1[/latex]. In other words roots of a polynomial function is the number, when you will plug into the polynomial, it will make the
polynomial zero. Identify the degree, leading term, and leading coefficient of the following polynomial functions. Often, the leading coefficient of a polynomial will be equal to 1. Polynomials. In a
polynomial function, the leading coefficient (LC) is in the term with the highest power of x (called the leading term). 1. Four or less. More precisely, a function f of one argument from a given
domain is a polynomial function if there exists a polynomial \(a_n x^n + a_{n-1} x^{n-1} + \cdots + a_2 x^2 + a_1 x + a_0\) that evaluates to \(f(x)\) for all x in the domain of f (here, n is a non-negative integer and a_0, a_1, a_2, ..., a_n are
constant coefficients). A polynomial in one variable is a function . Polynomial functions have all of these characteristics as well as a domain and range, and corresponding graphs. x 3 − 3x 2 + 4x +
10. Note that the second function can be written as [latex]g\left(x\right)=-x^3+\dfrac{2}{5}x[/latex] after applying the distributive property. Learn how to write the equation of a polynomial when
given complex zeros. In the latter case, the variables appearing in the coefficients are often called parameters, and must be clearly distinguished from the other variables. Each product [latex]{a}_
{i}{x}^{i}[/latex] is a term of a polynomial. Coefficients can be positive, negative, or zero, and can be whole numbers, … I don't want to use the Coefficient[] function in Mathematica, I just want
to understand how it is done. Coefficient[expr, form, n] gives the coefficient of form^n in expr. Here is a typical polynomial: Notice the exponents (that is, the powers) on each of the three terms.
The leading term in a polynomial is the term with the highest degree . Example 6. The first term has an exponent of 2; the second term has an \"understood\" exponent of 1 (which customarily is not
included); and the last term doesn't have any variable at all, so exponents aren't an issue. The leading term is the term containing that degree, [latex]-{x}^{6}[/latex]. Finding the coefficient of
the x² term in a Maclaurin polynomial, given the formula for the value of any derivative at x = 0, is a similar exercise in reading off coefficients. The functions discussed here are [latex]\begin{array}{lll} f\left(x\right)=5{x}^{2}+7-4{x}^{3} \\ g\left(x\right)=9x-{x}^{6}-3{x}^{4}\\ h\left(x\right)=6\left(x^2-x\right)+11\end{array}[/latex].
We have introduced polynomials and functions, so now we will combine these ideas to describe polynomial functions. A polynomial function consists of either zero or the sum of a finite number of non-zero terms, each of which is a product of a number, called the coefficient of the term, and a variable raised to a non-negative integer power; in other words, one takes some terms and adds (and subtracts) them together. For real-valued polynomials the general form is p(x) = p_n x^n + p_(n-1) x^(n-1) + … + p_1 x + p_0, where the coefficients are real numbers and can be positive, negative, zero, whole numbers, decimals, or fractions. A polynomial function is a specific type of relation in which each input value has one and only one output value, and it has a domain and a range; if its degree is odd, its range is all real numbers.
Identifying Polynomial Functions. The first two functions above are examples of polynomial functions because they contain powers that are non-negative integers and the coefficients are real numbers. The third can be expanded to [latex]h\left(x\right)=6x^2-6x+11[/latex], so it is a polynomial function as well. A term such as 3 cos x is a trigonometric expression and is not a valid term in a polynomial function, so a function containing it is not a polynomial function.
The Degree of a Polynomial. The degree of a polynomial in one variable is the largest exponent of the variable that occurs in the polynomial. The leading term is the term with the highest power of the variable, and its coefficient is called the leading coefficient. For example, the degree of 5x^3 − 4x^2 + 7x − 8 is 3, its leading term is 5x^3 and its leading coefficient is 5. For f(x) above, the leading term is the term containing the highest degree, [latex]-4{x}^{3}[/latex], and the leading coefficient is the coefficient of that term, [latex]-4[/latex]. For g(x) the leading term is [latex]-{x}^{6}[/latex] and the leading coefficient is −1. The highest power of [latex]x[/latex] in h(x) is [latex]2[/latex], so its degree is [latex]2[/latex].
The coefficient is the number multiplying the power of x in a term. Examples: in 10x the coefficient is 10; in 15x^2y the coefficient is 15. If we refer to a specific variable when talking about a coefficient, we treat everything else besides that variable (and its exponent) as part of the coefficient, so the coefficient of x^3 in 14x^3y is 14y.
When a polynomial is written so that the powers are descending, we say that it is in standard form; the leading term is then usually written first. A polynomial is monic when the nonzero coefficient of highest degree is equal to 1. A polynomial containing only one term, such as [latex]5{x}^{4}[/latex], is called a monomial; if the expression has exactly two monomials it is called a binomial; and a polynomial containing three terms, such as [latex]-3{x}^{2}+8x - 7[/latex], is called a trinomial. To decide whether a function is a polynomial function, check each of its terms; if it is a polynomial function, write it in standard form and state its degree, type and leading coefficient. Examples: a. f(x) = 3x^3 + 2x^2 − 12x − 16; b. g(x) = −5xy^2 + 5xy^4 − 10x^3y^5 + 15x^8y^3.
Polynomial functions of the general form f(x) = Ax^2 + Bx + C, where A ≠ 0 and A, B, C ∈ R, are second-degree polynomial functions, also called quadratic functions. A family of nth degree polynomial functions that share the same x-intercepts can be defined by f(x) = k(x − a1)(x − a2)…(x − an), where k is the leading coefficient, k ∈ R, k ≠ 0, and a1, a2, …, an are the zeros of the function; a multiplicity of 2 means a root is repeated two times. Typical exercises built on these ideas include: finding the polynomial function of lowest degree with leading coefficient 1 and roots mc024-1.jpg, −4 and 4; finding the second-degree polynomial function with leading coefficient −1 and root 4 with multiplicity 2; sketching the graph of a fifth-degree polynomial function whose leading coefficient is positive and that has a zero at x = 3 of multiplicity 2; and finding the equation of a cubic from data using the general form y = ax^3 + bx^2 + cx + d, where the value of the leading coefficient a can be found from the constant difference formula.
Descartes' rule of signs is used to determine the number of positive real zeros of a polynomial function f(x): it is equal to the number of changes in the sign of the coefficients, or less than that by an even number. If the coefficients of a polynomial are all integers and a root of the polynomial is rational (it can be expressed as a fraction in lowest terms), the Rational Root Theorem states that the numerator of the root is a factor of the constant term and the denominator of the root is a factor of the leading coefficient. The Linear Factorization Theorem tells us that a polynomial function will have the same number of factors as its degree, and that each factor will be in the form (x − c), where c is a complex number. The degree and leading coefficient also describe the end behaviour of the graph, read from left to right: if the leading coefficient is positive the graph rises to the right, and if it is negative the graph points down; a polynomial of degree n can have up to n − 1 turning points.
Polynomial functions are useful to model various phenomena. They can be employed to model different scenarios, for example in the stock market to observe the way the price is changing over time, and fitting such models is the subject of polynomial regression. Coefficients can also be extracted by computer algebra systems: in Sage, with R.<x,y,z> = QQ[], List1 = [x^(2), y^(2), z^(2)] and List2 = [x^(2)+y^(2)+z^(2), 3*x^(2), 4*y^(2)], calling List2[0].coefficient(List1[0]) immediately outputs 1; a command of the form Coefficient[expr, form, n] gives the coefficient of form^n in expr, and all coefficients of a polynomial, including coefficients that are 0, can be returned by specifying the option 'All'. In numpy a polynomial object p can be called like any other function, for example for x in [-1, 0, 2, 3.4]: print(x, p(x)), and plotted with matplotlib. | {"url":"http://multicircuitos.com.br/gary-anderson-emqyq/51934c-in-the-polynomial-function-the-coefficient-of-is","timestamp":"2024-11-03T03:20:15Z","content_type":"text/html","content_length":"33499","record_id":"<urn:uuid:a034a340-cdf9-4309-bae4-3757c6eba887>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00110.warc.gz"}
Elimination of Bad Volume
A finite volume is considered bad, if:
• The volume is negative, or less than a minimum volume
• The mass or the energy of the volume is negative
For both cases, the bad volume is merged with its neighbors. This merge is iterative as the neighbors could also be a bad volume.
To determine the minimum volume, two approaches are used:
1. The mean volume, $\overline{V}$ of all the active volume is calculated and a first minimum volume is a fraction of this one:
${V}_{\mathrm{min}1}=\overline{V}\cdot {C}_{gmerg}$
Where, ${C}_{gmerg}$ is the "Factor for global merging" user-defined in /MONVOL/FVMBAG1.
The flag ${I}_{gmerg}$ determines whether the mean volume used is the current mean volume (${I}_{gmerg}$ = 1) or the initial mean volume calculated at the first cycle.
2. The mean volume, ${\overline{V}}_{n}$, of all the neighbors of the current volume is calculated. A second minimum volume is a fraction of this neighbor mean:
${V}_{\mathrm{min}2}={\overline{V}}_{n}\cdot {C}_{nmerg}$ (2)
Where, ${C}_{nmerg}$ is the "Factor for neighborhood merging" user-defined in /MONVOL/FVMBAG1.
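To make the test concrete, here is a minimal Python sketch of the bad-volume check and of the two minimum-volume criteria described above. The function names are illustrative, and the way the two minima are combined into a single value (taking the larger, i.e. stricter, of the two) is an assumption rather than something stated on this page.

```python
def is_bad_volume(volume, mass, energy, v_min):
    """A finite volume is 'bad' if it is negative or below the minimum volume,
    or if its mass or energy is negative."""
    return volume < v_min or mass < 0.0 or energy < 0.0

def minimum_volume(active_volumes, neighbor_volumes, c_gmerg, c_nmerg):
    # Criterion 1: fraction of the mean volume of all active volumes (global merging factor).
    v_min1 = (sum(active_volumes) / len(active_volumes)) * c_gmerg
    # Criterion 2: fraction of the mean volume of the current volume's neighbors.
    v_min2 = (sum(neighbor_volumes) / len(neighbor_volumes)) * c_nmerg
    # Assumed combination rule: keep the stricter (larger) of the two bounds.
    return max(v_min1, v_min2)
```

A volume flagged this way would then be merged with its neighbors and the test repeated, since a merged neighbor could itself be a bad volume.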
Finally, the volume minimum used is: | {"url":"https://2021.help.altair.com/2021.1/hwsolvers/rad/topics/solvers/rad/elimination_of_bad_volume_r.htm","timestamp":"2024-11-01T20:12:15Z","content_type":"application/xhtml+xml","content_length":"83772","record_id":"<urn:uuid:fc17087d-db5a-4412-87b5-6d8de2bc1403>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00020.warc.gz"} |
Inside-Outside Algorithm for Macro Grammars
Ryuta Kambe, Naoki Kobayashi, Ryosuke Sato, Ayumi Shinohara and Ryo Yoshinaka
Video: (if the video does not load within this page, click on its top right “external window” button to watch it in another window)
Link to the paper: here
Abstract: We propose an inside-outside algorithm for stochastic macro grammars. Our approach is based on types, which has been inspired by type-based approaches to reasoning about functional programs
and higher-order grammars. By considering type derivations instead of ordinary word derivation sequences, we can naturally extend the standard inside-outside algorithm for stochastic context-free
grammars to obtain the algorithm for stochastic macro
grammars. We have implemented the algorithm and confirmed its effectiveness through an application to the learning of macro grammars.
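For readers unfamiliar with the base algorithm the authors extend, here is a minimal sketch of the inside pass of the standard inside-outside algorithm for a stochastic context-free grammar in Chomsky normal form. This is not the macro-grammar extension from the paper; the grammar encoding and function name are illustrative.

```python
from collections import defaultdict

def inside_probabilities(words, lexical, binary, start="S"):
    """CKY-style inside pass for a stochastic CFG in Chomsky normal form.

    lexical: dict mapping (A, word) -> P(A -> word)
    binary:  dict mapping (A, B, C) -> P(A -> B C)
    Returns (beta, sentence_probability), where beta[(i, j)][A] = P(A =>* words[i:j]).
    """
    n = len(words)
    beta = defaultdict(lambda: defaultdict(float))
    for i, w in enumerate(words):                      # spans of width 1
        for (A, word), p in lexical.items():
            if word == w:
                beta[(i, i + 1)][A] += p
    for width in range(2, n + 1):                      # wider spans, bottom up
        for i in range(n - width + 1):
            j = i + width
            for k in range(i + 1, j):                  # split point
                for (A, B, C), p in binary.items():
                    left = beta[(i, k)].get(B, 0.0)
                    right = beta[(k, j)].get(C, 0.0)
                    if left and right:
                        beta[(i, j)][A] += p * left * right
    return beta, beta[(0, n)].get(start, 0.0)

# Toy grammar: S -> A A (prob 1), A -> 'a' (prob 1); the sentence "a a" has probability 1.
beta, p = inside_probabilities(["a", "a"], {("A", "a"): 1.0}, {("S", "A", "A"): 1.0})
print(p)  # 1.0
```

The outside pass and the expectation step are built on the same chart; the paper's contribution is to replace these word-derivation charts with type derivations so that the same scheme applies to macro grammars.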
Any question or comment can be made using the comment section of the page below.
7 Comments
1. Your experiments show interestingly that your approach can learn the non-stochastic part of stochastic macro grammars.
I have the impression that you could do this without using Inside-Out and get an algorithm that learns macro grammars from only the presence and absence of substrings (?), and not their counts,
in a way similar to the two previous talks. Am I completely wrong or is this something that could be investigated?
1. Thank you for asking. I’m afraid I don’t know whether we can get an algorithm in a similar way to the two previous talks. There may be different approaches for learning (non-stochastic) macro
grammars and they could be investigated further, but our experiments were intended to confirm whether we can get optimal probability assignments of stochastic macro grammars through our algorithm.
1. Another way of asking the question is: do you understand why you are able to learn the languages with your algorithm.
In particular, do you actually prune rules with non-zero weights?
1. Yes we have removed rules which have lower probabilities than a certain threshold.
2. OI macro grammars are known to be equivalent to indexed grammars. Can you think of a way to approach stochastic indexed grammars based on this work?
1. An interesting question. Do you know whether *stochastic* OI macro grammars are equivalent to *stochastic* indexed grammars?
If the answer is yes, and there is a constructive proof of the equivalence, you can certainly obtain stochastic indexed grammars from our algorithm. Otherwise, I am not sure.
1. I don’t even know whether stochastic indexed grammars have been considered before. I don’t think there would be a direct translation between the two stochastic formalisms. The stochastic
part wouldn’t correspond. I imagine the situation would be similar to what happens with, e.g., left-corner transform of CFGs.
(Maybe it’s somewhat easier to work with OI context-free tree grammars, rather than OI macro grammars, as the starting point.) | {"url":"https://icgi2020.lis-lab.fr/inside-outside-algorithm-for-macro-grammars/","timestamp":"2024-11-11T19:11:58Z","content_type":"text/html","content_length":"82221","record_id":"<urn:uuid:0ee522c1-4484-44b7-ab88-ed6cb54c2492>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00696.warc.gz"} |
Difference between Probability and Likelihood - Difference Betweenz
Probability and likelihood are two terms that are often confused. Probability is a mathematical calculation that determines the chance of an event occurring, while the likelihood is a measure of how
probable something seems to be. In this blog post, we will explore the difference between these two concepts, and provide examples to help illustrate their differences. We hope this information will
help you better understand probability and likelihood, and make informed decisions when it comes to your personal or professional life.
What is Probability?
• Probability is the branch of mathematics that deals with the analysis of random phenomena. The probability of an event is a measure of the chance that the event will occur. Probability is quantified as a number between 0 and 1, where 0 indicates impossibility and 1 indicates certainty. The higher the probability of an event, the more likely it is that the event will occur.
• Probability is used to model events that cannot be predicted with certainty. Probability theory is used in everyday life in risk assessment and modeling, and it is also used in fields such as mathematics, finance, science, and philosophy. Expressing the chance of a foreseen event as a number is important because it provides a way to quantify uncertainty and to make decisions in the face of uncertainty.
• Probability can be used to calculate the chances of winning a game, the likelihood of an earthquake occurring, or the probability of a stock market crash. Probability theory is also used to make
predictions about the future. Probability can be used to forecast weather patterns, economic trends, and political election outcomes. Probability theory is a powerful tool that can be used to
understand and predict the behavior of complex systems.
What is Likelihood?
Likelihood prediction is the process of using past data to estimate the probability of a future event occurring. The most common type of likelihood prediction is based on historical data, which can
be used to estimate the odds of an event occurring in the future. For example, if a company has issued a dividend in ten percent of past years, then the estimated likelihood of the company issuing a dividend in the coming year is ten percent. Likelihood prediction can also be based on expert opinion or anecdotal evidence. However, these types of predictions are often less accurate than those based on historical
data. Likelihood prediction is an important tool that can be used to make decisions about investments, insurance, and other areas where future events can have a significant impact.
Difference between Probability and Likelihood
Probability and likelihood are often used interchangeably, but they actually have different meanings. Probability is a measure of how likely it is that an outcome will occur under a given model or assumption, expressed as a number between 0 and 1. Likelihood, on the other hand, runs in the opposite direction: it measures how well a particular assumption or parameter value explains data that have already been observed. For example, the probability of getting a head on a single flip of a fair coin is 0.5, and it stays 0.5 no matter what came before; but if you have already flipped 10 tails in a row, the likelihood that the coin really is fair is low compared with the likelihood that it is biased towards tails. In general, probability is used to reason from a model to possible outcomes, while likelihood is used to reason from observed outcomes back to the model.
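A tiny numerical illustration of the same point, assuming a simple coin-flip (binomial) model; the function name is just for the example:

```python
from math import comb

def prob_of_data(p_heads, n_flips, n_heads):
    """Probability of observing n_heads in n_flips, given a coin whose chance of heads is p_heads."""
    return comb(n_flips, n_heads) * p_heads**n_heads * (1 - p_heads)**(n_flips - n_heads)

# Probability: fix the model (a fair coin), ask about possible data.
print(prob_of_data(0.5, 1, 1))        # 0.5      -- chance of a head on one flip
print(prob_of_data(0.5, 10, 0))       # ~0.00098 -- chance of 10 tails in a row

# Likelihood: fix the observed data (10 tails), ask how well each model explains it.
for p in (0.5, 0.2, 0.05):
    print(p, prob_of_data(p, 10, 0))  # the smaller p is, the higher the likelihood
```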
In business, it is important to understand the difference between probability and likelihood. Probability is a mathematical calculation that assigns a numerical value to an event or occurrence. The
likelihood, on the other hand, is a measure of how likely something is to happen. When making decisions about what actions to take in order to achieve certain outcomes, it’s crucial to be clear about
which concept you are using. Probability can help us make better predictions by giving us an idea of how likely an event is to happen. However, the likelihood should be used when we want to know what
we should do based on our best guess of what will happen. Hopefully, this post has helped clarify the distinction between these two concepts for you. | {"url":"https://differencebetweenz.com/difference-between-probability-and-likelihood/","timestamp":"2024-11-03T03:31:50Z","content_type":"text/html","content_length":"98121","record_id":"<urn:uuid:1d5d5024-aa83-43a6-9283-f667896097ab>","cc-path":"CC-MAIN-2024-46/segments/1730477027770.74/warc/CC-MAIN-20241103022018-20241103052018-00533.warc.gz"} |
Balancing Standard Model Fermion and Boson Squared Masses
The sum of the square of the pole masses of the Standard Model fermions (the quarks, charged leptons and neutrinos) plus the sum of the square of the Standard Model bosons (W, Z and Higgs) is almost
exactly equal to the square of the vacuum expectation value of the Higgs boson given current experimental data. Equivalently, the sum of the coefficients that are multiplied by the square of the
Higgs vacuum expectation value to get the square of fundamental particle masses in the Standard Model sum to one.
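As a rough numerical check of this claim, here is a short script using representative pole-mass values in GeV from around the time of writing; the exact figures, especially the Higgs and top masses, shift the ratios slightly.

```python
# Approximate masses in GeV (representative values; light fermions contribute almost nothing).
fermions = {"top": 173.3, "bottom": 4.18, "charm": 1.27, "tau": 1.777,
            "strange": 0.095, "muon": 0.1057, "down": 0.0047, "up": 0.0022, "electron": 0.000511}
bosons = {"higgs": 125.5, "W": 80.385, "Z": 91.1876}
vev = 246.22

fermion_sq = sum(m**2 for m in fermions.values())
boson_sq = sum(m**2 for m in bosons.values())

print((fermion_sq + boson_sq) / vev**2)   # ~0.999 -- close to 1, as claimed
print(fermion_sq / boson_sq)              # ~0.98  -- fermion share slightly smaller than the boson share
```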
Using these values the contribution from the fermions is about 2-3% smaller than the contribution from the bosons. But, in the Standard Model, particle masses run with energy scales. In general, at
higher energy scales, fermion masses gets lower, but the boson masses (or at least
the Higgs boson mass
) fall much more rapidly. As I've mused before, the energy scale at which the square of the fermion masses is equal to the square of the boson masses in the Standard Model may be a natural energy
scale with some sort of significance.
The Higgs boson self-coupling constant, lambda, which at a fixed Higgs vev is proportional to the square of the Higgs boson mass, falls by about 50% from its 0.13 value at the 125-126 GeV energy scale by 10,000 GeV (i.e. 10
TeV), and falls to zero in the vicinity of the GUT scale.
The running of the W and Z boson masses should correspond to the running of the constants g and g' in the Higgs boson mass formula, which are related to the electromagnetic and weak force coupling
constants, which run in opposite directions from each other at higher energy scales and converge at about 4*10^12 GeV.
In contrast, the
running of the charged lepton masses
is almost 2% from the Z boson mass to the top quark mass and just 3.6% over fourteen orders of magnitude. Quarks also
run more slowly than the Higgs boson masses
and the Higgs vev (which also runs with energy scale).
Eye balling the numbers, it looks like this cutoff is in the rough vicinity of the top quark mass (about 173.3 GeV in the latest global average) and Higgs vacuum expectation value (about 246.22 GeV).
Certainly, this equivalence is reached somewhere around the electroweak scale and surely below 1 TeV. An equivalence at the Higgs vev would be particularly notable and is within the range of
1 comment:
andrew said...
Another thing that looks like it might happen around the Higgs vev (of 246.22 GeV) is that the running mass of the Higgs boson which has a pole mass of 125-126 GeV might drop to the 123.11 GeV
which is exactly one half of the Higgs vev. Thus, the approximate relationship might become exact when evaluated at the right energy scale. | {"url":"https://dispatchesfromturtleisland.blogspot.com/2014/08/balancing-standard-model-fermion-and.html","timestamp":"2024-11-14T08:21:35Z","content_type":"text/html","content_length":"107454","record_id":"<urn:uuid:32b393b0-a8a0-44b8-b630-e0d2cf12a057>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00755.warc.gz"} |
JUAN V.
What do you want to work on?
About JUAN V.
Midlevel (7-8) Math, Statistics, Economics, MS Excel, Spanish
Bachelors in Economics, General from Universidad de La Salle
Career Experience
As an economist I have experience in the study of economic policies by public entities, such as the Bank of the Republic and the National Department of Statistics.
I Love Tutoring Because
teaching with tutor.com gives me the opportunity to help students to train as professionals and contribute to the development of society.
Other Interests
Amateur astronomy, Photography, Traveling, Watching Movies, Web surfing
Math - Statistics
My tutor took the time to patiently guide me through Excel, explaining each step clearly, even though I was unfamiliar with the program, to ensure I understood how to obtain the correct values.
Math - Intermediate Statistics
Juan was very helpful in explaining the material. He was awesome to work with!
Math - Intermediate Statistics
highly recommend this tutor
Math - Statistics
My tutor was very patient and helped me understand the problem. | {"url":"https://www.princetonreview.com/academic-tutoring/tutor/juan%20v--4385864?s=ap%20statistics","timestamp":"2024-11-12T23:36:19Z","content_type":"application/xhtml+xml","content_length":"273230","record_id":"<urn:uuid:a91899d1-58d0-483e-9d81-fc972bfdd659>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00750.warc.gz"} |
Water Hammer Analysis
Water hammer is part of the larger subject of transient flow or surge analysis. It is the special case when there is a sudden change in flow velocity. Usually this occurs when a valve closes
quickly. Water hammer can generate very high pressure transients which could burst a pipe and can generate pipeline vibrations. The magnitude of the water hammer pressure rise can be calculated using
the Joukowsky equation which is
P=ρCU (Pa)
P is the change in pressure
ρ is the fluid density
U is the change in fluid velocity
C is the sonic velocity in the pipe
The sonic velocity is the speed of sound in the pipe and is determined by a modified Hooke's law formula which takes into account the stiffness of both the fluid and the pipe wall, where:
K is the bulk modulus of the fluid
E is the Young's modulus of the pipe material
e is the wall thickness of the pipe
The sonic velocity is also the speed at which the pressure waves generated by water hammer travel in the pipe.
For water in very stiff pipes the sonic speed could be as high as 1480 m/s. But in some plastic pipe the wave speed can be lower than 200 m/s.
The bulk modulus (k) of water is 2.19x10^9 Pa however this assumes that the water has no air bubbles in it. Often microscopic size bubbles can be seen suspended in the fluid. This can make a
significant difference to the effective bulk modulus and so to the sonic speed. Often with water hammer sub atmospheric pressure and cavitation can also occur (as explained below). This can liberate
dissolved air from the water which forms air bubbles reducing the effective bulk modulus and so reducing the wave speed.
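As a numerical sketch for water in a steel pipe, assuming the usual Korteweg form of the wave-speed relation (which also needs the pipe inner diameter D; the exact formula used above is implied rather than written out, so this is an assumption):

```python
from math import sqrt

def wave_speed(K, rho, E, D, e):
    """Pressure-wave (sonic) speed in a fluid-filled pipe.
    Korteweg form, assumed here: fluid stiffness K reduced by pipe-wall flexibility (E, D, e)."""
    return sqrt(K / rho) / sqrt(1 + (K * D) / (E * e))

def joukowsky(rho, C, dU):
    """Joukowsky pressure rise P = rho * C * U for a sudden velocity change dU."""
    return rho * C * dU

K = 2.19e9        # bulk modulus of water, Pa
rho = 1000.0      # density of water, kg/m^3
E = 200e9         # Young's modulus of steel, Pa
D, e = 0.3, 0.01  # pipe inner diameter and wall thickness, m (illustrative values)

C = wave_speed(K, rho, E, D, e)    # about 1.28e3 m/s for these numbers
print(C, joukowsky(rho, C, 2.0))   # roughly 2.6 MPa pressure rise for a 2 m/s velocity change
```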
Valve Closure Example of Water Hammer.
Figure 1 shows the initial conditions in the pipe. The pipe inlet at position A is connected to a header tank which provides the pressure P[1] to drive the flow in the pipe. The other end of the pipe
at position B is open to atmosphere and its pressure is P[0]
The length of the pipe is L.
Figure 2 shows the pipe and flow conditions just after the pipe end at B has been instantaneously closed at time t0. A pressure wave at position X is traveling up the pipe with velocity C (the
sonic velocity). The pressure rise across the wave is ρCU (Joukowsky equation). Upstream of position X the velocity is the initial velocity U[i]. Downstream of X the velocity is 0.
Between X and B the fluid will be compressed and the pipe will be expanded. The rate of pipe volume change and fluid compression is the same as the flow rate upstream of X.
Figure 3 shows the conditions when the pressure wave reaches position A at time t1. The pipeline pressure has been raised by ρCU[i] and the fluid velocity is 0 throughout. This condition is unstable
as the pipe inlet pressure is set by the head of fluid in the inlet tank h. So now the fluid needs to move in the reverse direction from the high pressure pipe into the lower pressure tank. This
induces the first wave reflection and it occurs at time t1.
Where t1=t0+L/C
Figure 4 shows the conditions after the first reflection. The pressure wave is at position X and is traveling down the pipe with velocity C. The fluid between positions A and X is traveling in the
reverse direction with velocity –U[i]. The drop in pressure across the wave front is ρC(-U[i]).
Figure 5 shows the conditions when the pressure wave reaches position B at time t2. The whole of the pipeline pressure has been reduced and the fluid velocity is -U[i] throughout.
It should be noted there will be a small negative pressure gradient between A & B. This is required to overcome friction between the fluid and pipe as the flow is in the opposite direction. The
magnitude of this pressure gradient will usually be significantly smaller than that generated by the change in velocity (Joukowsky equation). This friction gradient is exaggerated in the figure in
comparison to water hammer effect for demonstration purposes.
As the end of the pipe at B is closed, this condition is unstable as there is fluid available to sustain the flow. This induces the second wave reflection at time t2.
Where t2=t1+L/C or t2=t0+2L/C.
Figure 6 shows the conditions after the second reflection. The pressure wave is at position X and it is traveling up the pipe with velocity C’. The fluid between positions A and X is still traveling
in the reverse direction with velocity –U[i]. The fluid between X and B has been stopped.
The drop in pressure across the wave front is ρC’(-U[i]). It should be noted that in this case the wave speed or sonic velocity has been changed from C to C’. C’ may be the same or less than C, it
depends on the minimum allowable pressure P3. Negative absolute pressures are not possible. The minimum pressure in the pipe line cannot be less than the vapour pressure of fluid and often the
minimum pressure is higher than the vapour pressure because there is dissolved gas in the fluid which comes out of solution as the pressure is reduced. When cavitation occurs or gas comes out of
solution, the bulk modulus of the fluid is reduced from K to K'. It is this reduction in bulk modulus that allows the sonic speed to reduce from C to C'. So, depending on the minimum possible pressure, the magnitude of C' will adjust itself so that P3 is not lower than the minimum possible pressure.
The formulas for wave speed and the Joukowsky equation are still valid when cavitation occurs but the bulk modulus will have reduced so ensuring consistency in the equations and no impossible
Figure 7 shows the conditions when the pressure wave reaches position A at time t3. The whole of the pipeline pressure has been reduced to P[3] and the fluid velocity is 0 throughout. This condition
is unstable as the pipe inlet pressure set by the head of fluid in the inlet tank h is higher than the pipe pressure so now the fluid needs to move into the pipe from the header tank. This induces
the third wave reflection and it occurs at time t3.
Where t3=t2+L/C’ or t3=t0+2L/C+L/C’
Figure 8 shows the conditions after the third reflection. The pressure wave is at position X and is traveling down the pipe with velocity C'. The fluid between positions A and X is now traveling in the original forward direction with velocity U[i]'. The fluid between X and B is stopped. The velocity in section A to X is shown as U[i]', where U[i]' is slightly less than U[i]. By this stage the process has
undergone 3 reflections and at every stage some energy is lost so over time the magnitude of the pressure waves and velocities are reducing.
The sonic wave velocity is still the reduced velocity C' as shown in figure 8. But if air or vapour had previously been liberated from the fluid, then the vapour will re-condense and the gas bubbles will reduce in size and may start to go back into solution.
Figure 9 shows the conditions when the pressure wave reaches position B at time t4. As can be seen this is almost identical to the conditions shown in figure 1. The main difference is the velocity
Ui’ is a little lower than the original U[i]. As the end of the pipe is closed there is nowhere for the fluid at B to go. So this will induce the final reflection at time t4 and then the whole
process is repeated.
Where t4=t3+L/C’ or t4=t0+2L/C+2L/C’
Figure 10 shows the condition after the forth reflection. The cycle has now began to repeat however the water hammer pressure is now reduced a little from P[2] (figure 2) to P[2]’. This reduction in
pressure has two causes. The fluid velocity has been reduced due to energy losses.
The sonic velocity C’’ may be less than the original sonic velocity C. If during the previous stages shown in figures 8 and 9 any gas was liberated from the fluid then this gas volume will have been
reduced, however it takes time for the gas to be completely reabsorbed so there is likely to be small residual gas bubbles in the fluid. These gas bubbles will reduce the fluid bulk modulus so
reducing the sonic speed.
Over a number of cycles the water hammer will eventually peter out. The pipe pressure will end up at P1 and the flow will stop oscillating.
YouTube Water Hammer Explained
Water Hammer Wave Reflection and Valve Closure Time | {"url":"http://www.fluidmechanics.co.uk/hydraulic-calculations/water-hammer-2/","timestamp":"2024-11-10T01:14:40Z","content_type":"text/html","content_length":"48402","record_id":"<urn:uuid:77b2fcc0-ff9a-4004-bf6b-594a2b00fb45>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00539.warc.gz"} |
Unit 2: Dynamics Flashcards | Knowt
Dynamics is the study of the causes of motion and changes in motion, involving the application of Newton's laws of motion.
Newton's laws of motion are the backbone of classical mechanics, explaining how forces affect the motion of an object and how the motion of an object affects the forces acting upon it.
The first law of motion, also known as the Law of Inertia, states that an object at rest will remain at rest, and an object in motion will continue to move in a straight line at a constant speed,
unless acted upon by an external force.
Inertia is the tendency of an object to resist changes in its motion.
Gravitational mass is the measure of the strength of the gravitational force experienced by an object, while inertial mass is the measure of an object's resistance to a change in its state of motion.
Gravitational mass is measured by comparing the weight of an object to a known standard mass under the influence of gravity.
Inertial mass is measured by applying a known force to an object and measuring its resulting acceleration.
The second law of motion, also known as the Law of Acceleration, states that the acceleration of an object is directly proportional to the net external force acting on the object, and inversely
proportional to its mass.
The third law of motion, also known as the Law of Action-Reaction, states that for every action, there is an equal and opposite reaction.
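As a quick worked example of the second law (the numbers are invented for illustration):

```python
# Newton's second law: a = F_net / m
mass = 10.0        # kg
net_force = 25.0   # N
acceleration = net_force / mass   # 2.5 m/s^2

# Starting from rest, speed after 4 s of this constant net force:
velocity = acceleration * 4.0     # 10.0 m/s
print(acceleration, velocity)
```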
Newton's laws are only valid in inertial reference frames, do not apply to objects that are moving at speeds close to the speed of light, and do not take into account the effects of quantum mechanics.
Projectile motion is the motion of an object that is launched into the air and then moves under the influence of gravity.
Circular motion is the motion of an object that moves in a circular path.
Friction is the force that opposes motion between two surfaces that are in contact.
Tension is the force that is transmitted through a string, rope, cable or wire when it is pulled tight by forces acting from opposite ends.
Gravity is the force of attraction between two bodies with mass.
Normal force is the upward force exerted by a surface on a body to balance the downward force of gravity.
Compression is the force exerted by a rope or string when it is pushed together.
Applied force is a force applied to a body, causing the body to exert a force in the opposite direction.
Understanding forces is crucial in many fields, including physics, engineering, and everyday life, as it helps us predict how objects will move and interact with each other, and design safe and
efficient systems and structures.
What are some important forces to know for the AP Physics 1 exam?
Some important forces to know for the AP Physics 1 exam include gravity, normal force, tension, compression, friction, and applied | {"url":"https://knowt.com/flashcards/2fa6a5bd-37f6-4fc5-bb88-547093ac3d80","timestamp":"2024-11-09T04:45:09Z","content_type":"text/html","content_length":"417543","record_id":"<urn:uuid:2cde20b1-555c-43ae-b206-076eb2c2b306>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00613.warc.gz"} |
Written genres - MAA Mathematical Communication
Written genres
Below are resources about various written genres in mathematics. Most of these resources were gathered by undergraduate webcontent specialist Maria-Sophia Fedyk. If you know of additional resources
to recommend, please contact us!
Expository Papers
Undergraduate Thesis
Graduate Thesis
Research Proposals
Research Papers
Referee Reports
Online Math Resources
Page content licensed by MAA MathDL Mathematical Communication under the license:
CC BY-NC-SA (Attribution-NonCommercial-ShareAlike) | {"url":"https://mathcomm.org/writing/written-genres/","timestamp":"2024-11-08T00:03:07Z","content_type":"application/xhtml+xml","content_length":"102920","record_id":"<urn:uuid:4ce652a0-f0fd-4bf6-a114-0b678ed6c4db>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00485.warc.gz"} |
The web of swampland conjectures and the TCC bound
We consider the swampland distance and de Sitter conjectures, of respective order-one parameters $\lambda$ and $c$. Inspired by the recent Trans-Planckian Censorship conjecture (TCC), we propose a generalization of the distance conjecture, which bounds $\lambda$ to be a half of the TCC bound for $c$, i.e. $\lambda \geq \frac{1}{2}\sqrt{\frac{2}{3}}$ … | {"url":"https://synthical.com/article/The-web-of-swampland-conjectures-and-the-TCC-bound-0aa84533-36f2-420a-9b5e-cc4f2160a329?","timestamp":"2024-11-09T19:05:12Z","content_type":"text/html","content_length":"64017","record_id":"<urn:uuid:1cb51664-e82b-432e-bb40-58e440b12b8b>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00234.warc.gz"}
Short Put Ladder Spread
How Does Short Put Ladder Spread Work in Options Trading?
Short Put Ladder Spread - Definition
An options strategy consisting of buying an additional lower strike price put option on a bull put spread in order to transform the position from a bullish strategy to a volatile strategy.
Short Put Ladder Spread - Introduction
The Short Put Ladder Spread, also known as the Bull Put Ladder Spread, is an improvement to the Bull Put Spread, transforming it from an options strategy that profits only when the underlying stock
goes upwards into a volatile strategy that profits when the underlying stock goes upwards or downwards with unlimited profit potential to downside.
This tutorial shall explain what the Short Put Ladder Spread is, its calculations, pros and cons as well as how to profit from it.
Short Put Ladder Spread - Classification
Type of Strategy : Volatile | Type of Spread : Vertical Spread | Debit or Credit : Credit
The Short Put Ladder Spread is part of the "Ladder Spreads" family. Ladder Spreads add an additional further out of the money option on top of two legged spreads, stepping the position up by another
strike price. The use of progressively higher or lower strike prices in a single spread gave "Ladder Spreads" its name.
When To Use Short Put Ladder Spread?
The Short Put Ladder Spread could be used when the underlying stock is expected to remain stagnant, rise moderately or drop significantly.
How To Use Short Put Ladder Spread?
Short Put Ladder is made up of writing an At The Money (or slightly ITM or OTM) Put Option, buying an equivalent amount of a lower strike price Out Of The Money Put Option and then buying yet another
equivalent amount of an even lower strike price out of the money put option.
Sell ATM Put + Buy OTM Put + Buy Lower Strike OTM Put
Short Put Ladder Spread Example
Assuming QQQ trading at $44.
Sell To Open 1 contract of QQQ Jan44Put, Buy To Open 1 contract of QQQ Jan42Put and Buy To Open 1 contract of QQQ Jan41Put.
Choosing Strike Prices For Short Put Ladder Spread
Short Leg
The short leg of a Short Put Ladder Spread is usually the At The Money option or a strike price that is nearest the money. This is because the primary profit of a Short Put Ladder Spread, which is
the net credit of the position made when the underlying stock remains stagnant or moves upwards, requires as high an extrinsic value as possible and At The Money options contain the highest extrinsic
value within the same expiration month.
Middle Strike Price
The closer the middle strike price is to the strike price of the short leg, the more expensive it is and the lower the resultant net credit becomes. This results in a lower profit when the underlying
stock remains stagnant or moves upwards but also a lower maximum loss. The further the middle strike price is to the short leg, the higher the net credit becomes, resulting in a higher profit when
the underlying stock remains stagnant or moves upwards and also a higher maximum loss.
The further away the middle strike price is to the short leg, the further away the downside breakeven point becomes. This means that the underlying stock would have to drop more in order to start
profiting to downside.
As such, in a Short Put Ladder Spread, options traders usually buy the middle strike price two strike prices lower than the short leg for stocks with strike prices at $1 interval or one strike price
lower for stocks with strike prices at $5 interval, in order to obtain a more balanced risk profile.
Lower Strike Price
The difference in strike price between the middle strike and the lower strike determines the price range over which maximum loss will occur for a Short Put Ladder Spread. In our QQQ example above,
maximum loss will occur when the QQQ closes between $42 (middle strike) and $41 (lower strike) by expiration. Increasing the strike difference between the lower strike and the middle strike results
in only a very small increase in net credit but pushes back the lower breakeven point even further and increases the price range over which maximum loss occurs without significant decrease in maximum
loss amount. As such, the lower strike price is usually bought one strike lower than the middle strike price in a Short Put Ladder Spread.
Trading Level Required For Short Put Ladder Spread
A Level 4 options trading account that allows the execution of credit spreads is needed for the Short Put Ladder Spread. Read more about Options Account Trading Levels.
Profit Potential of Short Put Ladder Spread
The Short Put Ladder Spread profits in all 3 directions: when the underlying stock goes upwards (strongly or moderately), remains stagnant, or goes downwards strongly. Indeed, the Short Put Ladder Spread makes 4 out of 5 possible outcomes profitable, which gives it an extremely high probability of profit.
For the stagnant and upwards movement, the Short Put Ladder Spread profits primarily through the net credit gained from writing the higher extrinsic value ATM put options and buying cheaper OTM put
options. As long as the price of the underlying stock remains above the strike price of the ATM put options, the net credit is made as profit.
When the price of the underlying stock drops dramatically, it will come to a point beyond the strike price of the lower strike put options when the two
put legs will profit more than the single short put leg, resulting in unlimited profit to downside all the way to the stock becoming $0.
Profit Calculation of Short Put Ladder Spread
Maximum Upside Profit = Net Credit
Maximum Downside Profit = Unlimited
Maximum Loss = Short Put Strike - Higher Long Put Strike - Net Credit
Short Put Ladder Spread Calculations
Following up on the above example, assuming QQQ at $46.50 at expiration.
Wrote the JAN 44 Put for $1.50
Bought the JAN 42 Put for $0.50
Bought the JAN 41 Put for $0.15
Net Credit = $1.50 - $0.50 - $0.15 = $0.85
Maximum Loss = 44 - 42 - 0.85 = $1.15
Max. Upside Profit = $0.85
Max. Downside Profit = Unlimited
Risk / Reward of Short Put Ladder Spread
Maximum Upside Profit : Limited to net credit received
Maximum Downside Profit: Unlimited (all the way to $0 stock price)
Maximum Loss: Limited
Break Even Point of Short Put Ladder Spread
There are 2 break even points to a Short Put Ladder Spread. Loss will occur if the underlying stock closes within the upper and lower breakeven point by expiration.
Upper BEP: Short Put Strike - Net Credit
Lower BEP: Lower Long Strike - Strike difference between short put and higher long put + Net Credit
Short Put Ladder Spread Breakeven Points Calculation
Upper BEP = $44 - $0.85 = $43.15
Lower BEP = $41 - ($44 - $42) + $0.85 = $39.85
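A small sketch that reproduces the numbers above for the QQQ example (short the 44 put, long the 42 and 41 puts, $0.85 net credit); the strikes and credit are the ones from the example, and the function name is just for illustration:

```python
def ladder_payoff(S, short_K=44.0, mid_K=42.0, low_K=41.0, net_credit=0.85):
    """Profit per share at expiration for the short put ladder (short 1 put, long 2 lower-strike puts)."""
    short_put = -max(short_K - S, 0.0)                       # obligation from the written put
    long_puts = max(mid_K - S, 0.0) + max(low_K - S, 0.0)    # value of the two bought puts
    return net_credit + short_put + long_puts

print(ladder_payoff(46.50))   #  0.85  -> maximum upside profit (the net credit)
print(ladder_payoff(43.15))   #  0.00  -> upper breakeven point
print(ladder_payoff(41.50))   # -1.15  -> maximum loss zone (between $41 and $42)
print(ladder_payoff(39.85))   #  0.00  -> lower breakeven point
print(ladder_payoff(0.0))     # 39.85  -> profit keeps growing as the stock falls toward $0
```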
Short Put Ladder Spread Greeks
Delta : Positive
Delta of Short Put Ladder Spread is positive at the start. As such, its value will increase as the price of the underlying stock increases. The short put ladder spread can start in delta neutral or
slightly delta negative as expiration becomes longer.
Gamma : Negative
Gamma of Short Put Ladder Spread is negative for a start and will increase delta as the price of the underlying stock decreases, resulting in a downside loss. But shortly after, the Gamma of the
Short Put Ladder Spread will begin to turn positive as the positive gamma of the two long put legs increase beyond the negative gamma of the short put leg. This will then decrease overall position
delta into the negative allowing the position to profit to downside. The short put ladder spread can start in slightly positive gamma as expiration becomes longer.
Theta : Negative
Theta of Short Put Ladder Spread is negative for a start and will therefore lose value due to time decay in the short term prior to expiration as the long put legs lose value faster than the short
put leg. However, as the short put leg contains more extrinsic value than the long put legs combined, theta will turn positive as expiration approaches, resulting in a profit even if the price of the
underlying stock remains stagnant.
Vega : Increases with Length of Expiration
Vega of Short Put Ladder Spread can start slightly negative with near term options and increase to positive as longer expiration options are used. When this is the case, the position would profit on
an increase in implied volatility, usually when the underlying stock declines.
Advantages Of Short Put Ladder Spread
:: Able to profit in 4 out of 5 possible moves in the underlying stock
:: Unlimited profit to downside
Disadvantages Of Short Put Ladder Spread
:: Small
needed as it is a credit spread.
Don't Know If This Is The Right Option Strategy For You? Try our Option Strategy Selector! | {"url":"https://www.optiontradingpedia.com/free_short_put_ladder_spread.htm","timestamp":"2024-11-03T12:29:40Z","content_type":"text/html","content_length":"33217","record_id":"<urn:uuid:c5a4ff3b-8a4d-4283-ab31-55155c38ff17>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00155.warc.gz"} |
Oh for shame. This is the absolute pits. Are our newspapers dead in the water already?
Published: August 22, 2013 at 9:40am
This is what those who voted Labour have brought down on us: a police commissioner who abuses his position and openly fights the battles of the men – Police Minister Manuel Mallia and his head of
secretariat, Silvio Scerri – who appointed him.
And this against a weak and vulnerable private citizen, with certain problems, aged 27, who has already been wrongly arrested, arraigned and held on remand when it was known that another man had
admitted to the crime.
There’s more, and worse: this man’s case is up now for hearing before the Police Board, whose members are made up entirely – except for the chairman, a retired judge – of the personal friends and
associates of Manuel Mallia and Silvio Scerri, plus one failed Labour electoral candidate.
The first controversy erupted over the arraignment of this innocent man when the guilty man was known to the police.
The second controversy occurred last Saturday. The news broke – not thanks to the investigations of the press, but through shadow police minister Jason Azzopardi, who informed the press which then
comfortably took notes – that the man who spent 14 years in prison for stabbing Richard Cachia Caruana in 1994 (he was then personal assistant to prime minister Fenech Adami) had gone in search of
this wrongly accused man, Borg, and told him to go with him to Manuel Mallia’s ministry because ‘somebody wants to speak to him’.
This man is Charles Attard, known as Iz-Zambi. Most subsequent press reports and press questions described him only as a convicted criminal, when that is hardly the key significance. A convicted
criminal can be anything – a handbag thief, for instance, or a dealer in illegal porn. The significance of Charles Iz-Zambi having been sent to fetch Borg for a meeting at Manuel Mallia’s ministry,
and having sat through the meeting there with his head of secretariat Silvio Scerri, is that he stabbed Richard Cachia Caruana and that Manuel Mallia was defence lawyer to Meinrad Calleja, in his
jury trial for commissioning Charles iz-Zambi and others to commit that very same stabbing.
THAT is what is shocking about the meeting. THAT is its significance. A meeting with any old criminal would have been really bad, yes, but with this one it is particularly significant and
particularly bad.
Now the police commissioner has come out fighting Scerri’s and Mallia’s battles. He said it was Darryl Luke Borg who contacted the ministry and not the other way round. “The phone logs show this,” he
told the press. But for heaven’s sake, what phone logs? Don’t the press even bother to ask?
It is AGAINST THE LAW for the police to obtain phone logs without the appropriate authorisation from a superior authority (the courts). It must petition the courts and the telecoms provider has every
right to object. They usually do. When the police, six years ago, petitioned for the release of mobile phone logs to see who was around our house the night it was set on fire in the early hours
(except for those who lived in the area, that is), the mobile telephony providers objected on data protection grounds, and the court upheld their objection.
So, what phone logs is the police commissioner talking about? Phone logs obtained illegally? The ministry’s own phone logs? Phone logs from all mobile telephony providers? Phone logs from Go and
Unless the police commissioner has all phone logs from all providers and has cross-checked a great variety of personal and office numbers, both mobile and fixed, for the several individuals involved,
he has no way of knowing who contacted who first.
But he can’t possibly have done this. 1. It is illegal, and 2. he doesn’t have the necessary information and he hasn’t had the time to do all that.
There is another crucial point. THIS STARTED WHEN CHARLES IZ-ZAMBI WAS SENT TO FETCH DARRYL BORG AND TAKE HIM TO THE MINISTRY. Silvio Scerri claims that he didn’t send him. This is hard to believe.
Why would Charles iz-Zambi spontaneously go to look for Darryl Borg and decide, for no particular reason, to take him to meet the minister’s head of secretariat?
Bear in mind that, had the minister himself not been away on holiday, he would probably have taken him to meet the minister, not his underling.
If a man is sent to fetch another one, this constitutes making the first approach/contact, and IT DOES NOT SHOW UP ON PHONE LOGS. OBVIOUSLY.
59 Comments Comment
1. Pure, unadulterated, STASI stuff.
2. I couldn’t stand the sight and sound of Police Commissioner Peter Paul Zammit on NET TV trying to explain basic things his own way.
He thought he was explaining to a group of year 3 primary pupils.
3. I nearly choked on my espresso this morning.
What terrified me more is the fact that Times of Malta thinks it has a really good news item rather than an extremely worrying statement.
4. This was a police press conference called in defence of the Minister of Home Affairs’ representative, Silvio Scerri. Anyone who doubts this should read the last paragraph in The Times’ report.
The only common element in the two unrelated incidents, the most recent being far more serious in nature, is Silvio Scerri.
Phone logs can only be obtained either as part of a criminal investigation or with the victim’s consent, which in this case the commissioner, to cover himself, can claim is what happened.
What the reporter should have asked is: the logs of which mobile? How many mobiles does the chief of staff have? Couldn’t he have phoned him on a landline? Knowing they were dealing with people who are not exactly straight, isn’t it obvious that the mobile would not be used and a messenger would be sent instead?
Sometimes they really do take people for fools.
Of course he is going to jump to the chief of staff’s defence, when the commissioner was made commissioner by that very same ministry.
The opposition needs to wake up a bit on these matters.
7. It could well be that Borg phoned the ministry. What does that prove? He could have been advised by iz-Zambi to make contact, or by someone else, for that matter.
The police commissioner should never have gone public, especially at this early stage. He needs some lessons in best practice.
When Tonio Fenech, prior to the election, gave details relating to a meeting with the commissioner, the latter refused to elaborate when pressed to do so by some journalists.
8. Do the police also arrest people for stealing water, by filling it in jerry cans, from public fountains?
9. I’m taking bets on the PM’s response:
1/2: ‘So wasn’t there somebody who went to meet Zeppi l-Hafi?’
1/3: ‘Perhaps he has some good qualities and others which are less good.’
1/1000: ‘This situation is truly worrying and I will set up an independent inquiry and suspend the Minister from his duties until the report is presented to the House of Representatives and there is an open and free discussion.’
Of course, this latest story is hardly surprising when in his capacity of Leader of the Opposition, in his capacity as Prime Minister, and again in his capacity of Leader of the Opposition,
Alfred Sant spoke to the press practically as though he were Meinrad Calleja’s defence lawyer…with PM Poodle as his chief cheerleader.
But everybody will be as oblivious to the horrific danger this poses as they have been for the past 20 years.
In Malta, it’s OK to collude with murderers for political gain as long as you’re not Nationalist.
□ Sorry, I’ll reply to myself before this is even moderated (and sorry for typos) – this is a bit of a pet peeve…
Joseph Muscat was instrumental in doing the following:
Richard Cachia Caruana was Eddie Fenech Adami’s closest confidant, yet the MLP managed to make Zeppi l-Hafi the issue.
This meant that it was impossible for the state to secure a conviction because more than 50% of the population did not believe a word Zeppi l-Hafi said. This was reflected in the jury’s verdict.
The state would not have granted Zeppi l-Hafi a pardon if several people were not convinced that this was necessary. Prime Minister Eddie Fenech Adami was involved, but this wasn’t his
decision alone. The Attorney General and the Police Commissioner (the latter is, of course, a completely different species to the current holder of the post) also played a part, and this is a
matter of public record.
Meinrad Calleja, who went on trial for attempting to murder the prime minister’s personal assistant, had the (unofficial) help of the Labour press. He did not get away with it completely. He
was sentenced to 15 years in prison for cocaine trafficking.
Richard Cachia Caruana, a human being (contrary to popular belief) who suffered a severe physical and emotional trauma (he was stabbed in the back, at night, outside his home, FFS), did not
see justice done in his case.
Alfred Sant gained brownie points with the thick as pigs*!t electorate.
Eddie Fenech Adami lost points.
Joseph Muscat became Prime Minister. Well done, Malta.
10. But for heaven’s sake what do you expect from SUCH a Police Commissioner? Echoes of Pullicino are heard in the hills.
11. And how about the attacks by the Labour-leaning newspapers on the police inspector who brought the *real* perpetrator to justice?
12. Inspector Taliana’s taking Maltatoday to court, the lies and blatant twisting of facts sublime, and not a word anywhere either.
Prove d’orchestra for the future, human rights abuse, cover-ups and arbitrary power. Thanks to a press which has been silenced or bought. A roster on TVM it is then.
Your article today nailed it, may I add that it will depend on the integrity of the so called opinion makers, journalists and ‘intellectuals’ to speak out, show some mettle. If they even have
what it takes, mentally, culturally and spiritually.
At the moment, all I see, is one woman with a blog.
□ It’s not objective to say that the PN can counterbalance at the moment, not when the hostility to its legacy regarding anything with a semblance of civil behaviour can be seen dripping from
their websites.
Maltatoday will insist on commentary to some boxing match, The Times unleash Sansone’s camouflaged penchant for social justice.
And everything takes on Muscat’s flavour for comparison, relativistic takes on fundamental values, what matters is whether we’re enamoured of this lie of a movement or not. The weak just
As if a country can give up itself to an ideology and not degenerate into a miniature Pyongyang.
Meantime, Net News doubled its viewership. The joys of a market economy.
□ http://www.maltarightnow.com/?module=news&at=L%2DIspettur+Elton+Taliana+wara+it%2DTor%26%23267%3Ba%2C+issa+b%E2%80%99libell+kontra+Malta+Today&t=a&aid=99850036&cid=39
Presumably were the inspector a protected species of bird, he’d get better treatment. Or perhaps the inspector was involved in other investigations, leading him to exotic isles in the
13. At the rate we’re going we’ll soon be another China in the Med.
For heaven’s sake where are our journalists? Oh probably they are only reporters so they don’t ask questions.
They only feel safe when hounding a PN minister as they know that the PN is a party which really believes in democracy.
They went after Tonio Fenech for a measly clock. But no, something like this and they’re mum.
Maltese “journalists”, what are you afraid of? The wrath of your employer, the minister, the chief of staff?
I have shunned the Times of Malta, never read the pro labour press and read The Independent.
To chant the Labour mantra during the election campaign – Shame on you.
14. Sadly, most will dismiss your story as “mud-slinging” and the press would never pick on this post to investigate further and ask questions.
Another nail in the coffin of Maltese journalism.
15. I only managed to stand a couple of minutes listening to the Police Commissioner on ‘Iswed fuq l-Abjad’ or whatever the programme is called. My intelligence couldn’t stand more insults.
16. Even if what he alleges is true – that Mr Borg called first – it does not necessarily mean that it was Mr Borg who made first contact.
Could it be that Mr Borg was called through iz-Zambi, and Mr Borg in turn called to verify the authenticity of iz-Zambi’s visit? I would certainly have done so if iz-Zambi turned up on my door
step and insisted I go with him. After all, given the circumstances, I would not call the Police for assistance.
17. Now why did many of my friends not believe me, when before the elections I warned them that this lot, if elected, will take us back to the bad old times because they have the very same attitude.
They wanted a f*cking change. Now take what’s coming to you, and the rest of us along with you.
18. It looks like a very serious case, and one that really makes you think.
The commissioner has a lot to answer for. I have now lost all faith in him.
20. On a different note, more details for the taghna lkoll appointments.
Social Security Act (Cap 318)
Drs Joe Brincat and Sharon Brincat Ruffini are husband and wife
Dr Joshua Chetcuti is Dr Paul Chetcuti Caruana’s son (ex mayor of Mosta)
Dr Jan Chircop is the son of the late PL MP Karl Chircop
Dr Silvio Grixti is a failed PL candidate in last election
Dr Vincent Moran
Dr Anglu Psaila
Dr Adrian Vassallo
Dr Joseph Zarb Adami, a known Labour supporter
Prof Anthony Zammit
Dr Daryl Xuereb, was going to contest the elections on the PL ticket but suddenly decided not to (he did give a speech at one of their general conferences)
Dr Jurgen Abela is brother of Dr Gunther Abela, who was appointed to the Malta Red Cross Society Management Committee in No458
Govt Formulary List Advisory Appeals Committee: Dr Sandra Gauci is Marlene Farrugia’s sister
21. The newspapers got it wrong too: they did not ask exactly when and at what time this phone call that was referred to was made.
A phone call indicates nothing, because somebody could have given Borg a number to call.
But I agree with you that many newspapers publish reports with a lot of unanswered questions.
22. http://www.timesofmalta.com/articles/view/20130822/local/man-arrested-over-theft-of-water.483095
I thought this had something to do with the jerrycans being filled up with water from the fountain of Pjazza San Gorg, on a regular basis.
23. Can you imagine what the press corps in England, the US or Canada would have made of this? But the reporters in Malta happily just scribble their notes and regurgitate what they’ve been told.
Don’t they have any pride in their profession?
24. I was disgusted reading the report on The Times yesterday. How is it possible that the Police Commissioner can get away with this much?
Have all the phone logs been analysed? Personal mobile phones, work mobile phones, home landlines, telephone boxes, unregistered mobile phone sim cards, Skype, Viber, Google?
25. Charles iz-Zambi could easily have told Darryl Borg to phone the ministry and, knowing that Borg would comply, the police could have “fortuitously” been keeping records of such phone-calls.
Are we going back to a police force resorting to frame-ups?
26. Unfortunately NET TV gave the Police Commissioner a free ride to say whatever he wants without being cornered with pertinent questions to bring him to account.
□ Yes, it is a real shame. Why were the tough questions asked by NET after the programme through an email? Why didn’t the presenter ask the really important questions there and then, while he
had an audience, and while the police commissioner was on TV with him? NET (and the Nationalist Party) totally lost the opportunity.
27. Unless clarified by the Police Commissioner, this is (a) illegal, (b) abuse of power (c) infringement of data protection.
28. Our newspapers dead? Just have a look at the editorial in Times of Malta (15 August) which dealt with the issue of public procurement.
It seems to be a press release straight from the Government’s DOI. The newspaper got the facts wrong.
With all the grandiose talk of simplification of administrative procedures, this government has decided to prevent self-employed from being engaged by bigger contractors as subcontractors for
public tenders.
Instead of carrying out surgical operations to address abuses in employment conditions (which it should do with all its strength and resources) the government is carpet-bombing all the
The option being offered by the government for self-employed is to either enter into a consortium with bigger players or else be employed as part-timers by such players.
Both options mean forfeiting the essence of what the term “self-employed” means.
Maybe we should start calling this process the “collectivization of self-employed”. Now that is a more familiar term with some members of this government, and certainly with Mario Vella at Malta
And then you have Times of Malta supporting this initiative, which it has not even begun to understand.
29. This is beginning to sound like a frame-up.
30. I am either going mad or else everyone connected with Mallia’s ministry is. For the sake of Malta I sincerely hope that it is the former.
31. Whilst on the subject of ‘water’, did the newspapers or journalists not pick up on this one as well. A man arrested for stealing water from a public latrine and filling up a 1000 litre tank on
the back of his truck. If one had to fill up 20 litre jerry cans on a push chair is this not also a crime?
32. Dear Ms Caruana Galizia,
I follow your blog with assiduity. Unfortunately, whenever I read a new story I end up asking myself the same question over and over again, ie, “Where the f*** is the PN?”. You are doing the job
of a whole political party. Thank you
33. The reason why journalists did not ask the pertinent questions can only be one of two: either they are incompetent in their work, or they are afraid to stand up and be counted.
I want to add that Dr. Jason Azzopardi seems to be doing much more than the leadership trio of the P.N., both in the House and outside it. He is to be commended for that.
34. Yes, Daphne, our newspapers are dead in the water. It seems that the journalists from (The) Times of Malta and Maltastar sister and ally Maltatoday have nothing to report since last March 8.
And to think of the storm created in a tea cup by these two newspapers about an Arlogg tal-Lira! Shame on them. By their stance in failing to dig deeper into serious investigation with regards to
important issues such as this, they have become accomplices of josephmuscat.com and his propaganda machine.
35. In addition to all of the above, the point is not who called whom for the meeting but the fact that Scerri spoke to the witness at all and in the presence of Iz-Zambi.
Scerri should certainly resign or be fired. If either of these events does not occur then responsibility moves up a level and keeps doing so until it reaches the Prime Minister.
The silence of the Prime Minister on all such issues has already caused the loss of his credibility after such a short time.
□ Scerri should have been fired previously when he ordered the police to arrest a security chief who was only doing his job.
36. Not only are the newspapers dead, but also the Opposition.
□ I agree. I am sick of getting Facebook updates from Simon Busuttil about practically each and every festa in Malta and Gozo. – and yet, he rarely, if ever, writes anything political “ta’
☆ Why not reply to him there?
37. On top of it all, the police inspector who solved that crime promptly and efficiently, and who was instrumental in having the wrongly accused person released by a court order, finds himself hounded by
the LP leaning press and libelled so severely that he has to institute court cases to defend his good name.
Shameful return to the police methods rampant in the days of Mintoff.
□ It was a farce under the PN and the PL are perfecting it.
☆ Dear Paul Bonnici
Will you please complete the picture and give us also your candid opinion about the Malta Police Force/Farce in its previous existence during its MLP days under Mintoff and KMB?
According to my own very close experience it was terrible, not farcical at all, and forceful only in protecting evil.
Moreover survivors from that period would be recognising already, with horror, an abhorrent resurgence of the similar techniques.
40. http://www.maltastar.com/dart/20130822-labour-party-names-mep-election-candidates
A really cute tweet about a ‘batch’ of candidates.
And that picture of the party HQ facade is out of date. This is the Labour party’s own website, and yet it hasn’t noticed that the party HQ facade has been upgraded with a victory balcony and the
red ‘cage’ has gone.
41. How right you were in your contribution to the Malta Independent:
Fear seems to have taken over our media. The way government ‘spokesmen’ immediately react to any news with a rebuttal is their way of neutralising every piece of news that in some way could harm
the administration.
And it’s working. Why publish or investigate a story at all if it’s going to be shot down the next minute? This must be on every journalist’s mind at the moment.
43. Back to Mintoff and KMB years.
It’s just unbelievable that we’re going back to those days.
Where is the PN? Where is the leader? If you can call him that. Nobody said anything except Jason Azzopardi.
44. This is the sort of story that sends a shiver down the spine. The implications are profoundly serious.
45. Where is Franco Debono? Watching Silence of the Lambs?
46. The last episode shown on NET ‘Iswed fuq l-Abjad’ revealed the inadequacy of our Police Commissioner in handling his huge responsibilities responsibly.
He was just good at diverting the viewers’ attention to the peripheral aspects of the string of questions he faced.
It was pretty obvious that Peter Paul Zammit is not up to his job because his political affiliations with PL prevail over serious ethical issues regarding a series of episodes that should be
treated more seriously by the Police Commissioner.
The more time we spend under PL the more it is becoming obvious that citizens should be really worried by the incompetence of Malta’s present government and the handpicked individuals who have
been assigned top roles.
It is sad to note that the PN leadership is practically ‘inexistent’. Hey, what’s happening at Tal-Pieta’?
47. The reins of power in Malta are held by ‘pimps, thieves and scoundrels’.
48. I am morally convinced that both Mr.Scerri and the Police Commissioner are not telling the truth. | {"url":"https://daphnecaruanagalizia.com/2013/08/oh-for-shame-this-is-the-absolute-pits-are-our-newspapers-dead-in-the-water-already/","timestamp":"2024-11-09T20:20:38Z","content_type":"application/xhtml+xml","content_length":"111665","record_id":"<urn:uuid:204e6534-6ca4-48a8-9708-ae804547491a>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00785.warc.gz"} |
Linear algebra
[Features] Executes various linear algebra calculations.
[Format] MATRIX "Array1= eigen(Array2)"
How to write it:
matrix "VariableName = Linear algebra formula"
The right-hand side is one of the function names available only inside 'MATRIX',
with the array variable holding the source data given in parentheses.
For the four arithmetic operations the format is
"Array-variable1 Operator Array-variable2"
where the variable names are numeric array variables written without parentheses.
If the left-hand side is an ordinary numeric variable rather than an array,
the result is a scalar: the determinant or the rank.
The array variables to be used must be declared in advance at the beginning of the program.
If the size of the array receiving the result differs from the required size, it is resized automatically.
(If the array variable and the 'MATRIX' command execution are in different scopes,
resizing is impossible and an error is raised.)
* Complex numbers are not supported as input or output values at this stage.
Matrix data are supplied through array variables.
(Think of an array variable as a matrix.)
If the column count is [x] and the row count is [y], declare it as [ dim dv1(y,x) ].
If you declare [ dim dv1(2) ] as a 'Basic' array variable,
3 elements indexed [0-2] are allocated,
which is one larger than the desired size.
option base -1 (Please see the manual entry [OPTION BASE] for details.)
Writing this command makes the declaration allocate 2 elements indexed [0-1],
the same numbering as in general-purpose languages such as C or Java.
When dealing with linear algebra,
be sure to write [option base -1] at the beginning of the program.
If it is omitted, correct calculation results will not be obtained.
About errors in linear algebra:
If an uncalculable case occurs during a linear algebra calculation,
the program is not interrupted; an error code is only returned through the 'ERR' function.
Please check whether the calculation succeeded or failed by inspecting the 'ERR' function.
The 'ERR' function is reset to 0 at the beginning of each 'MATRIX' command execution.
Linear algebra related errors.
error 69:"Matrix error, uncalculable or cofactor 0"
error 70:"Array size of matrix does not fit"
Four arithmetic operations of Linear algebra.
10 option base-1
20 dim dv1(2,2),dv2(2,2),dv3(2,2)
30 dv2(0,0)=4:dv2(0,1)=-2:dv2(1,0)=1:dv2(1,1)=1
40 dv3(0,0)=1:dv3(0,1)=2:dv3(1,0)=3:dv3(1,1)=4
50 matrix "dv1=dv2+dv3"
60 for y=0 to 1 : for x=0 to 1
70 print dv1(y,x);
80 next : print : next
Assuming everything defined up to line 40 stays the same, the examples below are explained
by changing only the 'MATRIX' parameter on line 50.
(The line(s) following each 'matrix' command show the result.)
Addition, subtraction: the two arrays must have the same size.
Multiplication: the column count of the left array must equal the row count of the right array.
Division: the two arrays must be square matrices of the same size.
matrix "dv1=dv2+dv3"
matrix "dv1=dv2-dv3"
3 -4
-2 -3
matrix "dv1=dv2*dv3"
-2 0
4 6
matrix "dv1=dv2/dv3"
-7 5
5.5 -3.5
Strictly speaking there is no division in linear algebra;
the command returns the result of multiplying dv2 from the left
by the inverse of the divisor matrix dv3, i.e. inv(dv3)*dv2.
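For readers who want to cross-check these results outside the BASIC interpreter, here is a minimal numpy sketch (not part of the original manual; note that the "division" follows the manual's convention of multiplying by inv(dv3) from the left):

import numpy as np

dv2 = np.array([[4.0, -2.0], [1.0, 1.0]])
dv3 = np.array([[1.0, 2.0], [3.0, 4.0]])

print(dv2 + dv3)                 # [[ 5.  0.] [ 4.  5.]]
print(dv2 - dv3)                 # [[ 3. -4.] [-2. -3.]]
print(dv2 @ dv3)                 # [[-2.  0.] [ 4.  6.]]
print(np.linalg.inv(dv3) @ dv2)  # [[-7.   5. ] [ 5.5 -3.5]]  (the manual's "dv2/dv3")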
-- Built-in functions for exclusive use inside the 'MATRIX' command --
e.g. for Eigenvalue
10 option base-1
20 dim dv1(2,2),dv2(2,2)
30 dv2(0,0)=4:dv2(0,1)=-2:dv2(1,0)=1:dv2(1,1)=1
40 matrix "dv1=eigen(dv2)"
50 for y=0 to 1 : for x=0 to 1
60 print dv1(y,x);
70 next : print : next
Assuming everything defined up to line 30 stays the same, the examples below are explained
by changing only the 'MATRIX' parameter on line 40.
(The line(s) following each 'matrix' command show the result.)
[ DET ]
To calculate Determinant from given Matrix.
matrix "var=det(dv2)"
The left side specifies the usual numeric variable, not an array.
The array(dv2) that give the data must be a square matrix.
[ INV ]
To calculate Inverse from given Matrix.
matrix "dv1=inv(dv2)"
0.1666 0.333
-0.1666 0.666
The array(dv1) to substitute must be a square matrix
of the same size as the array(dv2) giving data.
If the determinant of the given matrix is 0, the inverse matrix does not exist,
so ['ERR' function=69] is set and nothing is calculated.
(The program is not interrupted.)
The other operations that use the inverse matrix internally
(division, simultaneous equations) likewise
set ['ERR' function=69] when the determinant becomes 0.
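A quick cross-check of the inverse shown above, as a numpy sketch (an illustration, not part of the manual):

import numpy as np

dv2 = np.array([[4.0, -2.0], [1.0, 1.0]])
print(np.linalg.det(dv2))   # approximately 6.0, so the inverse exists
print(np.linalg.inv(dv2))   # [[ 0.1667  0.3333] [-0.1667  0.6667]], matching the 0.1666/0.333 output above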
[ EQUAT ]
To find the solution of simultaneous equations with a matrix.
e.g. for Simultaneous equations.
Data to prepare(2x3)
Program to find the solution of simultaneous equations of matrix with 'EQUAT' function.
10 option base -1
20 dim dv1(2,1),dv2(2,3)
30 dv2(0,0)=1:dv2(0,1)=2:dv2(0,2)=4
40 dv2(1,0)=3:dv2(1,1)=5:dv2(1,2)=9
50 matrix "dv1=equat(dv2)"
60 for y=0 to 1
70 print dv1(y,0);
80 next
Output solution
*If the rank of the matrix is less than the number of rows, the solution may not be found.
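The example encodes the simultaneous equations x + 2y = 4 and 3x + 5y = 9; the result can be checked with numpy (a sketch, not part of the manual):

import numpy as np

A = np.array([[1.0, 2.0], [3.0, 5.0]])   # coefficient part of dv2
b = np.array([4.0, 9.0])                 # rightmost column of dv2
print(np.linalg.solve(A, b))             # [-2.  3.], i.e. x = -2 and y = 3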
When the right side of the expression is all 0.
Data in an array.
1 -3 0
2 -6 0
The mutual ratio between the unknowns.
If the right-hand side is all 0,
the system is either inconsistent or indeterminate,
so a unique solution cannot be found.
But if it is indeterminate, it is still possible to calculate
[the mutual ratio between the unknowns],
so in this case that ratio is returned (as integers).
In this example,
substituting 9 for x and 3 for y
gives 0, the same as the right-hand side,
which shows that the result is correct.
If even this ratio cannot be calculated,
['ERR' function=69] is set and the calculation is not performed.
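For the all-zero right-hand-side data above (x - 3y = 0 and 2x - 6y = 0), the ratio between the unknowns is a null-space direction of the coefficient matrix. A small numpy sketch of that idea (an illustration only; the BASIC routine's internal method is not documented here):

import numpy as np

A = np.array([[1.0, -3.0], [2.0, -6.0]])
_, _, vh = np.linalg.svd(A)
direction = vh[-1]                                                 # right-singular vector for the smallest singular value
direction = direction / direction[np.argmax(np.abs(direction))]   # scale so the largest entry is 1
print(direction)                                                   # [1.  0.3333...], i.e. x : y = 3 : 1 (x=9, y=3 satisfies both equations)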
About the size of the arrays used:
The array (dv2) that supplies the equation data
must be prepared with one more column than a square matrix;
the rightmost column holds the numbers on the right-hand side of the equations.
The array that receives the solution is sized
[column count = 1, row count = same as the data array],
so the solutions end up listed vertically in a single column.
[ EIGEN ]
To calculate Eigenvalue from given Matrix.
matrix "dv1=eigen(dv2)"
The eigenvalues are returned arranged along the diagonal of the array, as in the example.
Because complex numbers are not supported,
an irregular value is returned if an eigenvalue is complex.
(A check for whether the solution is complex has not been implemented yet.)
The array(dv1) to substitute must be a square matrix
of the same size as the array(dv2) giving data.
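For the sample matrix dv2 used throughout this page, the eigenvalues can be verified with numpy (a sketch, not part of the manual; the ordering of the values may differ):

import numpy as np

dv2 = np.array([[4.0, -2.0], [1.0, 1.0]])
print(np.linalg.eigvals(dv2))   # eigenvalues 3 and 2, so EIGEN should place 3 and 2 on the diagonal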
[ EVEC ]
To calculate Eigenvector from given Matrix.
matrix "dv1=evec(dv2)"
The format of the returned array is as follows (vertical arrangement):
the eigenvector of the 0th eigenvalue is in the 0th column,
the eigenvector of the 1st eigenvalue is in the 1st column, and so on.
The array(dv1) to substitute must be a square matrix
of the same size as the array(dv2) giving data.
[ TRANS ]
To return Transpose matrix from given Matrix.
matrix "dv1=trans(dv2)"
-2 1
The array(dv1) to substitute must be a square matrix
of the same size as the array(dv2) giving data.
[ RANK ]
To return Rank from given Matrix.
matrix "var=rank(dv2)"
The left side specifies the usual numeric variable, not an array.
[ UNIT ]
To return Unit matrix of specified number.
matrix "dv1=unit(2)"
To specify the square size in a number of the function.
[ INTCV ]
The data in the array is converted to integer form.
Format: intcv(ArrayName,ProcessingLine,MaxInspectionValue,DetermineDigits)
e.g.: matrix "dv1=intcv(dv2,-1,100,0.00001)"
It examines whether the values in the given array can be expressed as a ratio of integers.
If so, each value is converted to the corresponding integer-ratio data and returned.
If the values cannot be converted to integers, the array of original values is returned as is.
('ERR' function=69, no interruption)
The range to be converted can be either the whole array or one specified line (row).
A typical use case: a matrix exercise written by hand so that the solution consists of integers;
this function tries to bring the computed values back to that integer form.
Left side:
The array (dv1) that receives the result.
-Parameters of the function-
ArrayName: the array (dv2) whose values are to be converted to integers.
ProcessingLine: (counted from 0)
Specifies which line of the array is converted to integers.
If -1 is specified, the whole array is targeted.
MaxInspectionValue:
The search checks which integer the minimum value, taken as the reference, actually corresponds to;
this parameter sets the upper limit of that search.
If the search reaches the upper limit with no prospect of an integer form,
integer conversion is judged to be impossible.
('ERR' function=69, no interruption)
The larger this value, the more likely an integer form will be found,
but the processing load also becomes larger.
DetermineDigits:
The more decimal digits (i.e. the smaller the number),
the stricter the criterion for judging that a value has reached an integer form;
fewer digits (a bigger number) makes the judgement rougher.
If you are unsure what value to use, take
[ intcv(ArrayName,-1,100,0.00001) ] as in the example.
The array(dv1) to substitute must be a square matrix
of the same size as the array(dv2) giving data.
[ SETTING ]
This function performs configuration, not calculation.
matrix "var=setting(ret,0)"
Various settings for the linear algebra functions.
Write the type of setting as the first parameter of the function.
(This word does not need to be enclosed in double quotation marks.)
-First parameter-
ret: sets whether eigenvectors are returned as decimals or as integers.
-Second parameter-
0: Return the values as normalized decimals.
1: Return the values as integers. (default)
The left-hand side is a dummy variable with no meaning;
please specify a normal numeric variable, not an array.
-Settings that control the number of iterations until convergence and the judgement digits
used when calculating eigenvalues and eigenvectors-
digit: sets the number of digits used to judge whether a decimal has converged to an integer.
For example, with 'setting(digit,3)' [default=3],
the eigenvector value '2.000567' has three consecutive zeros after the decimal point,
so it is judged to have converged to the integer '2'.
The larger the number, the stricter the judgement
(the 'rep' count below then also needs to be large).
rep: the number of iterations used when finding eigenvalues by QR decomposition.
[default] setting(rep,3200)
The larger the number of iterations, the more accurate the values obtained.
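The manual does not show how the interpreter implements this internally; for reference, the textbook (unshifted) QR iteration that the 'rep' setting refers to looks roughly like this in Python/numpy (a sketch under that assumption):

import numpy as np

def qr_eigenvalues(A, rep=3200):
    # Unshifted QR iteration: A_{k+1} = R_k @ Q_k tends (for many matrices)
    # towards an upper-triangular matrix whose diagonal holds the eigenvalues.
    A = np.array(A, dtype=float)
    for _ in range(rep):
        Q, R = np.linalg.qr(A)
        A = R @ Q
    return np.diag(A)

print(qr_eigenvalues([[4.0, -2.0], [1.0, 1.0]]))   # approximately [3. 2.]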
How to use the sample "linear.bas"
When executed, the command selection screen appears.
Enter the number.
1.input 2.save 3.load 4.calc
First, enter the data with '1.input'
To enter the size of the row and column of the matrix.
size x? (Enter columns count)
size y? (Enter rows count)
Let's enter the size of '2x2' as an example.
(For equation data, increase the column count by 1.)
Next enter the data.
4 -2
Let's enter this matrix data.
The columns number are counted from the right side.
0,0 ? 4
0,1 ? -2
1,0 ? 1
1,1 ? 1
Let's save the input data. (Make a note of the matrix size.)
Please select [ 2.save ]
Enter the save number; here we try saving to number 1.
Press [Enter] and the data is saved.
Let's actually see the calculation result.
Please select [ 4.calc ]
The matrix data and the calculation results are displayed in this order.
Tap a key to display the next item.
Finally, the loading procedure is explained.
Please select [ 3.load ]
Enter the load number; here we try to load the No.1 data saved earlier.
You will then need to input the matrix size.
Enter the size you noted earlier.
size x? 2 (Enter columns count)
size y? 2 (Enter rows count)
It is now loaded.
Various calculation results can now be displayed
by choosing [4.calc] and following the procedure described earlier. | {"url":"http://androidbasic.ninja-web.net/man/ac_26matrix.html","timestamp":"2024-11-06T02:33:35Z","content_type":"text/html","content_length":"20055","record_id":"<urn:uuid:b041e0e7-784e-46cf-9c12-f4b201b1f9b1>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00461.warc.gz"}
How many cats were there in Catattackya and how many rats did each kill?
In a single week, each cat in the rat infested village of Catattackya killed the same number of rats as every other cat.
The total number of rat fatalities during the week came to 299.
Less than 20 cats achieved this remarkable feat.
How many cats were there in Catattackya and how many rats did each kill? | {"url":"https://www.queryhome.com/puzzle/33537/many-cats-were-there-catattackya-and-many-rats-did-each-kill","timestamp":"2024-11-13T11:04:57Z","content_type":"text/html","content_length":"110000","record_id":"<urn:uuid:dd6159b3-26af-40b0-b240-81cd0b03063c>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00714.warc.gz"} |
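The page poses the puzzle without its answer; for reference, since every cat killed the same whole number of rats, the answer follows from factorising 299 = 13 × 23. With fewer than 20 cats (and assuming more than one cat), there must have been 13 cats, each killing 23 rats. A brute-force check in Python:

total = 299
for cats in range(2, 20):
    if total % cats == 0:
        print(cats, "cats, each killing", total // cats, "rats")   # prints: 13 cats, each killing 23 rats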
Chebyshev's Inequality
(Probability theory:)
The Chebyshev inequality gives us a bound on the deviation of a random variable from its mean, expressed in terms of its standard deviation (or variance).
Lemma. Suppose X is a random variable with finite variance σ^2=Var(X)<∞ (σ is the standard deviation of X) and expectation (mean) μ=EX. Then for all t>0,
P(|X-μ|≥t) ≤ σ^2/t^2
The proof is a trick application of the Markov inequality, to a specially-constructed random variable:
Proof. Let Y=(X-μ)^2 ≥ 0 be another random variable. Then Y has expectation EY = E(X-μ)^2 = Var(X) = σ^2. This means we can apply the Markov inequality to Y:
P(Y ≥ s^2 EY) = P(Y ≥ (sσ)^2) ≤ 1/s^2
But Y ≥ (sσ)^2 iff |X-μ| ≥ sσ; taking s = t/σ yields Chebyshev's inequality. | {"url":"https://m.everything2.com/title/Chebyshev%2527s+inequality","timestamp":"2024-11-14T21:29:21Z","content_type":"text/html","content_length":"28044","record_id":"<urn:uuid:00271150-1443-4e0e-b301-b80d9e8ff775>","cc-path":"CC-MAIN-2024-46/segments/1730477395538.95/warc/CC-MAIN-20241114194152-20241114224152-00876.warc.gz"}
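A quick numerical illustration of the bound (not part of the original writeup): estimate P(|X-μ| ≥ t) by simulation for an exponential random variable with mean 1 and variance 1, and compare with σ²/t².

import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(scale=1.0, size=1_000_000)   # mean 1, variance 1
mu, sigma2 = 1.0, 1.0
for t in (1.5, 2.0, 3.0):
    empirical = np.mean(np.abs(x - mu) >= t)
    print(t, empirical, "<=", sigma2 / t**2)     # the empirical tail probability never exceeds the Chebyshev bound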
Algebra 2 Worksheets
Our Algebra 2 worksheets are designed to show you how math works. Visually. We focus on plotting, visualizing, and graphing algebraic functions. So you can literally see the results.
Interestingly, a small set of algebraic functions can be used to describe the shapes of a surprisingly large number of real-world objects and happenings. This key idea allows us ultimately to program
a computer with algebra to create models of the real world. The creation of mathematical models, commonly known as modeling, is perhaps the single most important of the
algebra 2 topics
So, relax. Our Algebra 2 worksheets aren’t going to delve into the annoying, nitty-gritty details of doing complicated, multi-page calculations the old school way using paper and pencil.
These days, if you can understand conceptually the fundamentals of algebra, it’s pretty easy to use a simple web app to do all the calculations and heavy lifting. | {"url":"https://learnwithdrscott.com/algebra-2-worksheets/","timestamp":"2024-11-06T17:26:39Z","content_type":"text/html","content_length":"149899","record_id":"<urn:uuid:71deebde-b89f-40d7-ace3-d3ea2c1ed502>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00061.warc.gz"} |
Pi-hole not blocking all ads
Hi, TrueNAS newbie here.
I recently installed the pi-hole app (version 2.0.14) and followed this guide step by step. Additionally, in the pi-hole web interface I selected “allow only local requests”, and I ran ipconfig /
renew from a Windows 11 machine command prompt on my network.
From that same Windows machine I then opened Firefox, disabled uBlock Origin, and visited speedtest.net. But I’m still seeing ads. I’m seeing activity on the web interface.
What am I doing wrong?
How is this related to TrueNAS?
I think you should use the pihole forums.
It’s the app within TrueNAS. I thought I used the correct tags for this to be relevant here. I’ll try their forum.
You may need to select the most appropriate blocking lists for your location.
1 Like
Even if you install a lot of lists (see firebog for a set of lists, among other places) some ads will still get through.
You may also have to add some web sites to your pi-hole whitelist so the queen bee can still go shopping.
I suggest maintaining multiple, identical pi-hole DNS servers in case you need to take one down for maintenance / updates. You can typically config up to three per gateway / router in the DHCP
settings. Good luck.
2 Likes
Even if you had the wrong blocklists installed for your locale, you’d still see queries being run through Pi-Hole. It’s showing 11 clients connected, but barely any traffic. So maybe your router
isn’t using the Pi-Hole for DNS resolution? I’d check the video you linked at around the four-minute mark.
But also it’s not a good idea to use a machine that depends on network connectivity to function (your NAS) for network resolution.
1 Like
Sure, but I would not ask for WhatsApp Support in a Windows or iPhone forum, just because WhatsApp runs on Windows or iOS.
I am not trying to be snarky, it is just one of the most basic questions, asked without any details. You will find tons of posts like yours in the pihole forum, and also what the solution(s) for that problem are.
Two things.
A: pihole is a DNS blocker. It can block traffic by not resolving some DNS requests. It can't block
YouTube ads, or ads that a website serves from its own server or from some server that is not on the blocklist.
B: Make sure your devices are using pihole as a DNS server. No iCloud Private relay, no DoH, no hardcoded 8.8.8.8 DNS. Do you have 11 devices on your network?
2 Likes
I’m considering adding a pi hole to my NAS as a third DNS resolver, for redundancy without making the network dependent on it. The more piholes you add, the more resilient the network will be but
unless they talk to each other and sync changes (and there is a package for that), making changes in triplicate does get old.
1 Like
I’m incredibly new to networking, so sorry if I’m misunderstanding and asking the wrong questions. But you’re saying adding a raspberrypi pi-hole would be good as a redundant addition to pi-hole on
my NAS?
Also, pi-hole seems to be blocking ads correctly now and my traffic has increased a lot since I first posted this.
I understand what you mean, and I didn’t think you were being snarky.
As I mentioned in another reply, I am very new to almost everything networking related and figuring things out as I go (Link Aggregation was a pain) but I’m slowly getting things running correctly.
As of now pi-hole is showing 14 devices (most likely these are smart home devices) and is blocking all ads on speedtest.net. I didn’t change anything so maybe it just needed time? Idk haha
edit: I’m going to mark this reply as the solution since everything seems to be working correctly now.
1 Like | {"url":"https://forums.truenas.com/t/pi-hole-not-block-all-ads/15149","timestamp":"2024-11-04T05:41:18Z","content_type":"text/html","content_length":"37237","record_id":"<urn:uuid:2be02527-cc38-46ef-9ecd-1f882741d471>","cc-path":"CC-MAIN-2024-46/segments/1730477027812.67/warc/CC-MAIN-20241104034319-20241104064319-00687.warc.gz"} |
Fraction-To-Decimal Calculator For Math Exam | MyCalcu
Use An Fraction-To-Decimal Calculator To Prepare For Math Exams
Do you struggle in math class with fractions and decimals? Are you concerned about how you'll fare on your upcoming math test? Don't be concerned! An online fraction-to-decimal calculator may be the
answer you're looking for to improve your math skills and ace your exam. We'll show you how to use an online fraction-to-a-decimal calculator to study for math exams in this blog post. We'll teach
you everything you need to know about using a calculator, from the fundamentals to solving fraction and decimal math problems. We'll also share some lesser-known calculator tips and tricks to help
you improve your math skills and increase your chances of passing the exam. So, if you're ready to advance your math skills and ace that exam, join us as we investigate the power of an online
fraction-to-decimal calculator.
What Is A Fraction To Decimal Calculator?
You can convert fractions to decimals and vice versa by using a tool called a Fraction to Decimal Calculator. This is a very useful tool. It is a straightforward and time-saving method for
carrying out these conversions, which is especially useful when working with complicated or lengthy numbers. In most cases, the user of the calculator is required to input both the numerator and the
denominator of the fraction, after which the calculator will convert the fraction to a decimal automatically.
There are a variety of calculators that can convert fractions to decimals that can be found on the internet. Some of these calculators also have additional features, such as the ability to simplify
fractions, locate the greatest common denominator, and convert decimals to other formats, such as percentages and ratios. You can also input mixed numbers into some calculators, and those numbers
will be converted to decimals automatically.
It is important to note that although a fraction-to-decimal calculator can provide quick and easy conversions, it is always recommended that you understand how to perform these conversions manually
as well. This can help you to have a better understanding of the concepts that are underlying the conversions.
How To Use A Fraction To Decimal Calculator
MyCalcu Fraction to Decimal converter is one of the best fraction-to-decimal calculators you can find online. MyCalcu's Fraction to Decimal converter is intuitive and simple to use. Following are the
fundamentals for converting fractions to decimals and decimals back to fractions with the help of MyCalcu's Fraction to Decimal converter:
• Enter the fraction's numerator and denominator into the calculator, then press the fraction button to convert the fraction to a decimal. To find the fraction's decimal representation, select
"convert" or "equals" and press the button.
• Decimal numbers can be converted to fractions by entering the decimal value into a calculator and then clicking the "convert" or "equals" button. The calculator can convert a decimal to a
fraction for you.
• If you want to convert a mixed number to a decimal, you can do so with MyCalcu's fraction-to-decimal converter; just enter the whole number and the fraction part separately.
• In addition to converting fractions to decimals, the MyCalcu fraction calculator can also simplify fractions, find the greatest common denominator, and convert decimals to percentages and ratios.
MyCalcu's Fraction to Decimal converter is a convenient tool for making decimal-to-fraction and fraction-to-decimal conversions.
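If you would rather double-check a conversion without any web tool, Python's standard library can do the same job. A small sketch (the numbers here are arbitrary examples, not taken from MyCalcu):

from fractions import Fraction

print(Fraction(3, 8))                          # 3/8
print(3 / 8)                                   # 0.375  (fraction to decimal)
print(Fraction(0.625).limit_denominator())     # 5/8    (decimal back to fraction)
print(float(Fraction(7, 4) + Fraction(1, 2)))  # 2.25   (sum of fractions as a decimal)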
How To Prepare For Math Exams Using A Fraction-To-Decimal Calculator
Math exam preparation can be stressful, especially when dealing with fractions and decimals. However, using a fraction-to-decimal calculator, such as MyCalcu Fraction to Decimal converter, can help
you improve your math skills and improve your chances of passing the exam. Here are a few examples of how to use the calculator to study for math exams:
Practice Conversions
Use the calculator to convert fractions to decimals and decimals to fractions. The more you practice these conversions, the more comfortable you will become with them, and the better prepared you
will be for your exam.
Solve Math Problems
Use the calculator to solve fractions and decimals problems. This will assist you in understanding how to use the calculator to solve real-world math problems, as well as in becoming more comfortable
with using the calculator under pressure.
Go Over Some Key Concepts
Use the calculator to review important fraction and decimal concepts like simplifying fractions, finding the greatest common denominator, and converting decimals to percentages and ratios.
Use It As A Self-Evaluation Tool
Use the calculator to assess your own understanding of the concepts by solving problems and comparing the results to your expectations.
You can improve your math skills, become more comfortable with using a calculator and be better prepared for math exams by using MyCalcu Fraction to Decimal converter to practice and review important
math concepts.
Ending Note
Exam preparation can be stressful, but using an online fraction-to-decimal calculator like MyCalcu Fraction to Decimal converter can help you improve your math skills and increase your chances of
passing the exam. MyCalcu fraction-to-decimal converter is not only user-friendly and easy to operate, but it also includes additional features that will assist you in the practice of conversions,
the solution of mathematical problems, the review of significant concepts, and the evaluation of your own comprehension of the concepts. Keep in mind that it is always recommended that you learn how
to perform conversions manually because it can help you understand the concepts that lie beneath the surface better. Your success in the math tests will greatly improve if you make use of the
Fraction to Decimal converter on MyCalcu.
| {"url":"https://mycalcu.com/blog/online-fraction-to-decimal-calculator","timestamp":"2024-11-11T04:37:15Z","content_type":"text/html","content_length":"24027","record_id":"<urn:uuid:1de617e4-52fa-4e42-916e-9010025af21d>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00574.warc.gz"}
fixDependence {mgcv} R Documentation
Detect linear dependencies of one matrix on another
Identifies columns of a matrix X2 which are linearly dependent on columns of a matrix X1. Primarily of use in setting up identifiability constraints for nested GAMs.
X1 A matrix.
X2 A matrix, the columns of which may be partially linearly dependent on the columns of X1.
tol The tolerance to use when assessing linear dependence.
rank.def If the degree of rank deficiency in X2, given X1, is known, then it can be supplied here, and tol is then ignored. Unused unless positive and not greater than the number of columns in X2.
strict if TRUE then only columns individually dependent on X1 are detected, if FALSE then enough columns to make the reduced X2 full rank and independent of X1 are detected.
The algorithm uses a simple approach based on QR decomposition: see Wood (2017, section 5.6.3) for details.
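The help page does not reproduce the algorithm itself; as an illustration of the QR-based idea (a language-agnostic numpy sketch, not the mgcv source, and it assumes X1 has full column rank): project the columns of X2 onto the orthogonal complement of the column space of X1 and flag columns whose relative residual norm falls below the tolerance.

import numpy as np

def detect_dependence(X1, X2, tol=1e-7):
    # Indices of columns of X2 that are (numerically) linear combinations of columns of X1.
    Q, _ = np.linalg.qr(X1)                    # orthonormal basis for the column space of X1
    residual = X2 - Q @ (Q.T @ X2)             # component of each X2 column outside that space
    rel = np.linalg.norm(residual, axis=0) / np.linalg.norm(X2, axis=0)
    return np.flatnonzero(rel < tol)

rng = np.random.default_rng(1)
X1 = rng.normal(size=(20, 3))
X2 = np.column_stack([X1[:, 0] + 2 * X1[:, 2],   # dependent on X1
                      rng.normal(size=20)])      # independent of X1
print(detect_dependence(X1, X2))                 # [0], only the first column of X2 is flagged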
A vector of the columns of X2 which are linearly dependent on columns of X1 (or which need to be deleted to achieve independence and full rank if strict==FALSE). NULL if the two matrices are independent.
Simon N. Wood simon.wood@r-project.org
Wood S.N. (2017) Generalized Additive Models: An Introduction with R (2nd edition). Chapman and Hall/CRC Press.
version 1.9-1 | {"url":"https://stat.ethz.ch/R-manual/R-devel/library/mgcv/html/fixDependence.html","timestamp":"2024-11-03T22:36:40Z","content_type":"text/html","content_length":"3863","record_id":"<urn:uuid:bb54d6e4-0b7c-490f-aadb-f494f51a7f6f>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00304.warc.gz"} |
PatternDetect: Detect Patterns in Vector Fields in ICvectorfields: Vector Fields from Spatial Time Series of Population Abundance
Detect patterns in vector fields represented on a grid by looking in the rook's neighbourhood of each grid cell. Four patterns are detected: convergences occur when the vectors in the four adjacent
cells in the rook's neighbourhood point towards the focal cell; divergences occur when the vectors in the four adjacent cells point away from the focal cell; Partial convergences occur when three of
the four vectors point towards the focal cell and the final vector points neither towards nor away from the focal cell; Partial divergences occur when three of the four vectors point away the focal
grid cell and the final vector points neither towards nor away from the focal grid. For all of the patterns above a sub-pattern is specified if all arrows point clockwise or counter-clockwise.
vfdf A data frame as returned by DispField, DispFieldST, or DispFieldSTall with at least five rows (more is better)
A data frame as returned by DispField, DispFieldST, or DispFieldSTall, with three additional columns. The first additional column is called Pattern in which the patterns around each focal cell are
categorized as convergence, divergence, partial convergence, partial divergence, or NA. The second additional column, called SubPattern, indicates whether all arrows point clockwise or
counter-clockwise. The third additional column is called PatternCt, which contains a one if all four neighbourhood grid cells contain displacement estimates, and a NA otherwise.
# creating convergence/divergence patterns
Mat1 <- matrix(rep(0,9*9), nrow = 9)
Mat1[3, c(4, 6)] <- 1
Mat1[7, c(4, 6)] <- 1
Mat1[c(4, 6), 3] <- 1
Mat1[c(4, 6), 7] <- 1
Mat1
Mat2 <- matrix(rep(0,9*9), nrow = 9)
Mat2[2, c(4, 6)] <- 1
Mat2[8, c(4, 6)] <- 1
Mat2[c(4, 6), 2] <- 1
Mat2[c(4, 6), 8] <- 1
Mat2
# rasterizing
rast1 <- terra::rast(Mat1)
terra::plot(rast1)
rast2 <- terra::rast(Mat2)
terra::plot(rast2)
# Detecting a divergence
(VFdf1 <- DispField(rast1, rast2, factv1 = 3, facth1 = 3, restricted = TRUE))
(patdf1 <- PatternDetect(VFdf1))
# Detecting a convergence
(VFdf2 <- DispField(rast2, rast1, factv1 = 3, facth1 = 3, restricted = TRUE))
(patdf2 <- PatternDetect(VFdf2))
| {"url":"https://rdrr.io/cran/ICvectorfields/man/PatternDetect.html","timestamp":"2024-11-10T20:58:19Z","content_type":"text/html","content_length":"27126","record_id":"<urn:uuid:c30229e4-1c25-4633-9f05-cff74de7fc13>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00390.warc.gz"}
What is a Quadrilateral? | Definition, Types and Properties
A quadrilateral is a figure formed by joining four line segments. It consists of 4 sides, 4 vertices, and two diagonals. By going through this entire article you will get a complete idea of
quadrilaterals: their types, sides, angles, vertices, diagonals, properties, etc. If P, Q, R, S are four points, no three of which are collinear, and the line segments PQ, QR, RS,
and SP do not intersect except at their endpoints, then PQRS is a quadrilateral. Check the figure below, which shows the quadrilateral PQRS.
In a quadrilateral PQRS
(i) The vertices in a quadrilateral are P, Q, R, S.
(ii) The sides of a quadrilateral are PQ, QR, RS, and SP.
(iii) Also, the angles of quadrilateral are ∠SPQ, ∠PQR, ∠QRS and ∠RSP.
(iv) The diagonals are PR and QS.
Convex Quadrilaterals and Concave Quadrilaterals
If each angle of a quadrilateral is less than 180°, then it is called a convex quadrilateral. Also, if one angle of the quadrilateral is more than 180°, then it is called a concave quadrilateral. A
quadrilateral is not a simple closed figure.
Sides, Angles, Vertices, Diagonals of the Quadrilateral
Have a look at the complete details of a Quadrilateral below.
Adjacent Sides of a Quadrilateral
Adjacent Sides of a Quadrilateral are nothing but the sides that have a common endpoint. If PQRS is a Quadrilateral, then (PQ, QR), (QR, RS), (RS, SP), and (SP, PQ) are four pairs of adjacent sides
of quadrilateral PQRS.
Opposite Sides of a Quadrilateral
In a given quadrilateral, the two sides are said to be opposite sides when they do not have a common endpoint. If PQRS is a quadrilateral, then (PQ, SR) and (PS, QR) are two pairs of opposite sides
of quadrilateral PQRS.
Adjacent Angles of a Quadrilateral
An angle is formed when two rays meet at a common endpoint. Two angles of a quadrilateral are said to be adjacent when they have a common arm. From the given figure, (∠P, ∠Q), (∠Q, ∠R), (∠R, ∠S),
and (∠S, ∠P) are four pairs of adjacent angles of quadrilateral PQRS.
Opposite Angles of a Quadrilateral
Opposite Angles of a Quadrilateral are not adjacent angles. If you consider a quadrilateral PQRS, then (∠P, ∠R) and (∠Q, ∠S) are two pairs of opposite angles of quadrilateral PQRS.
Adjacent Vertices of a Quadrilateral
In a Quadrilateral, if two vertices have a common side are known as adjacent vertices. From the figure, the pairs of adjacent vertices are (P, Q); (Q, R); (R, S), and (S, P).
Opposite Vertices of a Quadrilateral
Opposite Vertices of a Quadrilateral are vertices that do not have a common side. From the figure, the pairs of opposite vertices are (P, R) and (Q, S).
Diagonal of a Quadrilateral
When the opposite vertices of a quadrilateral are joined by a line segment, a diagonal of the quadrilateral is formed. From the given figure, the two diagonals are PR and QS. | {"url":"https://eurekamathanswerkeys.com/what-is-a-quadrilateral/","timestamp":"2024-11-14T03:43:31Z","content_type":"text/html","content_length":"37903","record_id":"<urn:uuid:5cf4bbdc-8537-465f-964f-f73a31ea13d5>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00533.warc.gz"}
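The pairings listed in the article can be summarised compactly. Here is a small illustrative Python sketch (not from the original article) that derives the sides, adjacent pairs, opposite pairs and diagonals simply by walking around the vertex cycle P-Q-R-S:

vertices = ["P", "Q", "R", "S"]
n = len(vertices)

sides = [vertices[i] + vertices[(i + 1) % n] for i in range(n)]
adjacent_sides = [(sides[i], sides[(i + 1) % n]) for i in range(n)]
opposite_sides = [(sides[0], sides[2]), (sides[1], sides[3])]
diagonals = [vertices[0] + vertices[2], vertices[1] + vertices[3]]

print(sides)            # ['PQ', 'QR', 'RS', 'SP']
print(adjacent_sides)   # the four adjacent pairs (PQ,QR), (QR,RS), (RS,SP), (SP,PQ)
print(opposite_sides)   # the two opposite pairs (PQ,RS) and (QR,SP)
print(diagonals)        # ['PR', 'QS']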
Keras documentation: BayesianOptimization Oracle
BayesianOptimization Oracle
BayesianOptimizationOracle class
Bayesian optimization oracle.
It uses Bayesian optimization with an underlying Gaussian process model. The acquisition function used is upper confidence bound (UCB).
• objective: A string, keras_tuner.Objective instance, or a list of keras_tuner.Objectives and strings. If a string, the direction of the optimization (min or max) will be inferred. If a list of
keras_tuner.Objective, we will minimize the sum of all the objectives to minimize subtracting the sum of all the objectives to maximize. The objective argument is optional when Tuner.run_trial()
or HyperModel.fit() returns a single float as the objective to minimize.
• max_trials: Integer, the total number of trials (model configurations) to test at most. Note that the oracle may interrupt the search before max_trial models have been tested if the search space
has been exhausted. Defaults to 10.
• num_initial_points: Optional number of randomly generated samples as initial training data for Bayesian optimization. If left unspecified, a value of 3 times the dimensionality of the
hyperparameter space is used.
• alpha: Float, the value added to the diagonal of the kernel matrix during fitting. It represents the expected amount of noise in the observed performances in Bayesian optimization. Defaults to
• beta: Float, the balancing factor of exploration and exploitation. The larger it is, the more explorative it is. Defaults to 2.6.
• seed: Optional integer, the random seed.
• hyperparameters: Optional HyperParameters instance. Can be used to override (or register in advance) hyperparameters in the search space.
• tune_new_entries: Boolean, whether hyperparameter entries that are requested by the hypermodel but that were not specified in hyperparameters should be added to the search space, or not. If not,
then the default value for these parameters will be used. Defaults to True.
• allow_new_entries: Boolean, whether the hypermodel is allowed to request hyperparameter entries not listed in hyperparameters. Defaults to True.
• max_retries_per_trial: Integer. Defaults to 0. The maximum number of times to retry a Trial if the trial crashed or the results are invalid.
• max_consecutive_failed_trials: Integer. Defaults to 3. The maximum number of consecutive failed Trials. When this number is reached, the search will be stopped. A Trial is marked as failed when
none of the retries succeeded. | {"url":"https://keras.io/api/keras_tuner/oracles/bayesian/","timestamp":"2024-11-12T02:40:06Z","content_type":"text/html","content_length":"16485","record_id":"<urn:uuid:d1104b9c-361b-4b69-a3f5-23ac4b03e3eb>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00027.warc.gz"} |
dr hab. inż. Tomasz Zieliński
2004 The impulse virtual distortion method with application to modelling and identification of defects in structures 574
2016-02-25 Propagation and attenuation of acoustic waves in porous media versus the geometric features and vibrations of the microstructure
Supervisor of doctoral theses
1. 2014-11-27 Nowak Łukasz Adaptive feedback control system for reduction of vibroacoustic emission 671
Recent publications
1. Kowalczyk-Gajewska K., Maj M., Bieniek K., Majewski M., Opiela K.C., Zieliński T.G., Cubic elasticity of porous materials produced by additive manufacturing: experimental analyses, 140p.
numerical and mean-field modelling, ARCHIVES OF CIVIL AND MECHANICAL ENGINEERING, ISSN: 1644-9665, DOI: 10.1007/s43452-023-00843-z, Vol.24, pp.34-1-34-22, 2024
Although the elastic properties of porous materials depend mainly on the volume fraction of pores, the details of pore distribution within the material representative volume are also
important and may be the subject of optimisation. To study their effect, experimental analyses were performed on samples made of a polymer material with a predefined distribution of
spherical voids, but with various porosities due to different pore sizes. Three types of pore distribution with cubic symmetry were considered and the results of experimental analyses
were confronted with mean-field estimates and numerical calculations. The mean-field ‘cluster’ model is used in which the mutual interactions between each of the two pores in the
predefined volume are considered. As a result, the geometry of pore distribution is reflected in the anisotropic effective properties. The samples were produced using a 3D printing
technique and tested in the regime of small strain to assess the elastic stiffness. The digital image correlation method was used to measure material response under compression. As a
reference, the solid samples were also 3D printed and tested to evaluate the polymer matrix stiffness. The anisotropy of the elastic response of porous samples related to the arrangement
of voids was assessed. Young’s moduli measured for the additively manufactured samples complied satisfactorily with modelling predictions for low and moderate pore sizes, while only
qualitatively for larger porosities. Thus, the low-cost additive manufacturing techniques may be considered rather as preliminary tools to prototype porous materials and test mean-field
approaches, while for the quantitative and detailed model validation, more accurate additive printing techniques should be considered. Research paves the way for using these
computationally efficient models in optimising the microstructure of heterogeneous materials and composites.
Keywords:
Pore configuration, Anisotropy, Elasticity, Micro-mechanics, Additive manufacturing
Authors' affiliations:
Kowalczyk-Gajewska K. - IPPT PAN
Maj M. - IPPT PAN
Bieniek K. - IPPT PAN
Majewski M. - IPPT PAN
Opiela K.C. - IPPT PAN
Zieliński T.G. - IPPT PAN
2. Zieliński T.G., Opiela K.C., Dauchez N.^♦, Boutin T.^♦, Galland M.-A.^♦, Attenborough K.^♦, Extremely tortuous sound absorbers with labyrinthine channels in non-porous and microporous 100p.
solid skeletons, APPLIED ACOUSTICS, ISSN: 0003-682X, DOI: 10.1016/j.apacoust.2023.109816, Vol.217, pp.109816-1-13, 2024
An assembly of additively-manufactured modules to form two-dimensional networks of labyrinthine slits results in a sound absorber with extremely high tortuosity and thereby a relatively
low frequency quarter wavelength resonance. Fully analytical modelling is developed for the generic design of such composite acoustic panels, allowing rapid exploration of various
specific designs. In addition to labyrinthine channels in a non-porous solid skeleton, a case is also considered where the skeleton has microporosity such that its permeability is very
much lower than that due to the labyrinthine channels alone. The analytical modelling is verified by numerical calculations, as well as sound absorption measurements performed on several
3D printed samples of modular composite panels. The experimental validation required overcoming the non-trivial difficulties related to additive manufacturing and testing samples of
extreme tortuosity. However, due to the two-dimensionality and modularity of the proposed design, such absorbers can possibly be produced without 3D printing by assembling simple,
identical modules produced separately. The experimental results fully confirmed the theoretical predictions that significant sound absorption, almost perfect at the peak, can be achieved
at relatively low frequencies using very thin panels, especially those with double porosity.
Keywords:
Sound absorption, Extreme tortuosity, Double porosity, Acoustic composites, Additive manufacturing
Authors' affiliations:
Zieliński T.G. - IPPT PAN
Opiela K.C. - IPPT PAN
Dauchez N. - Sorbonne University Alliance (FR)
Boutin T. - Sorbonne University Alliance (FR)
Galland M.-A. - École Centrale de Lyon (FR)
Attenborough K. - The Open University (GB)
3. Opiela K.C., Zieliński T.G., Attenborough K.^♦, Limitations on validating slitted sound absorber designs through budget additive manufacturing, Materials & Design, ISSN: 0264-1275, DOI: 140p.
10.1016/j.matdes.2022.110703, Vol.218, pp.110703-1-17, 2022
The potential usefulness of relatively simple pore microstructures such as parallel, identical, inclined slits for creating broadband sound absorption has been argued through analytical
models. In principle, such microstructures could be realised through budget additive manufacturing. However, validation of the analytical predictions through normal incidence impedance
tube measurements on finite layers is made difficult by the finite size of the tube. The tube walls curtail the lengths of inclined slits and, as a result, prevent penetration of sound
through the layer. As well as demonstrating and modelling this effect, this paper explores two manufacturing solutions. While analytical and numerical predictions correspond well to
absorption spectra measured on slits normal to the surface, discrepancies between measured and predicted sound absorption are noticed for perforated and zigzag slit configurations. For
perforated microgeometries this is found to be the case with both numerical and analytical modelling based on variable length dead-end pores. Discrepancies are to be expected since the
dead-end pore model does not allow for narrow pores in which viscous effects are important. For zigzag slits it is found possible to modify the permeability used in the inclined slit
analytical model empirically to obtain reasonable agreement with data.
Keywords:
slitted sound absorber, additive manufacturing, microstructure-based modelling
Author affiliations:
Opiela K.C. - IPPT PAN
Zieliński T.G. - IPPT PAN
Attenborough K. - The Open University (GB)
4. Zielinski T.G., Dauchez N.^♦, Boutin T.^♦, Leturia M.^♦, Wilkinson A.^♦, Chevillotte F.^♦, Bécot F.-X.^♦, Venegas R.^♦, Taking advantage of a 3D printing imperfection in the development of sound-absorbing materials, APPLIED ACOUSTICS, ISSN: 0003-682X, DOI: 10.1016/j.apacoust.2022.108941, Vol.197, pp.108941-1-22, 2022 (100 pts)
At first glance, it seems that modern, inexpensive additive manufacturing (AM) technologies can be used to produce innovative, efficient acoustic materials with tailored pore morphology.
However, on closer inspection, it becomes rather obvious that for now this is only possible for specific solutions, such as relatively thin, but narrow-band sound absorbers. This is
mainly due to the relatively poor resolutions available in low-cost AM technologies and devices, which prevents the 3D-printing of pore networks with characteristic dimensions comparable
to those found in conventional broadband sound-absorbing materials. Other drawbacks relate to a number of imperfections associated with AM technologies, including porosity or rather
microporosity inherent in some of them. This paper shows how the limitations mentioned above can be alleviated by 3D-printing double-porosity structures, where the main pore network can
be designed and optimised, while the properties of the intentionally microporous skeleton provide the desired permeability contrast, leading to additional broadband sound energy
dissipation due to pressure diffusion. The beneficial effect of additively manufactured double porosity and the phenomena associated with it are rigorously demonstrated and validated in
this work, both experimentally and through precise multi-scale modelling, on a comprehensive example that can serve as benchmark.
Keywords:
double porosity, additive manufacturing, sound absorption, pressure diffusion, multi-scale modelling
Author affiliations:
Zielinski T.G. - IPPT PAN
Dauchez N. - Sorbonne University Alliance (FR)
Boutin T. - Sorbonne University Alliance (FR)
Leturia M. - Sorbonne University Alliance (FR)
Wilkinson A. - Sorbonne University Alliance (FR)
Chevillotte F. - MATELYS – Research Lab (FR)
Bécot F.-X. - MATELYS – Research Lab (FR)
Venegas R. - MATELYS – Research Lab (FR)
5. Meissner M., Zieliński T.G., Impact of Wall Impedance Phase Angle on Indoor Sound Field and Reverberation Parameters Derived from Room Impulse Response, ARCHIVES OF ACOUSTICS, ISSN: 0137-5075, DOI: 10.24425/aoa.2022.142008, Vol.47, No.3, pp.343-353, 2022 (100 pts)
Accurate definition of boundary conditions is of crucial importance for room acoustic predictions because the wall impedance phase angle can affect the sound field in rooms and acoustic
parameters applied to assess a room reverberation. In this paper, the issue was investigated theoretically using the convolution integral and a modal representation of the room impulse
response for complex-valued boundary conditions. Theoretical considerations have been accompanied with numerical simulations carried out for a rectangular room. The case of zero phase
angle, which is often assumed in room acoustic simulations, was taken as a reference, and differences in the sound pressure level and decay times were determined in relation to this case.
Calculation results have shown that a slight deviation of the phase angle from zero can cause a perceptible difference in the sound pressure level. This
effect was found to be due to a change in modal frequencies as a result of an increase or decrease in the phase angle. Simulations have demonstrated that surface distributions of decay
times are highly irregular, while a much greater range of the early decay time compared to the reverberation time range indicates that a decay curve is nonlinear. It was also found that a
difference between the decay times predicted for the complex impedance and real impedance is especially clearly audible for the largest impedance phase angles because it corresponds
approximately to 4 just noticeable differences for the reverberation metrics.
Keywords:
room acoustics, complex wall impedance, indoor sound field, room impulse response, reverberation parameters
Author affiliations:
Meissner M. - IPPT PAN
Zieliński T.G. - IPPT PAN
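For context on the modal representation used in the paper above, the eigenfrequencies of an idealised rigid-walled rectangular room follow the classical formula f = (c/2)·sqrt((nx/Lx)^2 + (ny/Ly)^2 + (nz/Lz)^2). The short sketch below lists the lowest such modes for a hypothetical room; the rigid-wall formula ignores the complex-impedance boundary effects (and the resulting modal-frequency shifts) that the paper actually investigates.

```python
import itertools
import math

def rigid_wall_modes(lx, ly, lz, c=343.0, n_max=4):
    """Eigenfrequencies (Hz) of a rigid-walled rectangular room:
    f = (c/2) * sqrt((nx/lx)^2 + (ny/ly)^2 + (nz/lz)^2)."""
    modes = []
    for nx, ny, nz in itertools.product(range(n_max + 1), repeat=3):
        if (nx, ny, nz) == (0, 0, 0):
            continue  # skip the trivial constant-pressure solution
        f = 0.5 * c * math.sqrt((nx / lx) ** 2 + (ny / ly) ** 2 + (nz / lz) ** 2)
        modes.append((f, (nx, ny, nz)))
    return sorted(modes)

# Hypothetical 5 m x 4 m x 3 m room: the ten lowest modes.
for f, indices in rigid_wall_modes(5.0, 4.0, 3.0)[:10]:
    print(f"{indices}: {f:6.1f} Hz")
```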
6. Meissner M., Zieliński T.G., Analysis of Sound Absorption Performance of Acoustic Absorbers Made of Fibrous Materials, VIBRATIONS IN PHYSICAL SYSTEMS, ISSN: 0860-6897, DOI: 10.21008/j.0860-6897.2022.2.05, Vol.33, No.2, pp.1-8, 2022 (70 pts)
Absorbing properties of multi-layer acoustic absorbers were modeled using the impedance translation theorem and the Garai and Pompoli empirical model, which enables a determination of the
characteristic impedance and propagation constant of fibrous sound-absorbing materials. The theoretical model was applied to a computational study of the performance of a single-layer acoustic absorber backed by a hard wall and of an absorber consisting of one layer of absorbing material with an air gap between the rear of the material and a hard back wall. Simulation results have shown that a large thickness of absorbing material may cause wavy variations in the frequency dependence of the normal and random incidence absorption coefficients. It was also found that this effect is particularly noticeable for acoustic absorbers with a thick air gap between the absorbing material and the hard back wall.
Keywords:
sound absorption, multi-layer absorber, surface impedance, fibrous materials, air gap
Author affiliations:
Meissner M. - IPPT PAN
Zieliński T.G. - IPPT PAN
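The impedance translation theorem mentioned in the abstract above can be written, for a layer of thickness d with characteristic impedance Zc and propagation constant Gamma backed by an impedance Zb, as Zs = Zc (Zb cosh(Gamma d) + Zc sinh(Gamma d)) / (Zc cosh(Gamma d) + Zb sinh(Gamma d)); for a rigid backing this reduces to Zs = Zc coth(Gamma d). The sketch below applies this to a hard-backed fibrous layer, but uses the well-known Delany-Bazley correlation as a stand-in for the Garai-Pompoli model used in the paper; the resistivity, thickness and frequencies are hypothetical.

```python
import numpy as np

RHO0, C0 = 1.213, 343.0  # air density [kg/m^3] and speed of sound [m/s]

def delany_bazley(f, sigma):
    """Characteristic impedance Zc and propagation constant Gamma of a fibrous
    material from the Delany-Bazley correlation (a stand-in model here);
    sigma is the airflow resistivity [Pa*s/m^2]."""
    x = RHO0 * f / sigma
    zc = RHO0 * C0 * (1 + 0.0571 * x**-0.754 - 1j * 0.087 * x**-0.732)
    k = (2 * np.pi * f / C0) * (1 + 0.0978 * x**-0.700 - 1j * 0.189 * x**-0.595)
    return zc, 1j * k  # Gamma = j*k

def hard_backed_absorption(f, sigma, d):
    """Normal-incidence absorption of a hard-backed layer of thickness d,
    using the impedance translation theorem: Zs = Zc * coth(Gamma*d)."""
    zc, gamma = delany_bazley(f, sigma)
    zs = zc / np.tanh(gamma * d)
    r = (zs - RHO0 * C0) / (zs + RHO0 * C0)
    return 1.0 - np.abs(r)**2

# Hypothetical 50 mm fibrous layer with airflow resistivity 20 kPa*s/m^2.
freqs = np.array([250.0, 500.0, 1000.0, 2000.0, 4000.0])
print(np.round(hard_backed_absorption(freqs, 20_000.0, 0.05), 3))
```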
7. Venegas R.^♦, Zieliński T.G., Núñez G.^♦, Bécot F.-X.^♦, Acoustics of porous composites, COMPOSITES PART B-ENGINEERING, ISSN: 1359-8368, DOI: 10.1016/j.compositesb.2021.109006, Vol.220, pp.109006-1-14, 2021 (200 pts)
Acoustic wave propagation in porous composites is investigated in this paper. The two-scale asymptotic homogenisation method is used to obtain the macroscopic description of sound
propagation in such composites. The developed theory is both exemplified by introducing analytical models for the effective acoustical properties of porous composites with canonical
inclusion patterns (i.e. a porous matrix with a periodic array of cylindrical or spherical inclusions) and validated by comparing the model predictions with the results of direct
finite-element simulations and experimental testing, showing good agreement in all cases. It is concluded that the developed theory correctly captures the acoustic interaction between the
constituents of the porous composite and elucidates the physical mechanisms underlying the dissipation of sound energy in such composites. These correspond to classical visco-thermal
dissipation in the porous constituents, together with, for the case of composites made from constituents characterised by highly contrasted permeabilities, pressure diffusion which
provides additional and tunable sound energy dissipation. In addition, this work determines the conditions for which a rigidly-backed porous composite layer can present improved sound
absorption performance in comparison with that of layers made from their individual constituents. Hence, the presented results are expected to guide the rational design of porous
composites with superior acoustic performance.
Keywords:
porous composites, wave propagation, acoustical properties, homogenisation, pressure diffusion
Author affiliations:
Venegas R. - MATELYS – Research Lab (FR)
Zieliński T.G. - IPPT PAN
Núñez G. - other affiliation
Bécot F.-X. - MATELYS – Research Lab (FR)
8. Ahsani S.^♦, Claeys C.^♦, Zieliński T.G., Jankowski Ł., Scarpa F.^♦, Desmet W.^♦, Deckers E.^♦, Sound absorption enhancement in poro-elastic materials in the viscous regime using a mass–spring effect, JOURNAL OF SOUND AND VIBRATION, ISSN: 0022-460X, DOI: 10.1016/j.jsv.2021.116353, Vol.511, pp.116353-1-16, 2021 (200 pts)
This paper investigates the mechanisms that can be used to enhance the absorption performance of poro-elastic materials in the viscous regime. It is shown that by adding small inclusions
in a poro-elastic foam layer, a mass–spring effect can be introduced. If the poro-elastic material has relatively high viscous losses in the frequency range of interest, the mass–spring
effect can enhance the sound absorption of the foam by introducing an additional mode in the frame and increasing its out-of-phase movement with respect to the fluid part. Moreover,
different effects such as the trapped mode effect, the modified-mode effect, and the mass–spring effect are differentiated by decomposing the absorption coefficient in terms of the three
energy dissipation mechanisms (viscous, thermal, and structural losses) in poro-elastic materials. The physical and geometrical parameters that can amplify or decrease the mass–spring
effect are discussed. Additionally, the influence of the incidence angle on the mass–spring effect is evaluated, and a discussion on tuning the inclusion to different target frequencies is provided.
Keywords:
meta-poro-elastic material, Biot–Allard poroelastic model, mass–spring effect, viscous regime
Author affiliations:
Ahsani S. - Katholieke Universiteit Leuven (BE)
Claeys C. - Katholieke Universiteit Leuven (BE)
Zieliński T.G. - IPPT PAN
Jankowski Ł. - IPPT PAN
Scarpa F. - University of Bristol (GB)
Desmet W. - Katholieke Universiteit Leuven (BE)
Deckers E. - Katholieke Universiteit Leuven (BE)
9. Núñez G.^♦, Venegas R.^♦, Zieliński T.G., Bécot F.-X.^♦, Equivalent fluid approach to modeling the acoustical properties of polydisperse heterogeneous porous composites, PHYSICS OF FLUIDS, ISSN: 1070-6631, DOI: 10.1063/5.0054009, Vol.33, No.6, pp.062008-1-19, 2021 (100 pts)
This paper investigates sound propagation in polydisperse heterogeneous porous composites. The two-scale asymptotic method of homogenization is used to obtain a macroscopic description of
the propagation of sound in such composites. The upscaled equations demonstrate that the studied composites can be modeled as equivalent fluids with complex-valued frequency-dependent
effective parameters (i.e., dynamic viscous permeability and compressibility) as well as unravel the sound energy dissipation mechanisms involved. The upscaled theory is both exemplified
by introducing analytical and hybrid models for the acoustical properties of porous composites with different geometries and constituent materials (e.g., a porous matrix with much less
permeable and/or impervious inclusions with simple or complex shapes) and validated through computational experiments successfully. It is concluded that the developed theory rigorously
captures the physics of acoustic wave propagation in polydisperse heterogeneous porous composites and shows that the mechanisms that contribute to the dissipation of sound energy in the
composite are classical visco-thermal dissipation together with multiple pressure diffusion phenomena in the heterogeneous inclusions. The results show that the combination of two or more
permeable materials with highly contrasted permeabilities
can improve the acoustic absorption and transmission loss of the composite. This paper provides fundamental insights into the propagation of acoustic waves in complex composites that are
expected to guide the rational design of novel acoustic materials.
Author affiliations:
Núñez G. - other affiliation
Venegas R. - MATELYS – Research Lab (FR)
Zieliński T.G. - IPPT PAN
Bécot F.-X. - MATELYS – Research Lab (FR)
10. Opiela K.C., Zieliński T.G., Dvorák T.^♦, Kúdela Jr S.^♦, Perforated closed-cell aluminium foam for acoustic absorption, APPLIED ACOUSTICS, ISSN: 0003-682X, DOI: 10.1016/j.apacoust.2020.107706, Vol.174, pp.107706-1-17, 2021 (100 pts)
Closed-cell metal foams are lightweight and durable materials resistant to high temperature and harsh conditions, but due to their fully closed porosity they are poor airborne sound
absorbers. In this paper a classic method of drilling is used for a nearly closed-cell aluminium foam to open its porous interior to the penetration of acoustic waves propagating in air,
thereby increasing the wave energy dissipation inside the pores of the perforated medium. The aim is to investigate whether it is possible to effectively approximate wave propagation and
attenuation in industrial perforated heterogeneous materials with originally closed porosity of irregular shape by means of their simplified microstructural representation based on
computer tomography scans. The applied multi-scale modelling of sound absorption in foam samples is confronted with impedance tube measurements. Moreover, the collected numerical and
experimental data is compared with the corresponding results obtained for perforated solid samples to demonstrate a great benefit coming from the presence of an initially closed porous
structure in the foam.
Keywords:
closed-cell metal foams, perforation, sound absorption, microstructure effects, dissipated powers
Author affiliations:
Opiela K.C. - IPPT PAN
Zieliński T.G. - IPPT PAN
Dvorák T. - Institute of Materials and Machine Mechanics, Slovak Academy of Sciences (SK)
Kúdela Jr S. - Institute of Materials and Machine Mechanics, Slovak Academy of Sciences (SK)
11. Opiela K.C., Zieliński T.G., Microstructural design, manufacturing and dual-scale modelling of an adaptable porous composite sound absorber, COMPOSITES PART B-ENGINEERING, ISSN: 1359-8368, DOI: 10.1016/j.compositesb.2020.107833, Vol.187, pp.107833-1-13, 2020 (200 pts)
This work investigates a porous composite with modifiable micro-geometry so that its ability to absorb noise can be adapted to different frequency ranges. The polymeric skeleton of
the composite has a specific periodic structure with two types of pores (larger and smaller ones) and two types of channels (wide and narrow ones), and each of the large pores contains a
small steel ball. Depending on the situation, the balls block different channels that connect the pores, and therefore alter the visco-inertial phenomena between the saturating air and
solid skeleton which take place at the micro-scale level and are responsible for the dissipation of the energy of acoustic waves penetrating the porous composite. All this is studied
numerically using advanced dual-scale modelling, and the results are verified by the corresponding experimental tests of 3D-printed samples. Particular attention is paid to the
prototyping and additive manufacturing of such adaptive porous composites.
Keywords:
porous composite, adaptive sound absorber, microstructure-based modelling, additive manufacturing
Author affiliations:
Opiela K.C. - IPPT PAN
Zieliński T.G. - IPPT PAN
12. Zieliński T.G., Venegas R.^♦, Perrot C.^♦, Červenka M.^♦, Chevillotte F.^♦, Attenborough K.^♦, Benchmarks for microstructure-based modelling of sound absorbing rigid-frame porous media, JOURNAL OF SOUND AND VIBRATION, ISSN: 0022-460X, DOI: 10.1016/j.jsv.2020.115441, Vol.483, pp.115441-1-38, 2020 (200 pts)
This work presents benchmark examples related to the modelling of sound absorbing porous media with rigid frame based on the periodic geometry of their microstructures. To this end,
rigorous mathematical derivations are recalled to provide all necessary equations, useful relations, and formulae for the so-called direct multi-scale computations, as well as for the
hybrid multi-scale calculations based on the numerically determined transport parameters of porous materials. The results of such direct and hybrid multi-scale calculations are not only
cross verified, but also confirmed by direct numerical simulations based on the linearised Navier-Stokes-Fourier equations. In addition, relevant theoretical and numerical issues are
discussed, and some practical hints are given.
Keywords:
porous media, periodic microstructure, wave propagation, sound absorption
Author affiliations:
Zieliński T.G. - IPPT PAN
Venegas R. - MATELYS – Research Lab (FR)
Perrot C. - other affiliation
Červenka M. - Czech Technical University in Prague (CZ)
Chevillotte F. - MATELYS – Research Lab (FR)
Attenborough K. - The Open University (GB)
13. Zieliński T.G., Opiela K.C., Pawłowski P., Dauchez N.^♦, Boutin T.^♦, Kennedy J.^♦, Trimble D.^♦, Rice H.^♦, Van Damme B.^♦, Hannema G.^♦, Wróbel R.^♦, Kim S.^♦, Ghaffari Mosanenzadeh S.^♦, Fang N.X.^♦, Yang J.^♦, Briere de La Hosseraye B.^♦, Hornikx M.C.J.^♦, Salze E.^♦, Galland M.-A.^♦, Boonen R.^♦, Carvalho de Sousa A.^♦, Deckers E.^♦, Gaborit M.^♦, Groby J.-P.^♦, Reproducibility of sound-absorbing periodic porous materials using additive manufacturing technologies: round robin study, Additive Manufacturing, ISSN: 2214-8604, DOI: 10.1016/j.addma.2020.101564, Vol.36, pp.101564-1-24, 2020 (200 pts)
The purpose of this work is to check if additive manufacturing technologies are suitable for reproducing porous samples designed for sound absorption. The work is an inter-laboratory
test, in which the production of samples and their acoustic measurements are carried out independently by different laboratories, sharing only the same geometry codes describing agreed
periodic cellular designs. Different additive manufacturing technologies and equipment are used to make samples. Although most of the results obtained from measurements performed on
samples with the same cellular design are very close, it is shown that some discrepancies are due to shape and surface imperfections, or microporosity, induced by the manufacturing
process. The proposed periodic cellular designs can be easily reproduced and are suitable for further benchmarking of additive manufacturing techniques for rapid prototyping of acoustic
materials and metamaterials.
Keywords:
porous materials, designed periodicity, additive manufacturing, sound absorption
Author affiliations:
Zieliński T.G. - IPPT PAN
Opiela K.C. - IPPT PAN
Pawłowski P. - IPPT PAN
Dauchez N. - Sorbonne University Alliance (FR)
Boutin T. - Sorbonne University Alliance (FR)
Kennedy J. - Trinity College (IE)
Trimble D. - Trinity College (IE)
Rice H. - Trinity College (IE)
Van Damme B. - other affiliation
Hannema G. - other affiliation
Wróbel R. - other affiliation
Kim S. - other affiliation
Ghaffari Mosanenzadeh S. - other affiliation
Fang N.X. - other affiliation
Yang J. - Clemson University (US)
Briere de La Hosseraye B. - other affiliation
Hornikx M.C.J. - other affiliation
Salze E. - other affiliation
Galland M.-A. - École Centrale de Lyon (FR)
Boonen R. - other affiliation
Carvalho de Sousa A. - other affiliation
Deckers E. - Katholieke Universiteit Leuven (BE)
Gaborit M. - other affiliation
Groby J.-P. - other affiliation
14. Zieliński T.G., Chevillotte F.^♦, Deckers E.^♦, Sound absorption of plates with micro-slits backed with air cavities: analytical estimations, numerical calculations and experimental validations, APPLIED ACOUSTICS, ISSN: 0003-682X, DOI: 10.1016/j.apacoust.2018.11.026, Vol.146, pp.261-279, 2019 (100 pts)
This work discusses many practical and some theoretical aspects concerning modelling and design of plates with micro-slits, involving multi-scale calculations based on microstructure. To
this end, useful mathematical reductions are demonstrated, and numerical computations are compared with possible analytical estimations. The numerical and analytical approaches are used
to calculate the transport parameters for complex micro-perforated (micro-slotted) plates, which allow the effective properties of the equivalent fluid to be determined, so that at the
macro-scale level the plate can be treated as a specific layer of acoustic fluid. In that way, the sound absorption of micro-slotted plates backed with air cavities can be determined by
solving a multi-layer system of Helmholtz equations. Two such examples are presented in the paper and validated experimentally. The first plate has narrow slits precisely cut out using a
traditional technique, while the second plate - with an original micro-perforated pattern - is 3D-printed.
Keywords:
micro-slotted plates, micro-perforated plates, sound absorption, microstructure-based modelling, 3D-printing
Author affiliations:
Zieliński T.G. - IPPT PAN
Chevillotte F. - MATELYS – Research Lab (FR)
Deckers E. - Katholieke Universiteit Leuven (BE)
15. Zieliński T.G., Microstructure representations for sound absorbing fibrous media: 3D and 2D multiscale modelling and experiments, JOURNAL OF SOUND AND VIBRATION, ISSN: 0022-460X, DOI: 10.1016/j.jsv.2017.07.047, Vol.409, pp.112-130, 2017 (35 pts)
The paper proposes and investigates computationally-efficient microstructure representations for sound absorbing fibrous media. Three-dimensional volume elements involving non-trivial
periodic arrangements of straight fibres are examined as well as simple two-dimensional cells. It has been found that a simple 2D quasi-representative cell can provide similar predictions
as a volume element which is in general much more geometrically accurate for typical fibrous materials. The multiscale modelling made it possible to determine the effective speeds and damping of
acoustic waves propagating in such media, which brings up a discussion on the correlation between the speed, penetration range and attenuation of sound waves. Original experiments on
manufactured copper-wire samples are presented and the microstructure-based calculations of acoustic absorption are compared with the corresponding experimental results. In fact, the
comparison suggested microstructure modifications leading to representations with non-uniformly distributed fibres.
Keywords:
Sound absorption, Fibrous materials, Multiscale modelling, Microstructure representations
Author affiliations:
16. Zieliński T.G., Normalized inverse characterization of sound absorbing rigid porous media, JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, ISSN: 0001-4966, DOI: 10.1121/1.4919806, Vol.137, No.6, pp.3232-3243, 2015 (25 pts)
This paper presents a methodology for the inverse characterization of sound absorbing rigid porous media, based on standard measurements of the surface acoustic impedance of a porous
sample. The model parameters need to be normalized to have a robust identification procedure which fits the model-predicted impedance curves with the measured ones. Such a normalization
provides a substitute set of dimensionless (normalized) parameters unambiguously related to the original model parameters. Moreover, two scaling frequencies are introduced, however, they
are not additional parameters and for different, yet reasonable, assumptions of their values, the identification procedure should eventually lead to the same solution. The proposed
identification technique uses measured and computed impedance curves for a porous sample not only in the standard configuration, that is, set to the rigid termination piston in an
impedance tube, but also with air gaps of known thicknesses between the sample and the piston. Therefore, all necessary analytical formulas for sound propagation in double-layered media
are provided. The methodology is illustrated by one numerical test and by two examples based on the experimental measurements of the acoustic impedance and absorption of porous ceramic
samples of different thicknesses and a sample of polyurethane foam.
Keywords:
Acoustic modeling, Viscosity, Porous materials, Porous media, Acoustic impedance measurement
Author affiliations:
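The inverse characterization described in the entry above amounts to fitting model-predicted surface impedance curves to measured ones. A generic skeleton of such a fit is sketched below with scipy.optimize.least_squares; the one-parameter Delany-Bazley-type stand-in model, the synthetic "measurement" and the numerical values are hypothetical and only illustrate the structure of the procedure, not the normalized multi-parameter identification of the paper.

```python
import numpy as np
from scipy.optimize import least_squares

RHO0, C0 = 1.213, 343.0                   # air density [kg/m^3] and speed of sound [m/s]
freqs = np.linspace(200.0, 2000.0, 50)    # frequency grid [Hz]
d = 0.04                                  # hypothetical sample thickness [m]

def surface_impedance(sigma):
    """Hard-backed surface impedance from a one-parameter Delany-Bazley-type
    stand-in model; sigma is the airflow resistivity [Pa*s/m^2]."""
    x = RHO0 * freqs / sigma
    zc = RHO0 * C0 * (1 + 0.0571 * x**-0.754 - 1j * 0.087 * x**-0.732)
    k = (2 * np.pi * freqs / C0) * (1 + 0.0978 * x**-0.700 - 1j * 0.189 * x**-0.595)
    return zc / np.tanh(1j * k * d)

# Synthetic "measurement": the model at sigma = 15 kPa*s/m^2 with a little noise.
rng = np.random.default_rng(0)
z_meas = surface_impedance(15_000.0) * (1.0 + 0.02 * rng.standard_normal(freqs.size))

def residuals(p):
    """Stack normalized real and imaginary impedance mismatches for the fit."""
    dz = (surface_impedance(p[0]) - z_meas) / (RHO0 * C0)
    return np.concatenate([dz.real, dz.imag])

fit = least_squares(residuals, x0=[5_000.0], bounds=(1_000.0, 100_000.0))
print(f"identified airflow resistivity: {fit.x[0]:.0f} Pa*s/m^2")
```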
17. Zieliński T.G., Generation of random microstructures and prediction of sound velocity and absorption for open foams with spherical pores, JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA, ISSN: 0001-4966, DOI: 10.1121/1.4915475, Vol.137, No.4, pp.1790-1801, 2015 (25 pts)
This paper proposes and discusses an approach for the design and quality inspection of the morphology dedicated for sound absorbing foams, using a relatively simple technique for a random
generation of periodic microstructures representative for open-cell foams with spherical pores. The design is controlled by a few parameters, namely, the total open porosity and the
average pore size, as well as the standard deviation of pore size. These design parameters are set up exactly and independently, however, the setting of the standard deviation of pore
sizes requires some number of pores in the representative volume element (RVE); this number is a procedure parameter. Another pore structure parameter which may be indirectly affected is
the average size of windows linking the pores, however, it is in fact weakly controlled by the maximal pore-penetration factor, and moreover, it depends on the porosity and pore size. The
proposed methodology for testing microstructure-designs of sound absorbing porous media applies the multi-scale modeling where some important transport parameters—responsible for sound
propagation in a porous medium—are calculated from microstructure using the generated RVE, in order to estimate the sound velocity and absorption of such a designed material.
Keywords:
Foams, Porous media, Viscosity, Acoustic absorption, Ceramics
Author affiliations:
18. Nowak Ł.J.^♦, Zieliński T.G., Determination of the free-field acoustic radiation characteristics of the vibrating plate structures with arbitrary boundary conditions, JOURNAL OF VIBRATION AND ACOUSTICS-TRANSACTIONS OF THE ASME, ISSN: 1048-9002, DOI: 10.1115/1.4030214, Vol.137, pp.051001-1-8, 2015 (25 pts)
The paper presents the developed algorithm which implements the indirect variational boundary element method (IVBEM) for computation of the free-field acoustic radiation characteristics
of vibrating rectangle-shaped plate structures with arbitrary boundary conditions. In order to significantly reduce the computational time and cost, the algorithm takes advantage of
simple geometry of the considered problem and symmetries between the elements. The procedure of determining the distribution of acoustic pressure is illustrated with the example of a thin,
rectangular plate with a part of one edge clamped and all other edges free. The eigenfrequencies and the corresponding vibrational mode shapes of the plate are computed using the finite
element method (FEM). The results of the numerical simulations are compared to the results of the experiments carried out in an anechoic chamber, proving good agreement between the
predictions and the observations. The reliability of simulations and high computational efficiency make the developed algorithm a useful tool in analysis of the acoustic radiation
characteristics of vibrating plate structures.
Keywords:
Acoustic radiation, Indirect variational Boundary Element Method, Plate structures
Author affiliations:
Nowak Ł.J. - other affiliation
Zieliński T.G. - IPPT PAN
19. Zieliński T.G., Microstructure-based calculations and experimental results for sound absorbing porous layers of randomly packed rigid spherical beads, JOURNAL OF APPLIED PHYSICS, ISSN: 0021-8979, DOI: 10.1063/1.4890218, Vol.116, No.3, pp.034905-1-17, 2014 (30 pts)
Acoustics of stiff porous media with open porosity can be very effectively modelled using the so-called Johnson-Champoux-Allard-Pride-Lafarge model for sound absorbing porous media with
rigid frame. It is an advanced semi-phenomenological model with eight parameters, namely, the total porosity, the viscous permeability and its thermal analogue, the tortuosity, two
characteristic lengths (one specific for viscous forces, the other for thermal effects), and finally, viscous and thermal tortuosities at the frequency limit of 0Hz. Most of these
parameters can be measured directly, however, to this end specific equipment is required different for various parameters. Moreover, some parameters are difficult to determine. This is
one of several reasons for the so-called multiscale approach, where the parameters are computed from specific finite-element analyses based on some realistic geometric representations of
the actual microstructure of the porous material. Such an approach is presented and validated for layers made up of loosely packed small identical rigid spheres. The sound absorption of such
layers was measured experimentally in the impedance tube using the so-called two-microphone transfer function method. The layers are characterised by open porosity and semi-regular
microstructure: the identical spheres are loosely packed by random pouring and mixing under the gravity force inside the impedance tubes of various size. Therefore, the regular sphere
packings were used to generate Representative Volume Elements suitable for calculations at the micro-scale level. These packings involve only one, two, or four spheres so that the
three-dimensional finite-element calculations specific for viscous, thermal, and tortuous effects are feasible. In the proposed geometric packings, the spheres were slightly shifted in
order to achieve the correct value of total porosity which was precisely estimated for the layers tested experimentally. Finally, in this paper some results based on the self-consistent
estimates are also provided.
Keywords:
Viscosity, Porous media, Acoustic absorption, Acoustic modeling, Tensor methods
Author affiliations:
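The two-microphone transfer function method mentioned in the abstract above (standardised in ISO 10534-2) recovers the normal-incidence reflection coefficient from the transfer function H12 between two microphones in the impedance tube: R = (H12 - e^(-jks)) / (e^(jks) - H12) · e^(2jk·x1), where s is the microphone spacing and x1 the distance from the sample surface to the microphone farther from the sample; the absorption coefficient is then 1 - |R|^2. The sketch below builds a synthetic H12 from an assumed reflection coefficient and recovers it; all numerical values are hypothetical.

```python
import numpy as np

C0 = 343.0  # speed of sound in air [m/s]

def reflection_from_h12(h12, f, s, x1):
    """Normal-incidence reflection coefficient from the two-microphone transfer
    function h12 (ISO 10534-2 relations); s is the microphone spacing and x1 the
    distance from the sample surface to the microphone farther from the sample."""
    k = 2.0 * np.pi * f / C0
    h_incident = np.exp(-1j * k * s)   # transfer function of the incident wave alone
    h_reflected = np.exp(1j * k * s)   # transfer function of the reflected wave alone
    return (h12 - h_incident) / (h_reflected - h12) * np.exp(2j * k * x1)

# Synthetic check with hypothetical geometry: s = 50 mm, x1 = 120 mm, f = 800 Hz.
f, s, x1 = 800.0, 0.05, 0.12
k = 2.0 * np.pi * f / C0
r_true = 0.5 * np.exp(1j * np.pi / 6)                             # assumed reflection coefficient
p = lambda x: np.exp(1j * k * x) + r_true * np.exp(-1j * k * x)   # standing-wave pressure
h12 = p(x1 - s) / p(x1)                                           # microphone 2 is closer to the sample
r = reflection_from_h12(h12, f, s, x1)
print(f"recovered |R| = {abs(r):.3f},  absorption = {1 - abs(r)**2:.3f}")
```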
20. Zieliński T.G., Potoczek M.^♦, Śliwa R.E.^♦, Nowak Ł.J.^♦, Acoustic absorption of a new class of alumina foams with various high-porosity levels, ARCHIVES OF ACOUSTICS, ISSN: 0137-5075, DOI: 10.2478/aoa-2013-0059, Vol.38, No.4, pp.495-502, 2013 (20 pts)
Recently, a new class of ceramic foams with porosity levels up to 90% has been developed as a result of the association of the gelcasting process and aeration of the ceramic suspension.
This paper presents and discusses original results demonstrating the sound absorbing capabilities of such foams. The authors manufactured three types of alumina foams in order to investigate
three porosity levels, namely: 72, 88, and 90%. The microstructure of foams was examined and typical dimensions and average sizes of cells (pores) and cell-linking windows were found for
each porosity case. Then, the acoustic absorption coefficient was measured in a wide frequency range for several samples of various thickness cut out from the foams. The results were
discussed and compared with the acoustic absorption of typical polyurethane foams proving that the alumina foams with high porosity of 88-90% have excellent sound absorbing properties
competitive with the quality of sound absorbing PU foams of higher porosity.
Keywords:
Sound absorption, Porous materials, Alumina foams
Author affiliations:
Zieliński T.G. - IPPT PAN
Potoczek M. - Rzeszów University of Technology (PL)
Śliwa R.E. - Rzeszów University of Technology (PL)
Nowak Ł.J. - other affiliation
21. Nowak Ł.J.^♦, Zieliński T.G., Modal sensitivity and selectivity of small, rectangle-shaped piezoelectric transducers used as sensors and actuators in active vibroacoustic control systems, Journal of Low Frequency Noise, Vibration and Active Control, ISSN: 0263-0923, DOI: 10.1260/0263-0923.32.4.253, Vol.32, No.4, pp.253-272, 2013 (15 pts)
The paper focuses on some issues regarding the utilization of small rectangle-shaped piezoelectric transducers as both sensors and actuators in active vibration and vibroacoustic control
systems of beam, plate and panelled structures with arbitrary (non-homogeneous) boundary conditions. A new form of description of a simple proportional active control system with multiple
independent feedback loops is proposed. The modal sensitivity functions of sensors and the modal selectivity functions of actuators are introduced to describe their ability for sensing
and exciting specific structural modes of the structures. Based on the assumed form of the cost function and the derived equations of the control system, the influence of the modal
characteristics of transducers on the stability of the system and on the performance of the active control is analyzed. The results of analytical solutions and numerical simulations are
compared with the results of the experiments carried out on various beam and plate structures made up of aluminium or composite materials including the actual materials used in aviation,
proving usefulness of the presented approach.
Keywords:
Vibroacoustics, Piezoelectric transducers, Modal sensitivity, Vibroacoustic control systems
Author affiliations:
Nowak Ł.J. - other affiliation
Zieliński T.G. - IPPT PAN
22. Nowak Ł.J.^♦, Zieliński T.G., Meissner M., Active vibroacoustic control of plate structures with arbitrary boundary conditions, Prace IPPT - IFTR Reports, ISSN: 2299-3657, Vol.4a, pp.5-9,
The paper briefly describes the main aspects of the active feedback control system that has been developed and constructed for the reduction of vibroacoustic emission of vibrating plate
structures with arbitrary boundary conditions. Relations between the forms and frequency of the vibrations induced by an external harmonic excitation and the distribution of the generated
acoustic pressure field are investigated using the developed numerical model based on the Indirect Variational Boundary Element Method. The aim of the control is to minimize the sound
pressure level in a given point of the ambient space. The system uses small, rectangle-shaped piezoelectric transducers as both sensors and actuators. The transducers are connected in a
number of independent feedback loops, and the feedback gains are the control parameters which are optimized using the developed optimal control algorithm. The constructed active system
has been tested for the stability and control performance during experimental research performed in an anechoic chamber. Results of experiments are presented in the paper, proving a high
level of noise reduction and a good agreement with numerical predictions.
Keywords:
Vibrating plate structures, Active feedback control system, Vibrational modes
Author affiliations:
Nowak Ł.J. - other affiliation
Zieliński T.G. - IPPT PAN
Meissner M. - IPPT PAN
23. Zieliński T.G., Galland M.A.^♦, Ichchou M.^♦, Fully coupled finite-element modeling of active sandwich panels with poroelastic core, JOURNAL OF VIBRATION AND ACOUSTICS-TRANSACTIONS OF THE ASME, ISSN: 1048-9002, DOI: 10.1115/1.4005026, Vol.134, No.2, pp.021007-1-10, 2012 (20 pts)
Active sandwich panels are an example of smart noise attenuators and a realization of hybrid active-passive approach for the problem of broadband noise reduction. The panels are composed
of thin elastic faceplates linked by the core of a lightweight absorbent material of high porosity. Moreover, they are active, so piezoelectric actuators in the form of thin patches are
fixed to their faceplates. Therefore, the passive absorbent properties of porous core, effective at high and medium frequencies, can be combined with the active vibroacoustic reduction
necessary in a low frequency range. Important convergence issues for fully coupled finite-element modeling of such panels are investigated on a model of a disk-shaped panel under a
uniform acoustic load by plane harmonic waves, with respect to the important parameter of the total reduction of acoustic transmission. Various physical phenomena are considered, namely,
the wave propagation in a porous medium, the vibrations of elastic plate and the piezoelectric behavior of actuators, the acoustics-structure interaction and the wave propagation in a
fluid. The modeling of porous core requires the usage of the advanced biphasic model of poroelasticity, because the vibrations of the skeleton of porous core cannot be neglected; they are
in fact induced by the vibrations of the faceplates. Finally, optimal voltage amplitudes for the electric signals used in active reduction, with respect to the relative size of the
piezoelectric actuator, are computed in some lower-to-medium frequency range.
Keywords:
Active sandwich panels, Multiphysics, Vibroacoustics, Poroelasticity, Piezoelectricity
Author affiliations:
Zieliński T.G. - IPPT PAN
Galland M.A. - École Centrale de Lyon (FR)
Ichchou M. - École Centrale de Lyon (FR)
24. Nowak Ł.J.^♦, Zieliński T.G., Acoustic radiation of vibrating plate structures submerged in water, HYDROACOUSTICS, ISSN: 1642-1817, Vol.15, pp.163-170, 2012 (4 pts)
The paper presents results of the theoretical and numerical investigation on acoustic radiation of vibrating plate structures submerged in water. The current state of the art on the
considered issues is briefly reviewed. Then, the method for determining eigenmode shape functions and eigenfrequencies of plate vibrating in water that has been used in presented study is
introduced. The constitutive equations for solid domain and the pressure acoustic equation for liquid domain are coupled via boundary conditions and solved numerically using the finite
element method. Structural mode shapes and eigenfrequencies computed for the plate submerged in water are compared to analogous results obtained for air and for the in vacuo case. It is assumed that the plate is rectangle-shaped and that it is placed in an infinite rigid baffle. Three-dimensional near- and far-field acoustic radiation characteristics for the plate
vibrating in water are introduced. Possibilities of implementation of the active control system for reduction of the hydroacoustic emission are briefly discussed.
Keywords:
Hydroacoustics, Plate structures, Hydroacoustic radiation
Author affiliations:
Nowak Ł.J. - other affiliation
Zieliński T.G. - IPPT PAN
25. Zieliński T.G., Numerical investigation of active porous composites with enhanced acoustic absorption, JOURNAL OF SOUND AND VIBRATION, ISSN: 0022-460X, DOI: 10.1016/j.jsv.2011.05.029, Vol.330, pp.5292-5308, 2011 (30 pts)
The paper presents numerical analysis – involving an advanced multiphysics modeling – of the concept of active porous composite sound absorbers. Such absorbers should be made up of a
layer or layers of poroelastic material (porous foams) with embedded elastic inclusions having active (piezoelectric) elements. The purpose of such active composite material is to
significantly absorb the energy of acoustic waves in a wide frequency range, particularly, at lower frequencies. At the same time the total thickness of composite should be very moderate.
The active parts of composites are used to adapt the absorbing properties of porous layers to different noise conditions by affecting the so-called solid-borne wave – originating mainly
from the vibrations of elastic skeleton of porous medium – to counteract the fluid-borne wave – resulting mainly from the vibrations of air in the pores; both waves are strongly coupled,
especially, at lower frequencies. In fact, since the traction between the air and the solid frame of porous medium is the main absorption mechanism, the elastic skeleton is actively
vibrated in order to adapt and improve the dissipative interaction of the skeleton and air in the pores. Passive and active performance of such absorbers is analyzed to test the
feasibility of this approach.
Keywords:
Poroelasticity, Piezoelectricity, Active composites, Sound absorption, Finite-element modelling
Author affiliations:
26. Zieliński T.G., Rak M.^♦, Acoustic absorption of foams coated with MR fluid under the influence of magnetic field, JOURNAL OF INTELLIGENT MATERIAL SYSTEMS AND STRUCTURES, ISSN: 1045-389X, DOI: 10.1177/1045389X09355017, Vol.21, pp.125-131, 2010 (27 pts)
The article presents results of the acoustic measurements on open-cell porous media coated with a magnetorheological (MR) fluid. Sound absorption of polyurethane foams of different,
single and dual porosity was tested in the impedance tube. The measurements were conducted in three stages using clean samples, the same samples moistened with MR fluid, and finally,
exposing the MR fluid-coated samples to a constant magnetic field. The transfer function method was employed to determine the acoustic absorption coefficient. Two significant,
controllable effects were observed in the curve illustrating the variation of the acoustic absorption coefficient with frequency, especially, for the foams of dual porosity. Namely,
relative to the field-free conditions, or to the clean foams, the most substantial peak in the absorption curve could be shifted by applying a magnetic field. Moreover, this yields a significant increase in acoustic absorption in a wide frequency range directly behind the peak.
Keywords:
Magnetorheological foams, Acoustic absorption, Adaptive sound insulator, Dual-porosity foams
Author affiliations:
Zieliński T.G. - IPPT PAN
Rak M. - other affiliation
27. Zieliński T.G., Multiphysics modeling and experimental validation of the active reduction of structure-borne noise, JOURNAL OF VIBRATION AND ACOUSTICS-TRANSACTIONS OF THE ASME, ISSN: 1048-9002, DOI: 10.1115/1.4001844, Vol.132, No.6, pp.061008-1-14, 2010 (27 pts)
This paper presents a fully coupled multiphysics modeling and experimental validation of the problem of active reduction of noise generated by a thin plate under forced vibration. The
plate is excited in order to generate a significant low-frequency noise, which is then reduced by actuators in the form of piezoelectric patches glued to the plate with epoxy resin in
locations singled out earlier during finite element (FE) analyses. To this end, a fully coupled FE system relevant for the problem is derived. The modeling is very accurate: The
piezoelectric patches are modeled according to the electromechanical theory of piezoelectricity, the layers of epoxy resin are thoroughly considered, and the acoustic-structure
interaction involves modeling of a surrounding sphere of air with the nonreflective boundary conditions applied in order to simulate the conditions found in anechoic chamber. The FE
simulation is compared with many experimental results. The sound pressure levels computed in points at different distances from the plate agree excellently with the noise measured in
these points. Similarly, the computed voltage amplitudes of controlling signal turn out to be very good estimations.
Keywords:
Low-frequency noise reduction, Multiphysics modeling, Active structural acoustic control
Author affiliations:
28. Zieliński T.G., Fundamentals of Multiphysics Modelling of Piezo-Poro-Elastic Structures, ARCHIVES OF MECHANICS, ISSN: 0373-2029, Vol.62, No.5, pp.343-378, 2010 (27 pts)
The paper discusses theoretical fundamentals necessary for accurate vibroacoustical modeling of structures or composites made up of poroelastic, elastic, and (active) piezoelectric
materials, immersed in an acoustic medium (e.g. air). An accurate modeling of such hybrid active-passive vibroacoustic attenuators (absorbers or insulators) requires a multiphysics
approach involving the finite element method to cope with complex geometries. Such fully-coupled, multiphysics model is given in this paper. To this end, first, the accurate PDE-based
models of all the involved single-physics problems are recalled and, since a mutual interaction of these various problems is of the uttermost importance, the relevant couplings are
thoroughly investigated and taken into account in the modeling. Eventually, the Galerkin finite element model is developed. This model should serve to develop designs of active composite
vibroacoustic attenuators made up of porous foams with passive and active solid implants, or hybrid liners and panels made up of a core or layers of porous materials fixed to elastic
faceplates with piezoelectric actuators, and coupled to air-gaps. A widespread design of such smart mufflers is still an open topic and should be addressed with accurate predictive tools
based on the model proposed in the present paper. The model is accurate in the framework of kinematical and constitutive (material) linearity of behaviour. This is, however, the very case
of the vibroacoustic application of elasto-poroelastic panels or composites, where the structural vibrations are induced by acoustic waves. The developed fully-coupled FE model is finally
used to solve a generic two-dimensional example, and some issues concerning finite element approximation and convergence are also discussed.
Keywords:
Poroelasticity, Piezoelectricity, Acoustics, Multiphysics, Weak formulation, Finite-element method
Author affiliations:
29. Batifol C.^♦, Zieliński T.G., Ichchou M.^♦, Galland M.A.^♦, A finite-element study of a piezoelectric/poroelastic sound package concept, SMART MATERIALS AND STRUCTURES, ISSN: 0964-1726,
DOI: 10.1088/0964-1726/16/1/021, Vol.16, No.1, pp.168-177, 2007
This paper presents a complete finite-element description of a hybrid passive/active sound package concept for acoustic insulation. The sandwich created includes a poroelastic core and
piezoelectric patches to ensure high panel performance over the medium/high and low frequencies, respectively. All layers are modelled in a Comsol environment. The piezoelectric/elastic and poroelastic/elastic couplings are fully considered. The study highlights the reliability of the model by comparing results with those obtained from the Ansys finite-element
software and with analytical developments. The chosen shape functions and mesh convergence rate for each layer are discussed in terms of dynamic behaviour. Several layer configurations
are then tested, with the aim of designing the panel and its hybrid functionality in an optimal manner. The differences in frequency responses are discussed from a physical perspective.
Lastly, an initial experimental test shows the concept to be promising.
Keywords:
Poroelasticity, Piezoelectricity, Finite-element modelling, Acoustic insulation, Active-passive approach
Author affiliations:
Batifol C. - other affiliation
Zieliński T.G. - IPPT PAN
Ichchou M. - École Centrale de Lyon (FR)
Galland M.A. - École Centrale de Lyon (FR)
List of chapters in recent monographs
1. Meissner M., Zieliński T.G., Postępy Akustyki 2017, chapter: Shape optimization of rectangular rooms for improving sound quality at low frequencies, Polskie Towarzystwo Akustyczne, Oddział Górnośląski, pp.461-472, 2017
2. Zieliński T.G., GRAFEN – IPPT PAN COMPUTER OF BIOCENTRUM OCHOTA GRID, chapter: Microstructure-based modelling of sound absorption in rigid porous media, IPPT Reports on Fundamental Technological Research, Postek E., Kowalewski T.A. (Eds.), pp.106-111, 2014
3. Graczykowski C., Knor G., Kołakowski P., Mikułowski G., Orłowska A., Pawłowski P., Skłodowski M.^♦, Świercz A., Wiszowaty R., Zieliński T.G., Monitorowanie obciążeń i stanu technicznego konstrukcji mostowych, chapter: Wybrane zagadnienia monitorowania, IPPT Reports on Fundamental Technological Research, pp.189-236, 2014
4. Zieliński T.G., Smart technologies for safety engineering, chapter: Modeling and analysis of smart technologies in vibroacoustics, Wiley, Holnicki-Szulc J. (Ed.), pp.269-321, 2008
Conference papers
1. Zielinski T.G., Opiela K.C., Dauchez N.^♦, Boutin T.^♦, Galland M.-A.^♦, Attenborough K.^♦, Low frequency absorption by 3D printed materials having highly tortuous labyrinthine slits in
impermeable or microporous skeletons, 10th Convention of the European Acoustics Association - Forum Acusticum 2023, 2023-09-11/09-15, Torino (IT), DOI: 10.61782/fa.2023.0342, pp.2275-2282,
The low frequency peaks in the absorption spectra of layers of conventional porous materials correspond to quarter wavelength resonances and the peak frequencies are determined essentially
by layer thickness. If the layer cannot be made thicker, the frequency of the peak can be lowered by increasing the tortuosity of the material. Modern additive manufacturing technologies
enable exploration of pore network designs that have high tortuosity. This paper reports analytical models for pore structures consisting of geometrically complex labyrinthine networks of
narrow slits resembling Greek meander patterns. These networks offer extremely high tortuosity in a non-porous solid skeleton. However, additional enhancement of the low frequency
performance results from exploiting the dual porosity pressure diffusion effect by making the skeleton microporous with a significantly lower permeability than the tortuous network of
slits. Analytical predictions are in good agreement with measurements made on two samples with the same tortuous slit pattern, but one has an impermeable skeleton 3D printed from a
photopolymer resin and the other has a microporous skeleton 3D printed from a gypsum powder.
Keywords:
sound absorption, high tortuosity, dual porosity, 3D printed materials
Author affiliations:
Zielinski T.G. - IPPT PAN
Opiela K.C. - IPPT PAN
Dauchez N. - Sorbonne University Alliance (FR)
Boutin T. - Sorbonne University Alliance (FR)
Galland M.-A. - École Centrale de Lyon (FR)
Attenborough K. - The Open University (GB)
2. Opiela K.C., Zielinski T.G., Modifiable labyrinthine microstructure for adjustable sound absorption and insulation, 10th Convention of the European Acoustics Association - Forum Acusticum
2023, 2023-09-11/09-15, Torino (IT), DOI: 10.61782/fa.2023.0866, pp.2937-2942, 2023
Materials with open porosity are known to absorb sound very well. However, their efficiency in acoustic absorption and insulation is sometimes restricted to specific frequency ranges. It
is possible to circumvent this drawback by designing a porous microstructure that can be modified on the fly, thereby enabling changes in crucial geometrical parameters, such as tortuosity, that influence the intensity of viscous energy dissipation phenomena taking place on the micro scale. A prototype of such a material consisting of relocatable small steel balls
embedded in a periodic rigid skeleton is devised and additively manufactured in separate pieces in the stereolithography technology. The balls are inserted into proper places manually. The
full sample is then assembled and its acoustic characteristics are determined computationally and experimentally using dual-scale, unit-cell analyses and impedance tube measurements,
respectively. The resulting material is shown to possess two extreme spectra of normal incidence sound absorption coefficient and transmission loss that are dependent on the particular
position of balls inside the microstructure. In consequence, acoustic waves from a much larger frequency range can be effectively absorbed or insulated by a relatively thin material layer
compared to a similar design without movable balls.
Keywords:
sound absorption, sound transmission, modifiable porous microstructure, additive manufacturing
Author affiliations:
Opiela K.C. - IPPT PAN
Zielinski T.G. - IPPT PAN
3. Opiela K.C., Zieliński T.G., Attenborough K.^♦, Predicting sound absorption in additively manufactured microporous labyrinthine structures, ISMA2022 / USD2022, International Conference on Noise and Vibration Engineering / International Conference on Uncertainty in Structural Dynamics, 2022-09-12/09-14, Leuven (BE), pp.405-414, 2022 (20 pts)
Low-frequency sound absorption by thin rigid porous hard-backed layers is enhanced if the geometrical tortuosity is increased. Increasing tortuosity increases the fluid flow path length
through the porous layer thereby increasing the effective thickness. In turn, this reduces the effective sound speed within the layer and the frequency of the quarter wavelength layer
resonance. One way of increasing tortuosity is through rectangular labyrinthine channel perforations. In addition to the tortuosity of the porous matrix, the bulk tortuosity value is
influenced by the channel widths, lengths, and number of folds. A sample with an impervious skeleton and a sample in which the solid skeleton is perforated with oblique cylindrical holes
evenly spaced in a rectangular pattern have been fabricated using conventional methods and an additive manufacturing technology, respectively. The sound absorption spectra resulting from
these structures have been predicted analytically as well as numerically and compared with normal incidence impedance-tube measurements.
Author affiliations:
Opiela K.C. - IPPT PAN
Zieliński T.G. - IPPT PAN
Attenborough K. - The Open University (GB)
4. Zieliński T.G., Dauchez N.^♦, Boutin T.^♦, Chevillotte F.^♦, Bécot F.-X.^♦, Venegas R.^♦, 3D printed axisymmetric sound absorber with double porosity, ISMA2022 / USD2022, International Conference on Noise and Vibration Engineering / International Conference on Uncertainty in Structural Dynamics, 2022-09-12/09-14, Leuven (BE), pp.462-476, 2022 (20 pts)
This paper shows that specific additive manufacturing (AM) technology can be used to produce double-porosity acoustic materials where main pore networks are designed and a useful type of
microporosity is obtained as a side effect of the 3D printing process. Here, the designed main pore network is in the form of annular pores set around the axis of the cylindrical absorber.
In this way, the axial symmetry of the problem is ensured if only plane wave propagation under normal incidence is considered, which allows for modelling with purely analytical
expressions. Moreover, the outermost annular pore is bounded by the wall of the impedance tube used to measure the sound absorption of the material, so that experimental tests can be
easily performed. Two different AM technologies and raw materials were used to fabricate axisymmetric absorbers of the same design, in one case obtaining a material with double porosity,
which was confirmed by the results of multi-scale calculations validated with acoustic measurements.
Author affiliations:
Zieliński T.G. - IPPT PAN
Dauchez N. - Sorbonne University Alliance (FR)
Boutin T. - Sorbonne University Alliance (FR)
Chevillotte F. - MATELYS – Research Lab (FR)
Bécot F.-X. - MATELYS – Research Lab (FR)
Venegas R. - MATELYS – Research Lab (FR)
5. Jamois A.^♦, Dragna D.^♦, Zieliński T.G., Galland M.-A.^♦, Modélisation acoustique d’un matériau obtenu par fabrication additive placé en paroi d’un conduit, CFA 2022, 16ème Congrès
Français d’Acoustique, 2022-04-11/04-15, Marseille (FR), pp.1-7, 2022
The aim of this study is to model and characterise the behaviour of 3D-printed materials when they are placed as a duct lining. The material considered has a periodic structure whose unit cell contains a sphere connected to the spheres of neighbouring cells by cylindrical channels. The rigid skeleton of the material allows it to be modelled as an equivalent fluid. When the material lines a duct wall, modelling it by its surface impedance alone is no longer sufficient and the propagation within the material must be taken into account. Three models of the material are studied. The first two rely on a macroscopic description by means of an equivalent fluid. In the first, the material is described by its dynamic characteristic functions (density and compressibility), computed with a numerical model of a Kundt's tube. In the second, the parameters of the JCALP model are deduced by solving the Stokes, Laplace and Poisson equations for a single cell of the material. The third model consists in describing the material in its entirety at the microscopic scale and solving the Linearised Navier-Stokes (LNS) equations in both the duct and the material. The results of the three models are compared under normal incidence and in the duct-lining configuration. Different 3D printing techniques were used to produce samples, and they show a significant variability in the geometries actually produced and, consequently, in the absorption coefficients measured in a Kundt's tube. The results of duct-lining experiments are also compared with those of the modelling.
Author affiliations:
Jamois A. - other affiliation
Dragna D. - other affiliation
Zieliński T.G. - IPPT PAN
Galland M.-A. - École Centrale de Lyon (FR)
6. Zieliński T.G., Dauchez N.^♦, Boutin T.^♦, Leturia M.^♦, Wilkinson A.^♦, Chevillotte F.^♦, Bécot F.-X.^♦, Venegas R.^♦, 3D printed sound-absorbing materials with double porosity,
INTER-NOISE 2022, 51st International Congress and Exposition on Noise Control Engineering, 2022-08-21/08-24, Glasgow (GB), pp.773-1-10, 2022
The paper shows that acoustic materials with double porosity can be 3D printed with the appropriate design of the main pore network and the contrasted microporous skeleton. The microporous
structure is obtained through the use of appropriate additive manufacturing (AM) technology, raw material, and process parameters. The essential properties of the microporous material
obtained in this way are investigated experimentally. Two AM technologies are used to 3D print acoustic samples with the same periodic network of main pores: one provides a microporous
skeleton leading to double porosity, while the other provides a single-porosity material. The sound absorption for each acoustic material is determined both experimentally using impedance
tube measurements and numerically using a multiscale model. The model combines finite element calculations (on periodic representative elementary volumes) with scaling functions and
analytical expressions resulting from homogenization. The obtained double-porosity material is shown to exhibit a strong permeability contrast resulting in a pressure diffusion effect,
which fundamentally changes the nature of the sound absorption compared to its single-porosity counterpart with an impermeable skeleton. This work opens up interesting perspectives for the
use of popular, low-cost AM technologies to produce efficient sound absorbing materials.
Author affiliations:
Zieliński T.G. - IPPT PAN
Dauchez N. - Sorbonne University Alliance (FR)
Boutin T. - Sorbonne University Alliance (FR)
Leturia M. - Sorbonne University Alliance (FR)
Wilkinson A. - Sorbonne University Alliance (FR)
Chevillotte F. - MATELYS – Research Lab (FR)
Bécot F.-X. - MATELYS – Research Lab (FR)
Venegas R. - MATELYS – Research Lab (FR)
7. Núñez G.^♦, Venegas R.^♦, Zieliński T.G., Bécot F.-X.^♦, Sound absorption of polydisperse heterogeneous porous composites, INTER-NOISE 2021, 50th International Congress and Exposition on
Noise Control Engineering, 2021-08-01/08-05, Washington, DC (US), DOI: 10.3397/IN-2021-2217, pp.2730-2739, 2021
Sound absorption of polydisperse heterogeneous porous composites is investigated in this paper. The wave equation in polydisperse heterogeneous porous composites is upscaled by using the
two-scale method of homogenisation, which allows the material to be modeled as an equivalent fluid with atypical effective parameters. This upscaled model is numerically validated and
demonstrates that the dissipation of sound in polydisperse heterogeneous porous composites is due to visco-thermal dissipation in the composite constituents and multiple pressure diffusion
in the polydisperse heterogeneous inclusions. Analytical and semi-analytical models are developed for the acoustical effective parameters of polydisperse heterogeneous porous composites
with canonical geometry (e.g. porous matrix with cylindrical and spherical inclusions) and with complex geometries. Furthermore, by comparing the sound absorption coefficient of a
hard-backed composite layer with that of layers made from the composite constituents alone, it is demonstrated that embedding polydisperse heterogeneous inclusions in a porous matrix can
provide a practical way for significantly increasing low frequency sound absorption. The results of this work are expected to serve as a model for the rational design of novel acoustic
materials with enhanced sound absorption properties.
Author affiliations:
Núñez G. - other affiliation
Venegas R. - MATELYS – Research Lab (FR)
Zieliński T.G. - IPPT PAN
Bécot F.-X. - MATELYS – Research Lab (FR)
8. Ahsani S.^♦, Boukadia R.F.^♦, Droz C.^♦, Zieliński T.G., Jankowski Ł., Claeys C.^♦, Desmet W.^♦, Deckers E.^♦, On the potential of meta-poro-elastic systems with small mass inclusions to
achieve a broadband near-perfect absorption coefficient, ISMA2020 / USD2020, International Conference on Noise and Vibration Engineering / International Conference on Uncertainty in
Structural Dynamics, 2020-09-07/09-09, Leuven (BE), pp.2463-2472, 2020
This paper discusses the potential of meta-poro-elastic systems with small mass inclusions to create broadband sound absorption performance under the quarter-wavelength limit. A first
feasibility study is done to evaluate whether embedding small mass inclusions in specific types of foam can lead to near-perfect absorption at tuned frequencies. This paper includes an
optimization routine to find the material properties that maximize the losses due to the mass inclusion such that a near-perfect/perfect absorption coefficient can be achieved at specified
frequencies. The near-perfect absorption is due to the mass-spring effect, which leads to an increase in the viscous loss. Therefore, it is efficient in the viscous regime. The well-known
critical frequency, which depends on the porosity and flow resistivity of the material, is commonly used as a criterion to distinguish the viscous regime from the inertial regime. However,
for the types of foam of interest to this work, the critical frequency is below the mass-spring resonance frequency. Hence, the inverse quality factor is used to provide a more
accurate estimate of the frequency at which the transition from the viscous regime to the inertial regime occurs.
Author affiliations:
Ahsani S. - Katholieke Universiteit Leuven (BE)
Boukadia R.F. - other affiliation
Droz C. - other affiliation
Zieliński T.G. - IPPT PAN
Jankowski Ł. - IPPT PAN
Claeys C. - Katholieke Universiteit Leuven (BE)
Desmet W. - Katholieke Universiteit Leuven (BE)
Deckers E. - Katholieke Universiteit Leuven (BE)
9. Zieliński T.G., Venegas R.^♦, A multi-scale calculation method for sound absorbing structures with localised micro-porosity, ISMA2020 / USD2020, International Conference on Noise and
Vibration Engineering / International Conference on Uncertainty in Structural Dynamics, 2020-09-07/09-09, Leuven (BE), pp.395-407, 2020
This work presents a three-scale approach to modelling sound absorbing structures with non-uniform porosity, consisting of meso-patterns of localised micro-porosity. It can also be used
for structures in which voids in a solid frame are filled with micro-fibres. The method involves double-scale, i.e. micro- and meso-scale, calculations of the effective properties of an
equivalent homogenised medium, as well as macro-scale calculations of sound propagation and absorption in this medium, which at the macroscopic level can replace the entire absorbing
structure of complex micro-geometry. The basic idea can be explained as follows: the mesoscale areas with localised micro-porosity are treated as homogenised meso-pores saturated with an
equivalent visco-thermal fluid replacing the actual gas-saturated micro-porous medium, so that the macroscopic effective properties are finally calculated based on a simplified meso-scale
geometry with homogenised mesopores.
Author affiliations:
Zieliński T.G. - IPPT PAN
Venegas R. - MATELYS – Research Lab (FR)
10. Meissner M., Zieliński T.G., Low-frequency prediction of steady-state room response for different configurations of designed absorbing materials on room walls, ISMA2020 / USD2020,
International Conference on Noise and Vibration Engineering / International Conference on Uncertainty in Structural Dynamics, 2020-09-07/09-09, Leuven (BE), pp.463-477, 2020
A technique commonly used for improving room acoustics consists in increasing a total sound damping in a room. This objective can be achieved by using different configurations of a porous
material for acoustical treatment of a room. In this work, that problem is analyzed theoretically by exploiting a modal representation of the impulse response (IR) function for
steady-state sound field predictions. A formula for the IR function was obtained by solving a wave equation for an enclosure with complex-valued boundary conditions of walls. On the walls
where the acoustic treatment is applied, these boundary conditions are related to the characteristic impedance, effective speed of sound and thickness of the porous material used for
padding. Two different porous materials were considered in the analyses of the room with acoustic treatment, and to this end, the required effective properties were calculated for a rigid
foam with a designed periodic microstructure, as well as for a poroelastic foam with specific visco-elastic properties of the skeleton.
Author affiliations:
Meissner M. - IPPT PAN
Zieliński T.G. - IPPT PAN
11. Opiela K.C., Zieliński T.G., Attenborough K.^♦, Manufacturing, modeling, and experimental verification of slitted sound absorbers, ISMA2020 / USD2020, International Conference on Noise and
Vibration Engineering / International Conference on Uncertainty in Structural Dynamics, 2020-09-07/09-09, Leuven (BE), pp.409-420, 2020
Designs with uniformly distributed slits normal or inclined to the incident surface exhibit a great potential because of their simplicity and good acoustical performance. However,
production of materials of this sort is challenging as the required fabrication precision is very high. This paper deals with additive manufacturing, modeling, and impedance tube testing
of a few slitted geometries and their variations, including cases where the dividing walls between slits are perforated. They were designed to be producible with current 3D printing
technology and provide reliable measurements using standardized equipment. The normal incidence sound absorption curves predicted analytically and numerically were verified experimentally.
It is observed that such simple configurations may lead to absorption properties comparable to porous acoustic treatments with more complex microstructure. The good agreement between the
predictions and measurements supports the validity of the multi-scale modeling employed.
Author affiliations:
Opiela K.C. - IPPT PAN
Zieliński T.G. - IPPT PAN
Attenborough K. - The Open University (GB)
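For context only: analytical predictions for slitted absorbers of the kind verified above are commonly built from the classical solution for oscillatory visco-thermal flow between parallel walls; for a slit of width 2b (a standard textbook result, assumed here rather than quoted from the paper) the equivalent-fluid functions read

\tilde{\rho}(\omega) = \rho_0\left[1 - \frac{\tanh\big(b\sqrt{\mathrm{j}\omega\rho_0/\eta}\big)}{b\sqrt{\mathrm{j}\omega\rho_0/\eta}}\right]^{-1},
\qquad
\tilde{K}(\omega) = \gamma P_0\left[\gamma - (\gamma-1)\left(1 - \frac{\tanh\big(b\sqrt{\mathrm{j}\omega\rho_0\mathrm{Pr}/\eta}\big)}{b\sqrt{\mathrm{j}\omega\rho_0\mathrm{Pr}/\eta}}\right)\right]^{-1}.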
12. Zieliński T.G., Galland M.-A.^♦, Analysis of wave propagation and absorption at normal and oblique incidence in poroelastic layers with active periodic inclusions, e-FA2020, e-FORUM
ACUSTICUM 2020, 2020-12-07/12-11, Lyon (FR), DOI: 10.48465/fa.2020.0541, pp.2825-2831, 2020
The paper presents numerical studies of wave propagation under normal and oblique incidence in sound-absorbing layers of poroelastic composites with active and passive inclusions embedded
periodically along the composite layer surface. The purpose of active inclusions is to increase the mass-spring effect of passive inclusions attached to the viscoelastic skeleton of the
poroelastic matrix of the composite in order to increase the dissipation of the energy of acoustic waves penetrating into such a layer of poroelastic composite. Finite element modelling is
applied which includes the coupled models of Biot-Allard poroelasticity (for the poroelastic matrix), piezoelectricity and elastodynamics (for the active and passive inclusions,
respectively), as well as the Helmholtz equation for the adjacent layer of air. The formulation based on the Floquet-Bloch theory is applied to allow for modelling of wave propagation at
oblique incidence to the surface of the periodic composite layer. The actively excited piezoelectric inclusions may become additional (though secondary) sources for wave propagation.
Therefore, a background pressure field in a wide adjacent air layer is used to simulate plane waves propagating from the specified direction, oblique or normal, onto the poroelastic layer
surface, and a nonreflecting condition is applied on the external boundary of the air layer.
Author affiliations:
Zieliński T.G. - IPPT PAN
Galland M.-A. - École Centrale de Lyon (FR)
13. Opiela K.C., Zieliński T.G., Dvorák T.^♦, Kúdela Jr S.^♦, Perforated closed-cell metal foam for acoustic applications, e-FA2020, e-FORUM ACUSTICUM 2020, 2020-12-07/12-11, Lyon (FR), DOI:
10.48465/fa.2020.0925, pp.2879-2886, 2020
Despite very good mechanical and physical properties such as lightness, rigidity and high thermal conductivity, closed-porosity metal foams alone are usually poor acoustic treatments.
However, their relatively low production cost favours them over their open-cell equivalents in many applications. In the present paper, this attractive and popular material is subject to
consideration from the point of view of the improvement of its sound absorption characteristics. A classic method of perforation is proposed to open the porous interior of the medium to
the penetration of acoustic waves and therefore enhance the dissipation of their energy. The interaction between the perforation diameter and closed-cell microstructure as well as its
impact on the overall sound absorption of a similar foam were already studied in 2010 by Chevillotte, Perrot and Panneton, so these topics are not discussed much in this work. On the other
hand, the objective here is to investigate if one can efficiently approximate the wave propagation phenomenon in real perforated heterogeneous materials with closed porosity of irregular
shape by means of their simplified three-dimensional representation at the micro-level. The applied multi-scale modelling of sound absorption was confronted with measurements performed in
an impedance tube. Moreover, as expected, numerical and experimental comparisons with relevant perforated solid samples show great benefit coming from the presence of a porous structure in
the foam, although it was initially closed.
Author affiliations:
Opiela K.C. - IPPT PAN
Zieliński T.G. - IPPT PAN
Dvorák T. - Institute of Materials and Machine Mechanics, Slovak Academy of Sciences (SK)
Kúdela Jr S. - Institute of Materials and Machine Mechanics, Slovak Academy of Sciences (SK)
14. Zieliński T.G., Opiela K.C., Pawłowski P., Dauchez N.^♦, Boutin T.^♦, Kennedy J.^♦, Trimble D.^♦, Rice H.^♦, Differences in sound absorption of samples with periodic porosity produced
using various Additive Manufacturing Technologies, ICA 2019, 23rd International Congress on Acoustics integrating 4th EAA Euroregio 2019, 2019-09-09/09-13, Aachen (DE), DOI: 10.18154/
RWTH-CONV-239456, pp.4505-4512, 2019
With the rapid development of modern Additive Manufacturing Technologies, it seems inevitable that they will sooner or later serve for the production of specific porous and meta-porous acoustic
treatments. Moreover, these new technologies are already being used to manufacture original micro-geometric designs of sound absorbing media in order to test microstructure-based effects,
models and hypotheses. In view of these statements, this work reports differences in acoustic absorption measured for porous specimens which were produced from the same CAD-geometry
model using several additive manufacturing technologies and 3D-printers. A specific periodic unit cell of open porosity was designed for the purpose. The samples were measured acoustically
in the impedance tube and also subjected to a thorough microscopic survey in order to check their quality and look for the reasons behind the discrepancies.
Keywords:
Sound absorption, Additive Manufacturing Technologies
Author affiliations:
Zieliński T.G. - IPPT PAN
Opiela K.C. - IPPT PAN
Pawłowski P. - IPPT PAN
Dauchez N. - Sorbonne University Alliance (FR)
Boutin T. - Sorbonne University Alliance (FR)
Kennedy J. - Trinity College (IE)
Trimble D. - Trinity College (IE)
Rice H. - Trinity College (IE)
15. Opiela K.C., Zieliński T.G., Adaptation of the equivalent-fluid model to the additively manufactured acoustic porous materials, ICA 2019, 23rd International Congress on Acoustics
integrating 4th EAA Euroregio 2019, 2019-09-09/09-13, Aachen (DE), DOI: 10.18154/RWTH-CONV-239799, pp.1216-1223, 2019
Recent investigations show that the normal incidence sound absorption in 3D-printed rigid porous materials is significantly underestimated by numerical calculations using standard models. In
this paper a universal amendment to the existing mathematical description of thermal dispersion and fluid flow inside rigid foams is proposed which takes account of the impact of the
additive manufacturing technology on the acoustic properties of produced samples. The porous material with a motionless skeleton is conceptually substituted by an equivalent fluid with
effective properties evaluated from the Johnson-Champoux-Allard-Pride-Lafarge model. The required macroscopic transport parameters are computed from the microstructural solutions using the
hybrid approach. A cross-functional examination of the quality (shape consistency, representative surface roughness, etc.) of two periodic specimens obtained from additive manufacturing
processes is additionally performed in order to link it to the results of the multiscale acoustic modelling. Based on this study, some of the transport parameters are changed depending on
certain quantities reflecting the actual quality of a fabricated material. The developed correction has a considerable influence on the predicted value of the sound absorption coefficient
such that the original discrepancies between experimental and numerical curves are significantly diminished.
Keywords:
Rigid porous material, Additive manufacturing, Sound absorption
Author affiliations:
Opiela K.C. - IPPT PAN
Zieliński T.G. - IPPT PAN
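To make the role of the JCAL(P) transport parameters more concrete, the following minimal Python sketch (not the authors' code; all parameter values are purely illustrative) computes the JCAL equivalent-fluid functions and the resulting normal-incidence sound absorption of a hard-backed layer:

# Minimal sketch: normal-incidence absorption of a hard-backed layer of a
# rigid-frame porous material described by the JCAL equivalent-fluid model.
# All transport-parameter values below are illustrative, not taken from the paper.
import numpy as np

# Air properties (room conditions)
rho0, eta, P0, gamma, Pr = 1.213, 1.84e-5, 101325.0, 1.4, 0.71

# Illustrative JCAL transport parameters
phi       = 0.8      # open porosity [-]
alpha_inf = 1.3      # tortuosity [-]
k0        = 2.0e-9   # viscous permeability [m^2]
k0p       = 5.0e-9   # thermal permeability [m^2]
Lam       = 2.0e-4   # viscous characteristic length [m]
Lamp      = 4.0e-4   # thermal characteristic length [m]
d         = 0.03     # layer thickness [m]

f = np.linspace(100, 6400, 500)
w = 2 * np.pi * f
jw = 1j * w

# JCAL dynamic (effective) density and bulk modulus of the equivalent fluid
G  = np.sqrt(1 + 4j * alpha_inf**2 * k0**2 * rho0 * w / (eta * Lam**2 * phi**2))
rho_eq = rho0 * alpha_inf / phi * (1 + eta * phi / (jw * rho0 * alpha_inf * k0) * G)
Gp = np.sqrt(1 + 4j * k0p**2 * rho0 * w * Pr / (eta * Lamp**2 * phi**2))
K_eq = gamma * P0 / phi / (gamma - (gamma - 1) / (1 + eta * phi / (jw * rho0 * Pr * k0p) * Gp))

# Characteristic impedance and wavenumber of the equivalent fluid
Zc = np.sqrt(rho_eq * K_eq)
kc = w * np.sqrt(rho_eq / K_eq)

# Hard-backed layer: surface impedance and normal-incidence absorption coefficient
Zs = -1j * Zc / np.tan(kc * d)
Z0 = np.sqrt(gamma * P0 * rho0)          # characteristic impedance of air
alpha = 1 - np.abs((Zs - Z0) / (Zs + Z0))**2

print(f"max absorption {alpha.max():.2f} at {f[np.argmax(alpha)]:.0f} Hz")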
16. Zieliński T.G., Červenka M.^♦, On a relative shift in the periodic micro-geometry and other causes for discrepancy in the microstructure-based modelling of 3D-printed porous media,
INTER-NOISE 2019, INTER-NOISE 2019 - 48th International Congress and Exhibition on Noise Control Engineering, 2019-06-16/06-19, Madrid (ES), No.1695, pp.1-10, 2019
Samples with periodic microstructures, designed for good sound absorption, have been manufactured by 3D printing. Typically, however, the acoustical properties of the resulting samples
differ from those predicted. Two causes of the discrepancies are (1) inaccuracies related to the 3D-printing resolution and (2) imperfections resulting from micro-fibres, micro-pores, and
pore surface roughness, created during manufacture. Discrepancies due to the first cause can be addressed, post hoc, by updating the idealised periodic geometric model used for creating
the codes for fabrication on the basis of a survey using a scanning microscope, or through computerised micro-tomography scans. Reducing the discrepancies due to the second cause requires
a relatively significant further modelling effort. Another cause for small discrepancies is when two layers of the same periodic porous material and thickness differ only by a relative
shift of the internal geometry of the periodic Representative Volume Element (RVE). This causes the absorption peaks to be shifted in frequency. A modelling procedure is proposed to take
this into account.
Keywords:
Sound absorption, Periodic porous media, Additive manufacturing
Author affiliations:
Zieliński T.G. - IPPT PAN
Červenka M. - Czech Technical University in Prague (CZ)
17. Ahsani S.^♦, Deckers E.^♦, Zieliński T.G., Jankowski Ł., Claeys C.^♦, Desmet W.^♦, Absorption enhancement in poro-elastic materials by mass inclusion, exploiting the mass-spring effect,
SMART 2019, 9th ECCOMAS Thematic Conference on Smart Structures and Materials, 2019-07-08/07-11, Paris (FR), pp.1076-1084, 2019
In this paper the possibility of enhancing the absorption coefficient of a poro-elastic material using small, elastic mass inclusions in frequencies lower than the quarter-wavelength
resonance of the porous material is discussed. We show that absorption peaks can be achieved not only by what is known in literature as the trapped mode effect, but also by the resonance
of small elastic inclusions at low frequencies, which can be interpreted as a mass-spring effect. In this work, the inclusion and the porous skeleton are considered elastic and fully
coupled to each other, therefore accounting for all types of energy dissipation, i.e. viscous, thermal, and structural losses, as well as the energy dissipated due to the relative motion of the fluid
phase and the frame excited by the resonating inclusion. Additionally, the inclusions are also modeled as motionless and rigid to distinguish between the trapped mode and/or the modified
frame mode effect and the mass-spring effect. Moreover, the distinction between these two effects is explained in more detail by comparing the energy dissipated by each mechanism
(viscous, thermal and structural effect).
Keywords:
Meta-porous material, Biot-Allard poroelastic model, Mass-spring effect
Author affiliations:
Ahsani S. - Katholieke Universiteit Leuven (BE)
Deckers E. - Katholieke Universiteit Leuven (BE)
Zieliński T.G. - IPPT PAN
Jankowski Ł. - IPPT PAN
Claeys C. - Katholieke Universiteit Leuven (BE)
Desmet W. - Katholieke Universiteit Leuven (BE)
18. Opiela K.C., Rak M.^♦, Zieliński T.G., A concept demonstrator of adaptive sound absorber/insulator involving microstructure-based modelling and 3D-printing, ISMA 2018 / USD 2018,
International Conference on Noise and Vibration Engineering / International Conference on Uncertainty in Structural Dynamics, 2018-09-17/09-19, Leuven (BE), pp.1091-1103, 2018
The purpose of this work is to present and investigate the concept of adaptive sound absorbers, that is, periodic porous media with modifiable micro-geometry, so that their ability of
sound absorption or insulation can be changed in various frequency ranges. To demonstrate this concept, a simple periodic porous micro-geometry with small bearing balls inside pores is
proposed. By simply repositioning the periodic porous sample, gravity makes the small balls close some of the windows linking the pores, changing in that way the flow
path inside the pores, which entails significant modifications of the relevant parameters of permeability and tortuosity. The viscous characteristic length is also changed, while the porosity
as well as the thermal characteristic length remain unchanged. Nevertheless, such significant changes of some crucial transport parameters strongly affect the overall acoustic wave
propagation in the porous medium. All this is studied using an advanced dual-scale modelling as well as experimental testing of 3D-printed specimens.
Author affiliations:
Opiela K.C. - IPPT PAN
Rak M. - other affiliation
Zieliński T.G. - IPPT PAN
19. Zieliński T.G., Galland M.-A.^♦, Deckers E.^♦, Influencing the wave-attenuating coupling of solid and fluid phases in poroelastic layers using piezoelectric inclusions and locally added
masses, ISMA 2018 / USD 2018, International Conference on Noise and Vibration Engineering / International Conference on Uncertainty in Structural Dynamics, 2018-09-17/09-19, Leuven (BE),
pp.1195-1207, 2018
When airborne acoustic waves penetrate porous media, they are carried not only by the air in the pores but also by the solid skeleton, provided that it is sufficiently soft. Then, there is a coupled
propagation of fluid-borne and solid-borne waves in a poroelastic medium. The coupling of fluid and solid phases of such media can be responsible for significantly better or weaker sound
absorption in medium and lower frequency ranges. It has been observed that adding some well-localised small mass inclusions inside a poroelastic layer may improve its acoustic absorption
in some medium frequency range; however, at the same time the absorption is usually decreased at some slightly higher frequencies. This situation can be improved by additionally applying
an active approach using small piezoelectric inclusions which actively influence the vibrations of the solid skeleton with added masses, so that the interaction between the solid-borne and
fluid-borne waves is always directed towards a better mutual energy dissipation of both types of waves.
Author affiliations:
Zieliński T.G. - IPPT PAN
Galland M.-A. - École Centrale de Lyon (FR)
Deckers E. - Katholieke Universiteit Leuven (BE)
20. Červenka M.^♦, Bednařík M.^♦, Zieliński T.G., Direct numerical simulation of sound absorption in porous media, Euronoise 2018 - 11th European Congress and Exposition on Noise Control
Engineering, 2018-05-27/05-31, Hersonissos (GR), pp.59-54, 2018
Numerical simulation of sound absorption in porous media is an important part of the design of treatments for environmental noise reduction. In porous media, the mechanical
energy carried by sound is dissipated by thermo-viscous interactions with the solid surface of the media frame, which usually has complicated geometry at the microscopic (sub-millimetre)
scale. In order to be able to absorb the acoustic energy at the low frequencies of interest, a layer of porous material must be rather thick (of the order of centimetres). This is why
direct numerical simulation (DNS) of the sound absorption in porous media is a rather computationally challenging task because small geometrical details must be properly resolved in a
large computational domain. In order to avoid these difficulties, simplified semi-phenomenological models introducing a so-called effective fluid have been proposed. For example, the
Johnson-Champoux-Allard-Pride-Lafarge (JCAPL) model is based on eight parameters which can be measured or calculated based on the media micro-structural geometry. Within this work, we
compare the numerical results obtained by the 3D DNS with the prediction of the JCAPL model in the case of several porous media represented by closely-packed spheres. The DNS calculations are
performed using the linearised Navier-Stokes equations for layers of spheres of different thicknesses, the parameters for the JCAPL model are calculated subsequently using Laplace,
Poisson, and Stokes flow analyses on a representative volume element of the media. Very good agreement between the results has been found.
Author affiliations:
Červenka M. - Czech Technical University in Prague (CZ)
Bednařík M. - Czech Technical University in Prague (CZ)
Zieliński T.G. - IPPT PAN
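For orientation, the Laplace, Poisson and Stokes analyses mentioned above are the standard scaled cell problems of microstructure-based multiscale modelling; in a commonly used formulation (assumed here; scaling and averaging conventions vary between references), they are solved on the fluid domain \Omega_f of a representative volume element with pore walls \Gamma:
- Stokes problem: \Delta\boldsymbol{v} - \nabla q + \boldsymbol{e} = 0, \nabla\cdot\boldsymbol{v} = 0 in \Omega_f, \boldsymbol{v} = 0 on \Gamma; the averaged scaled velocity gives the static viscous permeability, k_0 = \phi\,\langle\boldsymbol{v}\cdot\boldsymbol{e}\rangle_{\Omega_f}.
- Poisson problem: \Delta u = -1 in \Omega_f, u = 0 on \Gamma; its average gives the static thermal permeability, k_0' = \phi\,\langle u\rangle_{\Omega_f}.
- Laplace (potential) problem with a unit macroscopic gradient and solution field \boldsymbol{E}: \alpha_\infty = \langle|\boldsymbol{E}|^2\rangle_{\Omega_f}\,/\,|\langle\boldsymbol{E}\rangle_{\Omega_f}|^2 and \Lambda = 2\int_{\Omega_f}|\boldsymbol{E}|^2\,\mathrm{d}V \big/ \int_{\Gamma}|\boldsymbol{E}|^2\,\mathrm{d}S, while \Lambda' = 2\int_{\Omega_f}\mathrm{d}V \big/ \int_{\Gamma}\mathrm{d}S is purely geometric.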
21. Zieliński T.G., Pore-size effects in sound absorbing foams with periodic microstructure: modelling and experimental verification using 3D printed specimens, ISMA 2016 / USD 2016,
International Conference on Noise and Vibration Engineering / International Conference on Uncertainty in Structural Dynamics, 2016-09-19/09-21, Leuven (BE), pp.95-104, 2016
Microstructure-based modelling of sound absorbing porous media has recently been successfully applied to various materials; however, some questions concerning the reliability and
accuracy of such predictions remain open. These issues are investigated here for periodic foams with open porosity. First, a geometry of foam microstructure is generated using an algorithm
which ensures periodic arrangements of pores in a cube. Then, the cube is appropriately scaled to various sizes and for each size case finite-element analyses are performed on the periodic
fluid domain to calculate the so-called transport parameters. Finally, the effective speed of sound and density are determined for the so-called equivalent fluid, macroscopically suitable
to describe wave propagation in such an open rigid foam filled with air. All this makes it possible to estimate the sound absorption for periodic foam layers of various pore sizes and thicknesses.
This parametric study is confronted with some impedance-tube measurements carried out for a few foam samples produced using 3D-printing technology.
Keywords:
Sound absorbing foams, Microstructure, Micro-macro modelling, Acoustic testing, 3D printing
Author affiliations:
Zieliński T.G. - IPPT PAN
22. Zieliński T.G., On representativeness of the representative cells for the microstructure-based predictions of sound absorption in fibrous and porous media, Euro Noise 2015, 10th European
Congress and Exposition on Noise Control Engineering, 2015-05-31/06-03, Maastricht (NL), pp.2473-2478, 2015
Realistic microstructure-based calculations have recently become an important tool for a performance prediction of sound absorbing porous media, seemingly suitable also for a design and
optimization of novel acoustic materials. However, the accuracy of such calculations strongly depends on a correct choice of the representative microstructural geometry of porous media,
and that choice is constrained by some requirements, such as periodicity, relative simplicity, and a size small enough to allow for the so-called separation of scales. This paper
discusses some issues concerning this important matter of the representativeness of representative geometries (two-dimensional cells or three-dimensional volume elements) for sound
absorbing porous and fibrous media with rigid frame. To this end, the accuracy of two- and three-dimensional cells for fibrous materials is compared, and the microstructure-based
predictions of sound absorption are validated experimentally in case of a fibrous material made up of a copper wire. Similarly, the numerical predictions of sound absorption obtained from
some regular Representative Volume Elements proposed for porous media made up of loosely-packed identical rigid spheres are confronted with the corresponding analytical estimations and
experimental results. Finally, a method for controlled random generation of representative microstructural geometries for sound absorbing open foams with spherical pores is briefly presented.
Keywords:
Fibrous materials, Open-cell foams, Representative microstructure, Modelling of sound absorption
Author affiliations:
Zieliński T.G. - IPPT PAN
23. Zieliński T.G., A methodology for a robust inverse identification of model parameters for porous sound absorbing materials, ISMA 2014, International Conference on Noise and Vibration
Engineering, 2014-09-15/09-17, Leuven (BE), pp.63-76, 2014
A methodology of inverse identification of parameters for the Johnson-Champoux-Allard-Lafarge model of porous sound absorbing materials (also with Pride and Lafarge enhancements) is
advocated. The inverse identification is based on the measurements of surface acoustic impedance of porous samples. For a single sample of porous material set on a rigid backing wall such
measurements provide two specific curves in the considered frequency range, namely, the real and imaginary parts of acoustic impedance. More data suitable for inverse identification can be
gathered from additional measurements where the surface acoustic impedance is determined for the same sample yet with an air gap between the sample and the backing wall. As a matter of fact,
such measurements should be carried out for a few cases where the air gap varies in thickness. Eventually, a set of impedance curves is gained suitable for inverse simultaneous
identification of model parameters. In the paper analytical solutions are given for both measurement configurations, namely, for a layer of porous material set on the rigid wall, and for
the porous layer separated from the rigid wall by an air gap. These solutions are used by the identification procedure which minimises the difference between the experimental curves and
the curves computed from the analytical solutions where the porous layer is modelled using some version of the mentioned poro-acoustic model. The minimisation is carried out with respect
to the model parameters, however, not directly, since for this purpose the corresponding dimensionless parameters are introduced. Formulas for the dimensionless parameters are given with
respect to the model parameters, and then conversely, for the model parameters with respect to the dimensionless ones. In the formulas two normalising frequencies are introduced which can
be considered: one - as characteristic for viscous effects, and the other - as typical for thermal effects. It is claimed that they are not additional parameters, and can be set quite
arbitrarily, however, reasonable values must be assumed to allow for very fast and robust identification with initial values for all dimensionless parameters set to 1. This feature is
quite important in view of the fact that the choice of initial values for the actual model parameters is rather essential and can often be very problematic. The whole procedure is
illustrated with a numerical example and by tests based on laboratory measurements of porous ceramic samples.
Keywords:
Sound absorbing porous media, Inverse identification, Acoustic impedance, Acoustic testing
Author affiliations:
Zieliński T.G. - IPPT PAN
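Schematically, and with notation assumed here rather than taken from the paper, the identification described above amounts to a least-squares fit of the form

\hat{\boldsymbol{\theta}} = \arg\min_{\boldsymbol{\theta}}\ \sum_{g}\sum_{i}\big|\,Z_{\mathrm{model}}(f_i, d_g;\boldsymbol{\theta}) - Z_{\mathrm{meas}}(f_i, d_g)\,\big|^2,

where \boldsymbol{\theta} collects the dimensionless model parameters, f_i are the measurement frequencies, and d_g are the air-gap thicknesses of the successive measurement configurations (d_g = 0 corresponding to the sample set directly on the rigid backing).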
24. Zieliński T.G., Sound absorption of porous layers of loosely-packed rigid spheres: multiscale modelling and experimental validation, FA2014, 7th FORUM ACUSTICUM 2014, 2014-09-07/09-12,
Kraków (PL), No.R13K_2, pp.1-6, 2014
Sound absorption in porous media with rigid structure and open porosity is most often modelled using the so-called fluid-equivalent approach, in which a porous medium is substituted by an
effective dispersive fluid. There are many models of that kind. Perhaps the most frequently used and efficient one is the so-called Johnson-Champoux-Allard-Lafarge model, or its
variations. This is a rather advanced semi-phenomenological model with six to eight parameters; with enhancements by Pride and Lafarge, there are eight parameters, namely: the total open
porosity, the tortuosity, the (viscous) permeability, the thermal permeability, the viscous and thermal characteristic lengths, and finally, the viscous and thermal tortuosities at low
frequency limit (i.e., at 0 Hz). Although, most of these parameters can be measured, it is sometimes very problematic and requires various experimental facilities, which makes the idea of
calculation of these parameters from the geometry of microstructure of porous medium very tempting – such multiscale modelling requires, however, some periodic yet sufficiently realistic
representation of the actual porous geometry. In this paper such multiscale modelling is presented for the problem of sound absorption in layers composed of loosely-packed rigid spheres.
Since the spheres are identical, the packing, although not dense, tends to be semi-regular. Therefore, some regular sphere packings are used to construct periodic Representative Volume
Elements for such porous media – they are, however, modified a bit by shifting the spheres in order to fit exactly the actual measured porosity. Based on such numerical representations of
porous microstructure, all the necessary parameters are calculated from finite-element solutions of some relevant Boundary-Value Problems and the effective characteristics for equivalent
fluid are determined. Then, the acoustic absorption coefficients are computed for a porous layer of specified thickness for some wide frequency range and the results are compared with the
experimental curve obtained from the measurements of such layer carried out in the impedance tube using the so-called two-microphone transfer function method.
Keywords:
Granular media, Sound absorption, Multiscale modelling, Acoustic measurements
Author affiliations:
Zieliński T.G. - IPPT PAN
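The 'two-microphone transfer function method' mentioned above is the standardised impedance-tube technique (ISO 10534-2); in a common formulation, the normal-incidence reflection and absorption coefficients follow from the transfer function H_{12} = p_2/p_1 between the two microphones as

R = \frac{H_{12} - \mathrm{e}^{-\mathrm{j}k_0 s}}{\mathrm{e}^{\mathrm{j}k_0 s} - H_{12}}\,\mathrm{e}^{2\mathrm{j}k_0 x_1},
\qquad
\alpha = 1 - |R|^2,

where s is the microphone spacing, x_1 the distance from the sample surface to the farther microphone, and k_0 the wavenumber in air.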
25. Zieliński T.G., Representative volume elements, microstructural calculation and macroscopic inverse identification of parameters for modelling sound propagation in rigid porous materials,
ICSV20, 20th International Congress on Sound and Vibration: Recent Developments in Acoustics, Noise and Vibration, 2013-07-07/07-11, Bangkok (TH), pp.2228-2235, 2013
The micro-geometry of a porous material is responsible for its sound absorption performance and should now be a design objective. Microstructural calculation of parameters and/or
characteristic functions for acoustical models of porous materials with rigid frame requires the so-called Representative Volume Elements, that is, usually cubes which should contain
several pores or fibres of typical sizes and distribution. The design of such RVEs, which correctly represent a typical micro-geometry of porous medium, is by no means an easy task since
usually the RVE should be also periodic and 'isotropic' (identical with respect to the three mutually-perpendicular directions). The task is simpler in case of two-dimensional microscopic
models of some fibrous materials, but such modelling is obviously rather approximative. Designs of periodic RVEs for porous foams and fibrous materials will be presented and used by FE
analyses of microstructural problems defined by the application of the Multiscale Asymptotic Method to the problem of sound propagation and absorption in porous media with rigid skeleton.
Moreover, a methodology of automatic generation of periodic RVEs with random arrangement of pores based on a simple bubble dynamics will be explained. Among other examples, designs of RVE
cubes representative of a corundum ceramic foam with porosity 90% will be shown and serve for microstructural calculation of some macroscopic parameters used in advanced acoustical
modelling of porous media. The curves of acoustic impedance and absorption measured in the frequency range from 500 Hz to 6.4 kHz for two samples of the corundum foam will be presented.
These measurements will be used for inverse identification of relevant macroscopic parameters, namely: the tortuosity, the viscous and thermal permeabilities, and two characteristic
lengths. The concurrence of some results obtained by the RVE-based micro-scale calculation and the measurement-based macro-scale identification will be shown.
Keywords:
Rigid porous media, Microstructure-based calculations, Inverse identification, Sound propagation
Author affiliations:
Zieliński T.G. - IPPT PAN
26. Nowak Ł.J.^♦, Zieliński T.G., Determining the optimal locations of piezoelectric transducers for vibroacoustic control of structures with general boundary conditions, ISMA 2012 / USD 2012,
International Conference on Noise and Vibration Engineering / International Conference on Uncertainty in Structural Dynamics, 2012-09-17/09-19, Leuven (BE), pp.369-383, 2012
Vibroacoustic control of thin beam, plate and panelled structures with arbitrary boundary conditions is investigated. The study focuses on determining optimal locations of piezoelectric
sensors and actuators on the surfaces of structures under vibroacoustic control. The work consists of three parts. In the first part, the undertaken assumptions and some governing
equations are briefly introduced. Then, in the second part of the study, the piezo-transducers' locations which ensure optimal sensing/actuating capabilities for specific vibration modes
are determined based on the derived analytical formulas and on some results of numerical simulations, as well as on the actuator/sensor equations given in the first part of the study.
The relevant modes are selected by taking into account that the main purpose is to minimise the acoustic field generated by the vibrating structure. The third part of the work discusses
some experimental investigations aimed at verifying the results obtained theoretically. Some technical aspects of creating the composite structures for active control systems
are briefly described in the appendix.
Keywords:
Vibroacoustic panels, Vibroacoustic control, Piezoelectric transducers, Optimal placement
Author affiliations:
Nowak Ł.J. - other affiliation
Zieliński T.G. - IPPT PAN
27. Zieliński T.G., Inverse identification and microscopic estimation of parameters for models of sound absorption in porous ceramics, ISMA 2012 / USD 2012, International Conference on Noise
and Vibration Engineering / International Conference on Uncertainty in Structural Dynamics, 2012-09-17/09-19, Leuven (BE), pp.95-107, 2012
Samples of porous Al2O3 ceramics, manufactured by a promising technology of gelcasting of cellular foams using biopolymers as gel-formers, are examined in the impedance tube using the
transfer function method. It is shown that the ceramics of total porosity around 90% forms an excellent sound absorbing material in the frequency range from 500 Hz to 6.4 kHz.
Experimentally-determined curves of acoustic impedance and absorption are then used for inverse identification of relevant geometric parameters like: tortuosity, viscous and thermal
permeability parameters and characteristic lengths. These parameters are required by some advanced models of sound propagation in rigid porous media, developed by Johnson, Koplik and
Dashen, Champoux and Allard, with some variations introduced by Pride et al., and Lafarge et al. These models are utilized to produce curves of acoustic impedance and absorption that are
used by the identification procedure which minimizes the objective function defined as a squared difference to the appropriate curves obtained experimentally. As a matter of fact, some
experimental data are used for the determination of parameters, while the other data, obtained for another sample of the same porous ceramics yet with a different thickness, serve for
validation purposes. Moreover, it is observed that the identified characteristic length for thermal effects corresponds very well to the average radius of pores, whereas the characteristic
length for viscous forces is similar to the average size of
Keywords:
Sound-absorbing foams, Inverse identification, Micro-scale calculations, Porous ceramics
Author affiliations:
Zieliński T.G. - IPPT PAN
28. Nowak Ł.J.^♦, Zieliński T.G., Wybrane aspekty aktywnej kontroli wibroakustycznej na przykładzie struktury płytowej, 58 Otwarte Seminarium z Akustyki, 2011-08-13/08-16, Jurata (PL), Vol.2,
pp.129-138, 2011
The article presents the results of research on active reduction of vibroacoustic transmission through a plate structure. The scope of the work covers both the theoretical description of
the considered phenomena and systems, and the results of numerical simulations and experimental tests. The object under consideration is a 2 mm thick aluminium plate with piezoelectric
elements bonded to one of its surfaces. Some of these elements act as sensors, while the remaining ones serve as actuators through which the active control is realised. The controller
operates in a feedback loop; its input is the amplified and phase-inverted voltage signal from the sensors, which is at the same time the error signal. The control algorithm is based on a
classical proportional-integral-derivative (PID) regulator, for various configurations of connections of its individual terms. The physical realisation of the controller takes the form of
an analogue circuit based on low-noise operational amplifiers.
Keywords:
Active vibroacoustic control, PID controller, Active vibration reduction
Author affiliations:
Nowak Ł.J. - other affiliation
Zieliński T.G. - IPPT PAN
29. Zieliński T.G., Finite-element modelling of fully-coupled active systems involving poroelasticity, piezoelectricity, elasticity, and acoustics, CMM 2011, 19th International Conference on
Computer Methods in Mechanics, 2011-05-09/05-12, Warszawa (PL), pp.218-1-8, 2011
The paper discusses some issues concerning fully-coupled finite-element modelling of active-passive systems for vibroacoustic attenuation, involving porous, piezoelectric, and elastic
materials, as well as 'acoustic' (inviscid) fluids. For porous materials, the advanced, bi-phasic model of poroelasticity is used, which makes it possible to consider elastic vibrations of the solid
skeleton, important at lower frequencies and for porous composites with active inclusions. A discrete finite-element model suitable for analysis of such multiphysics problems is briefly
explained. The model is derived (using the Galerkin method) from the variational formulation of coupled problems of poroelasticity, piezoelectricity, elasticity, and acoustics. Finally,
some relevant results obtained from a numerical analysis of a disk of active sandwich panel with poroelastic core, fitted into an acoustic waveguide, are presented.
Keywords:
Acoustics, Porous media, Smart materials, Vibrations, Coupled fields, Finite element methods, Numerical analysis, Elasticity
Author affiliations:
Zieliński T.G. - IPPT PAN
30. Zieliński T.G., Multiphysics modelling and experimental verification of active and passive reduction of structural noise, ICA 2010, 20th International Congress on Acoustics, 2010-08-23/
08-27, Sydney (AU), pp.1-5, 2010
A fully-coupled multiphysics modelling is applied for the problem of simultaneous active and passive reduction of noise generated by a thin panel under forced vibration providing many
relevant results of various types (noise and vibration levels, necessary voltage for control signals, efficiency of the approach) which are validated experimentally. The panel is excited in
order to generate a noise consisting of significant lower and higher frequency contributions. Then the low-frequency noisy modes are reduced by actuators in the form of piezoelectric
patches glued with epoxy resin in locations chosen optimally thanks to the multiphysics analysis, whereas the emission of higher frequency noise is attenuated by well-chosen thin layers of
porous materials. To this end, a fully-coupled finite element system relevant for the problem is derived. Such a multiphysics approach is accurate: advanced models of porous media are used
for the porous layers, the piezoelectric patches are modelled according to the fully-coupled electro-mechanical theory of piezoelectricity, the layers of epoxy resin are thoroughly
considered, finally, the acoustic-structure interaction involves modelling of a surrounding sphere of air with the non-reflective boundary conditions applied in order to simulate the
conditions found in an anechoic chamber. The FE simulation is compared with many experimental results. The sound pressure levels computed at points at different distances from the panel agree
excellently with the noise measured at these points. Similarly, the computed voltage amplitudes of the controlling signal turn out to be very accurate estimations.
Keywords:
Structural acoustics and vibration, Active noise reduction, Poroelasticity, Sandwich panels
Author affiliations:
Zieliński T.G. - IPPT PAN
31. Motylewski J., Pawłowski P., Rak M.^♦, Zieliński T.G., Identyfikacja źródeł aktywności wibroakustycznej maszyn metodą kształtowania wiązki sygnału (beamforming), XXXVII Ogólnopolskie
Sympozjum Diagnostyka Maszyn, 2010-03-08/03-13, Wisła (PL), pp.1-8, 2010
In problems of identification and localisation of sources of vibroacoustic activity of machines, an important issue is the visualisation of the distribution of acoustic quantities over
selected surfaces and the determination of the contribution of individual sources to the energy balance of the vibroacoustic signal of the machine.
The beamforming methods used in vibroacoustics consist in spatio-temporal processing of the signal recorded by a microphone array. A source is identified by analysing the amplitude-phase
relations of the acoustic signals impinging on the individual transducers of the array. Since, from a methodological point of view, it is interesting to determine the applicability of the
beamforming method to complex devices containing sources of low vibroacoustic activity, the object of the preliminary work was an MTS Silentflo hydraulic power unit. The results obtained
from the tests fully confirm the advantages of the beamforming method in localising and identifying the sources of vibroacoustic activity of machines.
Keywords:
Vibroacoustics, Acoustic source localisation, Beamforming
Author affiliations:
Motylewski J. - IPPT PAN
Pawłowski P. - IPPT PAN
Rak M. - other affiliation
Zieliński T.G. - IPPT PAN
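As an illustration of the principle only (this is not the software used in the work above; the array geometry and signal are synthetic), a conventional frequency-domain delay-and-sum beamformer can be sketched in a few lines of Python:

# Toy delay-and-sum beamforming example: a synthetic monopole source is
# localised on a scan grid using a random planar microphone array.
import numpy as np

c = 343.0                        # speed of sound [m/s]
f = 2000.0                       # analysis frequency [Hz]
k = 2 * np.pi * f / c            # wavenumber

rng = np.random.default_rng(0)
mics = np.c_[rng.uniform(-0.3, 0.3, 32), rng.uniform(-0.3, 0.3, 32),
             np.zeros(32)]                       # 32-microphone array at z = 0
src = np.array([0.1, -0.05, 1.0])                # "unknown" source position

# Synthetic microphone pressures for a monopole, with a little noise added
r = np.linalg.norm(mics - src, axis=1)
p = np.exp(-1j * k * r) / r
p += 0.01 * (rng.standard_normal(32) + 1j * rng.standard_normal(32))

# Scan grid in the source plane z = 1 m
xs = np.linspace(-0.5, 0.5, 81)
ys = np.linspace(-0.5, 0.5, 81)
bf_map = np.zeros((ys.size, xs.size))
for iy, y in enumerate(ys):
    for ix, x in enumerate(xs):
        rg = np.linalg.norm(mics - np.array([x, y, 1.0]), axis=1)
        w = np.exp(1j * k * rg) * rg             # compensate phase delay and 1/r decay
        bf_map[iy, ix] = np.abs(np.sum(w * p))**2 / mics.shape[0]**2

iy, ix = np.unravel_index(bf_map.argmax(), bf_map.shape)
print(f"beamforming peak at x = {xs[ix]:.2f} m, y = {ys[iy]:.2f} m")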
32. Zieliński T.G., Active porous composites for wide frequency-range noise absorption, ISMA 2008, International Conference on Noise and Vibration Engineering, 2008-09-15/09-17, Louvain (BE),
Vol.1, pp.89-103, 2008
The paper presents a design, accurate multiphysics modeling and analysis of active porous-composite sound absorbers. Such absorbers are made up of a layer of poroelastic material (a porous
foam) with embedded elastic implants having active (piezoelectric) elements. The purpose of such active composite material is to significantly absorb the energy of acoustic waves in a wide
frequency range, particularly at low frequencies. At the same time, the total thickness of the composites should remain very moderate. The active parts of composites are used to adapt the
absorbing properties of porous layers to different noise conditions by affecting the so-called solid-borne wave (originating mainly from the vibrations of elastic skeleton of porous
medium) to counteract the fluid-borne wave (resulting mainly from the vibrations of air in the pores); both waves are strongly coupled, especially at lower frequencies. Passive and
active performance of the absorbers is analysed to test the feasibility of this approach.
Keywords:
Poroelasticity, Piezoelectricity, Weak formulation, Acoustic insulation, Active-passive approach
Author affiliations:
Zieliński T.G. - IPPT PAN
33. Zieliński T.G., Modelling of poroelastic layers with mass implants improving acoustic absorption, 19th International Congress on Acoustics, 2007-09-02/09-07, Madrid (ES), pp.1-8, 2007
The paper presents the modelling and frequency analysis of poroelastic layers with heavy solid implants where an improvement of acoustic absorption at lower frequencies is observed. To
model the porous material, Biot's theory of poroelasticity is used, while the solid implants are modelled in two ways: first, as small subdomains of elastic material (steel) situated
inside the porous layer, and second, in a more virtual manner (mathematically equivalent to the presence of masses at the given points), as some adequate inertial terms added
directly to the weak (variational) formulation of the problem. Since the solid implants are very small, both ways give similar results. Obviously, the second approach is much more
efficient for carrying out numerical tests in which the influence of the distribution of masses on the acoustic absorption of layers can be analysed. It seems that the improvement by distributed
masses (implants) may be greater than the one due to the mass effect alone.
Keywords:
Poroelasticity, Weak formulation, Acoustic absorption
Author affiliations:
Zieliński T.G. - IPPT PAN
34. Zieliński T.G., Galland M.A.^♦, Ichchou M.N.^♦, Further modeling and new results of active noise reduction using elasto-poroelastic panels, ISMA 2006, International Conference on Noise and
Vibration Engineering, 2006-09-18/09-20, Louvain (BE), Vol.1, pp.309-319, 2006
The paper presents further development in modeling of active elasto-poroelastic sandwich panels. In fact, a new design of a demi-sandwich panel is proposed and analysed. A numerical model
of panel is implemented in COMSOL Multiphysics environment using the most fundamental but very flexible Weak Form PDE Mode. Various physical problems are modeled using Finite Element
Method: the wave propagation in the acoustic and poroelastic media, the vibrations of the elastic plate, and the piezoelectric behavior of the actuator. All these problems interact in the examined
application of the active panel. The presented results of FE analysis and some analytical solutions prove the necessity of modeling the panel's interaction with an acoustic medium. It is again
confirmed that active control is necessary for the lower resonances, while for the higher frequencies the passive reduction of vibroacoustic transmission performed by a
well-designed poroelastic layer is sufficient.
Keywords:
Active sandwich panels, Poroelasticity, Piezoelectricity, Vibroacoustics
Author affiliations:
Zieliński T.G. - IPPT PAN
Galland M.A. - École Centrale de Lyon (FR)
Ichchou M.N. - École Centrale de Lyon (FR)
35. Zieliński T.G.^♦, Galland M.A.^♦, Ichchou M.^♦, Active reduction of vibroacoustic transmission using elasto-poroelastic sandwich panels and piezoelectric materials, SAPEM'2005, Symposium
on the Acoustics of Poro-Elastic Materials, 2005-12-07/12-09, Lyon (FR), pp.1-8, 2005
The paper addresses the issue of an active sandwich panel made of elastic faceplates and a poroelastic core. The panel is supposed to be active thanks to piezoelectric patches glued to
one of the elastic layers. This piezoelectric actuator is used to excite the panel vibrations in the low frequency range with the aim of reducing the transmitted wave. A complete description of
the sandwich behaviour is obtained using a finite element model implemented in FEMLAB environment. The poroelastic material is modeled using a recent formulation (by Atalla et al.) valid
for harmonic oscillations, but the classical Biot formulation is also implemented. Coupling occurring between poroelastic material and plates, and between elastic plate and piezoelectric
patches is fully considered. The achieved numerical model allows prediction of transmission coefficient for plane waves under normal incidence. Hence, some numerical experiments can be
offered for multiple assembly configurations whose ultimate goal is to determine the best assembly and the best control strategy for reducing the transmission over a wide frequency range.
Keywords:
Poroelasticity, Piezoelectricity, Active vibroacoustic panels
Author affiliations:
Zieliński T.G. - other affiliation
Galland M.A. - École Centrale de Lyon (FR)
Ichchou M. - École Centrale de Lyon (FR)
Conference abstracts
1. Kowalczyk-Gajewska K., Bieniek K., Maj M., Majewski M., Opiela K., Zieliński T., THE EFFECT OF INCLUSION SPATIAL DISTRIBUTION: MODELLING AND EXPERIMENTAL VALIDATION, CMM-SolMech 2022, 24th
International Conference on Computer Methods in Mechanics; 42nd Solid Mechanics Conference, 2022-09-05/09-08, Świnoujście (PL), No.89, pp.14/89-14/89, 2022
2. Opiela K.C., Konowrocki R., Zieliński T.G., Magnetically controlled sound absorption by means of a composite additively manufactured material, EACS 2022, 7th European Conference on Structural
Control, 2022-07-10/07-13, Warszawa (PL), pp.153-154, 2022
A composite additively manufactured material for controlled sound absorption is proposed. The operation of the material is based on its changeable microgeometry with steel balls that modify
the propagation of acoustic waves when subjected to an external magnetic field. Both numerical predictions and experimental verification are provided.
Author affiliations:
Opiela K.C. - IPPT PAN
Konowrocki R. - IPPT PAN
Zieliński T.G. - IPPT PAN
3. Meissner M., Zieliński T.G., Analysis of sound absorption performance of acoustic absorbers made of fibrous materials, OSA 2022, LXVIII Otwarte Seminarium z Akustyki, 2022-09-12/09-16, Solina
(PL), DOI: 10.24425/aoa.2022.142016, No.Vol. 47, No. 3, pp.436-436, 2022
Absorbing properties of multi-layer acoustic absorbers were modeled using the impedance translation theorem and the Garai and Pompoli empirical model, which enables a determination of the
characteristic impedance and propagation constant of fibrous sound-absorbing materials. The theoretical model was applied to a computational study of the performance of a single-layer acoustic
absorber backed by a hard wall and of an absorber consisting of one layer of absorbing material and an air gap between the rear of the material and a hard back wall. Simulation results have shown
that a large thickness of absorbing material may cause wavy changes in the frequency dependence of the normal and random incidence absorption coefficients. It was also found that this effect
is particularly noticeable for acoustic absorbers with a large thickness of air gap between the absorbing material and a hard back wall.
Author affiliations:
Meissner M. - IPPT PAN
Zieliński T.G. - IPPT PAN
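The 'impedance translation theorem' invoked above propagates a backing impedance Z_b through a layer of thickness d with characteristic impedance Z_c and wavenumber k_c; in a standard form (assumed here),

Z_s = Z_c\,\frac{Z_b + \mathrm{j} Z_c \tan(k_c d)}{Z_c + \mathrm{j} Z_b \tan(k_c d)},

so that for a hard-backed layer (Z_b \to \infty) one recovers Z_s = -\mathrm{j} Z_c \cot(k_c d), while for the layer-plus-air-gap absorber the rigid wall is first translated through the air gap, Z_b = -\mathrm{j} Z_0 \cot(k_0 d_{\mathrm{air}}); the normal-incidence absorption then follows from \alpha = 1 - |(Z_s - Z_0)/(Z_s + Z_0)|^2.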
4. Opiela K.C., Zieliński T.G., Predicting sound absorption in additively manufactured porous materials using multiscale simulations in FEniCS, FEniCS 2021 Conference, 2021-03-22/03-26, Cambridge
(GB), DOI: 10.6084/m9.figshare.14495349, pp.370, 2021
Keywords:
sound absorption, porous material, multiscale modelling, coupled problem
Author affiliations:
Opiela K.C. - IPPT PAN
Zieliński T.G. - IPPT PAN
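A minimal toy sketch of the kind of cell problem such a FEniCS-based multiscale workflow involves is given below (legacy dolfin syntax; this is not the authors' code, and the unit-square "pore" is only a stand-in for a real representative cell of the printed geometry):

# Toy sketch: the scaled "thermal" cell problem -div(grad u) = 1 in the fluid
# domain, u = 0 on the pore walls; its domain average gives (up to averaging
# conventions) the static thermal permeability. The "pore" here is simply a
# unit square, so the result is the dimensionless value for a square duct.
from dolfin import (UnitSquareMesh, FunctionSpace, TrialFunction, TestFunction,
                    DirichletBC, Constant, Function, solve, dx, grad, inner,
                    assemble)

mesh = UnitSquareMesh(64, 64)
V = FunctionSpace(mesh, "CG", 2)

u, v = TrialFunction(V), TestFunction(V)
a = inner(grad(u), grad(v)) * dx
L = Constant(1.0) * v * dx
bc = DirichletBC(V, Constant(0.0), "on_boundary")   # u = 0 on pore walls

uh = Function(V)
solve(a == L, uh, bc)

# Average of the solution over the unit-area cell
k0_thermal = assemble(uh * dx)
print("scaled thermal permeability:", k0_thermal)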
5. Opiela K.C., Zieliński T.G., Attenborough K.^♦, Impedance-tube characterisation of additively manufactured slitted sound absorbers, SAPEM’2020+1, 6th (Triennial) Symposium on the Acoustics of
Poro-Elastic Materials, 2021-03-29/04-02, Purdue University, West Lafayette, Indiana (US), pp.1-2, 2021
An acoustical characterisation of additively manufactured rigid slitted structures is considered. A set of six JCAL microstructural parameters is deduced from dynamic density and bulk modulus
obtained from normal incidence surface acoustic impedance experimental data. The results show that the characteristic lengths are the most difficult to characterise.
Author affiliations:
Opiela K.C. - IPPT PAN
Zieliński T.G. - IPPT PAN
Attenborough K. - The Open University (GB)
6. Zieliński T.G., Opiela K.C., Multiscale and multiphysics modelling of an adaptive material for sound absorption, COMSOL CONFERENCE, 2018-10-22/10-24, Lausanne (CH), pp.1-2, 2018
7. Zieliński T.G., Jankowski Ł., Opiela K.C., Deckers E.^♦, Modelling of poroelastic media with localised mass inclusions, SAPEM'2017, SAPEM'2017 - 5th Symposium on the Acoustics of Poro-Elastic
Materials, 2017-12-06/12-08, Le Mans (FR), pp.1-2, 2017
8. Zieliński T.G., Multiphysics modelling of sound absorbing fibrous materials, COMSOL 2015, COMSOL Conference, 2015-10-14/10-16, Grenoble (FR), pp.1-3, 2015
Many fibrous materials are very good sound absorbers, because the acoustic waves, which propagate in air and penetrate a fibrous layer, interact with the fibres so that the wave energy is
dissipated. The dissipation is related to some viscous and thermal effects occurring on the micro-scale level. On the macroscopic level, a fibrous medium can be treated as an effective inviscid
fluid, provided that the fibres are stiff. Such a fluid-equivalent approach makes it possible to use the Helmholtz equation for the macroscopic description of sound propagation and absorption. It is
applied by the advanced Johnson-Allard models, which require from 5 to 8 parameters related to the micro-geometry of fibrous microstructure. These are the so-called transport parameters in
porous media: the open porosity and tortuosity, the permeability and its thermal analogue, two characteristic lengths (for viscous forces and thermal effects), etc. Moreover, some parameters
for air (which fills the medium) are also necessary.
Keywords:
Fibrous materials, Sound absorption, Multiphysics modelling
Author affiliations:
9. Zieliński T.G., Multiscale modelling of the acoustic waves in rigid porous and fibrous materials, PCM-CMM 2015, 3rd Polish Congress of Mechanics and 21st Computer Methods in Mechanics,
2015-09-08/09-11, Gdańsk (PL), pp.601-602, 2015
This paper presents the multiscale approach to the problem of acoustic waves propagating in a fluid (air) inside rigid fibrous or porous materials with open porosity. The approach essentially
consists of the finite element analyses of three relevant problems defined on the representative fluid domain of a porous medium, the averaging and up-scaling techniques applied to calculate
some necessary parameters from the porous microstructure which are used by a model for the effective properties of a homogenized fluid equivalent to the porous medium, and finally, the solution
of a relevant Helmholtz problem on the macro-scale level in order to estimate, for example, the acoustic absorption of the porous medium. In the paper, this approach is illustrated by two
examples: experimentally validated analyses of a fibrous material made up of a copper wire based on two Representative Volume Elements, and an analysis of a foam with spherical pores using a
randomly generated periodic representative cell.
Keywords:
Multiscale modelling, Acoustic waves, Porous media, Fibrous materials
Author affiliations:
10. Zieliński T.G., Microstructure generation for design of sound absorbing foams with spherical pores, SAPEM'2014, Symposium on the Acoustics of Poro-Elastic Materials, 2014-12-16/12-18, Stockholm
(SE), pp.1-2, 2014
The paper presents an approach for a morphological design of foams with spherical pores. It involves an algorithm for random generation of foam microstructure (controlled by some parameters),
which is used to compute the transport parameters, and then, the effective speed of sound. Eventually, the sound absorption of such designed foam can be estimated.
Keywords:
Foams with spherical pores, Microstructure design, Sound absorption, Micro-macro modelling
Author affiliations:
11. Zieliński T.G., Multiphysics modelling of sound absorption in rigid porous media based on periodic representations of their microstructural geometry, COMSOL 2013, COMSOL Conference, 2013-10-23/
10-25, Rotterdam (NL), pp.1-3, 2013
Sound absorption in porous materials with rigid frame and open porosity can be very effectively estimated by applying the Johnson-Allard model in order to substitute a porous medium with an
equivalent effective fluid and then utilise the Helmholtz equation for time-harmonic acoustics. The model uses several parameters which characterize the micro-geometry of porous material from
the macroscopic perspective; they are: the total porosity, the viscous permeability and its thermal analogue, the tortuosity, and two characteristic lengths - one specific for viscous forces,
the other for thermal effects. These parameters can be measured experimentally; however, recent computational power allows them to be calculated from the microstructure of the porous medium provided
that a good representation of usually very complex micro-geometry can be found. Inverse identification of these parameters is also possible.
The microstructural approach is based on the Multiscale Asymptotic Method and leads to two uncoupled micro-scale Boundary-Value Problems (BVPs). The first one is a harmonic, viscous,
incompressible flow, with no-slip boundary conditions on the skeleton walls, driven in the specified direction by the pressure gradient of unit amplitude, uniform in the whole fluid domain. The
second one is a harmonic thermal flow with isothermal boundary conditions on the skeleton walls and the uniform source of unit amplitude in the whole fluid domain. A scaled Laplace problem
should also be solved in order to calculate some of the parameters. All BVPs must be solved using the same periodic cell representative for the microstructure.
COMSOL Multiphysics offers two extremely useful features which make this numerical environment very suitable for such microstructure-based modelling of periodic media representative of porous materials; they are: the periodic boundary conditions and a very convenient possibility of implementing new mathematical models, or modifying the implemented ones, using symbolic expressions. To illustrate this, the acoustic absorption for a layer of freely packed assemblies of small rigid spheres was measured and compared with the result calculated from the microstructural analyses using COMSOL Multiphysics. The FE analyses were carried out using periodic representative volume elements (RVEs) of regular sphere packings, for example, the so-called body-centered cubic (BCC) packing adjusted to match the actual porosity of 42%. The discrepancies between the numerical and experimental results - although not very large - suggest that better
microstructural representations are necessary. Such RVEs may be constructed, for example, by using techniques for random generation of periodic representative volume elements which have been
recently advocated by Zielinski.
Keywords:
Multiphysics modelling, Rigid porous media, Sound absorption
Author affiliations:
12. Zieliński T.G., Concurrence of the micro-scale calculation and inverse identification of parameters used for modelling acoustics of porous media, SolMech 2012, 38th Solid Mechanics Conference,
2012-08-27/08-31, Warszawa (PL), pp.216-217, 2012
There are several widely-used acoustic models of porous media, starting from the simple, purely phenomenological model proposed by Delany and Bazley, and finishing with the semi-phenomenological propositions of Johnson et al., combined with those of Champoux and Allard, with some important variations proposed by Pride, Lafarge, and others. All these models use average macroscopic parameters, namely: the total porosity and flow resistivity (or permeability) - for the Delany-Bazley model - which are supplemented by the average tortuosity of pores and their characteristic dimensions - in the case of the more advanced semi-phenomenological models. These models describe the acoustic wave propagation in porous media in a wide frequency range, provided that the skeleton is rigid. However, using some formulas derived for these models with Biot’s theory of poroelasticity makes it possible to correctly describe sound propagation in soft porous materials. Thus, the determination of the above-mentioned parameters is very important. Direct experimental measurement requires specialized equipment, different for various
parameters. Therefore, an inverse identification based on curves of, for example, acoustic impedance or absorption (measured for samples of known thickness) can be used to estimate the model
parameters. In this work, it will be shown that knowledge of microstructural geometry of porous medium is very helpful to validate correct estimation. Moreover, a periodic microscopic cell
consisting of a few pores representing an average morphology of porous ceramics is proposed to serve for numerical analyses to estimate permeability parameters. The concurrence of such
micro-scale derivation and inverse identification is discussed.
Keywords:
Porous media, Micro-scale calculations, Inverse identification, Sound waves
Author affiliations:
13. Nowak Ł.J.^♦, Zieliński T.G., Active vibroacoustic control of beams and plates with general boundary conditions, SolMech 2012, 38th Solid Mechanics Conference, 2012-08-27/08-31, Warszawa (PL),
pp.294-295, 2012
Active vibroacoustic control of beam and plate structures with arbitrary boundary conditions is considered. The goal is to develop a method of minimizing the sound radiation efficiency of such structures. The primary sound field arises as a result of vibrations due to external disturbances. It is assumed that the control system is compact - it does not contain any additional ambient microphones. Piezoelectric transducers, mounted on the surface of the controlled object, are used as sensors and actuators. An accurate numerical model of the considered structure is needed to determine the optimal parameters of the control system. The theoretical background and the results of numerical and experimental research are briefly introduced.
Keywords:
Active vibroacoustic control, Plate structures, Beams, Piezoelectric transducers
Author affiliations:
Nowak Ł.J. - other affiliation
Zieliński T.G. - IPPT PAN
14. Zieliński T.G., Porous foams with active implants improving acoustic absorption, SAPEM'2008, Symposium on the Acoustics of Poro-Elastic Materials, 2008-12-17/12-19, Bradford (GB), pp.1-4, 2008
The paper presents an accurate multiphysics modeling and analysis of active porous-composite sound absorbers composed of a layer of poroelastic material (a porous foam) with embedded elastic
implants having active (piezoelectric) elements. The purpose of such active composite material is to significantly absorb the energy of acoustic waves in a wide frequency range. At the same
time the total thickness of composites should be very small. The active parts of composites are used to adapt the absorbing properties of porous layers to different noise conditions by
affecting the so-called solid-borne wave (originating mainly from the vibrations of elastic skeleton of porous medium) to counteract the fluid-borne wave (resulting mainly from the vibrations
of air in the pores); both waves are strongly coupled, especially at lower frequencies. The passive and active performance of the absorbers is analysed to test the feasibility of this
approach. Since the absorption should be actively improved by affecting the vibrations of the elastic skeleton of porous layers, it is apparent that the rigid-frame modelling cannot be used
here. Instead, the advanced biphasic theory of poroelasticity must be used to model porous material of the active absorbers.
Keywords:
Poroelasticity, Active piezoelectric inclusions, Smart materials, Acoustic absorption
Author affiliations:
Patents:
Patent application number / filing date: 442254, 2022-09-12
Creators: Konowrocki R., Zieliński T. G., Opiela K. C.
Title: Sposób adaptacyjnego pochłaniania dźwięku i izolacji akustycznej poprzez modyfikację mikrogeometrii warstwy porowatej (A method of adaptive sound absorption and acoustic insulation through modification of the microgeometry of a porous layer)
Country and patent holder: PL, Instytut Podstawowych Problemów Techniki PAN | {"url":"http://oldwww.ippt.pan.pl/pl/staff.html?idj5=tzielins&mrip=3.238.202.29","timestamp":"2024-11-13T22:39:38Z","content_type":"application/xhtml+xml","content_length":"247761","record_id":"<urn:uuid:3c28c61b-62da-4116-8b41-15eeb0cab4d7>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00666.warc.gz"}
Acceleration 83305 - math word problem (83305)
Acceleration 83305
A body of mass 500 kg is lifted with a uniformly accelerated rectilinear motion using a rope. Determine the acceleration at which the rope breaks if it sustains a load of 15,000 N.
Correct answer:
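A worked sketch of one way to obtain the answer, assuming the standard value g ≈ 9.81 m/s² (the problem statement does not specify it):

m = 500.0        # kg, mass of the lifted body
F_max = 15000.0  # N, maximum load the rope can sustain
g = 9.81         # m/s^2, assumed gravitational acceleration

# Newton's second law for the lifted body: F_max - m*g = m*a, so a = F_max/m - g
a = F_max / m - g
print(round(a, 2))  # about 20.19 m/s^2; the rope breaks once the acceleration exceeds this value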
Related math problems and questions: | {"url":"https://www.hackmath.net/en/math-problem/83305","timestamp":"2024-11-13T21:36:47Z","content_type":"text/html","content_length":"50288","record_id":"<urn:uuid:455bd855-82eb-4ef4-a5a0-de2768d78abc>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00822.warc.gz"} |
47953: Business Technology Seminar
Seminar Information
Seminar Description:
In this seminar, we study a number of approaches for determining the relative importance of nodes in networks. The problem of relative relevance or importance in network and graphical data appears in a very broad spectrum of applications, such as: (1) Given a directed graph of web pages linking to other web pages, which web page is the most "relevant" given a certain keyword query? (2) Given a social network with undirected friendships between users: Which user in the network is the most "similar" to a
pages, which web page is the most "relevant" given a certain keyword query? (2) Given a social network with undirected friendships between users: Which user in the network is the most "similar" to a
chosen user? (3) Given an interaction network of users on eBay with a subset of nodes identified as fraudsters (labeled training data), which are the most likely other users that are fraudsters too
based on common interaction patterns? (4) Given an electrical network and assuming independent failure of edges: which node from a chosen subset is the one that is most likely reachable (i.e.
connected) to the source set ("source-terminal reliability")?
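To make application (1) concrete, a minimal power-iteration sketch of PageRank is shown below (for orientation only; the damping factor of 0.85 is simply the conventional choice):

import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10, max_iter=1000):
    # adj[i, j] = 1 if there is a directed edge i -> j
    n = adj.shape[0]
    out_deg = adj.sum(axis=1)
    # row-stochastic transitions; dangling nodes jump uniformly to all nodes
    rows = np.where(out_deg[:, None] > 0, adj / np.maximum(out_deg[:, None], 1), 1.0 / n)
    P = rows.T  # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        r_new = damping * (P @ r) + (1 - damping) / n
        if np.abs(r_new - r).sum() < tol:
            break
        r = r_new
    return r

adj = np.array([[0, 0, 1],
                [0, 0, 1],
                [1, 0, 0]], dtype=float)
print(pagerank(adj).round(3))  # node 2, linked from both other nodes, scores highest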
Our goal is to understand the connections between seemingly unconnected scenarios and algorithms by studying the workings, assumptions, and limitations of a few selected approaches in depth, then
trying to find common properties and differences. In doing so, our analysis and discussions will be guided by questions, such as:
• What is a good measure of "relevance" or "similarity?"
• When or why does one approach work better (what is the right metric) than another?
• What are some conditions under which one model can be transferred into another model?
• Why can some methods be evaluated in polynomial time while others are computationally hard?
Recommended Background Knowledge: The seminar is targeted towards PhD students with a good understanding of the following topics (or willingness to get up to speed during the seminar). We will, at times, read sections of these:
1. Linear Algebra
2. Probability Theory
3. Fundamentals of network theory
4. Graphical models
5. Complexity theory
Instructor: Wolfgang Gatterbauer
Assistant Professor of Business Technologies, Tepper School of Business, CMU.
Courtesy Assistant Professor, School of Computer Science, CMU.
Office: Tepper 354
Office Hours: via email
Seminar Requirements
1. Presentations and in-class contributions (50%): We will study the workings and predictions of several different approaches. Everybody will read the papers upfront and analyze the methods on suggested example graphs in the manner they find most appropriate (e.g., simulation in Matlab, or handwritten drawings). Students are given 20-30 min at the beginning of each class to discuss their findings with another student who is randomly assigned by the instructor. Each team is expected to be able to present their findings to the class (ideally on the overhead projector with handwritten slides). One pair of students is announced in the previous class and is in the lead of the topic for each class. The lead team is expected to understand the paper more thoroughly than everybody else. Material from each student is handed in each class. Important: you do not need to hand in digitally prepared slides; often drawing by pen on paper together with your assigned partner is far more effective. The goal is to understand the paper, and to help others understand your insights, not to produce fancy slides. Thus come with your pen and lots of paper. We may also possibly use a shared tool for collaborative drawing in class by students and instructors (to be determined). Here are some guiding questions:
• What is the paper's setting? How does it differ from other papers read so far?
• What are the explicit and implicit assumptions made by the paper? Why these assumptions? Are they justified or not?
• What is the abstract problem the paper tries to solve? How does it differ from other papers read so far?
• What are the main strengths and weaknesses of this paper? What prevented the authors from addressing the weaknesses?
• What are the main technical contributions? Where does the paper fit in the related literature?
• How does the method work and what does it predict on a specific illustrative graph?
• How does the method generalize to other possible applications? Is there any relationship to empirical work? Are there testable implications?
• What are the conclusions?
• What did you learn from the paper?
• What are some possible follow-up questions to be resolved?
• What are some other questions for additional discussion to which you may not know the answer yourself?
2. Written research proposal or small research project with presentation (50%): Minimum 1000 words and 3 drawings, maximum 5000 words (no limit on drawings). Students will hand in and present their research proposals on the last day of class. The discussion will be very
interactive, i.e. questions will be posed during presentation, not only after.
• 2a) Methods option (mainly targeted towards CS and ACO students)
□ Deals with a method / approach / algorithm that is motivated by or related to the class material.
□ Reviews, critiques and compares the relevant literature,
□ Proposes one or more pertinent and interesting research questions,
□ Communicates the (potential) contributions from studying these questions and their relevance both to research and practice,
□ Presents a possible approach, along with any developed model/method or analysis, and
□ Concludes with a summary and discussion of the main ideas and/or results and/or possible extensions.
• 2b) Applied option (mainly targeted towards more applied fields, e.g., Finance or OM)
□ Deals with a topic that is motivated by the students' discipline and data set of interest, yet still related to our class topic.
□ Motivates the relevance of the problem and reviews the current state-of the art
□ Adapts and compares one or several algorithms to this real-world application
□ Evaluates empirically how well the approach works, and why or why not.
□ Concludes with a summary and discussion of the main ideas and/or results and/or possible extensions.
Class schedule
Below is a tentative calendar. I may modify this list as we go along. Please see the date at the bottom of this page to learn if this post has been updated.
1. Oct 24: Random walks and variations
• (1a) Graph concepts, Variations of Shortest Paths, Random walks and PageRank Optional:
2. Oct 31: From Random walks to Label Propagation
• (1b) Personalized PageRank, HITS and variations Optional:
• (2) Label propagation Optional:
3. Nov 7: Belief Propagation and variations
• (3) Belief Propagation in Graphical Models
□ Ch 22.2 of [Murphy 2012]: Loopy belief propagation
□ Ch 8.4.4-8.4.7 of [Bishop 2006]: Sum-product algorithm to Loopy belief Propagation
• (4) Linearized Belief Propagation
4. Nov 14: Network reliability and electrical networks / Structural Similarity
• (5) Laplacian matrix and Electrical networks
• (6) Structural ("Regular") Similarity
• (7) Network reliability
5. Nov 21: Modeling the brain
• (8) Spreading activation
• (9) Neural networks (Feed-forward networks → Backpropagation)
• (10) Deep belief networks
6. Dec 5: From Graphs to Hypergraphs and Databases
• (11) Reliability → Boolean Functions → hypergraphs
• (12) Hypergraphs & databases → probabilistic databases (query reliability vs. query dissociation)
7. Dec 12: Student presentations and comparative review | {"url":"https://gatterbauer.name/47953/13fa/","timestamp":"2024-11-10T09:37:00Z","content_type":"text/html","content_length":"27386","record_id":"<urn:uuid:9fd386bb-0873-43af-b371-3083488033df>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00461.warc.gz"} |
Probabilistic Existence Results for Separable Codes
Separable codes were defined by Cheng and Miao in 2011, motivated by applications to the identification of pirates in a multimedia setting. Combinatorially, $\overline{t}$-separable codes lie
somewhere between $t$-frameproof and $(t-1)$-frameproof codes: all $t$-frameproof codes are $\overline{t}$-separable, and all $\overline{t}$-separable codes are $(t-1)$-frameproof. Results for
frameproof codes show that (when $q$ is large) there are $q$-ary $\overline{t}$-separable codes of length $n$ with approximately $q^{\lceil n/t\rceil}$ codewords, and that no $q$-ary $\overline{t}$-separable codes of length $n$ can have more than approximately $q^{\lceil n/(t-1)\rceil}$ codewords.
The paper provides improved probabilistic existence results for $\overline{t}$-separable codes when $t\geq 3$. More precisely, for all $t\geq 3$ and all $n\geq 3$, there exists a constant $\kappa$
(depending only on $t$ and $n$) such that there exists a $q$-ary $\overline{t}$-separable code of length $n$ with at least $\kappa q^{n/(t-1)}$ codewords for all sufficiently large integers $q$. This
shows, in particular, that the upper bound (derived from the bound on $(t-1)$-frameproof codes) on the number of codewords in a $\overline{t}$-separable code is realistic.
The results above are more surprising after examining the situation when $t=2$. Results due to Gao and Ge show that a $q$-ary $\overline{2}$-separable code of length $n$ can contain at most $\frac{3}{2}q^{2\lceil n/3\rceil}-\frac{1}{2}q^{\lceil n/3\rceil}$ codewords, and that codes with at least $\kappa q^{2n/3}$ codewords exist. Thus optimal $\overline{2}$-separable codes behave neither like
$2$-frameproof nor $1$-frameproof codes.
The paper also observes that the bound of Gao and Ge can be strengthened to show that the number of codewords of a $q$-ary $\overline{2}$-separable code of length $n$ is at most
q^{\lceil 2n/3\rceil}+\tfrac{1}{2}q^{\lfloor n/3\rfloor}(q^{\lfloor n/3\rfloor}-1). | {"url":"https://pure.royalholloway.ac.uk/en/publications/probabilistic-existence-results-for-separable-codes","timestamp":"2024-11-07T22:05:42Z","content_type":"text/html","content_length":"48537","record_id":"<urn:uuid:f4dc7fd2-5906-4e04-ae78-3aed33b4ff30>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00593.warc.gz"} |
Exploring Mathematical Patterns
published : 14 March 2024
Mathematics is replete with patterns that captivate the human mind and reveal the underlying order of the universe. From simple arithmetic sequences to complex fractals, mathematical patterns offer
insight into the beauty and complexity of the world around us.
The Beauty of Fibonacci Sequence
The Fibonacci sequence is perhaps one of the most famous and mesmerizing mathematical patterns. In this sequence, each number is the sum of the two preceding ones: 0, 1, 1, 2, 3, 5, 8, 13, and so on.
This sequence appears in various natural phenomena, from the spiral of a seashell to the arrangement of petals in a flower.
The Fibonacci sequence exhibits remarkable properties and relationships that have fascinated mathematicians for centuries. Its prevalence in nature highlights the deep connection between mathematics
and the natural world.
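As a small illustration, the sequence can be generated in a few lines (a minimal sketch):

def fibonacci(n):
    # return the first n Fibonacci numbers, starting from 0 and 1
    seq = []
    a, b = 0, 1
    for _ in range(n):
        seq.append(a)
        a, b = b, a + b
    return seq

print(fibonacci(8))  # [0, 1, 1, 2, 3, 5, 8, 13]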
The Intricacy of Fractals
Fractals are geometric shapes that exhibit self-similarity at different scales. These intricate patterns can be generated by simple mathematical equations and are found throughout nature, from the
branching of trees to the structure of coastlines.
The Mandelbrot set, one of the most famous fractals, is generated by iterating a simple equation in the complex plane. The resulting image reveals an infinite array of intricate patterns and
structures, each more complex than the last.
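A minimal sketch of the underlying escape-time iteration (the escape radius of 2 and the iteration cap are conventional choices):

def in_mandelbrot(c, max_iter=100):
    # return True if the complex point c appears to belong to the Mandelbrot set
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:  # the orbit escaped, so c is not in the set
            return False
    return True

print(in_mandelbrot(-1.0), in_mandelbrot(1.0))  # True False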
The Harmony of Geometric Patterns
Geometric patterns, such as tessellations and symmetry groups, abound in mathematics and art. These patterns are characterized by their regularity and repetition, reflecting the underlying order of
geometric shapes and structures.
Penrose tiling, for example, is a non-periodic tiling pattern that exhibits fivefold symmetry. Discovered by mathematician Roger Penrose, this pattern has inspired artists and mathematicians alike
with its intricate beauty and mathematical elegance.
In conclusion, mathematical patterns offer a glimpse into the underlying order and complexity of the universe. From the mesmerizing Fibonacci sequence to the intricate beauty of fractals and
geometric patterns, mathematics reveals a world of infinite wonder and exploration. As we continue to explore the mysteries of mathematical patterns, let us marvel at the beauty and elegance that
surround us. | {"url":"https://www.function-variation.com/article86","timestamp":"2024-11-06T10:49:58Z","content_type":"text/html","content_length":"16882","record_id":"<urn:uuid:51fa4ce4-4c0a-441e-a30c-46c5f06653b8>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00377.warc.gz"} |
How to Calculate and Solve for Entrance Velocity | Darcy's Law | Water Budget
The image above represents entrance velocity.
To calculate entrance velocity, one essential parameter is needed, and this parameter is Hydraulic Conductivity (k[i]).
The formula for calculating entrance velocity:
V[e] = k[i]
V[e] = Entrance Velocity
k[i] = Hydraulic Conductivity
Let’s solve an example;
Find the entrance velocity when the hydraulic conductivity is 8.
This implies that;
k[i] = Hydraulic Conductivity = 8
V[e] = k[i]
V[e] = 8
Therefore, the entrance velocity is 8.
Read more:How to Calculate and Solve for Discharge | Chezy’s Equation | Water Budget
How to Calculate Entrance Velocity | Darcy’s Law With Nickzom Calculator
Nickzom Calculator – The Calculator Encyclopedia is capable of calculating the entrance velocity.
To get the answer and workings of the entrance velocity using the Nickzom Calculator – The Calculator Encyclopedia. First, you need to obtain the app.
You can get this app via any of these means:
Web – https://www.nickzom.org/calculator-plus
To get access to the professional version via web, you need to register and subscribe for NGN 1,500 per annum to have utter access to all functionalities.
You can also try the demo version via https://www.nickzom.org/calculator
Android (Paid) – https://play.google.com/store/apps/details?id=org.nickzom.nickzomcalculator
Android (Free) – https://play.google.com/store/apps/details?id=com.nickzom.nickzomcalculator
Apple (Paid) – https://itunes.apple.com/us/app/nickzom-calculator/id1331162702?mt=8
Once, you have obtained the calculator encyclopedia app, proceed to the Calculator Map, then click on Agricultural under Engineering.
Now, Click on Water Budget under Agricultural
Now, Click on Entrance Velocity under Water Budget
The screenshot below displays the page or activity to enter your values, to get the answer for the entrance velocity according to the respective parameter, which is the Hydraulic Conductivity (k[i]).
Now, enter the values appropriately and accordingly for the parameters as required; the Hydraulic Conductivity (k[i]) is 8.
Finally, Click on Calculate
As you can see from the screenshot above, Nickzom Calculator– The Calculator Encyclopedia solves for the entrance velocity and presents the formula, workings and steps too. | {"url":"https://www.nickzom.org/blog/2020/02/16/how-to-calculate-and-solve-for-entrance-velocity-darcys-law-water-budget/","timestamp":"2024-11-10T04:37:24Z","content_type":"text/html","content_length":"238361","record_id":"<urn:uuid:b3b59442-83d9-4478-b65c-9ecaa990d294>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00691.warc.gz"} |
In the Dune stories, mentats are trained to do complex calculations without any mechanical aids. For normal people, instantly knowing the multiplication up to 10 is a useful skill. Extending the
multiplication table to twenty seems to require the skills of a mathematical savant.
However, calculating products of numbers up to 20 actually only requires the single-digit multiplication table and the ability to add small numbers mentally. My goal is to be able to do these calculations
without paper. How?
If one of the numbers is a single digit and the other is between 10 and 20, the calculation works like this:
A number between 10 and 20 can be written as \(10 + a\) or \(1a\). For the result you have one single-digit multiplication and an addition of a number times ten.
$$1a * b = (a*b) + (10 * b)$$
Since \(10 * b\) is just \(b\) shifted left by one, you can get the result by identifying \(a*b\) and then adding \(b\) to the second digit.
$$1\color{blue}{5} * \color{red}{7} = (\color{magenta}{35} + \color{red}{7}0) = 105$$
The mental steps are “calculate \(\color{blue}{5} * \color{red}{7}\)” and then “add \(\color{red}{7}\) to the second digit.”
For the product of a single digit times and a number under twenty, you always use the normal multiplication table and single digit addition.
If both numbers are between 11 and 19, the calculation works out this way.
$$1a * 1b = (10 + a) * (10 + b) = a * b + 10 * (a + b) + 100$$
To start, multiply the singles place values. For example \(1\color{blue}{5} * 1\color{red}{7}\) starts as \(\color{magenta}{35}\).
Then add \(\color{blue}{5} +\color{red}{7}\), giving \(\color{orange}{12}\).
Add that in the 10s place \(\color{magenta}{35} + 10 * \color{orange}{12}\) giving \(\color{brown}{155}\) and finally, increment the hundreds digit, giving \(\color{green}{255}\)
This involves multiplying single-digit numbers, adding a pair of single digits, adding a digit and a number less than twenty, and incrementing a digit. It works out to needing to hold six or so digits in mind at once, which can be less than one's estimated working memory of 7 digits.
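To double-check the procedure, the steps for two numbers between 11 and 19 can be written out directly (a small sketch of the method above):

def teens_product(x, y):
    # multiply two numbers between 11 and 19 using the mental-math steps described above
    assert 11 <= x <= 19 and 11 <= y <= 19
    a, b = x - 10, y - 10          # the ones digits
    ones = a * b                   # single-digit multiplication
    tens = a + b                   # single-digit addition, placed in the tens position
    return ones + 10 * tens + 100  # plus the implicit 10 * 10

print(teens_product(15, 17))  # 255, matching the worked example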
If one of the numbers is 10, add a zero to the right of the other value. For twenty, double the other number and add a zero. Now there's a simple way to multiply two numbers where both are 20 or less.
With a little practice, I can do this in my head. More importantly, I'm confident enough of my result that I don't need to double-check on a calculator. | {"url":"https://blog.smilingy.com/tag/twenty/","timestamp":"2024-11-11T20:16:13Z","content_type":"text/html","content_length":"41888","record_id":"<urn:uuid:d1102f0a-3541-4ca0-b092-7f5f80576ac8>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00865.warc.gz"}
Converting Fractions Into Decimals Worksheet With Answers - Decimal Worksheets
Converting Fractions Into Decimals Worksheets – Converting fractions into decimals can be a difficult process for a number of kids. Even so, BYJU’s Fractions To … Read more
Convert Fraction To Decimal Worksheet With Answers
Convert Fraction To Decimal Worksheet With Answers – Converting fractions to decimals is a tough process for a number of children. Nevertheless, BYJU’s Fractions To … Read more
Turning Fractions Into Decimals Worksheet
Turning Fractions Into Decimals Worksheet – Converting fractions into decimals is a demanding process for several children. Even so, BYJU’s Fractions To Decimals Worksheet can … Read more
Turn Fractions Into Decimals Worksheet
Turn Fractions Into Decimals Worksheet – Converting fractions into decimals is really a tough process for a lot of youngsters. Even so, BYJU’s Fractions To … Read more
Converting Decimals To Fractions Worksheets With Answers
Converting Decimals To Fractions Worksheets With Answers – Converting decimals to fractions is actually a tough process for several youngsters. Nevertheless, BYJU’s Fractions To Decimals … Read more
Converting Fractions To Decimals Worksheet With Answers
Converting Fractions To Decimals Worksheet With Answers – Converting fractions to decimals is actually a tough process for several children. Even so, BYJU’s Fractions To … Read more
Changing Fractions Into Decimals Worksheet
Changing Fractions Into Decimals Worksheet – Changing decimals from fractions is a challenging procedure for a number of children Nonetheless, BYJU’s Fractions To Decimals Worksheet … Read more | {"url":"https://www.decimalworksheets.com/tag/converting-fractions-into-decimals-worksheet-with-answers/","timestamp":"2024-11-06T19:54:07Z","content_type":"text/html","content_length":"96748","record_id":"<urn:uuid:f4dddefe-59fb-404f-bdd7-9e8befa15da5>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00632.warc.gz"} |
How to calculate your hiking time
It is tricky to forecast and predict one's hiking time. To calculate the tour time, you have to take different factors into account. We will show you which variables have the greatest impact and how to figure out your walking time.
Two key factors determine the walking time: vertical elevation gain and distance. Both values have to be considered for the ascent as well as for the descent. International alpine clubs have agreed on a consistent and standardized formula to calculate the average walking time on a hike in the mountains. Nevertheless, this is only a rough estimate for your trip and other factors may influence your time.
The formula:
The model assumes that mountaineers need about one hour to climb 300 m / 985 ft in vertical elevation gain and one hour to descend 500 m / 1,640 ft. Furthermore, they can hike 4 km / 2.5 miles per hour on flat terrain. From this basic information a formula can be derived:
• Calculate the time needed for the vertical distance and for the horizontal distance separately
• Halve the smaller amount
• Sum up the two results
Example of a hiking time calculation:
For one of our acclimatization tours we expect 1.200 m / 3,937 ft in vertical elevation gain and 12 km / 7.5 miles in distance.
• First, we’ll calculate the time consumption for vertical distance: As we need one hour for about 300 meters in altitude, our ascent of 1.200 meters takes 4 hours. Furthermore, we have to add the
time of the descent. As we need one hour for 500 meters, it takes us 2 hours and 24 minutes for the 1.200 m downhill. This adds up to a total hiking time of 4 h ascent + 2 h 24 min descent = 6
hours and 24 minutes. Afterwards we calculate the horizontal time needed. As we walk about 4 km in one hour, the horizontal time would be 3 hours. In most cases, this distance describes the whole trip
and not only the ascent. We don’t need to take the descent into account.
• The second step is to halve the smaller value. In this scenario it is the horizontal time of only 3 hours. Hence, the result is 1 h and 30 min.
• Finally, the last step adds the halved time to the rest of the hiking time. In our case this is 6 h 24 min + 1 h 30 min = 7 hours and 54 minutes.
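The same calculation can be scripted; here is a small sketch of the rule of thumb above (it uses the sea-level rates, and the altitude-adjusted rates discussed below can be substituted for up_rate):

def hiking_time_hours(ascent_m, descent_m, distance_km,
                      up_rate=300.0, down_rate=500.0, flat_rate=4.0):
    # standard alpine-club rule of thumb: halve the smaller of the two partial times
    vertical = ascent_m / up_rate + descent_m / down_rate   # hours for elevation gain and loss
    horizontal = distance_km / flat_rate                    # hours for horizontal distance
    smaller, larger = sorted([vertical, horizontal])
    return larger + smaller / 2

print(round(hiking_time_hours(1200, 1200, 12), 2))  # 7.9 hours, i.e. about 7 h 54 min as above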
Higher Altitudes:
If you are climbing at high altitude, the estimated time for elevation gain might change. The higher you go, the less oxygen is available in the air. Therefore, hiking and climbing become more difficult, which can slow your pace drastically. Franz Berghold and Wolfgang Schaffert published estimates of achievable elevation gain at higher altitudes. The following table was published in
their book “Handbuch der Trekking- und Höhenmedizin (2009)” (Handbook for trekking- and altitude medicine, 2009):
ALTITUDE (in meters) PROFESSIONAL CLIMBERS TREKKERS / AMATEURS
under 2.000 m / 6,562 ft 500 m / 1,640 ft vertical elevation gain per hour 300 m / 985 ft vertical elevation gain per hour
approx. 3.000 m / 9,843 ft 425 m / 1,395 ft 255 m / 837 ft
approx. 4.000 m / 13,123 ft 375 m / 1,230 ft 225 m / 738 ft
approx. 5.000 m / 16,404 ft 325 m / 1,066 ft 195 m / 640 ft
approx. 6.000 m / 19,685 ft 275 m / 902 ft 165 m / 540 ft
approx. 7.000 m / 22,965 ft 225 m / 738 ft 135 m / 443 ft
approx. 8.000 m / 26,247 ft 175 m / 575 ft 105 m / 345 ft
Example Cotopaxi:
One of the most popular mountains in Ecuador is Cotopaxi. From the refuge it is about 1.200 m / 3,937 ft in vertical elevation gain and 3 km / 1.86 miles to the summit.
• The speed for vertical gain is slower at higher altitudes. We are mountaineering at approximately 5.000 m / 16,404 ft and therefore climb 195 m / 640 ft in vertical elevation gain in one hour. This results in 6 h and 9 min for the ascent. For the descent we still calculate 500 m / 1,640 ft per hour, which is 2 h 24 min. The round trip takes us 6 h 9 min + 2 h 24 min = 8 hours 33 minutes. The horizontal distance is only 6 km / 3.7 miles for ascent and descent, and as we walk 4 km / 2.5 miles per hour this would take us 1 h and 30 min.
• For the second step we halve the smaller amount, which is the horizontal distance. This leads to a time of 45 minutes.
• We sum up the two numbers and get an estimated hiking time of: 8 h 33 min + 45 min = 9 hours and 18 minutes
Different factors:
Of course, this is only an estimated hiking time. You still have to consider different factors like weather conditions, climbing speed, equipment weight and trail conditions. In general, we
recommend adding at least one hour for breaks or unexpected obstacles.
andean summit adventure – we share your passion and realize your dreams.
2 thoughts on “How to calculate your hiking time”
1. […] planning your trip in Ecuador, check these articles on how to calculate your hiking time. Here, get... andeansummitadventure.rocks/best-haciendas-mountain-lodges-ecuador | {"url":"https://www.andeansummitadventure.rocks/hiking-time-calculations/","timestamp":"2024-11-06T06:12:06Z","content_type":"text/html","content_length":"63093","record_id":"<urn:uuid:8c99076b-4bb9-4072-a52a-04ec7b8dd1bc>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00204.warc.gz"} |
Content Domain IV: Trigonometry and Calculus
Descriptive Statements:
• Apply trigonometric functions to solve problems involving distance and angles.
• Apply trigonometric functions to solve problems involving the unit circle.
• Manipulate trigonometric expressions and equations using techniques such as trigonometric identities.
• Analyze the relationship between a trigonometric function and its graph.
• Use trigonometric functions to model periodic relationships.
Sample Item:
Which of the following are the solutions to 2 sin^2 θ = cos θ + 1 for 0 < θ ≤ 2π?
Correct Response and Explanation
B. This question requires the examinee to manipulate trigonometric expressions and equations using techniques such as trigonometric identities. Since sin^2 θ = 1 – cos^2 θ, 2 sin^2 θ = cos θ + 1 ⇒ 2(1 – cos^2 θ) = cos θ + 1 ⇒ 2 cos^2 θ + cos θ – 1 = 0 ⇒ (2 cos θ – 1)(cos θ + 1) = 0 ⇒ cos θ = 1/2 or cos θ = –1. Thus for 0 < θ ≤ 2π, θ = π/3, 5π/3, or π.
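A quick numerical check of these solutions (illustrative only, not part of the study guide):

import math

for theta in (math.pi / 3, math.pi, 5 * math.pi / 3):
    lhs = 2 * math.sin(theta) ** 2
    rhs = math.cos(theta) + 1
    print(round(lhs, 10) == round(rhs, 10))  # True for each of the three solutions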
Descriptive Statements:
• Evaluate limits.
• Demonstrate knowledge of continuity.
• Analyze the derivative as the slope of a tangent line and as the limit of the difference quotient.
• Calculate the derivatives of functions (e.g., polynomial, exponential, logarithmic).
• Apply differentiation to analyze the graphs of functions.
• Apply differentiation to solve real-world problems involving rates of change and optimization.
Sample Item:
If f(x) = 3x^4 – 8x^2 + 6, what is the value of the limit as h approaches 0 of [f(1 + h) – f(1)] / h?
1. –4
2. –1
3. 1
4. 4
Correct Response and Explanation
A. This question requires the examinee to analyze the derivative as the slope of a tangent line and as the limit of the difference quotient. The limit expression is equivalent to the derivative f'(1). Since it is much easier to evaluate the derivative of a polynomial, this is preferred over evaluating the limit expression. f'(x) = 12x^3 – 16x, so f'(1) = 12 – 16 = –4.
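The same value can be confirmed symbolically, for example with SymPy (illustrative only, not part of the study guide):

import sympy as sp

x, h = sp.symbols('x h')
f = 3 * x**4 - 8 * x**2 + 6
limit_value = sp.limit((f.subs(x, 1 + h) - f.subs(x, 1)) / h, h, 0)
print(limit_value, sp.diff(f, x).subs(x, 1))  # both evaluate to -4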
Descriptive Statements:
• Analyze the integral as the area under a curve and as the limit of the Riemann sum.
• Calculate the integrals of functions (e.g., polynomial, exponential, logarithmic).
• Apply integration to analyze the graphs of functions.
• Apply integration to solve real-world problems.
Sample Item:
A sum of $2000 is invested in a savings account. The amount of money in the account in dollars after t years is given by the equation A = 2000e^(0.05t). What is the approximate average value of the account over the first two years?
1. $2103
2. $2105
3. $2206
4. $2210
Correct Response and Explanation
A. This question requires the examinee to apply integration to solve real-world problems. The average value of a continuous function f(x) over an interval [a, b] is 1/(b – a) ∫[a to b] f(x) dx. Since the independent variable t represents the number of years, the average daily balance over 2 years will be 1/2 of the integral of the function evaluated from 0 to 2: (1/2) ∫[0 to 2] 2000e^(0.05t) dt = (1000/0.05)(e^0.1 – 1) ≈ 2103. | {"url":"https://www.west.nesinc.com/TestView.aspx?f=HTML_FRAG/NT304_Profile_CD4.html","timestamp":"2024-11-03T23:32:10Z","content_type":"application/xhtml+xml","content_length":"27661","record_id":"<urn:uuid:da813b0f-f0f0-4e4b-abf5-a37b165abdfa>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00342.warc.gz"}
Memory and Generalization: Understanding AI's Core Challenges
Written on
Chapter 1: The Essence of Generalization in AI
The capacity to generalize from unseen data is fundamental to machine learning. This crucial aspect has become the focus of significant research within the realm of artificial intelligence.
A pivotal question in AI research revolves around enabling algorithms to effectively generalize beyond their training data. Typically, machine learning models are developed under an i.i.d.
(independent and identically distributed) assumption, meaning that both training and testing datasets stem from the same distribution. Consequently, generalization involves identifying this shared
underlying distribution using only the training data.
However, the i.i.d. assumption often falters in real-world scenarios where environments are dynamic, necessitating out-of-distribution (o.o.d.) learning for effective adaptation. Humans excel at
generalization compared to machines; we are adept at recognizing shifts in distribution and can infer rules from just a few examples. Our ability to adjust our inference models flexibly stands in
contrast to many traditional ML models that face the issue of catastrophic forgetting, where neural networks seem to erase previous knowledge when introduced to new data.
Generalization is intricately linked to the concepts of overfitting and underfitting in training data. Overfitting occurs when a model becomes overly complex and captures noise rather than signal.
Common strategies to combat overfitting include using simpler models, pruning, and employing regularization techniques like dropout and L2 norms. However, these intuitions have been challenged by the
phenomenon of double descent, where higher-capacity models perform worse than their lower-capacity counterparts due to overfitting, yet even larger models can outperform lower-capacity models in generalization.
The performance of large-scale transformer-based models, such as GPT-3, has further complicated the understanding of generalization, showcasing an uncanny ability to tackle tasks without explicit
training on them.
Video Description: This video explores the quantification and comprehension of memorization in deep neural networks, shedding light on their generalization capabilities.
DeepMind's Flamingo model takes this a step further by merging language and vision models, demonstrating a capacity to generalize knowledge across diverse tasks. The capability to represent knowledge
that transcends specific tasks signifies a more sophisticated form of intelligence compared to a neural network that requires vast amounts of labeled data to classify objects.
Thus, the remarkable success of these models raises intriguing questions regarding the nature of generalization and the mechanisms through which it is achieved. As model sizes grow, with parameter
counts approaching those of the human brain, one wonders if these models simply retain all training data cleverly or if they possess a deeper understanding.
The relationship between generalization and memory is vital: when we extract understanding from data, we gain access to a more adaptable and concise representation of knowledge than mere memorization
allows. This skill is essential in many unsupervised learning contexts, such as disentangled representation learning. Consequently, the ability to generalize to unseen data is not just a core tenet
of machine learning but also a key aspect of intelligence itself.
Chapter 2: The Complexity of Memory in AI
According to Markus Hutter, intelligence shares many traits with lossless compression. The Hutter Prize, awarded for advancements in text file compression, underscores this notion. Alongside his
colleague Shane Legg, Hutter distilled intelligence into a formula that captures its essence: the capacity of an agent to derive value from various environments, accounting for their complexity.
In simpler terms, intelligence can be seen as the ability to efficiently extract knowledge from a complex environment. The Kolmogorov complexity function serves as a measure of this complexity,
representing the shortest code required to generate an object. When noise is overfitted, it must be remembered because it lacks meaningful correlation, rendering it irrelevant for understanding past
or future events.
Despite a shared consensus on the importance of generalization in machine learning and its connection to complexity, measuring these concepts remains challenging. A Google paper highlights over 40
different metrics attempting to characterize complexity and generalization, yielding inconsistent results.
The interplay between how much neural networks remember and forget is a critical aspect of their generalization capabilities. A recent paper by Pedro Domingos titled "Every Model Learned by Gradient
Descent Is Approximately a Kernel Machine" adds an intriguing perspective to this discussion:
"Deep networks…are in fact mathematically approximately equivalent to kernel machines, a learning method that simply memorizes the data and uses it directly for prediction via a similarity function
(the kernel). This greatly enhances the interpretability of deep network weights, by elucidating that they are effectively a superposition of the training examples." — Pedro Domingos
Domingos posits that learning in neural networks shares mathematical similarities with kernel-based methods, such as Support Vector Machines (SVMs). In kernel methods, training data is embedded in a
feature vector space through a nonlinear transformation, allowing for intuitive properties and comparisons among data points.
The field of deep metric learning addresses similar inquiries, striving to identify embedding spaces where sample similarity can be readily assessed. Conversely, the neural tangent kernel has been
employed to derive a kernel function corresponding to an infinitely wide neural network, yielding valuable theoretical insights into neural network learning.
Domingos' insights reveal a compelling correlation between models learned through gradient descent and kernel-based techniques. During training, the data is implicitly memorized within the network
weights, and during inference, both the memorized training data and the neural network's nonlinear transformations interact to classify test points, akin to kernel methods.
While the implications of these findings are not yet fully understood, they may provide insights into why neural networks trained using gradient descent struggle with o.o.d. learning. If these
networks rely heavily on memory, they may be less capable of generalizing when not also trained to forget, emphasizing the importance of regularization in enhancing model generalization.
Video Description: This video delves into the concept of learning to ponder, focusing on memory in deep neural networks and its implications for generalization.
Memory plays a crucial role in storing and retrieving information over time, especially in time series analysis. Recurrent Neural Networks (RNNs) and Long Short-Term Memory Networks (LSTMs) are two
popular architectures for handling time series data.
A classical benchmark for assessing memory in sequence models is the addition problem, where a model must compute the sum of two numbers presented at different time points. The challenge lies in
retaining information over extended durations, which becomes increasingly complex with longer time lags due to the vanishing and exploding gradient problems. These issues arise from repeatedly
applying the same layer during backpropagation, complicating training for certain tasks.
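As a rough illustration of that benchmark, one common way to generate such data with NumPy (the sequence length and marker placement here are arbitrary choices):

import numpy as np

def make_adding_problem(n_samples=1000, seq_len=100, seed=0):
    # each sample is a sequence of (value, marker) pairs; exactly two positions are marked,
    # and the target is the sum of the values at the marked positions
    rng = np.random.default_rng(seed)
    values = rng.uniform(0.0, 1.0, size=(n_samples, seq_len))
    markers = np.zeros((n_samples, seq_len))
    for i in range(n_samples):
        markers[i, rng.integers(0, seq_len // 2)] = 1.0        # one mark in the first half
        markers[i, rng.integers(seq_len // 2, seq_len)] = 1.0  # one mark in the second half
    inputs = np.stack([values, markers], axis=-1)   # shape (n_samples, seq_len, 2)
    targets = (values * markers).sum(axis=1)        # shape (n_samples,)
    return inputs, targets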
The struggle to maintain memory is linked to the challenge of learning slower time scales. Research has shown that addition problems can be solved by initiating stable dynamics within a subspace of
an RNN, allowing for information retention without interference from the rest of the network.
LSTMs, which have garnered significant attention, tackle memory challenges by introducing a cell state that preserves information across extended periods and gates to manage the flow of information.
This enables LSTMs to retain information over thousands of time steps and effectively solve tasks like the addition problem.
However, as mentioned earlier, memory also presents challenges: it can lead to overfitting, where information is retained instead of understood.
The language of dynamical systems provides a physicist's perspective on temporal phenomena, with differential equations at the core of most physical theories. These equations are inherently
memoryless; given an initial state and a complete description of the system's time evolution, the future behavior can be predicted indefinitely without memory.
In the field of dynamical systems reconstruction, where the goal is to recover a system from time series data, incorporating memory can sometimes hinder generalization. Models may overfit training
data by memorizing irrelevant patterns rather than identifying the optimal memoryless description of the system. This ongoing challenge is critical for learning models of complex systems, such as
climate or brain dynamics, where accurate long-term behavior prediction has significant practical implications.
In many real-world scenarios, we lack complete knowledge of the systems we observe. Thus, leveraging memory remains essential, especially when a more compressed representation is unavailable.
Striking a balance between memory and generalization will be instrumental in designing algorithms that enhance AI's adaptability and intelligence. | {"url":"https://spirosgyros.net/memory-generalization-ai.html","timestamp":"2024-11-12T17:28:28Z","content_type":"text/html","content_length":"16848","record_id":"<urn:uuid:5ed68062-82fc-4754-8034-ded089a0971a>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00838.warc.gz"} |
Time | Universe of Particles
To measure time, we need a clock. We all have our own biological clock, and that’s what we normally use. However, when we want to be precise, we build ourselves an instrument.
The way a clock works is that it takes something that moves at a predictable speed and make it give off a tick every time it has moved a precise distance. The smallest possible time unit we can
register is in other words a function of the smallest possible ruler and the fastest possible speed.
The smallest possible time unit is therefore the time it takes a photon to cross an electron. If something happens faster than this, the time laps cannot be registered in any way.
An instantaneous event is anything that happens faster than it takes a photon to cross an electron.
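As a rough, hedged illustration of the scale involved (taking the classical electron radius of about 2.82 × 10⁻¹⁵ m as the electron's size, which is only one possible choice):

r_e = 2.818e-15  # m, classical electron radius (an assumed measure of the electron's size)
c = 2.998e8      # m/s, speed of light

t = 2 * r_e / c  # time for a photon to cross the electron's diameter
print(t)         # roughly 1.9e-23 seconds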
The electron as a clock | {"url":"https://www.universeofparticles.com/time/","timestamp":"2024-11-10T19:04:06Z","content_type":"text/html","content_length":"79019","record_id":"<urn:uuid:6879d397-768b-45f8-991f-a98f0659bae8>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00792.warc.gz"}
Scientific Computing - An Introduction using Maple and MATLAB - Maplesoft Books
Scientific Computing - An Introduction using Maple and MATLAB
Walter Gander, Martin J. Gander, and Felix Kwok
Scientific computing is the study of how to use computers effectively to solve problems that arise from the mathematical modeling of phenomena in science and engineering. It is based on mathematics,
numerical and symbolic/algebraic computations and visualization.
This book serves as an introduction to both the theory and practice of scientific computing, with each chapter presenting the basic algorithms that serve as the workhorses of many scientific codes;
it explains both the theory behind these algorithms and how they must be implemented in order to work reliably in finite-precision arithmetic.
The book includes many programs written in Maple. Maple is often used to derive numerical algorithms. The theory is developed in such a way that students can learn by themselves as they work through
the text. Each chapter contains numerous examples and problems to help readers understand the material “hands-on”.
Other Details
Language: English ISBN: 978-3319043241
Publisher: Springer | {"url":"https://cn.maplesoft.com/books/details.aspx?id=470","timestamp":"2024-11-10T04:36:05Z","content_type":"application/xhtml+xml","content_length":"69144","record_id":"<urn:uuid:47d22026-3516-446c-b911-b785c632c1cb>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00239.warc.gz"} |
Local History: Massachusetts
Author Title Edition Format Price Puborg
1620: Charter of New England; Nov3 1620 Html Free YaleU-Law
1620: Mayflower Compact; Nov11 1620 Html Free YaleU-Law
1629: Charter of the Colony of New Plymouth Granted to William Bradford & Associates Jan13 1629 Html Free YaleU-Law
1629: The Charter of Massachusetts Bay; March 4 1629 Html Free YaleU-Law
1635: The Act of Surrender of the Great Charter of New England to His Majesty; Jun7 1635 Html Free YaleU-Law
1640: William Bradford, & Surrender of the Patent of Plymouth Colony to the Freeman; March 2 1640 Html Free YaleU-Law
1688: Commission of Sir Edmund Andros for the Dominion of New England; April 7 1688 Html Free YaleU-Law
1691: The Charter of Massachusetts Bay; October 7 1691 Html Free YaleU-Law
1725: Explanatory Charter of Massachusetts Bay; August 26 1725 Html Free YaleU-Law
1788: Ratification of the Constitution by the State of Massachusetts; February 6 1788 Html Free YaleU-Law
1879-] The public record of Hon, John D. Long [1838-1915; Democratic & Greenback Massachusetts] 1879] Bost? PDF Kindle EPub Free Libr Congress
A pocket almanack, for the year 1779 ... calculated for the use of the state of Massachusetts-Bay [LOC] 1779 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1780 ... calculated for the use of the state of Massachusetts-Bay [LOC] 1780 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1781 ... calculated for the use of the state of Massachusetts-Bay [LOC] 1781 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1782 ... calculated for the use of the state of Massachusetts-Bay [LOC] 1782 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1783 ... calculated for the use of the state of Massachusetts-Bay [LOC] 1783 Bost PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1784 ... calculated for the use of the state of Massachusetts-Bay [LOC] 1784 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1785 ... calculated for the use of the state of Massachusetts-Bay [LOC] 1785 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1786 ... calculated for the use of the state of Massachusetts-Bay [LOC] 1786 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1787 ... calculated for the use of the state of Massachusetts-Bay [LOC] 1787 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1788 ... calculated for the use of the state of Massachusetts-Bay [LOC] 1788 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1789 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1789 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1790 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1790 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1791 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1791 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1792 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1792 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1793 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1793 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1794 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1794 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1795 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1795 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1796 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1796 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1797 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1797 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1798 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1798 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1799 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1799 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1800 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1800 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1801 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1801 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1802 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1802 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1803 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1803 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1804 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1804 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1805 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1805 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1806 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1806 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1807 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1807 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1808 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1808 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1809 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1809 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1810 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1810 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1811 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1811 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1812 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1812 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1813 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1813 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1814 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1814 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1815 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1815 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1816 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1816 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1817 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1817 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1818 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1818 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1819 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1819 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1820 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1820 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1821 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1821 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1822 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1822 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1823 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1823 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1824 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1824 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1825 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1825 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1826 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1826 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1827 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1827 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1828 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1828 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1829 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1829 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1830 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1830 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1831 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1831 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1832 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1832 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1833 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1833 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1834 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1834 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1835 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1835 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1836 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1836 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1837 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1837 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1838 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1838 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1839 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1839 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1840 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1840 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1841 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1841 Bost. PDF Kindle EPub Free StLibMass
A pocket almanack, for the year 1842 ...: calculated for the use of the state of Massachusetts-Bay [LOC] 1842 Bost. PDF Kindle EPub Free StLibMass | {"url":"http://digitalbookindex.com/_search/search010hstregionalmassachusettsa.asp","timestamp":"2024-11-14T04:10:00Z","content_type":"application/xhtml+xml","content_length":"49656","record_id":"<urn:uuid:8c76c2d7-ee8f-4f35-833d-62c10a2c6f51>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00254.warc.gz"} |
Transitioning between tables, equations, and graphs (oh my!) in physics and math
Title: Design and validation of a test for representational fluency of 9th grade students in physics and mathematics: The case of linear functions
Authors: Stijn Ceuppens, Johan Deprez, Wim Dehaene, Mieke De Cock
First author’s institution: Katholieke Universiteit Leuven (English: Catholic University of Leuven)
Journal: Physical Review Physics Education Research 14 020105 (2018)
It probably comes as no surprise that math and physics are deeply linked. Sometimes math is even called the language of physics since solving physics problems and discovering new physics requires
mathematics. Therefore, looking at students’ knowledge of mathematics is often important to understanding their knowledge of physics.
However, solving physics problems is more than just knowing the relevant physics knowledge and the relevant mathematics knowledge. For example, previous work has shown that having the mathematical
skills to solve a problem is not sufficient for actually solving the problem. Other work has shown that the context matters, suggesting that students may be more likely to solve a math problem
correctly than an identical problem with a physics context but otherwise the same mathematical solution. These studies all suggest that the interplay of math and physics may be just as important as
knowledge of math and knowledge of physics by themselves.
In addition, how the problem is represented can also be important for how well a student can solve a problem. Since previous work has shown students are better at using some representations than
others, students need the ability to switch from a less convenient representation to a more convenient one, which the authors of today’s paper call representational fluency. Representational fluency
also captures the student’s ability to interpret and construct the representations. Today’s paper looks at representational fluency in both mathematical and physical contexts and tries to determine
if this fluency is different between the two contexts.
So what exactly does this mean in less jargony terms? First, the authors want to see how well students can transition between multiple representations of the same thing (representational fluency).
For this study, that “thing” was linear functions which can be represented as graphs, tables, or algebraic functions. An example showing an identical function in the three representations is shown in
figure 1.
Figure 1: Examples of the same linear function represented graphically, tabularly, and algebraically
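For readers who want to play with these transitions directly, a short Python sketch can generate the tabular and graphical representations from an algebraic one. The particular function y = 2x - 1 below is only an illustrative stand-in; the paper does not state which function is shown in Figure 1.

```python
import matplotlib.pyplot as plt

# Hypothetical linear function for illustration (not the one in Figure 1).
slope, intercept = 2, -1            # algebraic representation: y = 2x - 1

# Tabular representation: a few (x, y) pairs.
xs = list(range(-2, 3))
ys = [slope * x + intercept for x in xs]
print(" x |  y")
for x, y in zip(xs, ys):
    print(f"{x:2d} | {y:3d}")

# Graphical representation of the same function.
plt.plot(xs, ys, marker="o")
plt.xlabel("x")
plt.ylabel("y")
plt.title("y = 2x - 1")
plt.show()
```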
Second, the authors want to compare how students answer mathematics word problems and physics word problems that are identical in their mathematical solutions but differ in the context (pure
mathematics or 1D kinematics). Examples of these identical questions are shown in figure 2.
Figure 2: Example of nearly identical questions on the test for transitioning from an equation to a table. P9 on the left is the physics context while M9 on the right is the math context. P and M
added to question number for reader’s convenience. (Fig 2 in paper)
To do the study, the authors created a 48 item test, where half of the test used a mathematical context and the other half of the test used a 1D kinematics context. The authors then included 8
questions (4 on each half) to test each of the transitions between representations (graphs to table, table to graph, table to equation, etc.). Since linear functions have two parameters that can be
adjusted (the slope and the y-intercept) the authors varied the appearance of the function to see if different choices of parameters changed the difficulty of the question. For example, maybe a
question involving a function with a negative slope and positive y-intercept would be more difficult than a question with a positive slope and a negative y-intercept.
The researchers then gave the test to 385 9th grade students in Flanders, Belgium. The test was administered online and half of the students answered the math questions first and the other half
answered the physics questions first. The results were then analyzed with generalized estimating equations (GEE) to determine how the physical context, the parameters of the linear function in the
question, the representations used, and the gender of the students influence how many students answer the question correctly.
So what did they find? First, transitions between tables and graphs are the easiest for students to do, with around 87% of students able to correctly transition between these representations.
Transitioning between anything that involved a formula was more challenging as only 62% of students could correctly transition between graphs and formulas and only 57% of students could correctly
transition between tables and equations. Second, the percentage of students who can make a transition between representations is not related to the direction of the transition. Thus, if a student can
create a table from an equation, they can also create an equation from a table. Third, the researchers found that students did perform better on the questions with a mathematical context (72% correct)
than the questions with a kinematics context (69% correct). Finally, the researchers found that if a student answered a question incorrectly, the most common reason was that the student switched the
sign on the y-intercept and slope (question asks for positive slope and negative y-intercept but student selects answer with positive y-intercept and negative slope) or only switched the sign of the
slope (asks for negative slope but student selects answer with positive slope). In general, questions with negative slopes or y-intercepts were more challenging for the student.
Since the researchers analyzed their data with GEE, they could also look at interactions between variables used in the study. For example, as shown in figure 3, the researchers found an interaction
between the tabular and algebraic representations and the linear functions with no intercept but a positive slope. This means that the transition was not symmetric: students do better on
transitioning from equations to tables than from tables to equations if the function used has a positive slope but not a y-intercept (y=3x for example).
Figure 3: Interactions between representation transition (graph G, table T, and algebraic F; TG means transition from Table to Graph) and parameters (intercept first, slope second, N is negative, P
is positive, and Z is zero). * and ** signify significant interactions. (Figure 3 in paper)
When exploring this interaction further, the researchers found that this difference in performance was mainly driven by mathematical context questions rather than physics questions (figure 4).
Figure 4: Interactions between representation transitions and parameters of linear function split by questions with physics context (left) and mathematics context (right). Letter pairs have same
meaning as in figure 3. (Created from figures 4 and 5 in paper)
A similar interaction was also found along gender lines when looking at transitions between equations and graphs for functions with a positive slope but without a y-intercept. Interestingly, only
males show an asymmetry here, being better at transitions from equations to graphs than transitioning from graphs to equations when the question was posed in a mathematical context.
After presenting their findings, the authors suggest ways to apply these findings in the classroom. First, they suggest that the instructor or a software program could do the work of transition
between representations so that students could instead focus on how one representation changes if another is changed without having to do the difficult transition for themselves. For example, a
student could change the position of a line on a graph and see how the corresponding equation also changes. In terms of a more “physicsy” context, the instructor could generate uniform linear motion
graphs with the use of a motion sensor and have the student practice matching the curve to a linear function and then have the student use the table to check that the function makes sense by plugging
in values.
So what can we take away from this paper? First, while students can easily transition between graphs and tables, they have a hard time transitioning between representations if one of the
representations is an equation. Second, students correctly answer questions without negative signs in the y-intercept or slope at a higher rate than questions with at least one of the quantities
being negative. Finally, efforts to improve students' ability to understand the relations between the multiple representations should avoid having the students directly translate between the representations themselves.
A copy of the test used here is provided in the supplemental material. Details about the creation and validation of the test can be found in the paper.
Figures used under Creative Commons Attribution 4.0 International license.
I am a postdoc in education data science at the University of Michigan and the founder of PERbites. I’m interested in applying data science techniques to analyze educational datasets and improve
higher education for all students
2 comments
1. Really interesting article, thanks for sharing! One question: is the second link in the second paragraph correct? It links to an article about different representations of the same physics problem, not a comparison of identical math and physics problems. I'm really curious to see the data comparing students' success solving identical problems, one pure math and one with physics
context, since this is an issue I often see with my students. Thanks!
1. Sorry for taking a while to respond, Mike. I do believe the link is correct; however, I believe the phrasing was misleading. The link is for a paper that shows the context of the question
matters, not specifically one about math and physics contexts. I've updated the article to better reflect that the second part of that sentence with the link is more my own speculation than a claim from the linked article. Thank you for the feedback! There is a 2013 paper by Planinic et al (https://journals.aps.org/prper/pdf/10.1103/PhysRevSTPER.9.020103) that compared identical
graphing questions between math, physics, and non-physics contexts that may be of interest. There are likely other articles as well, but this is the one that came to mind.
Radians, like degrees, are a way of measuring angles.
One radian is equal to the angle formed when the arc opposite the angle is equal to the radius of the circle. So in the above diagram, the angle θ is equal to one radian since the arc AB is the same length as the radius of the circle.
The video below shows you how to convert radians and degrees.
Now, the circumference of the circle is 2πr, where r is the radius of the circle. So the circumference of a circle is 2π times its radius. This means that in any circle, there are 2π radians.
Therefore 360° = 2π radians.
Therefore 180° = π radians.
So one radian = 180/π degrees and one degree = π/180 radians.
Therefore, to convert a certain number of degrees into radians, multiply the number of degrees by π/180 (for example, 90° = 90 × π/180 radians = π/2). To convert a certain number of radians into degrees, multiply the number of radians by 180/π.
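As a quick numerical check of these conversion rules, the following Python snippet (the chosen angles are just examples) converts a few values both ways:

```python
import math

def deg_to_rad(degrees):
    # multiply by pi/180
    return degrees * math.pi / 180

def rad_to_deg(radians):
    # multiply by 180/pi
    return radians * 180 / math.pi

for deg in (30, 90, 180, 360):
    print(f"{deg} degrees = {deg_to_rad(deg):.4f} radians")

print(f"pi/2 radians = {rad_to_deg(math.pi / 2):.1f} degrees")
```

The standard library also provides math.radians and math.degrees, which perform the same conversions.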
Arc Length
The length of an arc of a circle is equal to rθ, where θ is the angle, in radians, subtended by the arc at the centre of the circle (see the diagram below if you don't understand). So in the diagram below, s = rθ.
Area of Sector
The area of a sector of a circle is ½r²θ, where r is the radius and θ the angle in radians subtended by the arc at the centre of the circle. So in the diagram below, the shaded area is equal to ½r²θ. | {"url":"https://revisionworld.com/a2-level-level-revision/maths/pure-mathematics/trigonometry/radians","timestamp":"2024-11-09T12:56:27Z","content_type":"text/html","content_length":"35861","record_id":"<urn:uuid:973c9b65-9c7e-4735-91f5-1fc9f7b19c8f>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00761.warc.gz"} |
Practical Astronomy With Your Calculator documents » PDFs Download
practical astronomy with your calculator PDFs / eBooks
[results with direct download]
Measurement Pythagoras’ theorem 6 Pythagoras lived around 500 BC. The theorem (or rule) which carries his name was well known before this time, but had not been
Physics Handbook 1 Dr. Martyn Overy PHYSICS HANDBOOK for AS/A2 LEVEL North Chadderton School
Section 6.4 Fundamental Theorem of Calculus 299 We can therefore continue our proof, letting , What happens to c as h goes to zero? As gets closer to x, it carries
The scales on a slide rule are logarithmic, in that the spacing between divisions (the lines on the scale) become closer together as the value
INTRODUCTION TO GRAVITY-WELL MODELS OF CELESTIAL OBJECTS 2 AUTHORS PREFACE TO THIS SAMPLER This is a brief introduction to the book, “Beyond the
Space Math http://spacemath.gsfc.nasa.gov i For more weekly classroom activities about astronomy and space visit
Chapter 1 The Basics of Celestial Navigation Celestial navigation, a branch of applied astronomy, is the art and science of finding one's geographic position through
© Mark Place www.LearnEarthScience.com 4 (3) Which of the four ellipses you drew do you believe is most similar in eccentricity to the Earth’s orbit?
The CAAP Science Test consists of eight passage sets, each of which contains scientific infor- Biology Sample Passage 2 Applied Force and Different Masses
Practical Astronomy with your Calculator or Spreadsheet Fourth Edition Now in its fourth edition, this highly regarded book is ideal for those who wish to solve a
Practical astronomy with your calculator or spreadsheet / Peter Duffett-Smith, Practical astronomy with your calculator / Peter Duffett-Smith. 3rd ed. 1988.
Title: Practical Astronomy With Your Calculator Keywords: Practical Astronomy With Your Calculator Created Date: 9/2/2014 11:53:17 AM
Practical Astronomy with your Calculator or Spreadsheet Author: Peter Duffett-Smith Language: English Format: pdf Pages: 238 Published: 2011 See the book cover
NEW CUSTOMER? START HERE. Practical Astronomy with Your Calculator or Spreadsheet (Peter Duffett-Smith, Jonathan Zwart). Now in its fourth edition, this highly
Spherical Astronomy Spherical and practical Astronomy as Applied to Geodesy, A programmable scientific calculator or computer software like Mathcad or a
W.Schroeder, Practical Astronomy 2. J.J. Nassau, Practical Astronomy 3. 4. Peter Duffett-Smith, Practical Astronomy with your calculator . B.Sc Part II
Spherical Astronomy COURSE Spherical and practical Astronomy as Applied to A programmable scientific calculator or computer software like
pseudocode given in Practical astronomy with your calculator by Peter Du?ett-Smith. The calculation relies on extrapolations on conditions present at a 0.0
| {"url":"https://www.pdfsdownload.com/download-pdf-for-free/practical+astronomy+with+your+calculator","timestamp":"2024-11-11T18:07:39Z","content_type":"text/html","content_length":"59038","record_id":"<urn:uuid:370bd7af-2374-43b1-85aa-705bc3dcf47b>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00861.warc.gz"} |
Papers with Code - Kernel Mode Decomposition and programmable/interpretable regression networks
Kernel Mode Decomposition and programmable/interpretable regression networks
Mode decomposition is a prototypical pattern recognition problem that can be addressed from the (a priori distinct) perspectives of numerical approximation, statistical inference and deep learning.
Could its analysis through these combined perspectives be used as a Rosetta stone for deciphering mechanisms at play in deep learning? Motivated by this question we introduce programmable and
interpretable regression networks for pattern recognition and address mode decomposition as a prototypical problem. The programming of these networks is achieved by assembling elementary modules
decomposing and recomposing kernels and data. These elementary steps are repeated across levels of abstraction and interpreted from the equivalent perspectives of optimal recovery, game theory and
Gaussian process regression (GPR). The prototypical mode/kernel decomposition module produces an optimal approximation $(w_1,w_2,\cdots,w_m)$ of an element $(v_1,v_2,\ldots,v_m)$ of a product of
Hilbert subspaces of a common Hilbert space from the observation of the sum $v:=v_1+\cdots+v_m$. The prototypical mode/kernel recomposition module performs partial sums of the recovered modes $w_i$
based on the alignment between each recovered mode $w_i$ and the data $v$. We illustrate the proposed framework by programming regression networks approximating the modes $v_i= a_i(t)y_i\big(\theta_i
(t)\big)$ of a (possibly noisy) signal $\sum_i v_i$ when the amplitudes $a_i$, instantaneous phases $\theta_i$ and periodic waveforms $y_i$ may all be unknown and show near machine precision recovery
under regularity and separation assumptions on the instantaneous amplitudes $a_i$ and frequencies $\dot{\theta}_i$. The structure of some of these networks share intriguing similarities with
convolutional neural networks while being interpretable, programmable and amenable to theoretical analysis.
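The prototypical decomposition module described above has a simple Gaussian process regression reading: if each mode $v_i$ is modelled with its own kernel $K_i$, the optimal approximation of $v_i$ given the observed sum $v$ is $w_i = K_i (K_1 + \cdots + K_m)^{-1} v$. The NumPy sketch below separates a slow and a fast oscillation using two RBF kernels with different length scales; the kernels, the test signal, and the jitter level are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def rbf(t, s, length):
    d = t[:, None] - s[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

t = np.linspace(0, 1, 200)
v1 = np.sin(2 * np.pi * t)                  # slow mode
v2 = 0.3 * np.sin(2 * np.pi * 15 * t)       # fast mode
v = v1 + v2                                 # observed sum

K1 = rbf(t, t, length=0.2)                  # kernel assumed for the slow mode
K2 = rbf(t, t, length=0.01)                 # kernel assumed for the fast mode
jitter = 1e-6 * np.eye(len(t))              # small jitter for numerical stability

alpha = np.linalg.solve(K1 + K2 + jitter, v)
w1 = K1 @ alpha                             # recovered slow mode
w2 = K2 @ alpha                             # recovered fast mode

print("max error, slow mode:", np.max(np.abs(w1 - v1)))
print("max error, fast mode:", np.max(np.abs(w2 - v2)))
```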
| {"url":"https://paperswithcode.com/paper/kernel-mode-decomposition-and","timestamp":"2024-11-11T07:28:47Z","content_type":"text/html","content_length":"125829","record_id":"<urn:uuid:08efaa86-7b54-41fd-a9d4-1f33de74d2f2>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00104.warc.gz"} |
Cameron's Osbo Page
Osbox
Cameron Browne
(c) 2008
Osbox is a pure strategy game in which two players push knotwork dice onto a board in order to create closed paths.
Pieces: Players share a common pool of six-sided Osbo dice showing knot segments on each face. Note that two of the faces are identical.
The six faces of an Osbo die.
Start: The game is played on a 4x4 square board which is initially empty. The first player (Horz) owns the left and right sides and the second player (Vert) owns the top and bottom sides. Players can
only move along their home sides.
Play: Players take turns placing a die of their choice along one of their home sides and pushing it onto the board, as shown below. The die is pushed one space only.
Horz pushes a die onto the board.
Pushing a die onto the board may cause existing dice to be pushed along the same row/column, as shown below. Path segments do not need to match those on neighbouring dice, but dice cannot be pushed
onto rows/columns that are full.
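One plausible way to model the push mechanic in code is to treat each row (or column) as a fixed-length list, with None marking empty cells, and to shift the contiguous run of dice at the entry side by one cell. This is a hypothetical reading of the rule for illustration, not Cameron Browne's own implementation:

```python
def push_die(row, die):
    """Push `die` into index 0 of `row` (None marks an empty cell).

    Existing dice shift along until the first empty cell absorbs the push.
    Returns the new row, or None if the row is full and cannot be pushed onto.
    """
    if None not in row:
        return None
    first_empty = row.index(None)
    new_row = row[:]
    # shift the contiguous block of dice at the entry side one cell inward
    for i in range(first_empty, 0, -1):
        new_row[i] = new_row[i - 1]
    new_row[0] = die
    return new_row

print(push_die(["A", None, "B", None], "X"))   # ['X', 'A', 'B', None]
print(push_die(["A", "B", "C", "D"], "X"))     # None: the row is full
```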
The mover scores points for each path closed by their move which passes through more than one row and more than one column. A path's score is given by the number of crossings it contains, with
self-crossings counting twice. Scoring paths are removed from the board as they occur.
A scoring move that pushes existing dice across to complete a knot.
Horz scores 8 points in the above example for completing the larger knot on the right which is then removed from the board. The smaller knot on the left does not score any points as it lies along a
single row.
Aim: The game ends when the board is full and is won by the player with the highest score. The game is tied if scores are equal.
The fact that paths cannot lie completely within a single row or column ensures that players cannot simply push a thin knot along a single row/column to score it multiple times and reduces the number
of cheap points from trivial pairings along the board edges.
The capture rule makes Osbox a sort of Celtic knotwork Tetris. Each scoring move frees up the board, making games longer and more complex, hence smaller boards can be used (e.g. 4x4). This rule
introduces some interesting tactics in that players may tempt the opponent into completing small knots whose removal opens the way for larger knots next turn.
Scoring may be simplifed to the number of dice captured each turn rather than completed path length (Horz would instead score 5 points in the example above). This form of scoring is more intuitive
and easier to calculate.
Any Side: The current player may enter their die from any side of the board, not just their own side.
Multiplayer Osbox: There's no reason that Osbox should not work with three or four players each owning a single board side. Then again, there's no reason that it should work either.
Osbox rules copyright (c) Cameron Browne, April 2008.
Osbox is a pure strategy game intended for players interested in the Osbo dice but who don't like randomness in their games. Osbox is similar in principle to a knotwork version of Tetris as suggested
by Dan Isdell. | {"url":"http://cambolbro.com/games/osbox/","timestamp":"2024-11-09T13:30:50Z","content_type":"text/html","content_length":"7184","record_id":"<urn:uuid:2db762fc-1649-4b44-b277-cdba50040805>","cc-path":"CC-MAIN-2024-46/segments/1730477028118.93/warc/CC-MAIN-20241109120425-20241109150425-00288.warc.gz"} |
Deductive, Inductive, and Abductive Reasoning
Reasoning is the process of using existing knowledge to draw conclusions, make predictions, or construct explanations. Three methods of reasoning are the deductive, inductive, and abductive approaches.
Whereas the most familiar forms are inference from a general principle or law to individual instances (deduction), or from several instances to a law (induction), abduction is an equally important
constituent of scholarship, serving to identify possible explanations for a set of observations.
Guessing may combine elements of deduction, induction, abduction, and the purely random selection of one choice from a set of options. Guessing may also run from one particular to another particular, as
opposed to deduction, induction, and abduction, where at least one of the premises or the conclusion is general. By A. Gottfridsson · 2017 — "Deduction proves that something must be; induction shows that something actually is operative; abduction [...] suggests that something may be." (Gold et al., "Deduction, induction and abduction", in Flick, U. (ed.), The SAGE handbook of qualitative data collection.) Induction; he arrived at the deduction that the butler didn't do it, through his powers of deduction. By K. Rundcrantz · 2007 · Cited by 14 — a qualitative case study is inductive in its nature, so new theoretical concepts and principles follow; induction, deduction and abduction (Johansson, 2004, 2005; Wallén). By G. Sandström · Cited by 13 — induction and deduction (Lundequist, 1995; Alvesson and Sköldberg). The chapter introduced the notion of abduction as a method to reason about different forms of inference. Deduction vs Induction vs Abduction. By B. Eliasson · 2014 · Cited by 4 — Abduction includes other elements than induction and deduction, and can take theory as the basis (as in induction), but an abductive approach also uses theoretical concepts. In a narrower sense, analogy is an inference or an argument from one particular to another particular, as opposed to deduction, induction, and abduction,
where at least one of the premises or the conclusion is general. Peirce, expanding on the notions of abduction, induction and deduction and their role in the theorizing process.
Deduction, Induction, and Abduction Brianna L. Kennedy Robert Thornberg When conducting qualitative research, scholars should consider the relation between data collection and analysis as …
Can we say something about Charles Peirce's abduction in relation to his categories? 28 Mar 2018: of the three logic operations, namely deduction, induction, abduction (or hypothesis), the last is the only one which introduces any new idea. Abduction is of particular interest, especially in relation with induction (here the fundamental reference is [5]) and deduction.
By C. Nordgren · Cited by 1 — Induction, deduction, and abduction in between the generalised ethical level and the situated one would vitalise ethics in the design research community.
Given the truth of the assumptions, a valid deduction guarantees the truth of the conclusion. For example, given that all bachelors are unmarried males, and given that this person is a bachelor, one
can deduce that this person is an unmarried male. Inductive reasoning (induction) Induction, Deduction, and Abduction Certainty is known or proven to be true on the basis of evidence. This question
asks for the respondent to explain God with deductive proofs.
London: SAGE. (13 s.)**. Abduction, deduction and induction: can these concepts be used for an understanding of methodological processes in interpretative case studies? H Åsvoll. type-token structure
embody the logical relations of abduction, deduction and induction.
Deduction and induction are discussed in the nursing literature. However, abduction has been largely neglected by nurse scholars. Induction, deduction, abduction Kennedy-Lewis, Brianna L.
(author) Utrecht University; Thornberg, Robert, 1968- (author) Linköping University, Pedagogy and Didactics, Educational Sciences (creator_code:org_t). Los Angeles: Sage Publications, 2018.
2018. English. In: The SAGE handbook of qualitative data collection. 2013-02-27 · Induction vs Deduction: In logic theory, induction and deduction are prominent methods of reasoning.
C. Peirce believed that by selecting among the three forms: (1) Deduction, (2) Induction, and (3) Abduction.
Deduction and induction do not give us access to these kind of entities. They are things that to a large extent have to be discovered. Discovery processes presupposes creativity and imagination,
virtues that are not very prominent in inductive analysis (statistics and econometrics) or deductive-logical reasoning. We need another mode of inference.
Induction, deduction, abduction (2018). In: The SAGE handbook of qualitative data collection / [ed] Uwe Flick, Los Angeles: Sage Publications, 2018. By R. Johansson · 2013 · Cited by 24 —
»Deduction proves that something must be«. Induction is that mode of reasoning which adopts a conclu- [abduction] is, that the former infers the existence of. Abduction is a third form of logic, in
addition to deduction and induction.
Corpus ID: 10044083. Abduction , Deduction and Induction in Qualitative Research @inproceedings{Reichertz2005AbductionD, title={Abduction , Deduction and Induction in Qualitative Research}, author=
{J. Reichertz}, year={2005} }
By H. Kankainen — of the three method approaches of induction, deduction and abduction. Induction is when theories are created from empirical observations (left); deduction is
So there are logical relations between the 3 concepts. 2016-10-06 · Induction notes regularities but doesn't explain them, e.g. why the sun rises. Abduction: here we infer the best explanation for the facts,
so that abduction is also called “inference to the best explanation (IBE)”. It may not always be the right explanation, so that the conclusion, as with induction, and unlike deduction, is not
guaranteed. Abduction, deduction and induction describe forms of reasoning. Deduction and induction are discussed in the nursing literature. However, abduction has been largely neglected by nurse
Induction. 21 Jul 2020 Abstract Abduction, deduction and induction are different forms of inference in science. However, only a few attempts have been made to Abduction is a form of reasoning where
assumptions are made to explain reasoning from deduction, which involves determining what logically follows from a set and induction, which involves inferring general relationships from exam
Deduction requires a fair amount of general information to give you a specific conclusion that is probably kind of obvious. So philosophy, and basically life as well, abduction. Abductive inference
is one of the three fundamental modes of logical reasoning - the others being deduction and induction - characterized by... These classes of reasoning are commonly referred to as abduction,
deduction, and induction sensu stricto, respectively. | {"url":"https://hurmaninvesterarcuqg.web.app/99333/33447.html","timestamp":"2024-11-08T16:55:42Z","content_type":"text/html","content_length":"12997","record_id":"<urn:uuid:2338749d-ec48-4f7e-b13c-f0642c0f262b>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00284.warc.gz"} |
Many recent neural network-based methods in simulation-based inference (SBI) use normalizing flows as conditional density estimators. However, density estimation for SBI is not limited to normalizing
flows—one could use any flexible density estimator. For example, continuous normalizing flows trained with flow matching have been introduced for SBI recently [Wil23F]. Given the success of
score-based diffusion models [Son19G], it seems promising to apply them to SBI as well, which is the primary focus of the paper presented here [Sha22S].
Diffusion models for SBI
The idea of diffusion models is to learn sampling from a target distribution, here $p(\cdot \mid x)$, by gradually adding noise to samples $\theta_0 \sim p(\cdot \mid x)$ until they converge to a
stationary distribution $\pi$ that is easy to sample, e.g., a standard Gaussian. On the way, one learns to systematically reverse this process of diffusing the data. Subsequently, it becomes possible
to sample from the simple noise distribution and to gradually transform the noise sample back into a sample from the target distribution.
More formally, the forward noising process $(\theta_t)_{t \in [0, T]}$ can be defined as a stochastic differential equation (SDE)
$$ d\theta_t = f_t(\theta_t)dt + g_t(\theta_t)dw_t, $$
where $f_t: \mathbb{R}^d \to \mathbb{R}^d$ is the drift coefficient, $g_t: \mathbb{R}^d \to \mathbb{R} \times \mathbb{R}^d$ is the diffusion coefficient, and $w_t$ is a standard $\mathbb{R}^d$-valued
Brownian motion.
Under mild conditions, the time-reversed process $(\bar{\theta}_t) := (\theta_{T-t})_{t \in [0, T]}$ is also a diffusion process [And82R], evolving according to
$$ d\bar{\theta}_t = [-f_{T-t}(\bar{\theta}_t) + g^2_{T-t}(\bar{\theta}_t)\nabla_{\theta}\log p_{T-t}(\bar{\theta}_t \mid x)]dt +g_{T-t}(\bar{\theta}_t)dw_t. $$
Given these two processes, one can diffuse a data point $\theta_0 \sim p(\cdot \mid x)$ into noise $\theta_T \sim \pi$ by running the forward noising process and reconstruct it as $\bar{\theta}_T \
sim p_T(\cdot \mid x) = p(\cdot \mid x)$ using the time-reversed denoising process.
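As a concrete illustration of how the reverse-time process is used, the sketch below applies a plain Euler-Maruyama discretization of the denoising SDE for the variance-preserving-style choice $f_t(\theta) = -\tfrac{1}{2}\beta\theta$ and $g_t = \sqrt{\beta}$, assuming for the moment that a score function is available (the next paragraphs explain how it is approximated in the SBI setting). The drift/diffusion choice, the constant $\beta$, and the step count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def reverse_sde_sample(score_fn, x_o, dim, n_steps=1000, T=1.0, beta=1.0, rng=None):
    """Euler-Maruyama integration of the reverse-time SDE.

    score_fn(theta, x, t) should approximate grad_theta log p_t(theta | x).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    dt = T / n_steps
    theta = rng.standard_normal(dim)              # start from the noise distribution pi
    for k in range(n_steps):
        t = T - k * dt                            # integrate backwards from T to 0
        # reverse drift: -f_t + g_t^2 * score, with f = -0.5*beta*theta and g^2 = beta
        drift = 0.5 * beta * theta + beta * score_fn(theta, x_o, t)
        theta = theta + drift * dt + np.sqrt(beta * dt) * rng.standard_normal(dim)
    return theta
```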
In the SBI setting, we do not have access to the scores of the true posterior, $\nabla_{\theta}\log p_t(\theta_t \mid x)$. However, we can approximate the scores using score-matching [Son21S].
Score-matching for SBI
One way to perform score-matching is training a time-varying score network $s_{\psi}(\theta_t, x, t) \approx \nabla_\theta \log p_t(\theta_t \mid x)$ to approximate the score of the perturbed
posterior. This network can be optimized to match the unknown posterior by minimizing the conditional denoising posterior score matching objective given by
$$ \mathcal{J}^{DSM}_{post}(\psi) = \frac{1}{2}\int_0^T \lambda_t \mathbb{E}[||s_{\psi}(\theta_t, x, t) - \nabla_\theta \log p_{t|0}(\theta_t \mid \theta_0)||^2]dt. $$
Note that this objective does not require access to the actual score function of the posterior $\nabla_\theta \log p_t(\theta_t \mid x)$, but only to that of the transition density $p_{t|0}(\theta_t
\mid \theta_0)$, which is defined by the forward noising process (see paper for details). The expectation is taken over $p_{t|0}(\theta_t \mid \theta_0) p(x \mid \theta_0) p(\theta_0)$, i.e., over
samples from the forward noise process, samples from the likelihood (the simulator) and samples from the prior. Thus, the training routine for performing score-matching in the SBI setting amounts to the following steps (a minimal sketch follows the list):
1. Draw samples $\theta_0 \sim p(\theta)$ from the prior, simulate $x \sim p(x \mid \theta_0)$ from the likelihood, and obtain $\theta_t \sim p_{t|0}(\theta_t \mid \theta_0)$ using the forward noising
2. Use these samples to train the time-varying score network, minimizing a Monte Carlo estimate of the denoising score matching objective.
3. Generate samples from the approximate score-matching posterior $\bar{\theta}_T \sim p(\theta \mid x_o)$ by sampling $\bar{\theta}_0 \sim \pi$ from the noise distribution and plugging $\nabla_\
theta \log p_t(\theta_t \mid x_o) \approx s_{\psi}(\theta_t, x_o, t)$ into the reverse-time process to obtain $\bar{\theta}_T$.
The authors call their approach neural posterior score estimation (NPSE). In a similar vein, score-matching can be used to approximate the likelihood $p(x \mid \theta)$, resulting in neural
likelihood score estimation (NLSE) (requiring additional sampling via MCMC or VI).
Sequential neural score estimation
Neural posterior estimation enables amortized inference: once trained, the conditional density estimator can be applied to various $x_o$ to obtain corresponding posterior approximations with a single
forward pass through the network. In some scenarios, amortization is an excellent property. However, if simulators are computationally expensive and one is interested in only a particular observation
$x_o$, sequential SBI methods can help to explore the parameter space more efficiently, obtaining a better posterior approximation with fewer simulations.
The idea of sequential SBI methods is to extend the inference over multiple rounds: In the first round, training data comes from the prior. In the subsequent rounds, a proposal distribution tailored
to be informative about $x_o$ is used instead, e.g., the current posterior estimate. Because samples in those rounds do not come from the prior, the resulting posterior will not be the desired
posterior but the proposal posterior. Several variants of sequential neural posterior estimation have been proposed, each with its own strategy for correcting this mismatch to recover the actual
posterior (see [Lue21B] for an overview).
[Sha22S] present score-matching variants for both sequential NPE (similar to the one proposed in [Gre19A]) and for sequential NLE.
Empirical results
Figure 1. [Sha22S], Figure 2. Posterior accuracy of various SBI methods on four benchmarking tasks. Measured in two-sample classification test accuracy (C2ST, 0.5 is best).
The authors evaluate their approach on a set of four SBI benchmarking tasks [Lue21B]. They find that score-based methods for SBI perform on par with and, in some cases, better than existing
flow-based SBI methods (Figure 1).
With score-based diffusion models, this paper presented a potent conditional density estimator for SBI. It demonstrated similar performance to existing SBI methods on a subset of benchmarking tasks,
particularly when simulation budgets were low, such as in the two moons task. However, the authors did not extend their evaluation to real-world SBI problems, which are typically more
high-dimensional and complex than the benchmarking tasks.
It is important to note that diffusion models can be more computationally intensive during inference time than existing methods. For instance, while normalizing flows can be sampled and evaluated
with a single forward pass through the neural network, diffusion models necessitate solving an SDE to obtain samples or log probabilities from the posterior. Therefore, akin to flow-matching methods
for SBI, score-matching methods represent promising new tools for SBI, but they imply a trade-off at inference time that will depend on the specific problem. | {"url":"https://transferlab.ai/pills/2024/sequential-neural-score-estimation/","timestamp":"2024-11-13T15:16:46Z","content_type":"text/html","content_length":"56804","record_id":"<urn:uuid:79c5066b-b9ac-4d5f-87bf-c8ab0fe1c6c3>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00462.warc.gz"} |
A Review of Wavelets Solution to Stochastic Heat Equation with Random Inputs
Received 6 November 2015; accepted 20 December 2015; published 23 December 2015
1. Introduction
Several applications in science and engineering involve stochasticity in input data. This is usually the result of the stochastic nature of the model coefficients, boundary or initial conditions
data, the geometry in which the problem is set, and the source term. Uncertainty may also be introduced into an applied problem owing to the intrinsic variability inherent in the system being
modelled [1]. Generally, stochastic volatility leads to random coefficients in model equations.
The stochastic heat equation with random inputs (SHERI) is a stochastic partial differential equation (SPDE) that has received considerable attention in recent years. The approach to the solution
depends on the type of random input present in the equation. Usually, the SHERI is analyzed and solved either for a random source term only or for random coefficients only (see, for example, [2]). In this
paper, we analyze the wavelet solution to the SHERI where both a random source term and random coefficient are present. The equation is given by:
In this case,
Currently, several numerical methods are available for solving SPDEs. These include the classical and popular Monte Carlo method (MCM), the stochastic Galerkin method (SGM), and the stochastic
collocation method (SCM). It is well known that MCMs have very slow convergence rates since they do not exploit the regularity available in the solution of SPDE’s with respect to input stochastic
parameters. Stochastic Galerkin methods and SCM’s tend to have faster convergence rates compared to MCM’s. However, often, scientific and engineering problems involve irregular dependencies of the
quantity of interest with respect to the random variable. As such, SGM’s and SCM’s become inefficient and may not converge at all [3] .
In order to overcome the pitfall of global approximation, localized methods are used to arrest the inefficiencies inherent in SCM’s and SGM’s. Adaptive wavelet collocation methods are relied upon to
remedy this situation. The use of this method has the additional advantage of eliminating the dreaded curse of dimensionality. Moreover, it maintains a better convergence rate in addition to
producing optimal approximations, not only for PDE's but also for PDE-constrained optimization problems [4]. We consider wavelet-based methods in this paper.
Wavelet-based methods for solving differential equations may be classified into two types: the wavelet collocation methods and the adaptive wavelet schemes. To implement the adaptive wavelet scheme, we consider second-generation wavelets constructed from the lifting scheme. Wavelets constructed in this form constitute a Riesz basis and have compact support, the desirable properties that guarantee a multiresolution analysis and the required approximation.
The rest of the paper is organized as follows: In Section 2 we review the concept of multiresolution analysis in wavelet bases. This is one of the key concepts that will be used in the paper. In
addition, the general properties of wavelet solutions to SPDE’s are considered. Section 3 analyzes the solution of the SHE with random coefficients. The stochastic heat equation with random source
term is solved in Section 4, while a detailed analysis of the full stochastic heat equation with all types of random inputs is carried out in Section 5. The paper ends with the conclusion in
Section 6.
2. Preliminaries
2.1. Wavelets and Multiresolution Analysis
A wavelet is a function
5) Each subspace $V_j$ is spanned by integer translates of a single function.
6) There exists a function[0], such that the sequence [0]. The approximation of a function ^j is defined as the orthogonal projection of [j]. In general a function [j]. To compute the orthogonal
projection requires that there exists a unique function[j] is then defined by:
2.2. Wavelets
The goal of multiresolution analysis is to develop representations of a function at different resolution levels j. To achieve this, we seek to expand the given function in terms of basis functions
First, we define the Haar wavelet. Let X denote an infinite dimensional Banach space. A set
such that[i]. The Haar orthogonal system (see for example [7] ) forms an absolute basis for the spaces
In the space
and where we put
The Haar basis is convenient for
In general, Daubechies wavelets depend on an integer N that determines the number of vanishing moments of the mother wavelet.
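To make the projection onto the Haar approximation spaces concrete, the sketch below computes one level of the orthonormal Haar decomposition (pairwise averages and differences) of a sampled signal; the test signal and the single-level split are illustrative choices, not taken from the paper.

```python
import numpy as np

def haar_step(c):
    """One level of the orthonormal Haar transform.

    Splits the scaling coefficients `c` (even length) into the coarser
    approximation (projection onto V_{j-1}) and the detail coefficients
    (the component in the wavelet space W_{j-1}).
    """
    c = np.asarray(c, dtype=float)
    approx = (c[0::2] + c[1::2]) / np.sqrt(2)
    detail = (c[0::2] - c[1::2]) / np.sqrt(2)
    return approx, detail

x = np.linspace(0, 1, 8, endpoint=False)
f = np.sin(2 * np.pi * x)                 # sampled test function
a, d = haar_step(f)
print("approximation:", np.round(a, 3))
print("detail:       ", np.round(d, 3))
```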
2.3. Weak and Strong Solutions of SDE
Solutions of SDE’s may be classified as weak or strong. If there exist a probability space with filtration, Brownian motion
A weak solution of the stochastic differential equation above is a triple
holds for all
2.4. Wavelet Approximation to Stochastic Differential Equations
The solution of a SDE requires the evaluation of an integral of the type:
1) Obtain an approximation for fractional noise
2) Apply an appropriate numerical scheme (for example, implicit or explicit Euler scheme) to obtain an approximation of the solution
3) Prove the almost sure convergence of the approximation to the solution.
The fractional integral of the function f with respect to the function g is defined as:
See, for example, [6] .
2.5. Second Generation Wavelets
Second-generation wavelets are a generalized form of biorthogonal wavelets. They are easily adapted to functions defined on bounded domains. These wavelets form a Riesz basis for certain
desirable function spaces. The lifting scheme is a method for constructing second generation wavelets that are no longer translates and dilates of a single scaling function. The lifting scheme is
given by:
See, for example, [1] .
2.6. The Wavelet Stochastic Collocation Method
The second-generation collocation method makes the treatment of nonlinear terms in PDE's easier to handle. Moreover, the use of wavelets makes the solution of differential equations with localized structures or sharp transitions more amenable. In order to solve such problems more efficiently, the use of computational grids that adapt dynamically in time to reflect local changes in the solution plays an effective role.
Wavelet-based numerical algorithms may be classified into two main types, namely the wavelet-Galerkin method and the wavelet collocation method. The wavelet-Galerkin algorithm works in a gridless wavelet coefficient space, while the collocation method relies on a dynamically adaptive computational grid [8]. A clear advantage of the wavelet collocation method is that it facilitates the easy treatment of nonlinear terms in a stochastic partial differential equation. However, traditional biorthogonal wavelets are not suitable for handling boundaries. Omitting the translation-dilation relationship in biorthogonal wavelets leads to second-generation wavelets [9], which use a second-generation MRA of a function space, as given below.
3) for each
Here, the MRA is not based on the scaling function
Given the scaling function coefficients
Second generation wavelet transform may be considered in terms of filter banks, where filters not only act locally but may be potentially different for each coefficient. Now we can set
1) Compact support that is zero outside the interval
3) Linear combinations of
Define the detail function as:
When applied to infinite or periodic domains, the lifting scheme reproduces the construction of the first-generation wavelets. The lifting scheme has the following advantages:
1) Faster implementation of the wavelet transform by a factor of 2.
2) No auxiliary memory required. The original signal is replaced with its wavelet transform.
3) The inverse wavelet transform is simply obtained by reversing the order of operations and switching additions and subtractions. The scaling function and mother wavelet have vanishing moments, that is
where D is the domain over which the wavelets are constructed.
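As an illustration of the points above, the sketch below implements the simplest lifting factorization: a lazy split of the signal into even and odd samples, a Haar-style predict step, and an update step that preserves the local mean. The transform works in place on the two halves, and the inverse is obtained simply by reversing the order of the steps and swapping additions and subtractions; the specific Haar predict/update pair is an illustrative assumption, not the second-generation construction used later in the paper.

```python
def lifting_forward(signal):
    """Haar-style lifting: split, predict, update. Returns (coarse, detail)."""
    even, odd = signal[0::2], signal[1::2]               # lazy wavelet (split)
    detail = [o - e for o, e in zip(odd, even)]          # predict odd samples from even
    coarse = [e + d / 2 for e, d in zip(even, detail)]   # update so the mean is preserved
    return coarse, detail

def lifting_inverse(coarse, detail):
    """Undo the lifting steps in reverse order with the signs flipped."""
    even = [c - d / 2 for c, d in zip(coarse, detail)]
    odd = [d + e for d, e in zip(detail, even)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])                               # merge (inverse of the split)
    return out

x = [4.0, 6.0, 5.0, 5.0, 7.0, 9.0, 8.0, 8.0]
c, d = lifting_forward(x)
assert lifting_inverse(c, d) == x                        # perfect reconstruction
print("coarse:", c)
print("detail:", d)
```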
2.7. Grid Adaptation
Consider the function
where the grid points
[8] . Let ε denote the prescribed threshold, then the approximation
and the number of significant wavelet coefficients
where the coefficients
The adaptive grid is calculated as follows (a coefficient-thresholding sketch follows this list):
1) Sample
2) Perform the forward wavelet transform to obtain the values of
3) Analyze wavelet coefficients
4) Incorporate into the mask M all grid points associated with the scaling functions at the coarsest level of resolution.
5) Starting from
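The coefficient-thresholding idea behind the steps above can be sketched as follows: transform the sampled function, keep the grid points whose detail coefficients exceed the prescribed threshold ε, and always retain the coarsest-level points. The Haar-style transform and the test function below are illustrative stand-ins; the paper's scheme uses second-generation (lifted) wavelets and the reconstruction-check procedure, which this toy version omits.

```python
import numpy as np

def adaptive_mask(f_samples, eps, n_levels):
    """Boolean mask of 'significant' sample points, using Haar details as a stand-in."""
    n = len(f_samples)
    mask = np.zeros(n, dtype=bool)
    c = np.asarray(f_samples, dtype=float)
    stride = 1
    for _ in range(n_levels):
        detail = (c[0::2] - c[1::2]) / np.sqrt(2)
        keep = np.abs(detail) >= eps
        # mark the points (at the current stride) whose detail coefficient is significant
        idx = np.arange(n)[stride::2 * stride]
        mask[idx[keep]] = True
        c = (c[0::2] + c[1::2]) / np.sqrt(2)
        stride *= 2
    mask[::stride] = True            # always keep the coarsest-level grid points
    return mask

x = np.linspace(0, 1, 64, endpoint=False)
f = np.tanh(50 * (x - 0.5))          # sharp transition near x = 0.5
m = adaptive_mask(f, eps=1e-2, n_levels=4)
print("kept", int(m.sum()), "of", len(x), "points, clustered near the transition")
```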
The process of grid adaptation for the solution of PDE’s is made up of the following steps [10] :
1) Use the values of the solution
2) Analyze wavelet coefficients
3) Extend the mask M with grid points associated with type I or II adjacent wavelets.
4) Perform the reconstruction check procedure to obtain a complete mask M.
5) Construct the new computational grid
When solutions of differential equations are intermittent in both space and time, methods combining an adjustable time step with an adaptive spatial grid are used to obtain approximate solutions. However, several problems depend on small spatial scales that are highly localized and, as such, using a uniformly fine grid does not necessarily lead to an efficient method of solution. To address this concern, locally adapted grids are appealed to.
Wavelets can be used as an efficient tool to develop adaptive numerical methods capable of limiting the global approximation error associated with the numerical scheme. In addition to being fast, such wavelet-based schemes are asymptotically optimal when applied to elliptic differential equations [10] [11].
The second generation adaptive wavelet can be used to discretize PDE’s as follows:
In order to construct grid points that adapt to an intermittent solution, we consider the collocation points
The second generation wavelet decomposition takes the form:
[9]. This approximation is known as nonlinear approximation in a wavelet basis. The method is a combination of the fast second-generation wavelet transform with a finite difference approximation of the derivatives on the adaptive grid.
3. The Case of Random Input Coefficient
In real thermal environments, the heat transfer coefficients of media surfaces are subject to temporal and spatial variations due to several factors [12]. However, accurately predicting the spatial distribution of the heat transfer coefficient is very complicated since these external influences are usually nonlinear and fleeting in nature [13]. In addition, the complexity is compounded by a measurement uncertainty of more than fifty percent for the overall heat transfer coefficients of heat transfer surfaces in heat exchangers [14]. Due to the inherent uncertainties described
above, the distribution of temperature and thermal stresses in media is analyzed taking into account probability theory. The stochastic heat equation devoid of a source term but characterized by a
random input is given by
If κ is random, three approaches to the solution are possible. Two of these methods are provided by [15]. We outline the third method here. We assume that the stochastic input coefficient κ
In this case the solution is a complex nonlinear function of the coefficient κ [16] . A reasonably approximate solution may be obtained by applying the stochastic collocation method or the adaptive
wavelet stochastic method [1] . This method exploits the properties of compactly supported wavelets that form Riesz bases. When implemented as interpolating wavelet bases, they induce norms that are
J is approximately constant [17] .
We assume a stochastic solution of the form:
where W[0] = 1 and κ[min] < κ[max] < ∞. Here,
To obtain the approximation given by the equation above which yields an optimal wavelet basis by minimizing the total mean square error, we consider the sample space Ω equipped with the
and the random variables
4. Stochastic Heat Equation with Source Term
We consider the heat equation with an additional forcing term. The equation now becomes:
A weak solution may be given as
which is almost Hölder-
The greatest difficulty encountered in solving this problem involves the representation of the source term. [20] [21] have shown that spectral methods can be relied upon to obtain an accurate enough
solution. Thus, we assume a solution of the form:
where u[k] are deterministic coefficients and
For any intermediate resolution level j (0 ≤ j < J) we have
5. SHERI
We consider the partial differential equation with random inputs in the form:
Using polynomials for which the interpolation matrix is diagonal leads to the stochastic collocation method. We re-formulate the problem by letting D denote a bounded domain in
Theorem 1. Find
The above problem may be solved using Lagrange Interpolation in parameter space. Let
After solving for the finite element approximation of the solution
Instead of using global polynomial interpolation spaces, piecewise polynomial interpolation spaces requiring only a fixed polynomial degree can be used. This method is based on refining the grid used
and is suitable for problems having solutions with irregular behavior.
For each parameter dimension
and where
hence we have:
The hierarchical sparse-grid approximation of L is given by:
The approximation spaces
2) Supp
4) There is a constant C, independent of the level L, such that
For example, consider the hat function:
The major disadvantage of this approach is that the linear hierarchical basis does not form a stable multiscale splitting of the approximation spaces. The scheme does not ensure efficiency and optimality with
respect to complexity as previously claimed.
A multi-resolution wavelet approximation, though similar, performs better in achieving optimality since it possesses the additional property:
5) Riesz Property: The basis
By implication, other methods without this property are not guaranteed to be stable or optimal.
6. Conclusion
Analytical Error Estimates
Suppose the wavelet decomposition is truncated at level J; we define the residual of the truncation by
This error is a function of the wavelet thresholding parameter
Wavelets can handle periodic boundary conditions efficiently. Moreover, the use of antiderivatives of wavelet bases as trial functions smooths singularities in the wavelets. The basic principle is
summarized as follows:
1) Represent the geometric region for the boundary value problem (BVP) in terms of wavelet series.
2) Represent the functions defined on the boundary and on the interior of the region in terms of wavelet series defined on a rectangular region containing the domain.
3) Convert the differential equation to some weak form.
4) Formulate and solve the wavelet Galerkin problem for the domain and differential equation, using localized wavelets as an orthonormal basis.
An important property of this method is that the coding for the solution is independent of the geometry of the boundary [28] . The wavelet basis is more efficient than finite element basis for the
approximation of the boundary measure. The associated error E is given by:
We have shown that the wavelet-based solution to the stochastic heat equation with random inputs is stable. Computational methods based on the wavelet transform are analyzed for each type of
stochastic heat equation considered here. The methods are shown to be very convenient for solving such problems, since the initial and boundary conditions are taken into account automatically. The results reveal
that the wavelet algorithms are very accurate and efficient.
A note on power domination in grid graphs
The problem of monitoring an electric power system by placing as few measurement devices in the system as possible is closely related to the well known vertex covering and dominating set problems in
graphs (see [T.W. Haynes, S.M. Hedetniemi, S.T. Hedetniemi, M.A. Henning, Power domination in graphs applied to electrical power networks, SIAM J. Discrete Math. 15(4) (2002) 519-529]). A set S of
vertices is defined to be a power dominating set of a graph if every vertex and every edge in the system is monitored by the set S (following a set of rules for power system monitoring). The minimum
cardinality of a power dominating set of a graph is its power domination number. In this paper, we determine the power domination number of an n×m grid graph.
ASJC Scopus subject areas
• Discrete Mathematics and Combinatorics
• Applied Mathematics
TR04-072 | 19th August 2004 00:00
Hausdorff Dimension and Oracle Constructions
Bennett and Gill (1981) proved that P^A != NP^A relative to a
random oracle A, or in other words, that the set
O_[P=NP] = { A | P^A = NP^A }
has Lebesgue measure 0. In contrast, we show that O_[P=NP] has
Hausdorff dimension 1.
This follows from a much more general theorem: if there is a
relativizable and paddable oracle construction for a complexity
theoretic statement Phi, then the set of oracles relative to which
Phi holds has Hausdorff dimension 1.
We give several other applications including proofs that the
polynomial-time hierarchy is infinite relative to a Hausdorff
dimension 1 set of oracles and that P^A != NP^A intersect coNP^A
relative to a Hausdorff dimension 1 set of oracles.
Watt-Hours Calculator: Easily Calculate Energy Usage - Free Tool - Calculator Pack
Watt-hours Calculator
Do you ever wonder how much energy is being consumed by your appliances or devices? It can be difficult to keep track of, but with the Watt-hours Calculator, you can easily estimate the amount of
energy used and the cost of that energy. This tool is especially helpful for those looking to save money on their electricity bill or reduce their carbon footprint. With just a few simple inputs, the
Watt-hours Calculator can provide accurate results for a variety of devices, from your phone charger to your refrigerator. Say goodbye to guessing and hello to informed energy usage decisions with
the Watt-hours Calculator.
Watt-hours Calculator
Calculate the total energy consumption in watt-hours.
Watt-hours Calculator Results
Power (Watt) 0
Time (hours) 0
Total Energy Consumption (Watt-hours) 0
Share results with your friends
Calculating watt-hours is essential in various electrical applications. Our watt hours calculator streamlines this calculation. To gain insights into related electrical calculations and understand
their implications, link it with our watt to kwh calculator. This integrated approach empowers you to work efficiently with electrical energy.
How to Use the Watt-hours Calculator
Calculating energy consumption is essential in understanding the amount of energy used by an appliance over a specific period. The Watt-hours Calculator is a useful tool that calculates the total
energy consumption in watt-hours. In this blog post, we will explore how to utilize the Watt-hours Calculator effectively and the significance of its applications.
Instructions for Utilizing the Calculator
The Watt-hours Calculator requires two input fields, namely, Power (Watt) and Time (hours).
The Power (Watt) field is where the power consumption of an appliance is entered in watts, and the Time (hours) field is where the time the appliance is used is entered in hours. It is important to
provide accurate input data to obtain an accurate result.
The output fields of the Watt-hours Calculator are Power (Watt), Time (hours), and Total Energy Consumption (Watt-hours).
The Power (Watt) field displays the value entered in the Power input field. The Time (hours) field displays the value entered in the Time input field. The Total Energy Consumption (Watt-hours) field
displays the result of the calculation, which is the product of the Power and Time input values.
Watt-hours Calculator Formula
The formula for calculating energy consumption in watt-hours is simple. The total energy consumption (in watt-hours) is equal to the power consumption (in watts) multiplied by the time of use (in
hours). This can be expressed as follows:
Total Energy Consumption (Watt-hours) = Power (Watt) x Time (hours)
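For readers who prefer code to the on-page form, the formula is a one-liner; the following Python sketch is only an illustration and is not part of the calculator itself.

def watt_hours(power_watts: float, time_hours: float) -> float:
    """Total energy consumption in watt-hours: Wh = W x h."""
    return power_watts * time_hours

print(watt_hours(60, 5))   # a 60 W bulb used for 5 hours consumes 300.0 Wh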
Illustrative Examples
Suppose a 60-watt bulb is used for five hours. To calculate the total energy consumption in watt-hours, we input the values in the Watt-hours Calculator. The Power (Watt) field is filled with "60,"
and the Time (hours) field is filled with "5." Upon clicking the "Calculate" button, the Total Energy Consumption (Watt-hours) is computed and displayed as "300."
Illustrative Table Example:
Power (Watt)   Time (hours)   Total Energy Consumption (Watt-hours)
60             5              300
In conclusion, the Watt-hours Calculator is a valuable tool in determining energy consumption in watt-hours. By following the instructions outlined in this blog post, you can easily calculate the
total energy consumption of an appliance. Accurate energy consumption calculations enable us to make informed decisions and take necessary measures to reduce energy consumption, conserve energy, and
save money.
Collection of Solved Problems
Thermal Insulation of a House
Task number: 1278
Owners of an older block of flats with perimeter walls 20 cm thick have decided to save money on heating by insulating the house from the outside using a polystyrene insulation layer with a thickness
of 10 cm. Consider the model example of a winter day with the outdoor temperature of −10 °C and the room temperature of 20 °C. Determine the temperature between the wall and polystyrene. What is the
reduction factor of heat losses?
• Hint
To solve this task we need to use the thermal conduction equation, which indicates that the heat passing through the material is proportional to the temperature difference between both sides of
the material.
Given that both materials - concrete and polystyrene - are in succession, the heat in steady state passing through the concrete must subsequently pass through the polystyrene "just as quickly".
Otherwise, there would be changes of temperature between the two materials. We determine the temperature between the two materials from the fact, that the heat flow through the concrete must
equal the heat flow through polystyrene.
• Numerical values
d[1] = 0.2 m the thickness of the wall
d[2] = 0.1 m the thickness of polystyrene insulation layer
t[1] = 20 °C the room temperature
t[2] = −10 °C the outdoor temperature
t = ? the temperature between the wall and polystyrene
Q[2]/Q[1] = ? the ratio of the heat lost through the insulated and uninsulated wall, respectively
From The Handbook of Chemistry and Physics:
λ[1] = 1.3 Wm^−1K^−1 the thermal conductivity coefficient of the wall (panel = concrete)
λ[2] = 0.1 Wm^−1K^−1 the thermal conductivity coefficient of polystyrene
• Analysis
At steady state, the heat which passed through the panel wall must equal the heat which passed through the polystyrene insulation. Since the amount of heat that passes through the material is
proportional to the temperature differences between its both sides (assuming that these temperatures are constant), we can determine the temperature between the wall and the polystyrene using the
equality of the passed heats.
The heat that passes through an insulated wall is equal to the heat that only passes through the wall at given difference between the room temperature and the temperature at the wall-insulation
contact. It is also equal to the heat transferred through the polystyrene insulation at given difference between the outside temperature and the temperature at the wall-insulation contact.
• Solution
The thermal conduction equation is:
\[Q=\lambda\frac{S\Delta t}{d}\tau,\]
where Q is the heat transferred per time τ, λ is the thermal conductivity coefficient (characterizing the material), S is the surface area, d is the thickness of the material through which the
heat passes and Δt is the temperature difference between both sides of the material, which remains constant.
To determine the temperature between the wall and the polystyrene insulation we use the fact that at steady state the heat transferred though the wall equals the heat transferred through the
polystyrene insulation (otherwise the temperature between the wall and polystyrene would change):
We can thereof evaluate the unknown temperature t:
\[d_2\lambda_1\left(t_1-t\right)=d_1\lambda_2\left(t-t_2\right)\]
\[d_2\lambda_1t+d_1\lambda_2t=d_2\lambda_1t_1+d_1\lambda_2t_2\]
\[t=\frac{d_2\lambda_1t_1+d_1\lambda_2t_2}{d_2\lambda_1+d_1\lambda_2}=\frac{0.1\cdot{1.3}\cdot{20}+0.2\cdot{0.1}\cdot(-10)}{0.1\cdot{1.3}+0.2\cdot{0.1}}\,\mathrm{^\circ C}=16\,\mathrm{^\circ C}\]
To calculate the reduction factor of heat losses, we need to evaluate the heat, which is transferred through both uninsulated and insulated wall within the given time. An uninsulated wall:
The heat transferred through an insulated wall equals the heat that passes through just the wall at smaller temperature difference:
For t we now substitute the expression we have evaluated earlier:
\[Q_2=\lambda_1\frac{S\left(t_1-\frac{d_2\lambda_1t_1+d_1\lambda_2t_2}{d_2\lambda_1+d_1\lambda_2}\right)}{d_1}\tau\]
\[Q_2=\lambda_1\frac{S\tau}{d_1}\,\frac{d_2\lambda_1t_1+d_1\lambda_2t_1-d_2\lambda_1t_1-d_1\lambda_2t_2}{d_2\lambda_1+d_1\lambda_2}\]
\[Q_2=\lambda_1\frac{S\tau}{d_1}\,\frac{d_1\lambda_2t_1-d_1\lambda_2t_2}{d_2\lambda_1+d_1\lambda_2}=\frac{\lambda_1\lambda_2\left(t_1-t_2\right)}{d_2\lambda_1+d_1\lambda_2}S\tau\]
Note: We can see that both thermal conductivity coefficients λ[1] and λ[2] are in an equivalent position, so that we would get to the same expression if we evaluated the amount of heat
transferred through the polystyrene (we derived the temperature t from the equality of both heats, therefore both heats must equal). This corresponds to the heat transferred through an insulated
wall (wall + polystyrene).
Also, the resulting relationship gives us guidance on how to determine the "average thermal conductivity" for a composite material. If we would like to find an effective conductivity λ
that could be used for the wall with insulation as a whole (total thickness d[1] + d[2]), then by comparing the obtained expression and the general thermal conduction equation we get:
\[\lambda\frac{S\left(t_1-t_2\right)}{d_1+d_2}\tau=\frac{\lambda_1\lambda_2\left(t_1-t_2\right)}{d_2\lambda_1+d_1\lambda_2}S\tau\]
\[\lambda=\frac{\lambda_1\lambda_2\left(d_1+d_2\right)}{d_2\lambda_1+d_1\lambda_2}\]
Finally, we determine the ratio of both heats:
\[\frac{Q_2}{Q_1}=\frac{d_1\lambda_2}{d_2\lambda_1+d_1\lambda_2}=\frac{0.2\cdot{0.1}}{0.1\cdot{1.3}+0.2\cdot{0.1}}=0.13=13\%\]
• Answer
The temperature between the wall and the polystyrene is 16 °C and the heat loss decreases to 13 % of the original value after insulating the wall.
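For readers who want to verify the arithmetic, the short Python sketch below (not part of the original solution) reproduces both numbers.

# Interface temperature and heat-loss ratio for the insulated wall
d1, d2 = 0.2, 0.1          # thickness of the wall and of the polystyrene (m)
lam1, lam2 = 1.3, 0.1      # thermal conductivities of concrete and polystyrene (W m^-1 K^-1)
t1, t2 = 20.0, -10.0       # indoor and outdoor temperatures (deg C)

# Equality of the steady-state heat flows through wall and insulation gives t
t = (d2 * lam1 * t1 + d1 * lam2 * t2) / (d2 * lam1 + d1 * lam2)

# Q2/Q1: heat lost through the insulated wall relative to the uninsulated wall
ratio = d1 * lam2 / (d2 * lam1 + d1 * lam2)

print(f"interface temperature: {t:.1f} degC")   # 16.0
print(f"heat loss ratio: {ratio:.0%}")          # 13%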
Jean-Yves Tourneret
(Université de Toulouse, FR)
Title — Bayesian Fusion of Multiple Images - Beyond Pansharpening
Abstract — This presentation will discuss new methods for fusing high spectral resolution images (such as hyperspectral images) and high spatial resolution images (such as panchromatic images) in
order to provide images with improved spectral and spatial resolutions. These methods are based on Bayesian estimators exploiting prior information about the target image to be recovered, constructed
by interpolation or by using dictionary learning techniques. Different implementations based on MCMC methods, optimization strategies or on the resolution of Sylvester equations will be explored.
Biography — Jean-Yves TOURNERET (SM08) received the ingenieur degree in electrical engineering from the Ecole Nationale Supérieure d'Electronique, d'Electrotechnique, d'Informatique, d'Hydraulique et
des Télécommunications (ENSEEIHT) de Toulouse in 1989 and the Ph.D. degree from the National Polytechnic Institute from Toulouse in 1992. He is currently a professor in the university of Toulouse
(ENSEEIHT) and a member of the IRIT laboratory (UMR 5505 of the CNRS). His research activities are centered around statistical signal and image processing with a particular interest to Bayesian and
Markov chain Monte Carlo (MCMC) methods. He has been involved in the organization of several conferences including the European conference on signal processing EUSIPCO'02 (program chair), the
international conference ICASSP'06 (plenaries), the statistical signal processing workshop SSP'12 (international liaisons), the International Workshop on Computational Advances in Multi-Sensor
Adaptive Processing CAMSAP 2013 (local arrangements), the statistical signal processing workshop SSP'2014 (special sessions), the workshop on machine learning for signal processing MLSP'2014 (special
sessions). He has been the general chair of the CIMI workshop on optimization and statistics in image processing hold in Toulouse in 2013 (with F. Malgouyres and D. Kouamé) and of the International
Workshop on Computational Advances in Multi-Sensor Adaptive Processing CAMSAP 2015 (with P. Djuric). He has been a member of different technical committees including the Signal Processing Theory and
Methods (SPTM) committee of the IEEE Signal Processing Society (2001-2007, 2010-present). He has been serving as an associate editor for the IEEE Transactions on Signal Processing (2008-2011,
2015-present) and for the EURASIP journal on Signal Processing (2013-present).
Create one-sided spectrum amplitude vectors from the two-sided equivalents.
amp1 = fold(amp2)
amp1 = fold(amp2,dim)
Two-sided amplitudes.
Type: double
Dimension: vector | matrix
Dimension on which to perform the calculation.
(default: first non-singular dimension).
Type: integer
Dimension: scalar
One-sided amplitudes.
Examples with even number of inputs:
amp1 = fold([1,2,5,3,5,2])
amp1 = [Matrix] 1 x 4
Example with odd number of inputs:
amp1 = fold([1,2,5,3,3,5,2])
amp1 = [Matrix] 1 x 4
fold is only meaningful for symmetrical inputs, usually produced from real time domain signals. The positive half of the spectrum (below the Nyquist frequency) is doubled, thus effectively folding
the spectrum in half.
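The folding operation is easy to reproduce outside OML. The NumPy sketch below only illustrates the idea described above (doubling every bin between DC and Nyquist); it is not the library's implementation, so edge-case behaviour may differ.

import numpy as np

def fold_spectrum(amp2):
    """One-sided amplitudes from the symmetric two-sided spectrum of a real signal."""
    amp2 = np.asarray(amp2, dtype=float)
    n = amp2.size
    amp1 = amp2[: n // 2 + 1].copy()
    if n % 2 == 0:
        amp1[1:-1] *= 2.0        # even length: DC and Nyquist bins are not doubled
    else:
        amp1[1:] *= 2.0          # odd length: no bin falls exactly on Nyquist
    return amp1

print(fold_spectrum([1, 2, 5, 3, 5, 2]))       # even-length example above
print(fold_spectrum([1, 2, 5, 3, 3, 5, 2]))    # odd-length example above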
freq with the 'onesided' option can be useful to create appropriate x-axis values.
manual pages
count_motifs {igraph} R Documentation
Graph motifs
Graph motifs are small connected subgraphs with a well-defined structure. These functions search a graph for various motifs.
count_motifs(graph, size = 3, cut.prob = rep(0, size))
graph Graph object, the input graph.
size The size of the motif.
cut.prob Numeric vector giving the probabilities that the search graph is cut at a certain level. Its length should be the same as the size of the motif (the size argument). By default no cuts are made.
count_motifs calculates the total number of motifs of a given size in graph.
count_motifs returns a numeric scalar.
See Also
Other graph motifs: motifs(), sample_motifs()
g <- barabasi.game(100)
motifs(g, 3)
count_motifs(g, 3)
sample_motifs(g, 3)
version 1.3.4
seminars - On uniformly rotating binary stars
We study the asymptotic profiles, uniqueness and orbital stability of McCann's uniformly rotating binary stars (Houston J Math 32(2):603–631, 2006) governed by the Euler–Poisson system. A new
uniqueness result will play an important role in the stability analysis. Moreover, we apply our framework to the study of uniformly rotating binary galaxies of the Vlasov–Poisson system through Rein's
reduction (Handbook of differential equations: evolutionary equations, vol III, pp 383–476, 2007).
Test for Normality in SPSS - Quick SPSS Tutorial
Test for Normality in SPSS
This quick tutorial will explain how to test whether sample data is normally distributed in the SPSS statistics package.
It is a requirement of many parametric statistical tests – for example, the independent-samples t test – that data is normally distributed. There are a number of different ways to test this
requirement. We’re going to focus on the Kolmogorov-Smirnov and Shapiro-Wilk tests.
Quick Steps
1. Click Analyze -> Descriptive Statistics -> Explore…
2. Move the variable of interest from the left box into the Dependent List box on the right.
3. Click the Plots button, and tick the Normality plots with tests option.
4. Click Continue, and then click OK.
5. Your result will pop up – check out the Tests of Normality section.
The Data
Our example data, displayed above in SPSS’s Data View, comes from a pretend study looking at the effect of dog ownership on the ability to throw a frisbee.
Frisbee Throwing Distance in Metres (highlighted) is the dependent variable, and we need to know whether it is normally distributed before deciding which statistical test to use to determine if dog
ownership is related to the ability to throw a frisbee.
Test for Normality
To begin, click Analyze -> Descriptive Statistics -> Explore… This will bring up the Explore dialog box, as below.
The set up here is quite easy.
First, you’ve got to get the Frisbee Throwing Distance variable over from the left box into the Dependent List box. You can either drag and drop, or use the blue arrow in the middle.
The Factor List box allows you to split your dependent variable on the basis of the different levels of your independent variable(s). In our example, Dog Owner, our independent variable, has two
levels – owner and non-owner – so we could add Dog Owner to the Factor List box, and look at our dependent variable split on that basis. However, since we can perfectly well test for normality
without adding in this extra complexity, we’ll just leave the box empty.
Once you’ve got the variable you want to test for normality into the Dependent List box, you should click the Plots button. The Plots dialog box will pop up.
In this box, you want to make sure that the Normality plots with tests option is ticked, and it’s also sensible to select both descriptive statistics options (Stem-and-leaf and Histogram).
Now click Continue, which will take you back to the Explore dialog box. This should now look something like this.
You’re now ready to test whether your data is normally distributed.
Press the OK button.
The Result
The Explore option in SPSS produces quite a lot of output. Here’s what you need to assess whether your data distribution is normal.
SPSS runs two statistical tests of normality – Kolmogorov-Smirnov and Shapiro-Wilk.
If the significance value is greater than the alpha value (we’ll use .05 as our alpha value), then there is no reason to think that our data differ significantly from a normal distribution – i.e.,
we fail to reject the null hypothesis that the data are normally distributed.
As you can see above, both tests give a significance value that’s greater than .05; therefore, we have no evidence that our data depart from a normal distribution.
A complication that can arise here occurs when the results of the two tests don’t agree – that is, when one test shows a significant result and the other doesn’t. In this situation, use the
Shapiro-Wilk result – in most circumstances, it is more reliable.
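If you ever need to reproduce these checks outside SPSS, both tests are available in Python’s scipy. The sketch below uses made-up frisbee distances and is only an illustration; note that SPSS applies a Lilliefors correction to its Kolmogorov-Smirnov p value, which plain scipy does not.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
distance = rng.normal(25, 5, size=40)          # stand-in for the frisbee distances (metres)

sw_stat, sw_p = stats.shapiro(distance)        # Shapiro-Wilk
z = (distance - distance.mean()) / distance.std(ddof=1)
ks_stat, ks_p = stats.kstest(z, "norm")        # Kolmogorov-Smirnov against a standard normal

print(f"Shapiro-Wilk: W = {sw_stat:.3f}, p = {sw_p:.3f}")
print(f"Kolmogorov-Smirnov: D = {ks_stat:.3f}, p = {ks_p:.3f}")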
Q-Q Plot
SPSS also provides a normal Q-Q Plot chart which provides a visual representation of the distribution of the data.
If a distribution is normal, then the dots will broadly follow the trend line.
As you can see above, our data does cluster around the trend line – which provides further evidence that our distribution is normal.
Put this Q-Q plot together with the results of the statistical tests, and we’re safe in assuming that our data is normally distributed. This means that at least one of the criteria for parametric
statistical testing is satisfied.
If you wish to export the SPSS output for your test of normality to another application such as Word, Excel, or PDF, check out our tutorial.
Okay, that’s this tutorial over and done with. You should now be able to interrogate your data in order to determine whether it is normally distributed.
Linear Algebra
This is a set of lecture notes and problems I created for Introduction to Linear Algebra taught in Spring 2003 at Applied Mathematics and Statistics Department at Stony Brook University. There are
many typos, as I have never fully proof-read them. Please let me know if you find them useful.
Lecture Notes
Lecture 01 Introduction. Numbers. Sets.
Lecture 02 Linear Equations.
Lecture 03 Linear Systems. Row Echelon Form. Gaussian Elimination.
Lecture 04 Overview of Linear Systems.
Lecture 05 Matrices and Matrix Operations.
Lecture 06 Inverse and Transpose. Matrices and Linear Systems.
Lecture 07 Matrix Equations and the Inverse.
Lecture 08 Vector Spaces and Subspaces.
Lecture 09 Linear Combinations. Linear Dependence.
Lecture 10 Spanning Sets. Basis.
Lecture 11 Examples of Bases. Dimension.
Lecture 12 More Examples. Dimension and Basis of the Span.
Lecture 13 Dimensions and Basis of the Span. Rank.
Lecture 14 Functions. Linear Functions.
Lecture 15 Homogeneous Systems.
Lecture 16 Image and Kernel. Matrix of a Linear Function.
Lecture 17 Dimension and Basis of Image and Kernel.
Lecture 18 Image and Kernel and Matrices. Linear Functions as a Space.
Lecture 19 Area of a Parallelogram.
Lecture 20 Permutations.
Lecture 21 General Properties of Area and Volume. Determinant.
Lecture 22 Properties of Determinants – 1.
Lecture 23 Properties of Determinants – 2.
Lecture 23 – Addendum Proofs.
Lecture 24 Application of Determinants. Kramer’s Rule. Inverse.
Lecture 25 Euclidean Spaces. Norm. Cauchy Inequality.
Lecture 26 Orthogonality.
Lecture 27 Orthogonal Bases. Gram-Schmidt Process.
Lecture 28 Operators. Change of Basis. Matrix of an Operator.
Lecture 29 Change of Matrix of an Operator. Diagonalizable Operators.
Lecture 30 Eigenvalues and Eigenvectors.
Lecture 31 Symmetric Matrices.
Lecture 32 Powers and Square Roots of Matrices.
Lecture 33 Invariant Spaces. Jordan Canonical Form.
Lecture 34 Functions of Operators
Problem Sets
Problem Set 1 Linear Systems.
Problem Set 2 Matrices.
Problem Set 3 Vector Spaces. Bases and Dimensions.
Problem Set 4 Linear Functions.
Problem Set 5 Determinants.
Problem Set 6 Euclidean Spaces. Orthogonality. Norms.
Problem Set 7 Linear Operators. Eigenvalues and Eigenvectors.
Technical note: Effects of uncertainties and number of data points on line fitting – a case study on new particle formation
Articles | Volume 19, issue 19
© Author(s) 2019. This work is distributed under the Creative Commons Attribution 4.0 License.
Fitting a line to two measured variables is considered one of the simplest statistical procedures researchers can carry out. However, this simplicity is deceptive as the line-fitting procedure is
actually quite a complex problem. Atmospheric measurement data never come without some measurement error. Too often, these errors are neglected when researchers make inferences from their data.
To demonstrate the problem, we simulated datasets with different numbers of data points and different amounts of error, mimicking the dependence of the atmospheric new particle formation rate (J[1.7]
) on the sulfuric acid concentration (H[2]SO[4]). Both variables have substantial measurement error and, thus, are good test variables for our study. We show that ordinary least squares (OLS)
regression results in strongly biased slope values compared with six error-in-variables (EIV) regression methods (Deming regression, principal component analysis, orthogonal regression, Bayesian EIV
and two different bivariate regression methods) that are known to take errors in the variables into account.
Received: 24 Oct 2018 – Discussion started: 11 Dec 2018 – Revised: 05 Sep 2019 – Accepted: 09 Sep 2019 – Published: 09 Oct 2019
Atmospheric measurements always come with some measurement error. Too often, these errors are neglected when researchers make inferences based on their data. Describing the relationship between two
variables typically involves making deductions in a more general context than that in which the variables were directly studied. If the relationship is not defined correctly, the inference is also
not valid. In some cases, the bias in the analytical method is even given a physical meaning.
When analysing the dependencies of two or more measured variables, regression models are usually applied. Regression models can be linear or non-linear, depending on the relationship between the
datasets that are analysed. Standard regression models assume that the independent variables of the model have been measured without error and that the model only accounts for errors in the dependent
variables or responses. In cases where the measurements of the predictors contain error, estimating using standard methods (usually ordinary least squares, OLS) does not tend to provide the true
parameter values, even when a very high number of data points is used. In linear models, the coefficients are underestimated (e.g. Carroll et al., 2006); however, in non-linear models, the bias is
likely to be more complicated (e.g. Schennach, 2004). If predictor variables in regression analyses contain any measurement error, methods that account for errors should be applied – particularly
when errors are large. Thus, test variables in this study were chosen such that they included significant uncertainties in both the independent and dependent variables.
The sulfuric acid concentration (H[2]SO[4]) is known to strongly affect the formation rates (J) of aerosol particles (Kirkby et al., 2016; Kuang et al., 2008; Kulmala et al., 2006; Kürten et al.,
2016; Metzger et al., 2010; Riccobono et al., 2014; Riipinen et al., 2007; Sihto et al., 2006; Spracklen et al., 2006). The relationship between J (cm^−3s^−1) and H[2]SO[4] (moleccm^−3) is
typically assumed to be in the following form: log[10](J) = β × log[10](H[2]SO[4]) + α (Seinfeld and Pandis, 2016). In addition,
parameterisations based on the results from these fits have been implemented in global models (e.g. in Dunne et al., 2016, Metzger et al., 2010 and Spracklen et al., 2006) to estimate the effects of
new particle formation on global aerosol amounts and characteristics. Theoretically, in homogeneous nucleation, the slope of this relationship is related to the number of sulfuric acid molecules in
the nucleating critical cluster, based on the first nucleation theorem (Vehkamäki, 2006).
Some published results have shown discrepancies in the expected J vs. H[2]SO[4] dependence. Analysing data from Hyytiälä in 2003, Kuang et al. (2008) used an unconstrained least squares method, which
was not specified in the paper, and obtained a β value of 1.99 for the slope, whereas Sihto et al. (2006) reported a β value of 1.16 using OLS from the same field campaign. The studies had some
differences in pre-treatment of the data and used different time windows, but a significant proportion of this inconsistency is very likely due to the use of different fitting methods. The problem
regarding the relationship of H[2]SO[4] and J was previously acknowledged in Paasonen et al. (2010), who noted that the bivariate fitting method, as presented in York et al. (2004), should be
applied; however, this method could not be used due to the lack of proper error estimates for each quantity. They were not aware of methods that did not require knowledge of the errors in advance,
and instead made use of estimated variances. Here, we present the appropriate tools required for the above-mentioned approach.
Multiple attempts have been made to present methods that account for errors in the predictor variables for regression-type analyses, going back to Deming (1943). However, the traditional least
squares fitting method is still the de facto line-fitting method due to its simplicity and common availability in frequently used software. In atmospheric sciences, Cantrell (2008) drew attention to
the method introduced by York (1966) and York et al. (2004) and listed multiple other methodological papers utilising similar methodology. Pitkänen et al. (2016) raised awareness of the fact that
errors are not accounted for in the predictor variables in the remote-sensing community, and this study partly follows their approach and introduces multiple methods to account for the errors in
predictors. Cheng and Riu (2006) studied methods involving heteroscedastic errors, whereas Wu and Yu (2018) approached the problem with measurement errors via weighted regression and applied some
techniques that are also used in our study.
Measurement errors in each variable must be taken into account using approaches known as errors-in-variables (EIV) regression. EIV methods simply mean that errors in both variables are accounted for.
In this study, we compared OLS regression results to six different regression methods (Deming regression, principal component analysis regression, orthogonal regression, Bayesian EIV regression and
two different bivariate regression methods) that are known to be able to take errors in variables into account and provide (at least asymptotically) unbiased estimates. In this study, we focus
exclusively on linear EIV methods, but it is important to acknowledge that non-linear methods also exist, e.g. ORDPACK introduced in Boggs et al. (1987) and implemented in Python SciPy and R (Boggs
et al., 1989; Spiess, 2015). ORDPACK is a somewhat improved version of classical orthogonal regression, in that arbitrary covariance structures are acceptable, and it is specifically set up so that a
user can specify measurement error variance and covariance point by point; some of the methods in this study carry out the same process in linear analysis.
2.1 Data illustrating the phenomenon
Measurement data contain different types of errors. Usually, the errors are divided into two main classes: systematic error and random error.
Systematic errors, commonly referred as bias, in experimental observations usually come from the measurement instruments. They may occur because something is wrong with the instrument or its data
handling system, or due to operator error. In line fitting, bias cannot be taken into account and needs to be minimised by way of careful and regular instrument calibrations and zeros or data
preprocessing. The random error, in comparison, may have different components; the two components discussed here are the natural error and the measurement error. In addition, one should note the
existence of equation error (discussed in Carroll and Ruppert, 1996), which refers to using an inappropriate form of a fitting equation. Measurement error is more generally understood; it is where
measured values do not fully represent the true values of the variable being measured. This also contains sampling error (e.g. in the case of H[2]SO[4] measurement, the sampled air in the measurement
instrument is not a representative sample of the outside air due to losses of H[2]SO[4] occurring in the sampling lines, among other factors). Natural error is the variability caused by natural or
physical phenomenon (e.g. a specific amount of H[2]SO[4] does not always cause the same number of new particles to be formed).
In the analysis of the measurement data, some amount of these errors are known or can be estimated, but some of the error will usually remain unknown; this should be kept in mind when interpreting
fits. Even though the measurement error is taken into account, the regression fit may be biased due to unknown natural error. In this study, we assume that the errors of the different variables are
uncorrelated, but in some cases this has to be accounted for, as noted, for example, in Trefall and Nordö (1959) and Mandel (1984). The correlation between the errors of two variables, measured with
separate instruments, independent of each other, such as formation rate and H[2]SO[4], may come from factors such as environmental variables that affect both of the variables at the same time.
Factors affecting the formation of sulfuric acid have been studied in various papers, e.g. in Weber et al. (1997) and Mikkonen et al. (2011). New particle formation rates, in turn, have been studied
in works such as Boy et al. (2008) and Hamed et al. (2011) and similarities between the affecting factors can be seen. In addition, factors like room temperature in the measurement space and
atmospheric pressure may affect the performance of instrumentation, thereby causing additional error.
The data used in this study consist of simulated new particle formation rates at 1.7nm (J[1.7]) and sulfuric acid (H[2]SO[4]) concentrations mimicking observations of pure sulfuric acid in
nucleation experiments from the CLOUD chamber at CERN (Kürten et al. 2016; https://home.cern/about/experiments/cloud, last access: 16 August 2019), including the corresponding expected values, their
variances and covariance structures. The Proton Synchrotron at CERN provides an artificial source of “cosmic rays” that simulates the natural ionisation conditions between the ground level and the
stratosphere. The core is a large (volume 26m^3) electro-polished stainless-steel chamber with temperature control (temperature stability better than 0.1K) at any tropospheric temperature, precise
delivery of selected gases (SO[2], O[3], NH[3] and various organic compounds) and ultra-pure humidified synthetic air, as well as very low gas-phase contaminant levels. The existing data on new
particle formation include what are believed to be the most important formation routes that involve sulfuric acid, ammonia and water vapour (Kirkby et al., 2011); sulfuric acid and amine (Almeida et
al., 2013); and ion-induced organic nucleation (Kirkby et al., 2016). The actual nucleation of new particles occurs at a slightly smaller size. After formation, particles grow by condensation to
reach the detection limit (1.7nm) of the instrument; thus, J[1.7] refers to the formation rate of particles as the instrument detects them, accounting for the known particle losses due to
coagulation and deposition on the chamber walls. The relationships between precursor gas-phase concentrations and particle formation rates were chosen because they are both known to have considerable
measurement errors and their relationship has been well-studied using regression-based analyses (Kirkby et al., 2016; Kürten et al., 2016; Riccobono et al., 2014; Tröstl et al., 2016). Additionally,
many of the published papers on this topic do not describe how they accounted for the uncertainties in the analysis, which casts doubt on the fact that errors were treated properly. However, it
should be kept in mind that the data could be any set of numbers assumed to have a linear relationship, but, in order to raise awareness in the aerosol research community, in this study we relate our
analysis to the important problem of understanding new particle formation.
2.2 Regression methods
We carried out fits for the linear dependency of the logarithms of the two study variables, such that the equation for the fit was given by
y = β[0] + β[1]x + ε, (1)
where y represents log[10](J[1.7]), x is log[10](H[2]SO[4]), β values are the coefficients estimated from the data and ε is the error term. In order to demonstrate the importance of taking the
measurement errors into account in the regression analysis, we tested seven different line-fitting methods: ordinary least squares (OLS), not taking the uncertainty in x variable into account;
orthogonal regression (ODR; Boggs et al., 1987); Deming regression (DR; Deming, 1943); principal component analysis (PCA; Hotelling, 1957) regression; Bayesian EIV regression (Kaipio and Somersalo,
2005); and two different bivariate least squares methods by York et al. (2004) and Francq and Govaerts (2014, BLS), respectively, which are known to be able to account for errors in variables and
provide (at least asymptotically) unbiased estimates. The differences between the methods stem from the criterion they minimise when calculating the coefficients and how they account for measurement
errors. The minimising criteria for all methods are given in Appendix A1, but in the following we give the principles of the methods.
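For orientation, the standard unweighted objective functions of the first three methods can be written as below; these are textbook forms (with λ[xy] defined here as the ratio of the error variances of y and x), and the weighted versions actually used in this study are the ones given in Appendix A1.

\begin{align*}
\text{OLS:}    \quad & \min_{\beta_0,\beta_1} \sum_{i=1}^{n} \left(y_i - \beta_0 - \beta_1 x_i\right)^2 \\
\text{ODR:}    \quad & \min_{\beta_0,\beta_1} \sum_{i=1}^{n} \frac{\left(y_i - \beta_0 - \beta_1 x_i\right)^2}{1 + \beta_1^2} \\
\text{Deming:} \quad & \min_{\beta_0,\beta_1} \sum_{i=1}^{n} \frac{\left(y_i - \beta_0 - \beta_1 x_i\right)^2}{\lambda_{xy} + \beta_1^2},
\qquad \lambda_{xy} = \frac{\sigma_{\epsilon,y}^{2}}{\sigma_{\epsilon,x}^{2}}
\end{align*}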
OLS minimises the squared distance of the observation and the fit line either in the x or y direction, but not both at the same time, whereas ODR minimises the sum of the squared weighted orthogonal
distances between each point and the line. DR was originally an improved version of orthogonal regression, accounting for the ratio of the error variances, λ[xy], of the variables, (in classical
non-weighted ODR λ[xy]=1), and it is the maximum likelihood estimate (MLE) for the model (given in Eq. 1) when λ[xy] is known. The PCA approach is the same as in ODR, but the estimation procedure is
somewhat different as can be seen in Table S1 in the Supplement. The bivariate algorithm by York et al. (2004) provides a simple set of equations for iterating the MLE of the slope and intercept with
weighted variables, which makes it similar to ODR in this case. However, using ODR allows for a regression to be performed on a user-defined model, whereas the York (2004) solution only works on
linear models. This, for instance, enables the use of linear-scale uncertainties in ODR in this study, whereas the York (2004) approach could only use log-scale uncertainties. In Bayes EIV,
statistical models for the uncertainties in the observed quantities are used and probability distributions for the line slope and intercept are computed according to Bayes' theorem. In this study, we
computed the Bayesian maximum a posteriori (MAP) estimates for the slope and intercept that are the most probable values given the likelihood and prior models (see Appendix A1 for more details on
models used in Bayes EIV). BLS takes errors and heteroscedasticity into account, i.e. unequal variances, in both variables; thus, it is a more advanced method than DR (under normality and equal
variances, BLS is exactly equivalent to DR). PCA only accounts for the observed variance in data, whereas ODR, Bayes EIV and York bivariate regression require known estimates for measurement errors,
although for Bayes EIV the error can be approximated with a distribution. DR and BLS can be applied with both errors given by the user and measurement variance-based errors. In this study, we applied
measurement variance-based errors for these methods. The analyses for OLS and PCA were calculated with the “lm” and “prcomp” R functions (R Core Team, 2018), respectively, DR was calculated with the
“deming” package (Therneau, 2018) and BLS was calculated with the “BivRegBLS” package (Francq and Berger, 2017) in R. The ODR-based estimates were obtained using the “scipy.odr” Python package (Jones
et al., 2001), while the “PyStan” Python package (Stan Development Team, 2018) was used for calculating the Bayesian regression estimates. Finally, the York bivariate estimates were produced with a
custom Python implementation of the algorithm presented by York et al. (2004).
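To make the practical difference concrete, the following minimal Python sketch (illustrative only: the error magnitudes are arbitrary and this is not the analysis code behind the figures) fits the same noisy line with OLS and with scipy's ODR.

import numpy as np
from scipy import odr, stats

rng = np.random.default_rng(0)
x_true = rng.uniform(5.0, 7.0, 200)                 # e.g. noise-free log10(H2SO4)
y_true = 3.3 * x_true - 23.0                        # noise-free log10(J1.7)
x_obs = x_true + rng.normal(0.0, 0.3, x_true.size)  # measurement error in x
y_obs = y_true + rng.normal(0.0, 0.3, y_true.size)  # measurement error in y

ols = stats.linregress(x_obs, y_obs)                # ignores the error in x -> attenuated slope

linear = odr.Model(lambda beta, x: beta[0] + beta[1] * x)
data = odr.RealData(x_obs, y_obs, sx=0.3, sy=0.3)   # errors in both variables
fit = odr.ODR(data, linear, beta0=[0.0, 1.0]).run()

print(f"OLS slope: {ols.slope:.2f}")                # noticeably below 3.3
print(f"ODR slope: {fit.beta[1]:.2f}")              # close to the noise-free slope 3.3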
3.1 Simulated data
In measured data, the variables that are observed are not x and y, but (x+e[x]) and (y+e[y]), where e[x] and e[y] are the uncertainty in the measurements, and the true x and y cannot be exactly
known. Thus, we used simulated data, where the true, i.e. noise-free, x and y were known to illustrate how the different line-fitting methods perform in different situations.
We simulated a dataset mimicking the new particle formation rates (J[1.7]) and sulfuric acid concentrations (H[2]SO[4]) reported from CLOUD-chamber measurements at CERN. Both variables are known to
have substantial measurement error and, thus, they are good test variables for our study. Additionally, the relationship of the logarithms of these variables is quite often described with linear OLS
regression and, thus, the inference may be flawed.
We generated 1000 random noise-free H[2]SO[4] concentration values assuming a log-normal distribution with a median of 2.0×10^6 (moleccm^−3) and a standard deviation of 2.4×10^6 (moleccm^−3). The
corresponding noise-free J[1.7] was calculated using the model log[10](J[1.7]) = β × log[10](H[2]SO[4]) + α with the noise-free slope β = 3.3 and
α = −23, which are both realistic values presented by Kürten et al. (2016, Table 2 in their paper, for the no added ammonia cases).
Simulated observations of the noise-free H[2]SO[4] concentrations were obtained by adding random errors e[x] = e[rel,x]x + e[abs,x] that have a random absolute
component e[abs,x] ~ normal(0, σ[abs,x]) and a random component relative to the observation x itself, e[rel,x]x, where e[rel,x] ~ normal(0, σ[rel,x]).
Similar definitions apply for the noise-free J[1.7], e[y], σ[abs,y] and σ[rel,y]. The standard deviations of the measurement error
components were chosen as σ[abs,x] = 4×10^5, σ[rel,x] = 0.3, σ[abs,y] = 3×10^−3 and σ[rel,y] = 0.5, which are subjective estimates based on measurement data. The resulting total errors were occasionally about as large as the data
values themselves; however, they are not unusually large error values with respect to corresponding real datasets, where overall uncertainties may reach 150% for H[2]SO[4] concentrations and 250%
for nucleation rates (e.g. Dunne et al., 2016).
These choices regarding generating simulated data reflect what real dataset can often be like: the bulk of the data approximates a log-normal distribution with one of the tails possibly being thinned
or cut close to a limit of detection of an instrument or close to a limit of the data filtering criterion. In our simulated data, each negative observation and each negative noise-free value was
replaced with a new random simulated value, which only slightly offsets the final distribution from a perfectly symmetric log-normal shape.
Simulating the observations tends to generate infrequent extreme outlier observations from the infinite tails of the normal distribution. We discarded outliers with an absolute error larger than 3
times the combined standard uncertainty of the observation in order to remove the effect of outliers from the regression analysis. This represents the quality control procedure in data analysis and
also improves the stability of our results between different simulations.
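A condensed sketch of this simulation recipe is given below. It uses the nominal parameters stated above but simplifies two details (negative values are dropped rather than redrawn, and the outlier screen is applied to the H[2]SO[4] observations only), so it is an illustration rather than the exact code behind the figures.

import numpy as np

rng = np.random.default_rng(42)
n = 1000

# Noise-free H2SO4: log-normal with median 2.0e6; sigma ~0.77 gives a standard
# deviation of roughly 2.4e6. Noise-free J1.7 follows the assumed power law.
h2so4 = rng.lognormal(mean=np.log(2.0e6), sigma=0.77, size=n)
j17 = 10.0 ** (3.3 * np.log10(h2so4) - 23.0)

# Observations: absolute plus relative Gaussian error components for both variables
h2so4_obs = h2so4 + rng.normal(0.0, 4.0e5, n) + rng.normal(0.0, 0.3, n) * h2so4
j17_obs = j17 + rng.normal(0.0, 3.0e-3, n) + rng.normal(0.0, 0.5, n) * j17

# Quality control: keep positive values and drop gross outliers
# (absolute error in H2SO4 larger than 3 x its combined standard uncertainty)
u_x = np.sqrt((4.0e5) ** 2 + (0.3 * h2so4) ** 2)
ok = (h2so4_obs > 0) & (j17_obs > 0) & (np.abs(h2so4_obs - h2so4) < 3.0 * u_x)
x = np.log10(h2so4_obs[ok])
y = np.log10(j17_obs[ok])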
3.2 Case study on measured data
In order to show that the results gained with simulated data are also applicable in real measurement data, we applied our methods to data measured in the CLOUD chamber and published by Dunne et al.
(2016). Fig. 1 in Dunne et al. (2016) shows nucleation rates (J) at a 1.7nm mobility diameter as a function of the sulfuric acid concentration. We used their measurements with no added ammonia at
two different temperatures, 278 and 292K, as shown in their Fig. 1D and E and given in their Supplementary material.
4.1 Fits for simulated data
Differences between the regression methods are illustrated in four different ways: firstly, by showing line fits on a scatterplot of simulated data; secondly, by illustrating how the slopes change
when the uncertainty in the measured variables increase; thirdly, by showing the sensitivity of the fits on number of observations; and finally, by showing how the fits are affected by adding
outliers in the data. Regression fits using each of the respective methods are shown in Fig. 1.
As we know that the noise-free slope β[true] is 3.30, we can easily see how the methods perform. The worst performing method was OLS, with a β[ols] value of 1.55, which is roughly half of the β[true]
. The best performing methods that displayed equal accuracy, i.e. within 2% range, were ODR (β[ODR]=3.27), Bayes EIV (β[BEIV]=3.24) and BLS (β[BLS]=3.22), whereas York (β[York]=3.15) was within a
range of 5%; Deming (β[DR]=2.95) and PCA (β[PCA]=2.92), in comparison, slightly underestimated the slope.
The sensitivity of the methods was first tested by varying the uncertainty in the H[2]SO[4] observations. We simulated six datasets with 1000 observations and with varying absolute and relative
uncertainties (as listed in Table 1), and then performed fits with each method on all of these datasets. The performance of the methods is shown in Fig. 2, with the results corresponding to Fig. 1
marked in black. The results show that when the uncertainty is small, the bias in the OLS fit is smaller, but when more uncertainty is added to data, the bias increases significantly. A decrease in
performance can also be seen with ODR, which overestimates the slope, and PCA, DR and Bayes EIV, which all underestimate the slope. The bivariate methods, BLS and York, seem to be quite robust with
increasing uncertainty, as the slopes do not change significantly.
The sensitivity of the methods to the decreasing n (number of observations) was tested by picking 100 random samples from the 1000-sample simulation dataset with n of 3, 5, 10, 20, 30, 50, 70, 100,
300 and 500 and carrying out fits for all samples using all methods. The average slopes and their standard errors are shown in Fig. 3. It is clear that when n ≤ 10, the variation in the estimated
slopes can be considerable. When n ≥ 30, the average slopes stabilised close to their characteristic levels (within 5%), except for Bayes EIV and York bivariate, which needed more than 100
observations. The most sensitive methods for a small n were Bayes EIV, ODR and PCA; thus, these methods should not be applied for data with a small n and a similar type of uncertainty to that
presented here. However, the reader is reminded that number of points needed for a good fit depends on the uncertainties in the data.
The sensitivity of the methods to outliers in the predictor variable H[2]SO[4] was tested using two different scenarios. In the first scenario, outliers were randomly allowed to be at either the high or low end of
the distribution. In the second scenario, outliers were only allowed to be large numbers, which is often the case in H[2]SO[4] and aerosol concentration measurements as numbers are removed from the
data when they are smaller than the detection limit of the measurement instrument. Five cases with an n of 1000 were simulated with an increasing number of outliers (0, 5, 10, 20 and 100) and 10
repetitions of H[2]SO[4] values with a different set of outliers. Outliers were defined such that |x[obs] − x[true]| > 3 × the combined standard uncertainty. The
methods most sensitive to outliers in both scenarios were OLS and Bayes EIV. A high number of outliers caused underestimations in PCA and DR, especially when using the outliers with high values
(second scenario mentioned above), and a slight overestimation in BLS in the random outlier case (first scenario mentioned above). York bivariate and ODR were not affected in either case, and BLS
only showed small variation between the 10 replicates in the estimated slope. We did not explore how large a number of outliers would needed to be to seriously disrupt the fits for the various
methods. We felt that it is likely not realistic to have situations that have more than 10% outliers.
We also applied an alternative method for simulating the data to different testing methods. The main difference compared with our method was that the distribution of noise-free H[2]SO[4] followed a
uniform distribution in log-space. With this assumption, it could be seen that OLS works almost as well as the EIV methods introduced here if the range of the data is wide (H[2]SO[4] concentration in
the range of 10^6–10^9). However, when scaled to concentrations usually measured in the atmosphere (10^4–10^7), the high uncertainties caused similar behaviour to the data seen in our previous
simulations. Details of these results can be seen in Supplement S1.
4.2 Results of the case study
Figure 5 shows the fits of the data from Dunne et al. (2016). As expected, the fit using OLS is underestimated at both temperatures: β[ols](278K) was 2.4 and β[ols](292K) was 3.0. The regression
equations for all methods are shown in Fig. 5. Dunne et al. (2016) did not use a linear fit in their study and instead applied a non-linear Levenberg–Marquardt algorithm (Moré, 1978) on the function
J[1.7] = k × [H[2]SO[4]]^β, where k is a temperature-dependent rate coefficient with a non-linear function including three
estimable parameters (see Sect. 8 of their Supplement for details). Thus, the results are not directly comparable as, for simplicity, we fit the data measured at different temperatures separately.
However, their β value for the fit (β=3.95) is quite close to our results using EIV methods, especially as slopes from Bayes EIV at 292K and BLS and PCA at both temperatures were within a range of
5%. We also carried out some tests on data measured at lower temperatures (results not shown here). However, the slopes did not vary drastically from those at β[ols](278K) and β[ols](292K) when
the other conditions were similar, even though the lower number of observations at lower temperatures increased uncertainty in the data. Nevertheless, the intercepts β[0](T) varied between
Ordinary least squares regression can be used to answer some simple questions regarding data, such as “How is y related to x?”. However, if we are interested in the strength of the relationship and
the predictor variable X contains some error, then error-in-variables methods should be applied. There is no single correct method to make the fit, as the methods behave slightly differently with
different types of error. The choice of method should be based on the properties of data and the specific research question. There are usually two types of error in the data: natural and measurement
error, where natural error refers to stochastic variation in the environment. Even if the natural error in the data is not known, taking the measurement error into account improves the fit
significantly. Weighting the data based on some factor, typically the inverse of the uncertainty, reduces the effect of outliers and makes the regression depend more on the data that are more certain
(see e.g. Wu and Yu, 2018), but it does not solve the problem completely.
As a test study, we simulated a dataset mimicking the dependence of the atmospheric new particle formation rate on the sulfuric acid concentration. We introduced three major sources of uncertainty
when establishing inference from scatterplot data: the increasing measurement error, the number of data points and the number of outliers. In Fig. 1, we showed that for simulations where errors are
taken from real measurements of J[1.7] and H[2]SO[4], four of the methods gave slopes within 5% of the known noise-free value: BLS, York bivariate, Bayes EIV and ODR. Estimates from BLS and York
bivariate even remained stable when the uncertainty in the simulated H[2]SO[4] concentration was increased drastically, as seen in Fig. 2. The main message to take away from Fig. 3, in comparison, is
that if the data contain some error, all fit methods are highly uncertain when small numbers of observations are used. BLS was the most accurate with the smallest sample sizes (10 or less), ODR
stabilised with 20 observations, and York bivariate and Bayes EIV needed 100 or more data points to become accurate. After that, these methods approached the noise-free value asymptotically, whereas
the OLS slope converged towards an incorrect value. With an increasing number of outliers (Fig. 4), ODR and York bivariate were the most stable methods, even when 10% of observations were classified
as outliers in both test cases. BLS remained stable in the scenario with only high outlier values. Bayes EIV was the most sensitive to outliers after OLS.
From this, we can recommend that if the uncertainty in the predictor variable is known, York bivariate, or another method able to use known variances, should be applied. If the errors are not known,
and they are estimated from data, BLS and ODR were found to be the most robust in cases with increasing uncertainty (relative error, rE>30% in Fig. 2) and with a high number of outliers. In our
test data, BLS and ODR remained stable up to rE>80% (Fig. 2), whereas DR and PCA began to become more uncertain when rE>30% and Bayes EIV when rE>50%. If the number of observations is less
than 10 and the uncertainties are high, we would recommend considering if a regression fit is appropriate at all. However, with the chosen uncertainties in our simulation tests, BLS was found to be
the most robust with small numbers of data points. Bayes EIV displayed significant advantages if the number of observations was high enough and there were not too many outliers, as it did not require
an explicit definition of the errors and could treat them as unknown parameters given their probability distributions.
We also carried out a case study on data measured in the CLOUD chamber and published by Dunne et al. (2016). In these analyses, we saw that our above-mentioned recommended methods also performed best
for these data. Our tests indicated that the slope β[1] for the fit is not highly sensitive to changes in temperature in the chamber but the intercept β[0] in the linear fit is. This dependency was also
seen, and taken into account, in Dunne et al. (2016).
Simulated datasets used in the example analysis are given in the Supplement.
Appendix A: Minimising criteria for the regression methods applied in the paper
In this appendix, we introduce the minimising criteria (C[method]) for all methods applied in the main text. We also give the equations for the regression coefficients ($\hat{\alpha}_{\mathrm{method}}$ and $\hat{\beta}_{\mathrm{method}}$) for the methods.
A1 Ordinary least squares (OLS)
OLS minimises the sum of squared vertical distances (residuals) between each point and the fitted line. OLS regression minimises the following criterion:
$$C_{\mathrm{OLS}} = \sum_{i=1}^{N}\left(y_i - \hat{\alpha}_{\mathrm{OLS}} - \hat{\beta}_{\mathrm{OLS}}\,x_i\right)^2, \tag{A1}$$
where $\hat{\alpha}_{\mathrm{OLS}}$ and $\hat{\beta}_{\mathrm{OLS}}$ refer to estimators calculated from the data. These estimators are given by
$$\hat{\beta}_{\mathrm{OLS}} = \frac{S_{xy}}{S_x}, \qquad \hat{\alpha}_{\mathrm{OLS}} = \bar{y} - \hat{\beta}_{\mathrm{OLS}}\,\bar{x}, \tag{A2}$$
where the observed variance of x is $S_x = \sum_{i=1}^{N}(x_i - \bar{x})^2$, the observed variance of y is $S_y = \sum_{i=1}^{N}(y_i - \bar{y})^2$, and the observed covariance of x and y is $S_{xy} = \sum_{i=1}^{N}(x_i - \bar{x})(y_i - \bar{y})$.
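For illustration, the OLS estimators in Eq. (A2) can be computed directly from these sample moments; the following Python sketch is our own illustration and not code from the original study.

import numpy as np

def ols_fit(x, y):
    # Slope and intercept minimising the sum of squared vertical residuals (Eq. A1).
    x, y = np.asarray(x, float), np.asarray(y, float)
    s_x = np.sum((x - x.mean()) ** 2)                  # S_x
    s_xy = np.sum((x - x.mean()) * (y - y.mean()))     # S_xy
    beta = s_xy / s_x
    alpha = y.mean() - beta * x.mean()
    return alpha, beta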
A2 Orthogonal regression (ODR)
ODR (https://docs.scipy.org/doc/external/odrpack_guide.pdf, last access: 16 August 2019, https://docs.scipy.org/doc/scipy/reference/odr.html, last access: 27 July 2018) minimises the sum of the
squared orthogonal distances between each point and the line. The resulting estimators are given by
$$\hat{\beta}_{\mathrm{ODR}} = \frac{S_y - S_x + \sqrt{\left(S_y - S_x\right)^2 + 4 S_{xy}^2}}{2 S_{xy}}, \tag{A4}$$
$$\hat{\alpha}_{\mathrm{ODR}} = \bar{y} - \hat{\beta}_{\mathrm{ODR}}\,\bar{x}. \tag{A5}$$
ODR accounts for the fact that errors exist in both axes but does not account for the exact values of the variances of the variables. Thus, only the ratio between the two error variances (λ[xy]) is
needed to improve the methodology. With the notation of Francq and Govaerts (2014), this ratio is given by the following:
$$\lambda_{xy} = \frac{\sigma_y^2}{\sigma_x^2}, \tag{A6}$$
where the numerator of the ratio is the error variance of the data on the y axis and the denominator is the error variance of the data on the x axis.
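As a hedged illustration of Eqs. (A4)–(A5), the moment-based ODR estimators can be coded as follows; this is our own sketch, not the SciPy ODRPACK implementation referenced above.

import numpy as np

def odr_fit(x, y):
    # Orthogonal-regression slope and intercept from the sample moments (Eqs. A4-A5).
    x, y = np.asarray(x, float), np.asarray(y, float)
    s_x = np.sum((x - x.mean()) ** 2)
    s_y = np.sum((y - y.mean()) ** 2)
    s_xy = np.sum((x - x.mean()) * (y - y.mean()))
    beta = (s_y - s_x + np.sqrt((s_y - s_x) ** 2 + 4.0 * s_xy ** 2)) / (2.0 * s_xy)
    alpha = y.mean() - beta * x.mean()
    return alpha, beta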
A3 Deming regression (DR)
DR is the ML (maximum likelihood) solution of Eq. 1 when λ[xy] is known. In practice, λ[xy] is unknown and it is estimated from the variances of x and y calculated from the data.
DR minimises the criterion C[DR], the sum of the squared (weighted) oblique distances between each point and the line. The resulting estimators are
$$\hat{\beta}_{\mathrm{DR}} = \frac{S_y - \lambda_{xy} S_x + \sqrt{\left(S_y - \lambda_{xy} S_x\right)^2 + 4 \lambda_{xy} S_{xy}^2}}{2 S_{xy}}, \tag{A8}$$
$$\hat{\alpha}_{\mathrm{DR}} = \bar{y} - \hat{\beta}_{\mathrm{DR}}\,\bar{x}. \tag{A9}$$
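A corresponding sketch of the Deming estimators in Eqs. (A8)–(A9), with the error-variance ratio λ[xy] supplied by the user, is given below (again our own illustration; the ODR sketch above is recovered for λ[xy] = 1).

import numpy as np

def deming_fit(x, y, lam):
    # Deming slope and intercept for a known error-variance ratio lam = sigma_y^2 / sigma_x^2.
    x, y = np.asarray(x, float), np.asarray(y, float)
    s_x = np.sum((x - x.mean()) ** 2)
    s_y = np.sum((y - y.mean()) ** 2)
    s_xy = np.sum((x - x.mean()) * (y - y.mean()))
    beta = (s_y - lam * s_x + np.sqrt((s_y - lam * s_x) ** 2 + 4.0 * lam * s_xy ** 2)) / (2.0 * s_xy)
    alpha = y.mean() - beta * x.mean()
    return alpha, beta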
A4 Bivariate least squares regression (BLS)
BLS is a generic name, but here we refer to the formulation described in Francq and Govaerts (2014) and references therein. BLS takes errors and heteroscedasticity in both axes into account and is
usually written in matrix notation. BLS minimises the criterion C[BLS], a weighted sum of squared residuals with weight W[BLS], given by the following:
$$C_{\mathrm{BLS}} = \frac{1}{W_{\mathrm{BLS}}} \sum_{i=1}^{N}\left(y_i - \hat{\alpha}_{\mathrm{BLS}} - \hat{\beta}_{\mathrm{BLS}}\,x_i\right)^2, \tag{A10}$$
$$W_{\mathrm{BLS}} = \sigma_{\epsilon}^2 = \frac{\sigma_y^2}{n_y} + \hat{\beta}_{\mathrm{BLS}}^2\,\frac{\sigma_x^2}{n_x}. \tag{A11}$$
Estimators for the parameters are computed iteratively using the following system:
$$\frac{1}{W_{\mathrm{BLS}}}\begin{pmatrix} N & \sum_{i=1}^{N} x_i \\ \sum_{i=1}^{N} x_i & \sum_{i=1}^{N} x_i^2 \end{pmatrix}\begin{pmatrix} \hat{\alpha}_{\mathrm{BLS}} \\ \hat{\beta}_{\mathrm{BLS}} \end{pmatrix} = \frac{1}{W_{\mathrm{BLS}}}\begin{pmatrix} \sum_{i=1}^{N} y_i \\ \sum_{i=1}^{N} x_i y_i + \hat{\beta}_{\mathrm{BLS}}\,\frac{\sigma_x^2}{n_x}\,\frac{\sum_{i=1}^{N}\left(y_i - \hat{\alpha}_{\mathrm{BLS}} - \hat{\beta}_{\mathrm{BLS}}\,x_i\right)^2}{W_{\mathrm{BLS}}} \end{pmatrix}, \tag{A12}$$
where known uncertainties ${\mathit{\sigma }}_{x}^{\mathrm{2}}$ and ${\mathit{\sigma }}_{y}^{\mathrm{2}}$ are replaced with estimated variances S[x] and S[y] in this study.
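To indicate how Eqs. (A10)–(A12) can be iterated in practice, a simplified Python sketch is shown below. It assumes homoscedastic errors with n_x = n_y = 1 and is our own illustration, not the R implementation used in this study.

import numpy as np

def bls_fit(x, y, var_x, var_y, n_iter=100, tol=1e-10):
    # Iterate the BLS normal equations (Eq. A12); the weight W_BLS (Eq. A11) depends
    # on the slope, so the 2x2 system is re-solved until the estimates stabilise.
    x, y = np.asarray(x, float), np.asarray(y, float)
    beta, alpha = np.polyfit(x, y, 1)                    # OLS starting values
    for _ in range(n_iter):
        w = var_y + beta ** 2 * var_x                    # Eq. (A11) with n_x = n_y = 1
        rss = np.sum((y - alpha - beta * x) ** 2)
        lhs = np.array([[len(x), x.sum()], [x.sum(), np.sum(x ** 2)]]) / w
        rhs = np.array([y.sum(), np.sum(x * y) + beta * var_x * rss / w]) / w
        alpha_new, beta_new = np.linalg.solve(lhs, rhs)
        converged = abs(beta_new - beta) < tol and abs(alpha_new - alpha) < tol
        alpha, beta = alpha_new, beta_new
        if converged:
            break
    return alpha, beta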
A second bivariate regression method used in this study is an implementation of the regression method described by York et al. (2004, Section III). The minimisation criterion is described in
York (1968). In that criterion, $w(x_i) = 1/\sigma_x^2$ and $w(y_i) = 1/\sigma_y^2$ are the weight coefficients for x and y,
respectively, and r is the correlation coefficient between x and y. $x_{i,\mathrm{adj}}$ and $y_{i,\mathrm{adj}}$ are adjusted values of $x_i$ and $y_i$ which fulfil the requirement
$$y_{i,\mathrm{adj}} = \hat{\alpha}_{\mathrm{york}} + \hat{\beta}_{\mathrm{york}}\,x_{i,\mathrm{adj}}. \tag{A14}$$
The solution for $\hat{\alpha}_{\mathrm{york}}$ and $\hat{\beta}_{\mathrm{york}}$ is found iteratively following the 10-step algorithm
presented in York et al. (2004, Section III).
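For orientation, a condensed Python sketch of that iterative algorithm for the special case of uncorrelated errors (r_i = 0) is given below; it is our own illustration and not the implementation used in this study.

import numpy as np

def york_fit(x, y, sigma_x, sigma_y, n_iter=100, tol=1e-12):
    # Iterative York et al. (2004) fit with per-point uncertainties and r_i = 0.
    x, y = np.asarray(x, float), np.asarray(y, float)
    w_x = 1.0 / np.asarray(sigma_x, float) ** 2          # weights for x
    w_y = 1.0 / np.asarray(sigma_y, float) ** 2          # weights for y
    b, a = np.polyfit(x, y, 1)                           # initial slope from OLS
    for _ in range(n_iter):
        W = w_x * w_y / (w_x + b ** 2 * w_y)             # combined weights (r_i = 0)
        x_bar = np.sum(W * x) / np.sum(W)
        y_bar = np.sum(W * y) / np.sum(W)
        U, V = x - x_bar, y - y_bar
        beta_i = W * (U / w_y + b * V / w_x)
        b_new = np.sum(W * beta_i * V) / np.sum(W * beta_i * U)
        if abs(b_new - b) < tol:
            b = b_new
            break
        b = b_new
    a = y_bar - b * x_bar
    return a, b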
A5 The principal component analysis-based regression (PCA)
PCA can be applied for bivariate and multivariate cases.
For one independent and one dependent variable, the regression line is
$y = \hat{\alpha}_{\mathrm{PCA}} + \hat{\beta}_{\mathrm{PCA}}\,x$, where the error between the observed value $y_i$ and the estimated value $\hat{\alpha}_{\mathrm{PCA}} + \hat{\beta}_{\mathrm{PCA}}\,x_i$
is minimised. For N data points, the coefficients are computed using the method of least squares, which minimises
$$C_{\mathrm{PCA}} = \sum_{i=1}^{N}\left(y_i - \hat{\alpha}_{\mathrm{PCA}} - \hat{\beta}_{\mathrm{PCA}}\,x_i\right)^2. \tag{A15}$$
This is a standard technique that gives the regression coefficients $\hat{\alpha}_{\mathrm{PCA}}$ and $\hat{\beta}_{\mathrm{PCA}}$:
$$\begin{pmatrix} \hat{\alpha}_{\mathrm{PCA}} \\ \hat{\beta}_{\mathrm{PCA}} \end{pmatrix} = \frac{1}{S_x - \bar{x}^2}\begin{pmatrix} S_x & -\bar{x} \\ -\bar{x} & 1 \end{pmatrix}\begin{pmatrix} \bar{y} \\ S_{xy} \end{pmatrix}. \tag{A16}$$
A6 Bayesian error-in-variables regression (Bayes EIV)
Bayes EIV regression estimate applies Bayesian inference using the popular Stan software tool (http://mc-stan.org/users/documentation/, last access: 27 July 2018), which allows the use of prior
information of the model parameters. We assumed
$$\begin{aligned} \beta_{\mathrm{BEIV}} &\sim \mathrm{student\_t}(5, 0.0, 100.0)\\ \alpha_{\mathrm{BEIV}} &\sim \mathrm{student\_t}(5, 0.0, 100.0)\\ x_{\mathrm{true}} &\sim \mathrm{lognormal}(\mu_x, \sigma_x)\\ y_{\mathrm{true}} &= 10^{\,\alpha_{\mathrm{BEIV}} + \beta_{\mathrm{BEIV}}\,\log_{10}(x_{\mathrm{true}})}, \end{aligned}$$
where μ and σ are the respective mean and standard deviation of x[true] and y[true] and are treated as unknowns. The observations x[obs] and y[obs] of x[true] and y[true], respectively, were defined
as follows:
$$\begin{aligned} x_{\mathrm{obs}} &\sim \mathrm{normal}\!\left(x_{\mathrm{true}},\ \sigma_{\mathrm{rel},x}\,x_{\mathrm{true}} + \sigma_{\mathrm{abs},x}\right)\\ y_{\mathrm{obs}} &\sim \mathrm{normal}\!\left(y_{\mathrm{true}},\ \sigma_{\mathrm{rel},y}\,y_{\mathrm{true}} + \sigma_{\mathrm{abs},y}\right), \end{aligned}$$
where σ[rel] and σ[abs] are the relative and absolute components of standard uncertainties, respectively.
The Stan tool solved the regression problems using 1000 iterations, and it provided posterior distributions for the model parameters β[BEIV] and α[BEIV]. For the definitions of the Student t,
log-normal and normal probability distributions, see the Stan documentation. In our regression analysis, we used the maximum a posteriori estimates for β[BEIV] and α[BEIV] provided by the software tool.
SM prepared the paper with contributions from all co-authors. SM, MRAP and SI performed the formal analysis. MRAP simulated the data. SM, AA and KEJL formulated the original idea. SM, MRAP and AL
developed and implemented the methodology. SM, MRAP, TN and AL were responsible for the investigation and validation of the data and methods.
The authors declare that they have no conflict of interest.
This research has been supported by the Nessling Foundation and the Academy of Finland (grant no. 307331).
This paper was edited by Fangqun Yu and reviewed by three anonymous referees.
Almeida, J., Schobesberger, S., Kürten, A., Ortega, I. K., Kupiainen-Määttä, O., Praplan, A. P., Adamov, A., Amorim, A., Bianchi, F., Breitenlechner, M., David, A., Dommen, J., Donahue, N. M.,
Downard, A., Dunne, E., Duplissy, J., Ehrhart, S., Flagan, R. C., Franchin, A., Guida, R., Hakala, J., Hansel, A., Heinritzi, M., Henschel, H., Jokinen, T., Junninen, H., Kajos, M., Kangasluoma, J.,
Keskinen, H., Kupc, A., Kurtén, T., Kvashin, A. N., Laaksonen, A., Lehtipalo, K., Leiminger, M., Leppä, J., Loukonen, V., Makhmutov, V., Mathot, S., McGrath, M. J., Nieminen, T., Olenius, T., Onnela,
A., Petäjä, T., Riccobono, F., Riipinen, I., Rissanen, M., Rondo, L., Ruuskanen, T., Santos, F. D., Sarnela, N., Schallhart, S., Schnitzhofer, R., Seinfeld, J. H., Simon, M., Sipilä, M., Stozhkov,
Y., Stratmann, F., Tomé, A., Tröstl, J., Tsagkogeorgas, G., Vaattovaara, P., Viisanen, Y., Virtanen, A., Vrtala, A., Wagner, P. E., Weingartner, E., Wex, H., Williamson, C., Wimmer, D., Ye, P.,
Yli-Juuti, T., Carslaw, K. S., Kulmala, M., Curtius, J., Baltensperger, U., Worsnop, D. R., Vehkamäki, H., and Kirkby, J.: Molecular understanding of sulphuric acid–amine particle nucleation in the
atmosphere, Nature, 502, 359–363, https://doi.org/10.1038/nature12663, 2013.
Boggs, P. T., Byrd, R. H., and Schnabel, R. B.: A Stable and Efficient Algorithm for Nonlinear Orthogonal Distance Regression, SIAM J. Sci. Stat. Comput., 8, 1052–1078, https://doi.org/10.1137/
0908085, 1987.
Boggs, P. T., Donaldson, J. R., Byrd, R. H., and Schnabel, R. B.: Algorithm 676 ODRPACK: software for weighted orthogonal distance regression, ACM Trans. Math. Softw., 15, 348–364, https://doi.org/
10.1145/76909.76913, 1989.
Boy, M., Karl, T., Turnipseed, A., Mauldin, R. L., Kosciuch, E., Greenberg, J., Rathbone, J., Smith, J., Held, A., Barsanti, K., Wehner, B., Bauer, S., Wiedensohler, A., Bonn, B., Kulmala, M., and
Guenther, A.: New particle formation in the Front Range of the Colorado Rocky Mountains, Atmos. Chem. Phys., 8, 1577–1590, https://doi.org/10.5194/acp-8-1577-2008, 2008.
Cantrell, C. A.: Technical Note: Review of methods for linear least-squares fitting of data and application to atmospheric chemistry problems, Atmos. Chem. Phys., 8, 5477–5487, https://doi.org/
10.5194/acp-8-5477-2008, 2008.
Carroll, R. J. and Ruppert, D.: The Use and Misuse of Orthogonal Regression in Linear Errors-in-Variables Models, Am. Stat., 50, 1–6, https://doi.org/10.1080/00031305.1996.10473533, 1996.
Carroll, R. J., Ruppert, D., Stefanski, L. A., and Crainiceanu, C. M.: Measurement error in nonlinear models?: a modern perspective, 2nd Edn., Chapman & Hall/CRC, 41–64, 2006.
Cheng, C.-L. and Riu, J.: On Estimating Linear Relationships When Both Variables Are Subject to Heteroscedastic Measurement Errors, Technometrics, 48, 511–519, https://doi.org/10.1198/
004017006000000237, 2006.
Deming, W. E.: Statistical adjustment of data, Wiley, New York, 128–212, 1943.
Dunne, E. M., Gordon, H., Kürten, A., Almeida, J., Duplissy, J., Williamson, C., Ortega, I. K., Pringle, K. J., Adamov, A., Baltensperger, U., Barmet, P., Benduhn, F., Bianchi, F., Breitenlechner,
M., Clarke, A., Curtius, J., Dommen, J., Donahue, N. M., Ehrhart, S., Flagan, R. C., Franchin, A., Guida, R., Hakala, J., Hansel, A., Heinritzi, M., Jokinen, T., Kangasluoma, J., Kirkby, J., Kulmala,
M., Kupc, A., Lawler, M. J., Lehtipalo, K., Makhmutov, V., Mann, G., Mathot, S., Merikanto, J., Miettinen, P., Nenes, A., Onnela, A., Rap, A., Reddington, C. L. S., Riccobono, F., Richards, N. A. D.,
Rissanen, M. P., Rondo, L., Sarnela, N., Schobesberger, S., Sengupta, K., Simon, M., Sipilä, M., Smith, J. N., Stozkhov, Y., Tomé, A., Tröstl, J., Wagner, P. E., Wimmer, D., Winkler, P. M., Worsnop,
D. R., and Carslaw, K. S.: Global atmospheric particle formation from CERN CLOUD measurements, Science, 354, 1119–1124, https://doi.org/10.1126/science.aaf2649, 2016.
Francq, B. G. and Berger, M.: BivRegBLS: Tolerance Intervals and Errors-in-Variables Regressions in Method Comparison Studies, R package version 1.0.0, available at: https://rdrr.io/cran/BivRegBLS/
(last access: 2 October 2019), 2017.
Francq, B. G. and Govaerts, B. B.: Measurement methods comparison with errors-in-variables regressions, From horizontal to vertical OLS regression, review and new perspectives, Chemom. Intell. Lab.
Syst., 134, 123–139, https://doi.org/10.1016/j.chemolab.2014.03.006, 2014.
Hamed, A., Korhonen, H., Sihto, S.-L., Joutsensaari, J., Järvinen, H., Petäjä, T., Arnold, F., Nieminen, T., Kulmala, M., Smith, J. N., Lehtinen, K. E. J., and Laaksonen, A.: The role of relative
humidity in continental new particle formation, J. Geophys. Res., 116, D03202, https://doi.org/10.1029/2010JD014186, 2011.
Hotelling, H.: The Relations of the Newer Multivariate Statistical Methods to Factor Analysis, Br. J. Stat. Psychol., 10, 69–79, https://doi.org/10.1111/j.2044-8317.1957.tb00179.x, 1957.
Jones, E., Oliphant, T., and Peterson, P.: SciPy: Open Source Scientific Tools for Python, available at: http://www.scipy.org/ (last access: 16 August 2019), 2001.
Kaipio, J. and Somersalo, E.: Statistical and Computational Inverse Problems, Springer-Verlag, New York, 145–188, 2005.
Kirkby, J., Curtius, J., Almeida, J., Dunne, E., Duplissy, J., Ehrhart, S., Franchin, A., Gagné, S., Ickes, L., Kürten, A., Kupc, A., Metzger, A., Riccobono, F., Rondo, L., Schobesberger, S.,
Tsagkogeorgas, G., Wimmer, D., Amorim, A., Bianchi, F., Breitenlechner, M., David, A., Dommen, J., Downard, A., Ehn, M., Flagan, R. C., Haider, S., Hansel, A., Hauser, D., Jud, W., Junninen, H.,
Kreissl, F., Kvashin, A., Laaksonen, A., Lehtipalo, K., Lima, J., Lovejoy, E. R., Makhmutov, V., Mathot, S., Mikkilä, J., Minginette, P., Mogo, S., Nieminen, T., Onnela, A., Pereira, P., Petäjä, T.,
Schnitzhofer, R., Seinfeld, J. H., Sipilä, M., Stozhkov, Y., Stratmann, F., Tomé, A., Vanhanen, J., Viisanen, Y., Vrtala, A., Wagner, P. E., Walther, H., Weingartner, E., Wex, H., Winkler, P. M.,
Carslaw, K. S., Worsnop, D. R., Baltensperger, U., and Kulmala, M.: Role of sulphuric acid, ammonia and galactic cosmic rays in atmospheric aerosol nucleation, Nature, 476, 429–433, https://doi.org/
10.1038/nature10343, 2011.
Kirkby, J., Duplissy, J., Sengupta, K., Frege, C., Gordon, H., Williamson, C., Heinritzi, M., Simon, M., Yan, C., Almeida, J., Tröstl, J., Nieminen, T., Ortega, I. K., Wagner, R., Adamov, A., Amorim,
A., Bernhammer, A.-K., Bianchi, F., Breitenlechner, M., Brilke, S., Chen, X., Craven, J., Dias, A., Ehrhart, S., Flagan, R. C., Franchin, A., Fuchs, C., Guida, R., Hakala, J., Hoyle, C. R., Jokinen,
T., Junninen, H., Kangasluoma, J., Kim, J., Krapf, M., Kürten, A., Laaksonen, A., Lehtipalo, K., Makhmutov, V., Mathot, S., Molteni, U., Onnela, A., Peräkylä, O., Piel, F., Petäjä, T., Praplan, A.
P., Pringle, K., Rap, A., Richards, N. A. D., Riipinen, I., Rissanen, M. P., Rondo, L., Sarnela, N., Schobesberger, S., Scott, C. E., Seinfeld, J. H., Sipilä, M., Steiner, G., Stozhkov, Y.,
Stratmann, F., Tomé, A., Virtanen, A., Vogel, A. L., Wagner, A. C., Wagner, P. E., Weingartner, E., Wimmer, D., Winkler, P. M., Ye, P., Zhang, X., Hansel, A., Dommen, J., Donahue, N. M., Worsnop, D.
R., Baltensperger, U., Kulmala, M., Carslaw, K. S., and Curtius, J.: Ion-induced nucleation of pure biogenic particles, Nature, 533, 521–526, https://doi.org/10.1038/nature17953, 2016.
Kuang, C., McMurry, P. H., McCormick, A. V., and Eisele, F. L.: Dependence of nucleation rates on sulfuric acid vapor concentration in diverse atmospheric locations, J. Geophys. Res., 113, D10209,
https://doi.org/10.1029/2007JD009253, 2008.
Kulmala, M., Lehtinen, K. E. J., and Laaksonen, A.: Cluster activation theory as an explanation of the linear dependence between formation rate of 3nm particles and sulphuric acid concentration,
Atmos. Chem. Phys., 6, 787–793, https://doi.org/10.5194/acp-6-787-2006, 2006.
Kürten, A., Bianchi, F., Almeida, J., Kupiainen-Määttä, O., Dunne, E. M., Duplissy, J., Williamson, C., Barmet, P., Breitenlechner, M., Dommen, J., Donahue, N. M., Flagan, R. C., Franchin, A.,
Gordon, H., Hakala, J., Hansel, A., Heinritzi, M., Ickes, L., Jokinen, T., Kangasluoma, J., Kim, J., Kirkby, J., Kupc, A., Lehtipalo, K., Leiminger, M., Makhmutov, V., Onnela, A., Ortega, I. K.,
Petäjä, T., Praplan, A. P., Riccobono, F., Rissanen, M. P., Rondo, L., Schnitzhofer, R., Schobesberger, S., Smith, J. N., Steiner, G., Stozhkov, Y., Tomé, A., Tröstl, J., Tsagkogeorgas, G., Wagner,
P. E., Wimmer, D., Ye, P., Baltensperger, U., Carslaw, K., Kulmala, M., and Curtius, J.: Experimental particle formation rates spanning tropospheric sulfuric acid and ammonia abundances, ion
production rates, and temperatures, J. Geophys. Res., 121, 12377–12400, https://doi.org/10.1002/2015JD023908, 2016.
Mandel, J.: Fitting Straight Lines When Both Variables are Subject to Error, J. Qual. Technol., 16, 1–14, https://doi.org/10.1080/00224065.1984.11978881, 1984.
Metzger, A., Verheggen, B., Dommen, J., Duplissy, J., Prevot, A. S. H., Weingartner, E., Riipinen, I., Kulmala, M., Spracklen, D. V, Carslaw, K. S., and Baltensperger, U.: Evidence for the role of
organics in aerosol particle formation under atmospheric conditions., P. Natl. Acad. Sci. USA, 107, 6646–51, https://doi.org/10.1073/pnas.0911330107, 2010.
Mikkonen, S., Romakkaniemi, S., Smith, J. N., Korhonen, H., Petäjä, T., Plass-Duelmer, C., Boy, M., McMurry, P. H., Lehtinen, K. E. J., Joutsensaari, J., Hamed, A., Mauldin III, R. L., Birmili, W.,
Spindler, G., Arnold, F., Kulmala, M., and Laaksonen, A.: A statistical proxy for sulphuric acid concentration, Atmos. Chem. Phys., 11, 11319–11334, https://doi.org/10.5194/acp-11-11319-2011, 2011.
Moré, J. J.: The Levenberg-Marquardt algorithm: Implementation and theory, Springer, Berlin, Heidelberg, 105–116, 1978.
Paasonen, P., Nieminen, T., Asmi, E., Manninen, H. E., Petäjä, T., Plass-Dülmer, C., Flentje, H., Birmili, W., Wiedensohler, A., Hõrrak, U., Metzger, A., Hamed, A., Laaksonen, A., Facchini, M. C.,
Kerminen, V. M., and Kulmala, M.: On the roles of sulphuric acid and low-volatility organic vapours in the initial steps of atmospheric new particle formation, Atmos. Chem. Phys., 10, 11223–11242,
https://doi.org/10.5194/acp-10-11223-2010, 2010.
Pitkänen, M. R. A., Mikkonen, S., Lehtinen, K. E. J., Lipponen, A., and Arola, A.: Artificial bias typically neglected in comparisons of uncertain atmospheric data, Geophys. Res. Lett., 43,
10003–10011, https://doi.org/10.1002/2016GL070852, 2016.
Pitkänen, M.: Regression estimator calculator, GitHub repository, https://gist.github.com/mikkopitkanen/da8c949571225e9c7093665c9803726e, last access: 3 October 2019.
R Core Team: R: A language and environment for statistical computing, available at: http://www.r-project.org (16 August 2019), 2018.
Riccobono, F., Schobesberger, S., Scott, C. E., Dommen, J., Ortega, I. K., Rondo, L., Almeida, J., Amorim, A., Bianchi, F., Breitenlechner, M., David, A., Downard, A., Dunne, E. M., Duplissy, J.,
Ehrhart, S., Flagan, R. C., Franchin, A., Hansel, A., Junninen, H., Kajos, M., Keskinen, H., Kupc, A., Kürten, A., Kvashin, A. N., Laaksonen, A., Lehtipalo, K., Makhmutov, V., Mathot, S., Nieminen,
T., Onnela, A., Petäjä, T., Praplan, A. P., Santos, F. D., Schallhart, S., Seinfeld, J. H., Sipilä, M., Spracklen, D. V, Stozhkov, Y., Stratmann, F., Tomé, A., Tsagkogeorgas, G., Vaattovaara, P.,
Viisanen, Y., Vrtala, A., Wagner, P. E., Weingartner, E., Wex, H., Wimmer, D., Carslaw, K. S., Curtius, J., Donahue, N. M., Kirkby, J., Kulmala, M., Worsnop, D. R., and Baltensperger, U.: Oxidation
products of biogenic emissions contribute to nucleation of atmospheric particles, Science, 344, 717–721, https://doi.org/10.1126/science.1243527, 2014.
Riipinen, I., Sihto, S.-L., Kulmala, M., Arnold, F., Dal Maso, M., Birmili, W., Saarnio, K., Teinilä, K., Kerminen, V.-M., Laaksonen, A., and Lehtinen, K. E. J.: Connections between atmospheric
sulphuric acid and new particle formation during QUEST III–IV campaigns in Heidelberg and Hyytiälä, Atmos. Chem. Phys., 7, 1899–1914, https://doi.org/10.5194/acp-7-1899-2007, 2007.
Schennach, S. M.: Estimation of Nonlinear Models with Measurement Error, Econometrica, 72, 33–75, https://doi.org/10.1111/j.1468-0262.2004.00477.x, 2004.
Seinfeld, J. H. and Pandis, S. N.: Atmospheric chemistry and physics: From air pollution to climate change, available at: https://www.wiley.com/en-fi/
Atmospheric+Chemistry+and+Physics:+From+Air+Pollution+to+Climate+Change,+3rd+Edition-p-9781118947401 (last access: 26 September 2018), 2016.
Sihto, S.-L., Kulmala, M., Kerminen, V.-M., Dal Maso, M., Petäjä, T., Riipinen, I., Korhonen, H., Arnold, F., Janson, R., Boy, M., Laaksonen, A., and Lehtinen, K. E. J.: Atmospheric sulphuric acid
and aerosol formation: implications from atmospheric measurements for nucleation and early growth mechanisms, Atmos. Chem. Phys., 6, 4079–4091, https://doi.org/10.5194/acp-6-4079-2006, 2006.
Spiess, A.: Orthogonal Nonlinear Least-Squares Regression in R, available at: https://cran.hafro.is/web/packages/onls/vignettes/onls.pdf (last access: 17 July 2018), 2015.
Spracklen, D. V., Carslaw, K. S., Kulmala, M., Kerminen, V.-M., Mann, G. W., and Sihto, S.-L.: The contribution of boundary layer nucleation events to total particle concentrations on regional and
global scales, Atmos. Chem. Phys., 6, 5631–5648, https://doi.org/10.5194/acp-6-5631-2006, 2006.
Stan Development Team: PyStan: the Python interface to Stan, Version 2.17.1.0., available at: http://mc-stan.org, last access: 27 July 2018.
Therneau, T.: deming: Deming, Theil-Sen, Passing-Bablock and Total Least Squares Regression, R package version 1.4., available at: https://cran.r-project.org/package=deming (last access:
16 August 2019), 2018.
Trefall, H. and Nordö, J.: On Systematic Errors in the Least Squares Regression Analysis, with Application to the Atmospheric Effects on the Cosmic Radiation, Tellus, 11, 467–477, https://doi.org/
10.3402/tellusa.v11i4.9324, 1959.
Tröstl, J., Chuang, W. K., Gordon, H., Heinritzi, M., Yan, C., Molteni, U., Ahlm, L., Frege, C., Bianchi, F., Wagner, R., Simon, M., Lehtipalo, K., Williamson, C., Craven, J. S., Duplissy, J.,
Adamov, A., Almeida, J., Bernhammer, A.-K., Breitenlechner, M., Brilke, S., Dias, A., Ehrhart, S., Flagan, R. C., Franchin, A., Fuchs, C., Guida, R., Gysel, M., Hansel, A., Hoyle, C. R., Jokinen, T.,
Junninen, H., Kangasluoma, J., Keskinen, H., Kim, J., Krapf, M., Kürten, A., Laaksonen, A., Lawler, M., Leiminger, M., Mathot, S., Möhler, O., Nieminen, T., Onnela, A., Petäjä, T., Piel, F. M.,
Miettinen, P., Rissanen, M. P., Rondo, L., Sarnela, N., Schobesberger, S., Sengupta, K., Sipilä, M., Smith, J. N., Steiner, G., Tomè, A., Virtanen, A., Wagner, A. C., Weingartner, E., Wimmer, D.,
Winkler, P. M., Ye, P., Carslaw, K. S., Curtius, J., Dommen, J., Kirkby, J., Kulmala, M., Riipinen, I., Worsnop, D. R., Donahue, N. M., and Baltensperger, U.: The role of low-volatility organic
compounds in initial particle growth in the atmosphere, Nature, 533, 527–531, https://doi.org/10.1038/nature18271, 2016.
Vehkamäki, H.: Classical nucleation theory in multicomponent systems, Springer-Verlag, Berlin/Heidelberg, 119–159, 2006.
Weber, R. J., Marti, J. J., McMurry, P. H., Eisele, F. L., Tanner, D. J., and Jefferson, A.: Measurements of new particle formation and ultrafine particle growth rates at a clean continental site, J.
Geophys. Res.-Atmos., 102, 4375–4385, https://doi.org/10.1029/96JD03656, 1997.
Wu, C. and Yu, J. Z.: Evaluation of linear regression techniques for atmospheric applications: the importance of appropriate weighting, Atmos. Meas. Tech., 11, 1233–1250, https://doi.org/10.5194/
amt-11-1233-2018, 2018.
York, D.: Least-squares fitting of a straight line, Can. J. Phys., 44, 1079–1086, https://doi.org/10.1139/p66-090, 1966.
York, D., Evensen, N. M., Martínez, M. L., and De Basabe Delgado, J.: Unified equations for the slope, intercept, and standard errors of the best straight line, Am. J. Phys., 72, 367–375, https://
doi.org/10.1119/1.1632486, 2004.
RBE2 equations to establish the force distribution i.e. mpc forces? - Finite Element Analysis (FEA) engineering
To all,
Does anyone know a good source/ref for the equations used by (nastran) RBE2 element to get the load distribution assuming a force being applied at the independent node? I remember seeing something in
the old msc.nastran manual but cannot find it anymore
If one was to set up a simple RBE2 as follows:
- Assume it only works in one direction, say vertical (Y)
- A load W is applied at the independent node
Would it be possible to write the equations in Mathcad to solve them?
I'd've thought that the rigid element proportions the reactions between the dependent nodes depending on the local stiffness at those nodes. (but that could be for RBE3).
If RBE2, I think this is a rigid element, so reactions can be calculated from the applied loads and the geometry of the dependent nodes, treating the dependent nodes as a bolt group.
1) calculate the CG of the dependent nodes.
2) determine the free body reactions at the CG.
3) proportion the direct loads equally between the dependent nodes.
4) react the CG moments, assuming plane sections remain plane. The issue is going to be if there are two loadpaths to react a moment; eg Mx can be reacted by Fy and Fz depending on the geometry.
There probably is some virtual work principle that'll show the least work between the two loadpaths.
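To make that concrete, here is a minimal Python sketch of the bolt-group style distribution for the simplified one-direction case described above: equal direct shares plus a plane-sections-remain-plane moment correction. It is only an illustration of the hand-calc approach, not the exact MPC equations Nastran uses internally.

import numpy as np

def rbe2_vertical_forces(dep_xyz, indep_xyz, W):
    # Distribute a vertical load W applied at the independent node to the dependent
    # nodes: W/n direct shares plus a linear correction reacting the moment of W
    # about the dependent-node centroid (planar x-y simplification).
    dep_xyz = np.asarray(dep_xyz, float)
    n = len(dep_xyz)
    cg = dep_xyz.mean(axis=0)
    M = W * (indep_xyz[0] - cg[0])          # moment of the vertical load about the CG
    dx = dep_xyz[:, 0] - cg[0]              # lever arms of the dependent nodes
    Fy = np.full(n, W / n)                  # equal direct-load shares
    if np.any(dx != 0):
        Fy += M * dx / np.sum(dx ** 2)      # plane-sections-remain-plane correction
    return Fy                               # vertical force carried by each dependent node

Because sum(dx) = 0 about the centroid, the returned forces satisfy both force and moment equilibrium, i.e. the familiar P/n + Mc/I type bolt-group result.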
I think there is a relation between translation (T) and rotation (R) (assumed very small): {Tind} = {Tdep} + {R}{v}, where v is the vector defined by the positions of the dependent and independent nodes.
If you consider an RBE3 with 3 dependent nodes working purely in one direction, then, if I got this right, you only have 2 equations but 3 unknowns (the forces at each dependent node that one is seeking).
yes, if the RBE is statically determinate then easy to solve.
if indeterminate, then "plane sections remain plane" allows us to solve ... the RBE is rigid so displacements of all points can be calculated from the induced deflection > strain > load.
OK, so how does one establish the extra equation needed relating force and displacement, I guess?
Small displacements apply, and rigid rotation.
rigid rotation ... if the element rotates 1deg about X axis (a unit Mx) what displacements occur at the dependent nodes, hence the load induced, hence the moment reacted. Then scale for required
moment. repeat for other axes. add direct load.
Rate Of Acceleration Calculator
In the realm of physics and engineering, understanding the rate of acceleration is crucial for various applications, from calculating velocity to analyzing the motion of objects. Fortunately, with
the advent of technology, we can easily determine this parameter using specialized tools like a rate of acceleration calculator. In this article, we’ll delve into how to effectively utilize such a
calculator, the underlying formula it employs, provide an example solve, address common questions, and ultimately highlight its significance.
How to Use
Utilizing a rate of acceleration calculator is straightforward. Simply input the initial velocity (vi), final velocity (vf), and time (t) into their respective fields, and the calculator will
compute the rate of acceleration (a) for you.
The formula used by the rate of acceleration calculator is derived from the fundamental equation of motion:

a = (vf − vi) / t

where:
• a = acceleration (m/s²)
• vi = initial velocity (m/s)
• vf = final velocity (m/s)
• t = time (s)
Example Solve
Let’s illustrate the usage of the rate of acceleration calculator with an example:
• Initial velocity (vi) = 10 m/s
• Final velocity (vf) = 30 m/s
• Time (t) = 5 s
Using the formula:

a = (vf − vi) / t = (30 − 10) / 5 = 4 m/s²

So the rate of acceleration in this example is 4 m/s².
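A hedged one-function Python sketch of the same calculation (illustrative only; the site's own calculator implementation is not shown here):

def rate_of_acceleration(vi, vf, t):
    # Acceleration in m/s^2 from initial velocity vi (m/s), final velocity vf (m/s) and time t (s).
    if t == 0:
        raise ValueError("time must be non-zero")
    return (vf - vi) / t

print(rate_of_acceleration(10, 30, 5))  # prints 4.0, matching the example above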
Q: Can this calculator handle negative velocities?
A: Yes, the calculator can process negative velocities. Simply input the values with appropriate signs (+ or -).
Q: What are the units for acceleration in the result?
A: The units for acceleration are typically meters per second squared (m/s²).
Q: Is the rate of acceleration calculator suitable for all types of motion?
A: Yes, this calculator is applicable to various types of motion, including linear, rotational, and angular motion.
Q: Can I use this calculator for gravitational acceleration?
A: While this calculator calculates the rate of acceleration based on given velocities and time, gravitational acceleration can be calculated separately using specific formulas.
In conclusion, the rate of acceleration calculator serves as a valuable tool in physics and engineering, enabling quick and accurate computations of acceleration based on velocity and time. By
understanding its usage, formula, and capabilities, individuals can efficiently analyze motion-related problems and gain deeper insights into physical phenomena.
DIFM: Dynamic ICAR Spatiotemporal Factor Model
Model Description
We assume that the region of study is partitioned into \(r\) subregions. In our motivating example, the subregions are the states in the contiguous United States. The variable of interest in each
subregion is observed at \(n\) time points. Let \(\mathbf{y}_t\) be the \(r\)-dimensional vector of observations at time \(t\) \((t = 1, 2, \dots, n.)\)
We assume that the spatiotemporal behavior of the \(r\) subregions can be represented by \(k\) factors, where usually \(k\) is much smaller than \(r\). Specifically, we assume the model
\[$$\label{MainEQ1} \mathbf{y}_t = \mathbf{B} \mathbf{x}_t + \mathbf{v}_t, \tag{1}$$\]
where \(\mathbf{x}_t\) is the \(k\)-dimensional vector of factors at time \(t\), \(\mathbf{B}\) is an \(r \times k\) matrix of factor loadings, and \(\mathbf{v}_t\) is the \(r\)-dimensional vector of
errors at time \(t\). We assume that the observational error vector \(\mathbf{v}_t\), \(t = 1, 2, \dots, n\) is independent over time and follows a Gaussian distribution \(\mathbf{v}_t \sim \text{N}
(0, \mathbf{V})\), where \(\mathbf{V} = \mbox{diag}(\sigma^2_1, \dots, \sigma^2_r)\). Each of the variances \(\sigma^2_1, \dots, \sigma^2_r\) is specific to one of the \(r\) subregions, and thus they
are known as idiosyncratic variances.
We assume that the vector of factors \(\mathbf{x}_t\) follows a dynamic linear model (West and Harrison, 1997; Prado et al., 2021). Specifically, we assume the general model \[\begin{eqnarray}\label
{MainEQ2} \mathbf{x}_t &=& \mathbf{F} \mathbf{\theta}_t, \tag{2} \\ \mathbf{\theta}_t &=& \mathbf{G} \mathbf{\theta}_{t-1} + \mathbf{\omega}_t, \mathbf{\omega}_t \sim \text{N}(0, \mathbf{W}), \tag{3}
\label{MainEQ3} \end{eqnarray}\] where \(\mathbf{\theta}_t\) is a latent process that allows great flexibility in the description of the temporal evolution of \(\mathbf{x}_t\). Specifically,
\(\mathbf{\theta}_t\) may encode different types of temporal trends as well as seasonality. For example, in our application we assume a second-order polynomial DLM and specify \(\theta_t\) as a vector
of dimension \(2k\) that contains the level and the gradient of \(\mathbf{x}_t\) at time \(t\). In addition, the evolution matrix \(\mathbf{G}\) describes the temporal evolution of the latent process
\(\mathbf{\theta}_t\). Further, \(\mathbf{\omega}_t\) is a \(2k\)-dimensional innovation vector with a dense covariance matrix \(\mathbf{W}\). Finally, the matrix \(\mathbf{F}\) relates the vector of
common factors \(\mathbf{x}_t\) to the appropriate elements of the latent process \(\mathbf{\theta}_t\).
In the case of the second-order polynomial DLM that we consider, \(\mathbf{\theta}_t = (\theta_{t,1}, \theta_{t,2}, \dots, \theta_{t,2k})^T\) is a vector of dimension \(2k\) where
\((\theta_{t,1}, \theta_{t,3}, \dots, \theta_{t,2k-1})^T\) and \((\theta_{t,2}, \theta_{t,4}, \dots, \theta_{t,2k})^T\) are respectively the level and the gradient of the vector of common factors
\(\mathbf{x}_t\). Thus, the matrix \(\mathbf{F}\) that relates \(\mathbf{x}_t\) to \(\mathbf{\theta}_t\) is a \(k \times 2k\) matrix of the form \[\begin{equation*} \mathbf{F} = \left[ \begin{array}
{cccccc} 1 & 0 & 0 & \dots & 0 & 0\\ 0 & 0 & 1 & \dots & 0 & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & \dots & 1 & 0 \end{array} \right]. \end{equation*}\] The evolution
matrix \(\mathbf{G}\) has dimension \(2k \times 2k\) and satisfies \(\theta_{t,2j-1} = \theta_{t-1,2j-1} + \theta_{t-1,2j} + \omega_{t, 2j - 1}\) and \(\theta_{t,2j} = \theta_{t-1,2j} + \omega_{t,2j}
\), \(j = 1, \dots, k\). Therefore, \(\mathbf{G} = \mathrm{blockdiag}(\mathbf{G}_0, \dots, \mathbf{G}_0)\), where \[\begin{equation*} \mathbf{G}_0 = \left[ \begin{array}{cc} 1 & 1\\ 0 & 1 \end{array} \right]. \end{equation*}\]
The specification of the factor loadings matrix \(\mathbf{B}\) is crucial in our dynamic ICAR factor model. An important point to consider is the need for constraints on the matrix \(\mathbf{B}\) to
ensure identifiability of the model. Specifically, for any invertible \(k \times k\) matrix \(\mathbf{A}\), substituting \(\mathbf{B}\) and \(\mathbf{x}_t\) in Equation (1) by, respectively,
\(\mathbf{B}^* = \mathbf{B} \mathbf{A}\) and \(\mathbf{x}_t^* = \mathbf{A}^{-1} \mathbf{x}_t\) would lead to the same model. To ensure identifiability, we impose a hierarchical structural constraint
that assumes that \(\mathbf{B}\) is a full-rank block lower triangular matrix with diagonal elements equal to 1 (Aguilar and West, 2000). Specifically, we assume \(\mathbf{B}\) has the form \[\begin
{equation*} \mathbf{B} = \begin{bmatrix} 1 & 0 & 0 & \dots & 0\\ b_{2,1} & 1 & 0 & \dots & 0\\ b_{3,1} & b_{3,2} & 1 & \dots & 0\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ b_{k,1} & b_{k,2} & b_
{k,3} & \dots & 1\\ b_{k+1,1} & b_{k+1,2} & b_{k+1,3} & \dots & b_{k+1,k}\\ \vdots & \vdots & \vdots & \ddots & \vdots\\ b_{r,1} & b_{r,2} & b_{r,3} & \dots & b_{r,k} \end{bmatrix}. \end{equation*}\]
To account for the spatial dependence among the factor loadings for neighboring subregions, we assume that each column of the matrix of factor loadings \(\mathbf{B}\) follows an intrinsic conditional
autoregressive model (Besag et al., 1991; Keefe et al., 2018). Specifically, we assume for the \(j\)th column \(\mathbf{B}_j, j = 1, \dots, k,\) the density \[$$\label{MainEQ4} p(\mathbf{B}_j)
\propto \exp\left(-\frac{1}{2\tau_j} \mathbf{B}_j^T \mathbf{H} \mathbf{B}_j\right),$$\] where \(\mathbf{H}\) is a precision matrix that accounts for the spatial dependence among neighboring subregions
and \(\tau_j\) controls the strength of spatial correlation among factor loadings. Specifically, if subregions \(i\) and \(j\) are neighbors, then the corresponding element of the matrix \(\mathbf{H}
\) is \(h_{ij} = -g_{ij}\) where \(g_{ij}\) measures the strength of the association between subregions \(i\) and \(j\). If subregions \(i\) and \(j\) are not neighbors, then \(g_{ij} = 0\). Finally,
the \(i\)th diagonal element of matrix \(\mathbf{H}\) is \(h_{ii} = \sum_{j \neq i} g_{ij}\). For example, a widely used choice for \(\mathbf{H}\) assumes \(g_{ij} = 1\) if \(i\) and \(j\) share a
border, and \(g_{ij} = 0\) otherwise. In that case, \(h_{ii}\) is equal to the number of neighbors of subregion \(i\). Further, we assume that there are no islands which implies that the matrix \(\
mathbf{H}\) has one eigenvalue equal to 0 and all other eigenvalues larger than zero. Note that we assume this prior for each column of \(\mathbf{B}\). Let \(\mathbf{B}_{.j}^*=\mathbf{B}_{(j+1):r, j}
\) be the \(j\)th column of \(\mathbf{B}\) without the first \(j\) elements that are fixed. In addition, let \(\mathbf{H}_j^* = \mathbf{H}_{(j+1):r, (j+1):r}\). Then, the conditional distribution of
\(\mathbf{B}_{.j}^*\) given \(\mathbf{B}_{1:j, j}=(0,\ldots,0,1)^T\) is multivariate normal with mean vector \(\mathbf{h}_j = -\mathbf{H}_j^{*-1} \mathbf{H}_{(j+1):r, j}\) and precision matrix \(\frac{1}{\tau_j}\mathbf{H}_j^*\).
We apply DIFM to western United States crime datasets. The data were collected from the Bureau of Justice Statistics and are available at the disaster center website. In this vignette, we provide two datasets, Violent and Property, for violent and property crime, respectively. The data were collected from the 50 states of the United States and the District of Columbia from 1960 to 2019. In this example, we use the
Violent and Property for violent and property crime, respectively. The data was collected from 50 states of United States and District of Columbia from 1960 to 2019. In this example, we use the
WestStates data included in DIFM package that contains the information of the map and polygon of the 11 western states: Arizona, California, Colorado, Idaho, Montana, Nevada, New Mexico, Oregon,
Utah, Washington and Wyoming. The numbers represent the cases of crime per 100,000 people. We use the square root of the data to stabilize the variance.
Step 1: Read and explore the data
Violent <- as.matrix(Violent)
Violent <- sqrt(Violent)
Property <- as.matrix(Property)
Property <- sqrt(Property)
After we call the datasets, we explore the data through plots.
layout(rbind(1:4, 5:8, c(9:11,0)))
for(i in 1:11){
plot(1960:2019, Violent[,i], main = colnames(Violent)[i], type = "l", xlab = "", ylab = "")
Figure 1 shows the square root of the number of violent crimes in the Western states. In most of the states, violent crime rises steeply until about 2000 and changes relatively little afterwards; the trend after 2000 differs
by state. Since many states share a similar trend, we can assume that DIFM would be applicable in this case.
layout(rbind(1:4, 5:8, c(9:11,0)))
for(i in 1:11){
plot(1960:2019, Property[,i], main = colnames(Property)[i], type = "l", xlab = "", ylab = "")
Figure 2 shows the square root of the number of property crimes in the Western states. Property crime rises until about 1980 in the western states and usually decreases from then on; many states show the start of a
sharp decrease between 1980 and 1990. This information can be represented with a smaller number of factors through DIFM.
Step 2: Run DIFM with range of factors.
Now we run DIFM with different numbers of factors. For our two examples, we try models with 1 to 4 factors. Before we start DIFM, we should permute the order of the variables to respect the structural
hierarchical constraint: we place first the variables that represent the factors well according to the eigenvectors. From Figure 1 and Figure 2, we can see that the time series are
non-stationary rather than stationary. Therefore, we consider a second-order polynomial for the dynamic linear model. First, we run the MCMC for the violent crime data.
n.iter <- 5000
n.save <- 10
G0 <- rbind(c(1,1), c(0,1))
Violent.permutation <- permutation.order(Violent, 4)
Violent <- Violent[,Violent.permutation]
Violent.Hlist <- buildH(WestStates, Violent.permutation)
model.attributes1V <- difm.model.attributes(Violent, n.iter, n.factors = 1, G0)
hyp.parm1V <- difm.hyp.parm(model.attributes1V, Hlist = Violent.Hlist)
ViolentDIFM1 <- DIFMcpp(model.attributes1V, hyp.parm1V, Violent, every = n.save, verbose = FALSE)
ViolentAssess1 <- marginal_d_cpp(Violent, model.attributes1V, hyp.parm1V, ViolentDIFM1, verbose = FALSE)
model.attributes2V <- difm.model.attributes(Violent, n.iter, n.factors = 2, G0)
hyp.parm2V <- difm.hyp.parm(model.attributes2V, Hlist = Violent.Hlist)
ViolentDIFM2 <- DIFMcpp(model.attributes2V, hyp.parm2V, Violent, every = n.save, verbose = FALSE)
ViolentAssess2 <- marginal_d_cpp(Violent, model.attributes2V, hyp.parm2V, ViolentDIFM2, verbose = FALSE)
model.attributes3V <- difm.model.attributes(Violent, n.iter, n.factors = 3, G0)
hyp.parm3V <- difm.hyp.parm(model.attributes3V, Hlist = Violent.Hlist)
ViolentDIFM3 <- DIFMcpp(model.attributes3V, hyp.parm3V, Violent, every = n.save, verbose = FALSE)
ViolentAssess3 <- marginal_d_cpp(Violent, model.attributes3V, hyp.parm3V, ViolentDIFM3, verbose = FALSE)
model.attributes4V <- difm.model.attributes(Violent, n.iter, n.factors = 4, G0)
hyp.parm4V <- difm.hyp.parm(model.attributes4V, Hlist = Violent.Hlist)
ViolentDIFM4 <- DIFMcpp(model.attributes4V, hyp.parm4V, Violent, every = n.save, verbose = FALSE)
ViolentAssess4 <- marginal_d_cpp(Violent, model.attributes4V, hyp.parm4V, ViolentDIFM4, verbose = FALSE)
Now we run the MCMC for the property crime data.
Property.permutation <- permutation.order(Property, 4)
Property <- Property[,Property.permutation]
Property.Hlist <- buildH(WestStates, Property.permutation)
model.attributes1P <- difm.model.attributes(Property, n.iter, n.factors = 1, G0)
hyp.parm1P <- difm.hyp.parm(model.attributes1P, Hlist = Property.Hlist)
PropertyDIFM1 <- DIFMcpp(model.attributes1P, hyp.parm1P, Property, every = n.save, verbose = FALSE)
PropertyAssess1 <- marginal_d_cpp(Property, model.attributes1P, hyp.parm1P, PropertyDIFM1, verbose = FALSE)
model.attributes2P <- difm.model.attributes(Property, n.iter, n.factors = 2, G0)
hyp.parm2P <- difm.hyp.parm(model.attributes2P, Hlist = Property.Hlist)
PropertyDIFM2 <- DIFMcpp(model.attributes2P, hyp.parm2P, Property, every = n.save, verbose = FALSE)
PropertyAssess2 <- marginal_d_cpp(Property, model.attributes2P, hyp.parm2P, PropertyDIFM2, verbose = FALSE)
model.attributes3P <- difm.model.attributes(Property, n.iter, n.factors = 3, G0)
hyp.parm3P <- difm.hyp.parm(model.attributes3P, Hlist = Property.Hlist)
PropertyDIFM3 <- DIFMcpp(model.attributes3P, hyp.parm3P, Property, every = n.save, verbose = FALSE)
PropertyAssess3 <- marginal_d_cpp(Property, model.attributes3P, hyp.parm3P, PropertyDIFM3, verbose = FALSE)
model.attributes4P <- difm.model.attributes(Property, n.iter, n.factors = 4, G0)
hyp.parm4P <- difm.hyp.parm(model.attributes4P, Hlist = Property.Hlist)
PropertyDIFM4 <- DIFMcpp(model.attributes4P, hyp.parm4P, Property, every = n.save, verbose = FALSE)
PropertyAssess4 <- marginal_d_cpp(Property, model.attributes4P, hyp.parm4P, PropertyDIFM4, verbose = FALSE)
We select the best model through the Metropolis-Laplace estimator of the predictive density.
PDtable <- matrix(NA, 2, 4)
PDtable[1,] <- c(ViolentAssess1$Maximum, ViolentAssess2$Maximum, ViolentAssess3$Maximum, ViolentAssess4$Maximum)
PDtable[2,] <- c(PropertyAssess1$Maximum, PropertyAssess2$Maximum, PropertyAssess3$Maximum, PropertyAssess4$Maximum)
PDtable <- as.data.frame(PDtable)
rownames(PDtable) <- c("Violent", "Property")
colnames(PDtable) <- paste("Factors =", 1:4)
         Factors = 1 Factors = 2 Factors = 3 Factors = 4
Violent    -1422.399   -1408.117   -1415.552   -1548.975
Property   -2083.176   -2135.292   -2200.640   -2420.407
From the table, we find that the best models according to this evaluation method are the 2-factor model for the violent crime data and the 1-factor model for the property crime data.
We present a few posterior distribution plots of the 4-factor models. The plots below are from the violent crime data.
In the same manner, we present the results of the property crime data.
Shin, H. and Ferreira, M. A. (2023). “Dynamic ICAR Spatiotemporal Factor Models.” Spatial Statistics, 56, 100763.
West, M. and Harrison, J. (1997). Bayesian Forecasting and Dynamic Models (2nd Ed.). Berlin, Heidelberg: Springer-Verlag.
Prado, R., Ferreira, M. A. R., and West, M. (2021). Time Series: Modeling, Computation, and Inference 2nd Ed. Boca Raton: Chapman & Hall/CRC.
Aguilar, O. and West, M. (2000). “Bayesian dynamic factor models and portfolio allocation.” Journal of Business and Economic Statistics, 18, 338–357.
Besag, J., York, J., and Mollie, A. (1991). “Bayesian image restoration, with two applications in spatial statistics.” Annals of the Institute of Statistical Mathematics, 43, 1
Keefe, M. J., Ferreira, M. A. R., and Franck, C. T. (2018). “On the formal specification of sum-zero constrained intrinsic conditional autoregressive models.” Spatial Statistics, 24.
Hillel, T., Bierlaire, M., Elshafie, M. Z E B, and Jin, Y. (2018)
Validation of probabilistic classifiers
18th Swiss Transport Research Conference, Ascona, Switzerland
Non-parametric probabilistic classification models are increasingly being investigated as an alternative to Discrete Choice Models (DCMs), e.g. for predicting mode choice. There exist many strategies
within the literature for model selection between DCMs, either through the testing of a null hypothesis, e.g. likelihood ratio, Wald, Lagrange Multiplier tests, or through the comparison of
information criteria, e.g. Bayesian and Aikaike information criteria. However, these tests are only valid for parametric models, and cannot be applied to non-parametric classifiers. Typically, the
performance of Machine Learning classifiers is validated by computing a performance metric on out-of-sample test data, either through cross validation or hold-out testing. Whilst bootstrapping can be
used to investigate whether differences between test scores are stable under resampling, there are few studies within the literature investigating whether these differences are significant for
non-parametric models. To address this, in this paper we introduce three statistical tests which can be applied to both parametric and non-parametric probabilistic classification models. The first
test considers the analytical distribution of the expected likelihood of a model given the true model. The second test uses similar analysis to determine the distribution of the Kullback-Leibler
divergence between two models. The final test considers the convex combination of two classifiers under comparison. These tests allow ML classifiers to be compared directly, including with DCMs.
Understanding Probability Distribution and Definition
Contributed by: Priya Krishnan
LinkedIn Profile: https://www.linkedin.com/in/priya-srinivasan-77b081176/
For all types of business, whether small or large, there is a dire need for prediction/estimation (in terms of sales, costs, demand, supply, etc.). This can be achieved through statistical
analysis of underlying data. A probability distribution can be a great tool for such estimations. It has its application in several areas such as Sales Forecasting, Risk Evaluation, Scenario
Analysis, etc.
In this article, let us get a brief understanding of Probability Distribution in simple terms. You can also take up the probability for machine learning free online course and learn about the
foundations of probability.
Let us start with the basics,
What is Probability? It measures the likelihood of an outcome. Learn about probability and normal distribution with us.
What is an Event? Each possible outcome of a variable is referred to as an event. An event that has no chance of occurring has a probability of 0 (an impossible event). An event that is sure to occur
has a probability of 1 (a certain event).
Example: When we roll an unbiased dice.
Getting the number 5 is an event, and its probability is denoted by the function P(X=x), where X represents the actual outcome and x represents one of the possible outcomes (in this case, getting a 5).
P(X=5) = 1/6 = 16.67%
1 -> number of ways in which the event occurs
6 -> total number of possible outcomes
Hence we conclude there is a probability of 16.67% of getting the number 5 by rolling the dice once.
A distribution describes the possible values a variable can take and how frequently they occur.
We define distributions using two characteristics only:
1. Mean -> the average value
2. Variance -> the spread of the data
On analysing any distribution we first need to be sure of one thing: whether it is a population (all the data) or a sample (part of the data).
Notation for a population: Mean = µ, Variance = σ²
Notation for a sample: Mean = x̄, Variance = s²
Since variance is measured in squared units, the standard deviation (the square root of the variance; σ for a population, s for a sample) is often preferred for direct interpretation. That is why we often
encounter µ−σ and µ+σ notations on a normal distribution curve.
A distribution can either be
• narrower and taller: more congested in the middle of the distribution, so more of the data falls within a given interval around the mean (blue curve in the figure below), or
• broader and more dispersed: fewer data points fall within that interval, and the data is more spread out (red curve in the figure below).
There is a fixed relationship between the mean and variance of any distribution: the variance equals the expected value of the squared difference from the mean, which simplifies to
σ² = E[X²] − µ²
Before we jump into types of Probability distribution it is essential to know about the types of numeric variables one would encounter on analysing any given data.
Discrete Variables – take values that arise from a counting process.
Continuous Variables – take values that arise from a measuring process.
Based on the type of numeric data, the probability distribution is classified into two types.
i) Discrete Probability Distribution: for a countable number of outcomes. Ex: rolling dice, picking a card from a pack of 52 cards.
• Binomial
• Poisson
• Hypergeometric
ii) Continuous Probability Distribution: for uncountably many outcomes. Ex: recording times, distances on a track field.
In general, each distribution takes the form
X ~ N(µ, σ²)
N → type of distribution
µ, σ → characteristics (parameters) of the distribution
These characteristics vary depending on the type of distribution.
Discrete Probability Distribution
Bernoulli Distribution:
A Bernoulli distribution describes an event with only two possible outcomes (True/False, Success/Failure, 1/0, etc.), regardless of whether one outcome is more likely to occur than the other. It is a Binomial with a single trial, written Bern(p) = B(1, p).
For Example:
Imagine a bag of 5 blue and 1 Red ball. Probability of drawing a ball has 2 outcomes.
Getting a red ball or a blue ball. Getting red has the probability of 1/6 and that of the blue ball is 5/6.
How to estimate the Expected Value?
Assign one outcome the value 0 and the other the value 1
(generally, the outcome of interest is assigned 1 and the other is assigned 0).
Conventionally, the outcome labelled 1 occurs with probability p and the outcome labelled 0 with probability 1−p.
E(X) = the sum of all possible outcome values multiplied by their respective probabilities; for Bern(p) this gives E(X) = 0·(1−p) + 1·p = p.
Bernoulli Distribution plot in Python:
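One possible version of this plot, assuming SciPy and Matplotlib and reusing the red-ball example above (p = 1/6):

# Bernoulli PMF for p = 1/6 (drawing the red ball)
import matplotlib.pyplot as plt
from scipy.stats import bernoulli

p = 1/6
outcomes = [0, 1]                   # 0 = blue ball, 1 = red ball
probs = bernoulli.pmf(outcomes, p)  # [5/6, 1/6]

plt.bar(outcomes, probs, width=0.3)
plt.xticks(outcomes, ["blue (0)", "red (1)"])
plt.ylabel("Probability")
plt.title("Bernoulli(p = 1/6)")
plt.show()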
Binomial Distribution:
When we carry out the same experiment (picking balls from a bag of red and blue balls) for several independent trials, the number of successes follows a Binomial Distribution, written for example as B(10, 0.6).
Here 10 → the number of trials.
0.6 → the probability of one outcome.
What makes Bernoulli and Binomial different?
Imagine a scenario of Surprise quiz test in a classroom. The quiz consists of 10 true/false questions.
1. Guessing the answer for one question is a Bernoulli event. Guessing the entire quiz is a Binomial event.
2. Expected value of Bernoulli Distribution suggests which outcome is expected out of a single trial. The expected value of Binomial Distribution would suggest the number of times we expect to get a
specific outcome. How do we get this? Here comes the Probability Distribution function.
Probability Distribution function
The likelihood of getting a given outcome a precise number of times.
P(desired outcome)=p
P(alternative outcome)=1-p
There could exist more than one way to reach our desired outcome.
For example, if we wish to find the number of ways to get 2 tails out of 3 coin flips, that is C(3, 2) in mathematical terms (n = 3, x = 2).
The binomial probability function is
P(x) = C(n, x) · p^x · (1−p)^(n−x)
Let us take a real-world example of predicting the price of a single stock of Reliance.
Historically you know there is a 60% chance that the stock price will go up and a 40% chance that it will drop.
With the probability function, we can calculate the likelihood of the stock price increasing 3 times during 5 days.
Here x=3, n=5, p=0.6
After plugging in the formula , we get 34.56%
So we can say, there is a 34.56% likelihood of stock prices increasing 3 times in 5 days time.
Expected value: the sum of all values in the sample space multiplied by their respective probabilities.
E(X) = x0·P(x0) + x1·P(x1) + x2·P(x2) + … + xN·P(xN)
For a binomial distribution this reduces to E(X) = n·p = 5 × 0.6 = 3.
The standard deviation is √(n·p·(1−p)) = √1.2 ≈ 1.1.
By knowing the expected value and standard deviation we can make better-informed forecasts.
Binomial Distribution plot in Python:
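One possible version, assuming SciPy and Matplotlib and reusing the stock example above (n = 5 days, p = 0.6):

# Binomial PMF for n = 5 trials, p = 0.6 (the stock example)
import matplotlib.pyplot as plt
from scipy.stats import binom

n, p = 5, 0.6
x = list(range(n + 1))
plt.bar(x, binom.pmf(x, n, p))
print(binom.pmf(3, n, p))                 # ~0.3456, the 34.56% computed above
print(binom.mean(n, p), binom.std(n, p))  # 3.0 and ~1.1
plt.xlabel("Number of up days out of 5")
plt.ylabel("Probability")
plt.title("Binomial(n = 5, p = 0.6)")
plt.show()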
Poisson Distribution
It models the number of times an event occurs in a fixed interval, and so can be used to test how unusual an observed event frequency is for that interval.
Example: Let us consider a real-time example here.
Scenario: When we need to calculate the number of customers arriving at a bank each minute to plan the number of counters set up.
In this case a customer arriving is an event. The occurrence of each event is independent of one another.
P(X = x) = (e^−μ · μ^x) / x!
Suppose that on average μ = 3 customers arrive per minute. To find the probability that exactly 2 customers arrive in a given minute, plug x = 2 and μ = 3 into the formula above: we get 0.2240, which is 22.40%.
The probability that more than 2 customers arrive in a given minute is
P(X > 2) = 1 − P(X ≤ 2) = 0.5768, which is 57.68%.
Thus, we conclude there is a 57.68% chance that more than 2 customers will arrive in a given minute.
Poisson Distribution plot in Python:
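One possible version, assuming SciPy and Matplotlib and the bank example above (an average of 3 arrivals per minute):

# Poisson PMF for an average of mu = 3 customers per minute
import matplotlib.pyplot as plt
from scipy.stats import poisson

mu = 3
x = list(range(11))
plt.bar(x, poisson.pmf(x, mu))
print(poisson.pmf(2, mu))        # ~0.2240 -> P(exactly 2 arrivals)
print(1 - poisson.cdf(2, mu))    # ~0.5768 -> P(more than 2 arrivals)
plt.xlabel("Customers arriving per minute")
plt.ylabel("Probability")
plt.title("Poisson(mu = 3)")
plt.show()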
Hypergeometric Distribution
It is like a binomial distribution, but the sampling is done without replacement, so the outcome of each trial depends on the previous ones.
Imagine we have a population of N items which consist of 2 categories.
‘Category 1’ with k items and ‘Category 2’ with N-k items.
n items are chosen at random, which will again contain the two categories of items. If x of the chosen items are from Category 1 and n−x are from Category 2, then the probability function is given as
P(X = x) = [C(k, x) · C(N−k, n−x)] / C(N, n)
Example: find the probability of choosing 2 items from a population of 100 items which contains 20 'Category 1' items and 80 'Category 2' items.
When we plug the values N = 100, k = 20, N−k = 80, n = 2 into the formula above, the probability that both chosen items belong to 'Category 1' is 19/495 ≈ 0.0384, which is about 3.84%.
Hypergeometric Distribution plot in Python:
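One possible version, assuming SciPy and Matplotlib; note that scipy.stats.hypergeom takes the population size, the number of Category 1 items and the number of draws, in that order:

# Hypergeometric PMF: 100 items, 20 in Category 1, 2 drawn without replacement
import matplotlib.pyplot as plt
from scipy.stats import hypergeom

M, k, n = 100, 20, 2              # population size, Category 1 items, draws
x = list(range(n + 1))
plt.bar(x, hypergeom.pmf(x, M, k, n))
print(hypergeom.pmf(2, M, k, n))  # ~0.0384, as computed above
plt.xlabel("Category 1 items among the 2 drawn")
plt.ylabel("Probability")
plt.title("Hypergeometric(N = 100, k = 20, n = 2)")
plt.show()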
Continuous Probability Distribution
The distribution would be a curve and not disconnected bars because here we consider continuous outcomes of an experiment. Probability Density Function is a mathematical expression that defines the
distribution of the values for a continuous variable.
Normal Distribution
Many quantities in nature closely follow this distribution (simple examples: the height, weight, and blood pressure of human beings), hence the name normal. It is symmetrical and
bell-shaped, implying that most observed values tend to cluster around the mean. Although the values range from negative infinity to positive infinity, extremely small or extremely large values are very
unlikely to occur.
Sample plot:
Probability Function: X~N(µ,σ²)
Most statistical analysis assumes the data to be normally distributed. We can standardise the data so that it has mean 0 and standard deviation 1 (by subtracting the mean and dividing by the standard deviation) without affecting the type of distribution. This is called standardisation, a simple transformation:
Z = (x − µ) / σ
Normal Distribution plot In Python:
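One possible version, assuming NumPy, SciPy and Matplotlib, for the standard normal N(0, 1):

# Standard normal PDF (mu = 0, sigma = 1)
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

x = np.linspace(-4, 4, 400)
plt.plot(x, norm.pdf(x, loc=0, scale=1))
plt.title("Normal distribution N(0, 1)")
plt.xlabel("x")
plt.ylabel("Density")
plt.show()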
Uniform Distribution
All outcomes of this event are equally likely; they are said to be equiprobable, and they follow a uniform distribution. This is also called the Rectangular Distribution. It is
symmetrical, therefore the mean is equal to the median. It can be either discrete or continuous.
X~U(a,b) (in the below example, it is -a to +a)
a->start Value, b->End value
Sample plot:
The expected value (mean) provides little useful information here, because all outcomes have the same probability; since no outcome is more likely than any other, the distribution has little predictive power.
Uniform Distribution plot in Python:
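One possible version, assuming NumPy, SciPy and Matplotlib, for a continuous uniform distribution on the interval from -2 to 2 (matching the symmetric -a to +a example above):

# Continuous uniform PDF on the interval [a, b]
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import uniform

a, b = -2, 2
x = np.linspace(a - 1, b + 1, 400)
plt.plot(x, uniform.pdf(x, loc=a, scale=b - a))
plt.title("Uniform distribution U(-2, 2)")
plt.xlabel("x")
plt.ylabel("Density")
plt.show()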
Exponential Distribution
Events that change rapidly follow this distribution.
Example: online news that gets generated each second.
The distribution is skewed to the right, making the mean larger than the median. Its range is from 0 to positive infinity,
and its shape makes it unlikely that extremely large values will occur.
Sample plot:
Exponential Distribution plot in Python:
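One possible version, assuming NumPy, SciPy and Matplotlib, with rate parameter 1 (SciPy parameterises the exponential by scale = 1/rate):

# Exponential PDF with rate lambda = 1 (scale = 1 / lambda)
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import expon

x = np.linspace(0, 6, 400)
plt.plot(x, expon.pdf(x, scale=1.0))
plt.title("Exponential distribution (rate = 1)")
plt.xlabel("x")
plt.ylabel("Density")
plt.show()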
To Summarize…
In this article, we discussed the basics of Probability, Probability Distribution, and its types, and how to generate various distribution plots in Python. So, if you are someone who chooses to get
into a Data Science journey, it is imperative to get a hold of Probability concepts to solve complex business problems.
To learn about more concepts and pursue a career in Data Science, upskill with Great Learning’s PG program in Data Science and Engineering. | {"url":"https://www.mygreatlearning.com/blog/understanding-probability-distribution/","timestamp":"2024-11-04T20:33:53Z","content_type":"text/html","content_length":"386368","record_id":"<urn:uuid:760d719c-06c7-41fd-b4f2-16b1a360387f>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00553.warc.gz"} |
Relative impact of individual parameters (Solved!)
10-24-2014, 01:26 PM (This post was last modified: 11-03-2014, 10:52 PM by Yi.)
I have jEPlus1.5 running successfully on my PC and have run a Latin Hypercube Sample of 150 simulations from a jEPlus job with 12 Parameters which each had 5 alternative values (potentially
244,140,625 jobs). This fulfils one of my objectives exactly. Thank you for creating this tool and your help to date.
I also have another task I need to perform. I need to assess the relative impact of each of the 12 parameters with the values of all other parameters held at a nominal value. Eventually I want to
extend this to assess the relative impact of over 500 hundred parameters, but first I need to prove a method.
If I arrange my parameters in a tree with each parameter described by a low, nominal and a high value I end up with potentially 3^12 = 531,441 jobs. However, I only really need to run 3 x 12 = 36
jobs. These jobs would be:
Jobs 1 to 3: parameter1 = {low, nominal, high}, parameter2 = {nominal), parameter3 = {nominal}, parameter4 = {nominal}, etc.
Jobs 4 to 6: parameter1 = {nominal}, parameter2 = {low, nominal, high}, parameter3 = {nominal}, parameter4 = {nominal}, etc
Jobs 7 to 9 : parameter1 = {nominal}, parameter2 = {nominal}, parameter3 = {low, nominal, high}, parameter4 = {nominal}, etc
Can this be configured from the Parameter Tree field on the Project Tab? How?
This would be manageable with a small number of parameters but will become unwieldy when I expand my analysis to over 500. Is there a way of configuring jEPlus from an external file such as csv or
txt? I.e. A way of telling jEPlus to only run specific simulations from the tree?
Thank you in advance.
Regards, David.
10-26-2014, 08:28 PM
Hi dear David, good question! I think it can be done, but to be honest I have not tried it yet; I will give it a try. Dear David, why do you want to assess the relative impact of each of the 12 parameters with
the values of all other parameters held fixed? It is a bit unclear to me: do you want to do a sensitivity analysis?
IF YES,
I am a MATLAB user, and I think it is better to work with both jEPlus and MATLAB. I mean, you can run all the jobs and then work in MATLAB and do everything you like there, instead of configuring a Parameter
Tree of your own choice in the Project tab. It is a little bothersome, but I will try to find out. Regarding the last line of your question: a job list file is a way to run your specific simulations from the
tree. Execution tab / Action part / "job list in file".
10-28-2014, 03:39 PM
Hi David,
I think Navid answered it quite well - your task can be done easily using tools like Matlab. It is not worthwhile trying to create a jEPlus param tree to represent those cases. Instead, you specify
all parameters (their names and search tags) in a single branch tree, and then use a job list file to specify the specific combinations you want to test. More details of the job list file can be
found here: http://www.jeplus.org/wiki/doku.php?id=d..._list_file
10-31-2014, 02:52 PM
Thanks for your help. For the benefit of anyone else reading this post, this is what I did (which worked).
I set up the parameters using the parameter tree as usual. For larger files I will do this by writing the parameter tree in excel and importing the tree using the import function.
I wrote a list of jobs in a text file using Notepad++. Each job was defined on one line, where "name" was the name I chose for the output directory, the two "0"s nominated the first weather file and the first idf file (I am only using one of each), and value(x) was the value of the xth parameter in the tree.
Then I saved the file as jobslist.txt
Then I ran the simulations from the Execution tab with the radio dial checked for "job list in file" and the field browsed to my jobslist.txt file.
I can see how you could use a routine in something like Matlab or Python to generate the jobslist file. This would be convenient for large numbers of parameters. Also, by automating the process you
could expect to cut down on errors such as missing commas.
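As a rough illustration in Python, assuming (as described above) that each job line is comma-separated with a job name, a weather-file index, an idf index and then one value per parameter (the exact format is documented on the jEPlus wiki page linked earlier), a one-at-a-time job-list generator might look like this:

# Hypothetical generator for a jEPlus job list: vary one parameter at a time
# between low/nominal/high while all other parameters stay at nominal.
params = {
    "P1": {"low": 0.1, "nominal": 0.5, "high": 0.9},
    "P2": {"low": 10,  "nominal": 20,  "high": 30},
    # ... add the remaining parameters here
}

names = list(params)
lines = []
for i, name in enumerate(names):
    for level in ("low", "nominal", "high"):
        values = [str(params[n]["nominal"]) for n in names]
        values[i] = str(params[name][level])
        # job name, weather-file index, idf index, then one value per parameter
        lines.append(", ".join([f"{name}_{level}", "0", "0"] + values))

with open("jobslist.txt", "w") as f:
    f.write("\n".join(lines) + "\n")

This writes 3 jobs per parameter; the all-nominal job is repeated once per parameter, matching the 3 x 12 scheme described above, and deduplicating it is a one-line change if needed.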
I played around with running the jobs from the command line in a DOS terminal, but gave that up in favour of running the assessment from within jEPlus.
Thanks again for your help Navid and thanks again for a great tool Yi.
You might like to add a (solved) to the end of the title for this post. I have found this helpful when searching other forums.
Regards, David.
10-31-2014, 05:48 PM
Hi Dear David;My Pleasure,You Did EveryThings Completely Well.
11-03-2014, 10:57 PM
Hi David,
Thanks for the suggestion. I have changed the title accordingly. I think you should be able to edit your own posts as well. Unfortunately this forum software is not as modern as stackoverflow; so
have to do it manually. | {"url":"http://jeplus.org/mybb/showthread.php?tid=13","timestamp":"2024-11-10T04:28:59Z","content_type":"application/xhtml+xml","content_length":"39201","record_id":"<urn:uuid:b4fd9113-7739-4a7f-b364-6920e708ccba>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00298.warc.gz"} |
Howdy Partner! On Friday we did one of my favorite classroom activities... Partner Round Up! Yeehaw! This strategy is one of the best ways I know to really encourage (force) kids to work together and
learn from one another. If you or your school is part of the AVID program, this is a great one to use to show how you use "Collaboration" as a WICOR strategy. This isn't just partner work, this is
about learning from and with others.
The idea is simple. Each kid gets a card. Sometimes the whole class has the same type of card and, depending on the content, sometimes half the class gets one type of card and the other half gets a
different type. Students pair up, solve the problem, and move on to another partner. That's it!
Let's run through some specifics. Friday we did Partner Round Up with graphing linear equations. Half the class got a card that had a slope written on it and the other half got a card with a
y-intercept on it. A slope person must meet with a y-intercept person. Together they graph the linear equation and write the equation in slope intercept form. Then, and this is the important part,
they trade cards. Otherwise one student would have the same card the entire time. This way, they only technically work with the same card twice and they should be an expert at this particular slope
or y-intercept after already thinking about it with their last partner. After trading cards they find a new partner.
The students I have this year are wildly entertaining. They don't lack personality, that's for sure. Usually I just have students sort of wander around until they find a partner, but this year the
students suggested we make one spot of the room for "single people". When you are "single", you can go here to "mingle". Clever. It stuck. Students were much faster at partnering up and getting
through as many partnerships as possible. On average, students completed about 10 partnerships in the 20-30 minutes we had for this activity.
1. I don't teach pretty much the whole day. I facilitate. And I think that at times this can be a very powerful position to be in. The students are doing the talking, thinking, working, and at times
teaching. I walk around the room monitoring behavior and answering questions when they arise. But for the most part, I just get to watch my students be mathematicians.
2. Students teach other. On Friday I heard one student say to another "When I was partners with Gavin he explained it like this and it really helped me".
3. It encourages (forces) ELL students to work and talk with many other students in the class. They get to hear lots of academic language from their peers. You can start to see how after working with
a few partners, ELL students gain confidence and begin to take the lead role at the partnerships that follow.
4. Repetition is key and this activity is so fun and engaging that it sort of tricks students into doing many math problems that would normally be somewhat mundane in worksheet form.
5. Students get to move. They are up out of their seat non-stop and getting to move about the room. Even my most hyper-active students are able to stay on task for an extended amount of time due to
the constant change in movement and partners.
Here are a few ideas I have gathered for various types of contents that would work great for this activity:
What other ideas do you have? How could you adapt your lesson to fit this strategy? Can't wait to hear your ideas!
I remember when I first started researching how to best help English Language Learners in math class, I read a lot about manipulatives. There was a cornucopia of research out there to support the
need for ELL's to be able to see, touch, and manipulate the math. Cool right? I was totally on board. Can I get an AMEN?! And then I saw the catalog of math manipulatives I had access to. For a girl
that loves to shop, I was so overwhelmed. With a restricted budget, what's going to give me the best bang for my buck? What manipulatives am I going to use more than just one day a year?
This post, and the ones to follow, are meant to serve as a guide to help teachers see how I use my favorite manipulatives. I may not have a million dollars, but I am working on finding a million ways
to make a few great products work!
Yesterday for our Right Now Rowe (aka Bell Work), students worked on a quick review problem that asked them to compare the perimeter and area of two rectangles. Definitely not an 8th grade standard,
but I know that in a few weeks we will be combining like terms and solving equations and there are some great contextual problems that combine that 8th grade concept with area/perimeter tasks. OH MY
GOSH. Students were so lost. From how to find the actual answer, to how to understand the answer, to deciding what unit the answer was in. Total melt down. Poor geometry chapter, always gets shoved
to the end of the year and forgotten if time runs out. Boo.
Today, we decided to back it up and do a quick review about how to measure around and inside a shape. Real basic. Students were handed MUST HAVE MANIPULATIVE #1: Square Tiles! Students were given a
prompt up on the SmartBoard and had to build a shape using the square tiles that would satisfy the given requirement. For example, "Build a rectangle with a perimeter of 14 units". After students
built their rectangles, students would draw their example up on the board. If there was more than one option, we would collect them all. From there the tasks got more challenging, like "Build a
rectangle with an area of 12 square units but a perimeter greater than 15 units".
Although there isn't an obvious "language" component to the task today, it definitely hit home for many of the ELL students. They could see it. Light bulb moment!
After we built the shapes, students were handed a worksheet with about 12 blank rectangles on it. Then, students were given MUST HAVE MANIPULATIVE #2: Dice in Dice! Students rolled the dice, used the
smaller number as the width and larger number as the length, and then calculated the perimeter and area of the rectangle. If they got the same number on both dice, they had to draw a square instead
and go from there. They REALLY didn't like "pretending" the rectangle was a square, which was my first suggestion. Good for them. Attend to precision, kiddos!
There you have it! Two manipulatives that really add a lot of visual aid and interaction to my classroom, which benefits ALL students, but especially helps English Language Learners SEE the math!
Stay tuned for more ideas on how to use square tiles and dice in dice with other concepts!
I have an obsession with card sorts. It's no secret. Ask anyone in my building, teachers and students alike. I love sorting cards. I think I love sorting cards more than I love Dutch Bros Iced
Americanos with 2% milk. I even dream about card sorts. There is something about the repetition of sorting that helps students understand concepts SO MUCH MORE than just doing the problems on a
worksheet. Before we dive into the language component (which you know is coming), let's just break down the mathematics involved in a card sort.
There are card sort activities and there are card matching activities. I am talking about a sort. Come back soon for ideas on card matching activities. Let's run down the basics. Students get a pile
of cards, and they need to sort them into categories. Maybe you have decided before hand what the categories will be or maybe the students create the categories depending on where you are in the
content. It seems so simple, but the pay off has been huge. Here are just a few ideas on card sorts I have used or seen used in my building:
1. Function or Not a Function (give students multiple representations of functions or just focus on a particular representation)
2. Linear or Not Linear (same thing with the multiple representations as above)
3. Triangle Congruence Conditions (given a picture of two congruent triangles, sort based on SSS, SAS, ASA, etc.)
4. Solving Equations Solutions Types (one solution, infinite solutions, no solutions)
5. Proportional Function or Not a Proportional Function
6. Negative Slope or Positive Slope
7. Rational or Irrational Numbers
8. Non Repeating or Repeating Decimals (given as a rational number)
9. Prime Numbers or Not Prime Numbers (could also do even and odd numbers for the littles)
When sorting in groups, the conversations are incredible. You hear students arguing. You hear students justifying. You hear students explaining. You hear students learning from their peers... that's
my favorite part.
The card sort itself is great. But I got to the point where I felt like we didn't have anything to show for our awesome thinking. Students just sorted the cards and then left for the day. There was
no way for students to go back and have a resource to study from. There was no evidence of our thinking or oral language that had been used. So now we make sure to ALWAYS finish up our card sort with
a writing activity.
Last week students sorted cards with pictures of linear functions on them. Some were proportional and some were not. They decided where to put the proportional and where to put the non proportional
and then started sorting. Once they finished and we reviewed to make sure they were all correct, they picked their toughest cards to sort and they wrote about how they knew it was proportional or not
in their Interactive Notebooks.
I just loved finishing the card sort with a writing activity. It really helped to bring the day full circle, and solidify the knowledge gained. For English Language Learners, I found that doing this
entire activity has been crucial for connecting the oral with the written academic language needed to understand the concept. Each ELL student sorted around 30 cards, listened to others in their
group repeat the same reasoning over and over again for proportional or not proportional, spoke to their group members at times when they felt they were ready to reiterate that reasoning, and then
wrote down that reasoning in their notebook. The repetition happening here has been incredibly beneficial for all students, but especially ELLs.
What ideas do you have for mathematical content that would work well in a card sort?
After a rough start to the school year, things are finally settling down. And by settling down, I mean moving full steam ahead. Now that all the classroom rules and procedures have been set and
burned into their brains, we can finally start diving into the mathematics! Yahoo!
This week we started with our first unit in Pre-Algebra, functions! Identify them. Create them. Change them. Characterize them. All kinds of good stuff! The standard below is the specific Common Core
State Standard we learned last week (you can find all the CCSS Math Standards here):
Understand that a function is a rule that assigns to each input exactly one output. The graph of a function is the set of ordered pairs consisting of an input and the corresponding output.
Every day we do a problem to start class. You can call it whatever (bell ringer, bell work, etc.) but in our class we call it the Right Now Rowe (get it? Like, hey you guys, do this.. right now!).
Across the hall, Miss Danner calls it the Daily Danner, and even farther down the hall Mr. Eiguren calls it the Everyday Eiguren. But I think my favorite name for bell work in the building is the
Stoddard Starter. Might as well make it fun right? Sometimes the problem is review, sometimes it is a foreshadowing/pre-teach type problem, but on Fridays we focus on the language! Every Friday
students are given a word that we have learned that week. Their task is write two complete sentences using that word correctly, both grammatically and mathematically. That's it! It seems so basic,
but the rewards of doing this have been HUGE in my class.
While taking the WIDA professional development class through the Boise School District, we were taught extensively about the WIDA components of language. The basics can be summarized in the graphic below.
The minute I saw this I had a major AH-HA moment. I took 8 years of Spanish through high school and college and sadly, I cannot speak Spanish at all. I can translate words back and forth okay, but as
far as understanding how all the words work together to actually communicate, I am a lost cause. My Spanish career started and stopped at the Vocabulary Usage level. I never learned how to use the
vocab together to create more than just a bunch of random words. This is exactly how students must feel in my math class when they are learning the "language of mathematics". I don't want students to
just tell me what a function is, I want them to be able to do more than that. This is where our Friday vocab word comes in. We practice taking the word, translating it, defining it, and then USING it
to communicate.
Students write their sentences independently and then we share a whole bunch up on the board. As they read their sentence aloud, I type it for the class to see. This way students can hear the
sentence, and see it in writing. Great for ELL's! Sometimes I will clarify the grammar or ask if I can add or take away something for it to make more sense. Sometimes I ask if there is a way to keep
their idea but change some of the language to be more specific or precise. By the end of the quick 5-10 minute activity, students have seen the word used a dozen different ways and their brains start
making even more connections.
This week our word was : output
Check out some of the sentences that students came up with (my edits/suggestions are in parenthesis):
1. There can be many inputs all with the same output, and it (the relationship) will still be a function.
2. If every input has one and only one output, then it (again, what is it?) is a function.
3. If you have an input with more than one output, that is not a functioning relationship.
4. Inputs are the x-values and outputs are the y-values.
5. The outputs are like the y-values.
6. The y-value in an ordered pair is the output.
7. A function is when each input has only one output.
8. A function has both inputs and outputs. (Although this is true, it's not especially impressive. But I included it to show that not all the sentences were mind blowingly awesome).
9. You can show inputs and their matching outputs in a table, a mapping, a graph, or in a set of ordered pairs.
There were a few more, but you get the idea. I love this because I know that every Friday we are going to take a minute and specifically focus on the language. If the week gets away from me and we
don't specifically target language integration, I feel better knowing that Friday we will talk language no matter what. Stay tuned for more Friday Right Now Rowe's to come! | {"url":"http://www.arithmetalk.com/2016/09/","timestamp":"2024-11-02T22:06:03Z","content_type":"text/html","content_length":"96637","record_id":"<urn:uuid:26f408a8-ff2e-4668-868c-e5edf682f599>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00482.warc.gz"} |
Lab 2.2: Math, Output and User Input
The goal of this lab is to learn the following:
• Using basic mathematical operations in Quorum
• Getting input from the user
Computer Science Principles Curriculum
• Big Idea: Abstraction: EK 2.1.2C
Common Core Standards
• English Language Arts Standards » Science & Technical Subjects: CCSS.ELA-Literacy.RST.9-10.2, CCSS.ELA-Literacy.RST.9-10.3, CCSS.ELA-Literacy.RST.9-10.4, CCSS.ELA-Literacy.RST.9-10.5,
CCSS.ELA-Literacy.RST.9-10.7, CCSS.ELA-Literacy.RST.9-10.10, CCSS.ELA-Literacy.RST.11-12.2, CCSS.ELA-Literacy.RST.11-12.3, CCSS.ELA-Literacy.RST.11-12.4, CCSS.ELA-Literacy.RST.11-12.5
• Mathematics Content: High School Functions » Building Functions: CCSS.Math.Content.HSF.BF.A1
• Standards For Mathmatical Practice: CCSS.Math.Practice.MP1, CCSS.Math.Practice.MP2, CCSS.Math.Practice.MP4, CCSS.Math.Practice.MP5, CCSS.Math.Practice.MP6, CCSS.Math.Practice.MP7,
• Operations
• Addition
• Subtraction
• Multiplication
• Division
• Modulus
• User Input
In this lab you will write a program in Quorum that performs basic mathematical operations and solves written math problems. You will demonstrate how to work with numbers by performing addition,
subtraction, multiplication, division, and remainder operations in Quorum. You will also learn how to get input from the users.
Goal 1: Using basic mathematical operations in Quorum
You will concentrate on performing basic mathematical operations in Quorum. Then, you will move onto solving problem statements. Start by opening Sodbeans and creating a new blank Quorum project.
Name the project Lab2_2.
Declare and initialize two integer variables: a to 7 and b to 4. Next, declare and initialize two number variables: c to 8.5 and d to 9.2. You will use these variables to perform mathematical
operations and output the results to the Sodbeans Output window. We will start with addition, let's add a and b.
Example: Assign the result of adding a and b to the variable sum1.
integer a = 7
integer b = 4
integer sum1 = a + b
output sum1
After you run the example, you should see the number 11 in the output window.
If you are going to perform a mathematical operation that includes at least one number variable, your answer should also be a number. You will apply the same concepts to the other mathematical
To perform the subtraction you will use the minus sign (-) and for multiplication you should use the asterisk (*). Output all your answers.
Now move onto division by dividing a by b and assigning the result in the variable called divide1.
Example: Divide a by b and run the program.
integer divide1 = a/b
output divide1
Why was divide1 assigned the value of 1 when it should be 1.75? That happens because the variables are integers, so the result is going to be an integer. To see the difference, declare two new number
variables called e and f with the same values of a and b, respectively.
The final mathematical operation available in Quorum is the modulus or remainder. This is written using the keyword mod. You will use the mod operation to get the remainder of a division.
number remainder = 10 mod 6
In this example, remainder would contain the value of 4 because 10 divide by 6 equals 1 with the remainder of 4.
Example: Using mod get the remainder of dividing c by b.
number remainder1 = c mod b
output remainder1
You will have in the output window the value of 0.5. The example above shows that you can use mod with decimal numbers, but it is not usually done.
Let's wrap up what you have done with the mathematical operations. Write code that combines all four operations into one statement: addition, subtraction, multiplication and division. Name a variable
result and perform the following operations:
(a + b) * (c - b) + (d / b)
Then, check your answer by adding another output statement for the result variable. The output for this operation should be 51.8.
Goal 2: Getting input from the user
When you write programs, you don't always merely perform computation on data you already know, as in the first part of this lab. Most of the time, you need to get input from the user to perform your
calculations. As an example, a desktop calculator is a program taking input from the user (via the keypad) and providing output (the answer to the equation you entered).
You may get input from the user in Quorum using the input keyword. The code below asks the user for their name; inside the parenthesis, you tell the user what information you are requesting.
Example: Ask and output the user name.
text name = input("Please enter your first name:")
output name
When you run this program, "Please enter your first name:" will appear in the output window. Enter your name and press Enter, or click the "OK" button. Your name will appear in the output window.
The input statement can do more than ask for text such as a name. You can also request numerical values. Let's create a second input statement that asks us to enter an integer value. The code should look something
like the following:
text ageInput = input("How old are you?")
Notice, input will always be of type text. However, you can convert the text value of the ageInput variable to any other type you desire, such as integer or number. You do this using the cast
statement, as below. Here, you desire to have the age as a whole number, so you will use the integer type.
integer age = cast(integer, ageInput)
Next, concatenate the words "you are ", followed by the age variable, and finally the words " years old.", and output the result.
Next Tutorial
In the next tutorial, we will discuss Getting Started, which describes how to get started programming in Quorum. | {"url":"https://quorumlanguage.com/robotics/lab2_2.html","timestamp":"2024-11-11T07:03:10Z","content_type":"text/html","content_length":"74412","record_id":"<urn:uuid:5841f707-0bfd-4cb8-a7dd-0e3547e8f517>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00754.warc.gz"} |
Kristy Asks How to Get Percentages from Scores
How to figure out a percentage score when our reported numbers are formatted in a particular way. In this case we have a score, then a slash, then the total possible score, all in a single cell.
We go through each option of solution and try to derive a solution that is both useful in the exact situation and possibly useful for other problems too.
We find that the Query formula and other solutions are inflexible or time intensive. We want a solution that does actually solve the problem at hand, in a quick way, and also is easy to edit and
flexible to solve other problems. | {"url":"https://www.bettersheets.co/tutorials/kristy-asks-how-to-get-percentages-from-scores","timestamp":"2024-11-14T08:20:32Z","content_type":"text/html","content_length":"48189","record_id":"<urn:uuid:d3d3981b-d74d-45f1-8b30-663d58f014f8>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00606.warc.gz"} |
This has more to do with the algorithm for Kyma's OscilloscopeDisplay than it does with the analog to digital converters. You won't see the same effect if you look at the output
of the Capybara on an external analog oscilloscope. Here's why. Imagine a sine wave whose frequency is exactly half the sample rate. In other words, you will take two samples of this sine wave on
each cycle. What if you happen to sample it on the two zero crossings that a sine wave has on each cycle? That is why the Nyquist Theorem says that the highest frequency you can represent in a
digital sampled system is
less than
half of the sample rate. OK, so if you can't get exactly half the sample rate, let's take a sine wave that is a little bit less than half the sample rate: (SR/2 - 1) hz. On the first sample, you
might get unlucky again and hit right on the zero crossing. But the next sample is going to come a little bit sooner in the cycle than the zero crossing. And on subsequent cycles, your sample
points will slowly drift with respect to the waveform. After one second, you have drifted all the way through the waveform and end up back at the initial zero again. What does the result of this
sampling look like? It looks like a square wave that repeats (SR/2-1) times per second and has an amplitude that grows and shrinks at a rate of once per second. NOT a sine wave at full amplitude. So
does this mean that Nyquist was wrong?! What Nyquist was saying was not that you would see a sine wave directly after sampling—just that you would have enough information
to determine which sine wave it must have been. If you eliminate all the frequencies greater than or equal to SR/2, then there is exactly one sine wave that could have passed through the points that
you sampled. The last step of conversion is to take the sampled points and pass them through a
low pass filter whose cutoff is set at SR/2. And that is where the problem lies. Kyma's OscilloscopeDisplay doesn't do any low pass filtering on the sample (the stairsteps are visible, especially
when you zoom way in). In a sense, the OscilloscopeDisplay is showing you an intermediate stage in the conversion process—before the pulses or spikes have been perfectly interpolated to reveal the
one and only sine wave they could have come from. The Capybara's output converters have
very good
(though not "perfect") low pass filters (in part because they use oversampling which is another trick we haven't talked about yet). So you would not see the same "amplitude modulation" effect on the
resynthesized sine wave coming out of the Capybara D/A converters. --
- 11 Oct 2004
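A quick numerical sketch of the effect described above (plain NumPy, not Kyma): sample a sine at SR/2 - 1 Hz and inspect the raw sample values. The sample points drift through the waveform and are back at the initial zero crossing after one second, so the raw samples look amplitude-modulated even though the sine itself has constant amplitude.

# Sample a sine just below the Nyquist frequency and look at the raw samples.
import numpy as np

SR = 1000                        # demo sample rate (Hz)
f = SR / 2 - 1                   # sine just below half the sample rate
t = np.arange(SR + 1) / SR       # one second of sample times
samples = np.sin(2 * np.pi * f * t)

# The envelope of the raw samples rises and falls as the sample points
# drift through the sine; after one full second they are back at the
# initial zero crossing.
for sec in (0.0, 0.25, 0.5, 0.75, 1.0):
    i = int(sec * SR)
    print(f"t = {sec:.2f} s   |sample| = {abs(samples[i]):.3f}")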
Klar wie Kloßbrühe, thank you! (Clear like meatball broth ;-) ) I realized my 22k5 sine wave CD-player generator now with a square wave of 22k5. It works splendidly. I guess my CD player never thought it would ever have such a responsible job in its life. --
- 11 Oct 2004 | {"url":"http://www.symbolicsound.com/cgi-bin/bin/view/Learn/WhyDoISeeAmplitudeModulationForFrequenciesNearTheHalfSampleRate","timestamp":"2024-11-10T03:07:47Z","content_type":"application/xhtml+xml","content_length":"13129","record_id":"<urn:uuid:06c8e4fe-8f71-4fe0-b873-86e22f7a71ab>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.3/warc/CC-MAIN-20241110005602-20241110035602-00521.warc.gz"} |
Why two mass attracts each other?
Not open for further replies.
This is just your fringe misinterpretation.
Considering the case with Planet Mercury, do you think the "interplanetary gravitational interactions" are "attractive" from "the Sun" point of view?
Also see
Let us not talk about hypothetical(graviton) things. Let us talk practical things. Can a gravity-field exist in isolation without its mass?
I presume you mean whether or not a field can exist without a mass as its source, since the field itself obviously does not have mass. The answer is yes - one can construct metrics which are "held
together" by their own self-interaction without the presence of any other sources; such topological constructs are called
, and where first studied by Wheeler in the 1950s. See here :
Whether or not such constructs would be physical
is an entirely different issue again - but the very fact that they are valid solutions to the EFEs demonstrates the self-interaction of gravity in GR, since there are no other explicit sources of
gravity present in these solutions ( i.e. the stress-energy tensor vanishes everywhere, both inside and outside the geon ). It is interesting to note that these geons are investigated as candidates
for a purely geometric model of elementary particles; to the best of my knowledge though without much success.
Considering the case with Planet Mercury, do you think the "interplanetary gravitational interactions" are "attractive" from "the Sun" point of view?
See post #717. It is explained there how repulsive forces (from the Sun's point of view) cause the perihelion precession of Planet Mercury.
You can also see the link where a 'repulsive force' is predicted for the anomaly of Mercury's perihelion precession.
A quote from the link is as follows:
Why do you keep talking and quoting about "forces" and Newtonian mechanics, if we are dealing with GR ? There are no forces in GR, neither repulsive nor attractive. The perihelion precession is
simply the result of a geometry which differs from the Newtonian one. It's that simple. The entire discussion is meaningless and a waste of time, simply because the cause of aforementioned precession
has nothing to do with any kind of force.
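For reference, the purely geometric (GR) contribution to Mercury's precession can be checked numerically with the standard per-orbit formula delta_phi = 6*pi*G*M / (c^2 * a * (1 - e^2)); the short sketch below, using standard orbital constants, reproduces the familiar ~43 arcseconds per century.

# Relativistic perihelion advance of Mercury from the standard GR formula
# delta_phi = 6*pi*G*M / (c^2 * a * (1 - e^2))   [radians per orbit]
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30      # solar mass, kg
c = 2.998e8       # speed of light, m/s
a = 5.791e10      # semi-major axis of Mercury's orbit, m
e = 0.2056        # orbital eccentricity
T = 87.969        # orbital period, days

dphi = 6 * math.pi * G * M / (c**2 * a * (1 - e**2))   # radians per orbit
orbits_per_century = 100 * 365.25 / T
arcsec_per_century = dphi * orbits_per_century * (180 / math.pi) * 3600
print(f"{arcsec_per_century:.1f} arcseconds per century")   # ~43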
I presume you mean whether or not a field can exist without a mass as its source, since the field itself obviously does not have mass. The answer is yes - one can construct metrics which are
"held together" by their own self-interaction without the presence of any other sources; such topological constructs are called
, and where first studied by Wheeler in the 1950s. See here :
Whether or not such constructs would be physical
is an entirely different issue again - but the very fact that they are valid solutions to the EFEs demonstrates the self-interaction of gravity in GR, since there are no other explicit sources of
gravity present in these solutions ( i.e. the stress-energy tensor vanishes everywhere, both inside and outside the geon ). It is interesting to note that these geons are investigated as
candidates for a purely geometric model of elementary particles; to the best of my knowledge though without much success.
Geon is not yet proven. Here is the quote from your wiki link:
wiki said:
In theoretical general relativity, a geon is an electromagnetic or gravitational wave which is held together in a confined region by the gravitational attraction of its own field energy. They
were first investigated theoretically in 1955 by J. A. Wheeler, who coined the term as a contraction of "gravitational electromagnetic entity."[1]
Since general relativity is a classical field theory, Wheeler's concept of a geon does not treat them as quantum-mechanical entities, and this generally remains true today. Nonetheless, Wheeler
speculated that there might be a relationship between microscopic geons and elementary particles. This idea continues to attract some attention among physicists, but in the absence of a viable
theory of quantum gravity, the accuracy of this speculative idea cannot be tested. However recently Sundance Bilson-Thompson, Fotini Markopoulou, and Lee Smolin, in the context of Loop Quantum
Gravity discovered some objects very similar to the Wheeler idea of geon.
Wheeler did not exhibit explicit geon solutions to the vacuum Einstein field equation, a gap which was partially filled by Brill and Hartle in 1964 by the Brill-Hartle geon.[2] This is an
approximate solution which exhibits the features expected by Wheeler—at least temporarily. A major outstanding question regarding geons is whether they are stable, or must decay over time as the
energy of the wave gradually "leaks" away. This question has not yet been definitively answered, but the consensus seems to be that they probably cannot be stable, which would lay to rest
Wheeler's initial hope that a geon might serve as a classical model for stable elementary particles.
Why do you keep talking and quoting about "forces" and Newtonian mechanics, if we are dealing with GR ?
This proves, GR has a repulsive effect.
There are no forces in GR, neither repulsive nor attractive. The perihelion precession is simply the result of a geometry which differs from the Newtonian one. It's that simple. The entire
discussion is meaningless and a waste of time, simply because the cause of aforementioned precession has nothing to do with any kind of force.
Do you have some antipathy toward the term "force"? If some 'physical phenomena' fits well with the 'standard definition' of "force", whats wrong in calling this a "force"?
Considering the case with Planet Mercury, do you think the "interplanetary gravitational interactions" are "attractive" from "the Sun" point of view?
There is nothing about a "repulsive force". You clearly do not understand physics.
What do photons use to "push off" of?
and what direction do they go after?
There are no forces in GR,
This is true, because GR ignores gravitationally induced pressures, where pressure = force/area. This pressure is missing from GR because GR is not concerned with the entire observed gravity
phenomena, only space-time.
If you look at a star, the deepest point in the space-time well is where time is supposed to be the slowest due to time dilation. Yet the fastest frequency transitions within the star occur at the
deepest point of the space-time well; nuclear reactions and gamma radiation. Time is supposed to slow as we go down the well, but material transitions get faster and faster, with faster and faster
transitions out pacing space-time contraction.
This paradox is connected gravity induced pressure, which can alter the phases of matter regardless of the direction of space-time. In this case, gravity provides the force/area. GR cannot predict
solid iron in the core of the earth since that is a function of pressure not space-time. When you talk about mass attracting mass, GR is not enough unless a first approximation is all you hope to
This is true, because GR ignores gravitationally induced pressures, where pressure = force/area. This pressure is missing from GR because GR is not concerned with the entire observed gravity
phenomena, only space-time.
If you look at a star, the deepest point in the space-time well is where time is supposed to be the slowest due to time dilation. Yet the fastest frequency transitions within the star occur at
the deepest point of the space-time well; nuclear reactions and gamma radiation. Time is supposed to slow as we go down the well, but material transitions get faster and faster, with faster and
faster transitions out pacing space-time contraction.
This paradox is connected gravity induced pressure, which can alter the phases of matter regardless of the direction of space-time. In this case, gravity provides the force/area. GR cannot
predict solid iron in the core of the earth since that is a function of pressure not space-time. When you talk about mass attracting mass, GR is not enough unless a first approximation is all you
hope to achieve.
GR is the theory of gravity.
Time is slowest deep in the gravity well but only when compared to a point higher up.
but material transitions get faster and faster, with faster and faster transitions out pacing space-time contraction.
In the gravity well, time moves at the usual rate, and reactions occur at the usual tempo. It's only when compared to a different point higher up that there's any difference in the rate of time.
Wrong. Geons are valid solutions to the field equations, which can be proven quite easily; that is all that matters, since the question was whether or not those equations describe a self-interacting
field. This is also precisely what the article says which I linked for you. I understand that geons may not physically exist, but that is irrelevant in this case since all we are talking about are
mathematical properties of the field equations, i.e. self-interaction. If you don't wish to use geons, there are many other solutions to the field equations which demonstrate the same principle; I
just picked geons as an intuitive example to illustrate what I meant to say.
This proves, GR has a repulsive effect.
No it doesn't. You really don't want to understand, do you.
If some 'physical phenomena' fits well with the 'standard definition' of "force", whats wrong in calling this a "force".
But that is exactly the point; the phenomenon of perihelion precession does NOT fit the standard force-based model, being Newtonian mechanics. Only GR gives the correct prediction, and GR does not involve forces in the precession calculation. Why is this so difficult to understand?
This is true, because GR ignores gravitationally induced pressures, where pressure = force/area. This pressure is missing from GR because GR is not concerned with the entire observed gravity
phenomena, only space-time.
Incorrect, GR does not ignore anything. All the terms for pressure, momentum, density and flux are present in the stress-energy-momentum tensor, and are all accounted for in GR. The metric describing
the processes, for example, in the interior of a star is called the interior Schwarzschild metric.
If you look at a star, the deepest point in the space-time well is where time is supposed to be the slowest due to time dilation. Yet the fastest frequency transitions within the star occur at
the deepest point of the space-time well; nuclear reactions and gamma radiation. Time is supposed to slow as we go down the well, but material transitions get faster and faster, with faster and
faster transitions out pacing space-time contraction.
The point being...?
This paradox is connected gravity induced pressure, which can alter the phases of matter regardless of the direction of space-time.
There is no paradox. GR correctly describes all gravitational phenomena associated with the interior of massive bodies, and that includes energy densities and fluxes. What it does not
do is describe the mechanics of phase transitions of chemical elements, since this is intrinsically quantum mechanical in nature; GR is simply a model for gravity, not chemistry or quantum mechanics.
GR cannot predict solid iron in the core of the earth since that is a function of pressure not space-time.
GR predicts exactly all gravitational phenomena in the core; the rest is chemistry and quantum mechanics, and has nothing to do with GR.
When you talk about mass attracting mass, GR is not enough unless a first approximation is all you hope to achieve.
Incorrect, GR gives the right numerical predictions for all gravitational phenomena connected to massive bodies, and is therefore "enough". See my GR primer thread for details.
Talking of geons, Wheeler was getting warm with that. Pity he didn't think of electromagnetic wave confined in a region by something a bit stronger than gravity.
Talking of geons, Wheeler was getting warm with that. Pity he didn't think of electromagnetic wave confined in a region by something a bit stronger than gravity.
Personally I am fascinated by the idea of geons, both as mathematical artefact, and as a candidate for a geometric explanation of elementary particles. To the best of my knowledge though it would
appear that they are not stable constructs; not sure though if the last word has been spoken here yet.
What specifically do you mean by "something a bit stronger than gravity" ? Wheeler was specifically working within the domain of GR ( i.e. gravitation ); an electromagnetic wave would in itself be a
source of gravity, so we wouldn't be dealing with geons anymore.
The electromagnetic and strong interactions, see
. Yes, Wheeler was working in the domain of gravitation. It was Wheeler who said
"matter tells space how to curve, space tells matter how to move"
. But we know that a concentration of energy causes gravity, and it doesn't have to be in the guise of matter. We also know that gravity is associated with curved spacetime, not curved space.
To appreciate the distinction, imagine you're flying your plane over a flat calm sea. It's so calm it's like glass. Then you notice that there's a single oceanic swell wave traversing this sea. You
follow it, and you notice that for some reason it isn't on a constant heading. It's veering North a little. When you plot your course, you realise that in following the wave you've taken a curved
path. That's an analogy for curved spacetime. Now look at the surface of the sea where the wave is. It's curved. And this curvature is far more dramatic than your curved path. It's an analogy for curved space.
Now take a look at
The Role of Potentials in Electromagnetism
by Percy Hammond and pay special attention to this conclusion:
"We conclude that the field describes the curvature that characterizes the electromagnetic interaction."
This electromagnetic curvature isn't gravitational spacetime curvature. Note though that when it comes to electromagnetism, E and B are the spatial and time derivatives of four-potential, and the
typical electromagnetic sine waves don't depict the curvature directly. You have to take the integral of a sine wave for that. Something like this:
If you arrange for an electromagnetic wave to travel through curved space, you can confine it. And you're right, we aren't dealing with geons anymore. Because one way to do this is called
gamma-gamma pair production
But we know that a concentration of energy causes gravity
Yes, but that is exactly the point. There are no energetic sources of gravity anywhere in the geon solution, $$T_{\mu \nu}$$ vanishes everywhere both in the interior and the exterior regions of the
geon. The entire metric is Ricci-flat. This topological construct is formed and held together purely by its own self-interaction, which is the fascinating part.
There is nothing about a "repulsive force". You clearly do not understand physics.
See the following quote from the abstract of the above link. I have highlighted where "repulsive force" is mentioned.
We present here a calculation of the precession of the perihelion of Mercury due to the perturbations from the outer planets. The time‐average effect of each planet is calculated by replacing
that planet with a ring of linear mass density equal to the mass of the planet divided by the circumference of its orbit. The calculation is easier than examples found in many undergraduate
theoretical mechanics books and yields results which are in excellent agreement with more advanced treatments. The perihelion precession is seen to result from the fact that the outer planets
slightly change the radial period of oscillation from the simple harmonic period usually calculated for small displacements from equilibrium. This new radial period therefore no longer matches
the orbital period and the orbit consequently does not exactly retrace itself. The general question of whether a given perturbation will cause the perihelion to advance or regress is shown to
have the following answer: if a perturbing force is central and repulsive and also becomes stronger as the distance from the force center increases, the perihelion will advance. If the central
perturbing force is attractive and also becomes stronger as the distance from the force center increases, the perihelion will regress.
Wrong. Geons are valid solutions to the field equations, which can be proven quite easily; that is all that matters, since the question was whether or not those equations describe a
self-interacting field. This is also precisely what the article says which I linked for you. I understand that geons may not physically exist, but that is irrelevant in this case since all we are
talking about are mathematical properties of the field equations, i.e. self-interaction. If you don't wish to use geons, there are many other solutions to the field equations which demonstrate
the same principle; I just picked geons as an intuitive example to illustrate what I meant to say.
You are just arguing from hypothetical evidence. Mathematics tries to explain physical evidence, so from mathematics you cannot conclude hypothetical evidence.
No it doesn't. You really don't want to understand, do you.
Do you think GR has only an attractive effect?
But that is exactly the point; the phenomenon of perihelion precession does NOT fit the standard force-based model, being Newtonian mechanics. Only GR gives the correct prediction, and GR does not involve forces in the precession calculation. Why is this so difficult to understand?
You are just misinterpreting my words and making your own statements. I have only said "force" and not "force based model". I am quoting my statement again for your reference.
hansda said:
If some 'physical phenomena' fits well with the 'standard definition' of "force", whats wrong in calling this a "force".
See the following quote from the abstract of the above link. I have highlighted where "repulsive force" is mentioned.
Like I said, you do not understand what you are reading. There is nothing about a "repulsive force of the Sun".
Not open for further replies. | {"url":"https://www.sciforums.com/threads/why-two-mass-attracts-each-other.134021/page-42","timestamp":"2024-11-08T13:13:30Z","content_type":"text/html","content_length":"166216","record_id":"<urn:uuid:bdf0b18b-b3dc-42f3-b7fa-2c6b4634e49d>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00642.warc.gz"} |