A Roadmap to Learn Data Science by the Close of 2023!
Start with an overview of Data Science. Read some Data Science blogs and research Data Science-related topics.
For example:
1. Read blogs introducing Data Science,
2. Why to choose Data Science as a career,
3. Industries that benefit the most from Data Science,
4. Top 10 Data Science Skills to Learn in 2020, etc.
Then commit fully to starting your journey in Data Science. Motivate yourself to learn Data Science and build some awesome projects along the way. Do it regularly, and learn one new Data Science
concept at a time. It also helps to join some workshops or conferences on Data Science before you start your journey. Make your goal clear and move on toward
your goal.
1) Mathematics
Math skills are very important, as they help us understand the various machine learning algorithms that play an important role in Data Science.
Part 1:
• Linear Algebra
• Analytic Geometry
• Matrix
• Vector Calculus
• Optimization
Part 2:
• Regression
• Dimensionality Reduction
• Density Estimation
• Classification
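To make the Regression topic above concrete, here is a minimal sketch (my own illustration, not part of the original roadmap) of simple linear regression solved in closed form:

```python
# Simple linear regression (y = a + b*x) via the closed-form
# least-squares solution, using only the Python standard library.

def fit_line(xs, ys):
    """Return intercept a and slope b minimizing sum((y - (a + b*x))^2)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sxx = sum((x - mean_x) ** 2 for x in xs)
    b = sxy / sxx
    a = mean_y - b * mean_x
    return a, b

# Perfectly linear data recovers the generating coefficients.
xs = [0, 1, 2, 3, 4]
ys = [1 + 2 * x for x in xs]
a, b = fit_line(xs, ys)
```

The same formula is what libraries like scikit-learn compute under the hood for one feature.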
2) Probability
Probability is also significant to statistics, and it is considered a prerequisite for mastering machine learning.
• Introduction to Probability
• 1D Random Variable
• Functions of One Random Variable
• Joint Probability Distribution
• Discrete Distribution;
• Binomial (Python | R)
• Bernoulli
• Geometric etc
• Continuous Distribution;
• Uniform
• Exponential
• Gamma
• Normal Distribution (Python | R)
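A quick way to build intuition for the distributions listed above is to sample from them. This sketch (my own, not from the roadmap) uses only Python's standard library:

```python
# Sampling from a few of the distributions listed above using the
# standard library's random module (illustrative only).
import random

random.seed(0)

# Bernoulli(p): 1 with probability p, else 0.
def bernoulli(p):
    return 1 if random.random() < p else 0

# Binomial(n, p) as a sum of n Bernoulli trials.
def binomial(n, p):
    return sum(bernoulli(p) for _ in range(n))

samples = [binomial(10, 0.5) for _ in range(2000)]
mean = sum(samples) / len(samples)      # should be near n*p = 5

normal_draw = random.gauss(0, 1)        # Normal(0, 1)
uniform_draw = random.uniform(0, 1)     # Uniform(0, 1)
expo_draw = random.expovariate(2.0)     # Exponential(rate=2)
```

In practice you would reach for NumPy or SciPy (in Python) or the built-in `rbinom`/`rnorm` family (in R) instead.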
3) Statistics
Understanding Statistics is very important, as it is a core part of data analysis.
Introduction to Statistics
• Data Description
• Random Samples
• Sampling Distribution
• Parameter Estimation
• Hypotheses Testing (Python | R)
• ANOVA (Python | R)
• Reliability Engineering
• Stochastic Process
• Computer Simulation
• Design of Experiments
• Simple Linear Regression
• Correlation
• Multiple Regression (Python | R)
• Nonparametric Statistics;
• Sign Test
• The Wilcoxon Signed-Rank Test (R)
• The Wilcoxon Rank Sum Test
• The Kruskal-Wallis Test (R)
• Statistical Quality Control
• Basics of Graphs
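As one small worked example from the list above (my own sketch, not roadmap material), the Pearson correlation coefficient can be computed from scratch:

```python
# Pearson correlation coefficient, one of the statistics topics above,
# computed from its definition (standard library only).
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Perfectly correlated data gives r = 1; anti-correlated gives r = -1.
r_pos = pearson_r([1, 2, 3, 4], [10, 20, 30, 40])
r_neg = pearson_r([1, 2, 3, 4], [8, 6, 4, 2])
```

Libraries such as SciPy (`scipy.stats.pearsonr`) or R's `cor()` give the same value plus a significance test.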
4) Programming
One needs to have a good grasp of programming concepts such as data structures and algorithms. The programming languages most used are Python, R, Java, and Scala; C++ is also useful in places where
performance is critical.
5) Machine Learning
ML is one of the most vital parts of Data Science and one of the hottest subjects of research, with new advancements made every year. At a minimum, one needs to understand the basic
algorithms of Supervised and Unsupervised Learning. There are multiple libraries available in Python and R for implementing these algorithms.
• Introduction:
• Intermediate:
6) Deep Learning
Deep Learning uses TensorFlow and Keras to build and train neural networks for structured data.
• Artificial Neural Network
• Convolutional Neural Network
• Recurrent Neural Network
• TensorFlow
• Keras
• PyTorch
• A Single Neuron
• Deep Neural Network
• Stochastic Gradient Descent
• Overfitting and Underfitting
• Dropout and Batch Normalization
• Binary Classification
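Several of the bullets above (a single neuron, stochastic gradient descent, binary classification) can be demonstrated in a few lines without any framework. This is my own illustrative sketch; real work would use TensorFlow, Keras, or PyTorch:

```python
# A single neuron (logistic unit) trained with stochastic gradient
# descent on a toy binary-classification task.
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: label is 1 when x > 0, else 0.
data = [(x, 1 if x > 0 else 0) for x in [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]]

w, b, lr = 0.0, 0.0, 0.5
for epoch in range(200):
    random.shuffle(data)            # "stochastic": one example at a time
    for x, y in data:
        p = sigmoid(w * x + b)
        # Gradient of the log-loss with respect to w and b.
        w -= lr * (p - y) * x
        b -= lr * (p - y)

accuracy = sum((sigmoid(w * x + b) > 0.5) == (y == 1) for x, y in data) / len(data)
```

Stacking many such neurons in layers is exactly what the deep-learning frameworks above automate.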
7) Feature Engineering
In Feature Engineering, discover the most effective way to improve your models.
• Baseline Model
• Categorical Encodings
• Feature Generation
• Feature Selection
8) Natural Language Processing
In NLP, distinguish yourself by learning to work with text data.
• Text Classification
• Word Vectors
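Text classification, the first topic above, can be sketched with nothing but word counts. This toy nearest-centroid classifier is my own illustration; real NLP work would use scikit-learn, spaCy, or a transformer library:

```python
# Bag-of-words text classification with a nearest-centroid rule.
from collections import Counter

train = [
    ("great movie loved it", "pos"),
    ("wonderful acting great plot", "pos"),
    ("terrible movie hated it", "neg"),
    ("awful plot terrible acting", "neg"),
]

# One word-count "centroid" per class.
centroids = {}
for text, label in train:
    centroids.setdefault(label, Counter()).update(text.split())

def classify(text):
    words = Counter(text.split())
    # Score each class by weighted word overlap with its centroid.
    scores = {
        label: sum(words[w] * counts[w] for w in words)
        for label, counts in centroids.items()
    }
    return max(scores, key=scores.get)

prediction = classify("loved the great acting")
```

Word vectors replace these sparse counts with dense learned embeddings, but the classification machinery stays the same.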
9) Data Visualization Tools
Make great data visualizations. A great way to see the power of coding!
1. Excel VBA
2. BI (Business Intelligence):
• Tableau
• Power BI
• QlikView
• Qlik Sense
10) Deployment
The last part is deployment. Whether you are a fresher or have 5+ or even 10+ years of experience, deployment is necessary, because a deployed project is concrete
proof of the work you have done.
• Microsoft Azure
• Heroku
• Google Cloud Platform
• Flask
• Django
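At its core, deploying a model means wrapping it in an HTTP endpoint. The framework-free sketch below (my own illustration; all names are hypothetical) shows the handler logic; with Flask you would expose `predict` via a route such as `@app.route("/predict", methods=["POST"])`:

```python
# Framework-free sketch of a prediction endpoint's core logic.
# In Flask, a route function would call predict() on the raw request body.
import json

def fake_model(features):
    # Stand-in for a real trained model: a fixed linear score.
    return sum(features) > 1.0

def predict(request_body: str) -> str:
    """Take a JSON request body, return a JSON response body."""
    try:
        payload = json.loads(request_body)
        features = [float(x) for x in payload["features"]]
    except (ValueError, KeyError, TypeError):
        return json.dumps({"error": "expected {'features': [numbers]}"})
    return json.dumps({"prediction": int(fake_model(features))})

response = predict('{"features": [0.4, 0.9]}')
```

Keeping the handler free of framework code like this also makes the endpoint easy to unit-test before pushing it to Heroku, Azure, or GCP.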
11) Keep Practicing
"Practice makes a man perfect" — the saying tells us the importance of continuous practice when learning any subject.
So keep practicing and improving your knowledge day by day. Below is a complete diagrammatical representation of the Data Scientist Roadmap.
|
{"url":"https://dev.to/ngenzimativo/a-roadmap-to-learn-datascience-by-the-close-of-2023-476l","timestamp":"2024-11-05T00:05:03Z","content_type":"text/html","content_length":"73261","record_id":"<urn:uuid:48fc1005-442c-4f86-8f74-23c38628fca0>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00296.warc.gz"}
|
Università degli Studi di Perugia
Study-unit Code
Course Regulation
Coorte 2020
Type of study-unit
Type of learning activities
Attività formativa integrata
Code 70097005
CFU 5
Teacher Giuseppe Saccomandi
Teachers • Giuseppe Saccomandi
Hours • 45 ore - Giuseppe Saccomandi
Learning activities Base
Area Matematica, informatica e statistica
Academic discipline MAT/09
Type of study-unit
Language of instruction Italian, with online lectures in English
Contents Basic methods of Operations Research
Reference texts https://www.youtube.com/yongwang
Educational objectives The ability to solve some linear optimisation problem with and without software tools.
Prerequisites Calculus
Teaching methods Classical
Other information None
Learning verification modality Oral and written examination
Extended program
o Linear Programming
o Simplex Algorithm and Goal Programming
o Sensitivity Analysis and Duality
o Transportation, Assignment, and Transshipment Problems
o Network Models
o Integer Programming
o Nonlinear Programming
o Decision Making under Uncertainty
o Game Theory
o Inventory Theory
o Markov Chains
o Dynamic Programming
o Queuing Theory
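As a small illustration of the linear programming topic listed above (my own toy example, not course material): for two variables, an optimum of a linear objective lies at a vertex of the feasible region, so a tiny LP can be solved by enumerating constraint intersections.

```python
# Solve a tiny LP by brute-force vertex enumeration (2 variables only).
# maximize 3x + 2y  subject to  x + y <= 4,  x <= 3,  y <= 2,  x, y >= 0
from itertools import combinations

# Constraints as a*x + b*y <= c (including the sign constraints).
cons = [(1, 1, 4), (1, 0, 3), (0, 1, 2), (-1, 0, 0), (0, -1, 0)]

def feasible(x, y, eps=1e-9):
    return all(a * x + b * y <= c + eps for a, b, c in cons)

best = None
for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue                      # parallel constraint lines
    x = (c1 * b2 - c2 * b1) / det     # intersection by Cramer's rule
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        value = 3 * x + 2 * y
        if best is None or value > best[0]:
            best = (value, x, y)
```

The simplex algorithm covered in the course walks these vertices cleverly instead of enumerating them all.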
Code 70026681
CFU 4
Teacher Paolo Carbone
Teachers • Paolo Carbone
Hours • 54 ore - Paolo Carbone
Learning activities Affine/integrativa
Area Attività formative affini o integrative
Academic discipline SECS-S/02
Type of study-unit
Language of instruction Italian
Contents Introduction to probability theory: continuous, discrete and mixed random variables. Probability density functions. Probability distribution functions. Joint random variables.
Introduction to stochastic processes: definitions, autocorrelation and autocovariance functions. Fundamentals of stochastic processes.
Reference texts Roy D. Yates, David J. Goodman, Probability and Stochastic Processes, John Wiley & Sons Inc; 2nd International Edition, 2004.
Handouts by the instructor.
Educational objectives The objective of this module is to provide the students with the knowledge to correctly apply probability theory and measurement theory. At the end of this class the student
will have acquired:
- the concept of continuous and discrete random variable and its PDF and CDF
- the concept of function of a random variable
- the concept of joint random variables and vector of random variables
- the concept of function of two random variables
- the concept of stochastic process and of its general properties
Moreover, he will be able to:
- solve exercises that require modeling using discrete, continuous and mixed random variables
- solve exercises using the concept of stochastic process
Prerequisites Calculus I is mandatory. Students must also be able to perform simple mathematical modeling in two dimensions and capable to solve simple double integrals.
Teaching methods Frontal lectures at the blackboard. Students are expected to provide active participation and show autonomous study capabilities. Proficiency in solving exercises can only be
developed by complementing attendance of lectures with dedicated sessions in the solution of exercises at home.
Other Information about available services for people with disabilities and/or with learning disabilities, see:
Learning verification modality Written and oral tests. The written test requires solving two exercises (7 and 8 points each) and answering 15 quizzes with 4 possible answers, of which only one is
correct. Each correct quiz answer is assigned 1 point; a wrong answer results in minus half a point. The oral examination covers all course content and consists of questions about the theory and the
solution of exercises. Its length is about 20-25 minutes. The final grade is based on the grades in both the written and oral examinations.
Extended program Set theory. Sample spaces and random events. How to assign probabilities: classical, empirical and subjective approach. Conditional probability. Total probability theorem. Bayes
theorem. Combinatorial calculus: permutations, dispositions, combinations. Random variables. Cumulative distribution function. Probability density function. Discrete random variables:
Bernoulli, geometrical, binomial, Pascal, uniform discrete. Mode, median, expected value. Transformed random variables. Expected value of a transformed random variable. Variance and
standard deviation. Central and non-central moments. Conditional mass probability. Continuous random variables. Cumulative and density functions. Expected value. Probability models:
uniform, exponential, Gaussian. Mixed random variables. Transformed continuous random variables. Conditional continuous random variables. Couples of random variables. Marginal
probability density functions. Transformation of two random variables. Rayleigh and Rice probability models. Orthogonal random variables. Correlation, covariance. Correlation
coefficient. Conditioning two random variables. Random vectors. Gaussian random vectors. Central limit theorem. De Moivre-Laplace formula. Introduction to stochastic processes. Moments.
Wide-sense and strict-sense stationary random processes. Ergodic processes.
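The central limit theorem mentioned in the program can be illustrated empirically in a few lines (my own sketch, not course material): means of many uniform samples concentrate near the population mean, with spread shrinking like 1/sqrt(n).

```python
# Central limit theorem, empirically: averages of n Uniform(0, 1) draws
# cluster around 0.5 with standard deviation close to sqrt(1/12)/sqrt(n).
import random
import statistics

random.seed(42)

def sample_mean(n):
    return sum(random.random() for _ in range(n)) / n

means = [sample_mean(100) for _ in range(1000)]
center = statistics.mean(means)     # expect ~0.5
spread = statistics.stdev(means)    # expect ~ sqrt(1/12)/10, about 0.029
```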
|
{"url":"https://www.unipg.it/en/ects/ects-course-catalogue-2023-24?annoregolamento=2023&layout=insegnamento&idcorso=195&idinsegnamento=191917","timestamp":"2024-11-12T07:13:02Z","content_type":"application/xhtml+xml","content_length":"61210","record_id":"<urn:uuid:fa89ecbf-dcf4-4e99-b7f8-1ce193fa2ece>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00692.warc.gz"}
|
How to specify a random intercept cross-lagged panel model in OpenMx?
Replied on Fri, 02/03/2023 - 11:02
I ran
It looked like you set the variances of `RIx` and `RIy` to zero. So, I added
mxPath(c('RIx', 'RIy'), arrows=2, values=1.5),
into your model and it ran, only giving a warning about the Hessian being non-positive definite (probably because the data were nonsense).
Does that help?
Replied on Fri, 02/03/2023 - 11:59
Dear mhunter, thank you! My
Dear mhunter, thank you! My code already contained the line:
mxPath(from=c("RIy", "RIx"), arrows=2, free=TRUE, values=0)
If I set this to
mxPath(from=c("RIy", "RIx"), arrows=2, free=TRUE, values=1.5)
I indeed also get the error about the Hessian (also with the real data). Also, there are no estimates - instead, the starting values are returned:
Warning message:
In model 'riclpm' Optimizer returned a non-zero status code 5. The Hessian at the solution does not appear to be convex. See ?mxCheckIdentification for possible diagnosis (Mx status RED).
> summary(res)
Summary of riclpm
The Hessian at the solution does not appear to be convex. See ?mxCheckIdentification for possible diagnosis (Mx status RED).
free parameters:
name matrix row col Estimate Std.Error A
1 x_x A cx2 cx1 0.1 NA !
2 y_x A cy2 cx1 0.1 NA !
3 x_y A cx2 cy1 0.1 NA !
4 y_y A cy2 cy1 0.1 NA !
5 riclpm.S[9,9] S RIx RIx 1.5 NA !
6 riclpm.S[9,10] S RIx RIy 0.1 NA !
7 riclpm.S[10,10] S RIy RIy 1.5 NA !
8 vx1 S cx1 cx1 0.1 NA !
9 vx S cx2 cx2 0.1 NA !
10 rxy1 S cx1 cy1 0.1 NA !
11 vy1 S cy1 cy1 0.1 NA !
12 rxy S cx2 cy2 0.1 NA !
13 vy S cy2 cy2 0.1 NA !
14 riclpm.M[1,11] M 1 cx1 0.1 8601467937 !
15 riclpm.M[1,12] M 1 cx2 0.1 8866184397 !
16 riclpm.M[1,13] M 1 cx3 0.1 9126759905 !
17 riclpm.M[1,14] M 1 cx4 0.1 7286763147 !
18 riclpm.M[1,15] M 1 cy1 0.1 8601467226 !
19 riclpm.M[1,16] M 1 cy2 0.1 8866192940 !
20 riclpm.M[1,17] M 1 cy3 0.1 9126760586 !
21 riclpm.M[1,18] M 1 cy4 0.1 7286752751 !
Replied on Fri, 02/03/2023 - 12:04
Interestingly, if I re-run
Interestingly, if I re-run that model with mxTryHard, the results are identical to the lavaan solution. So... I guess this points to a problem of bad starting values? I still don't understand why the
error about the Hessian appears, nor why the model doesn't converge with mxRun.
Replied on Fri, 02/03/2023 - 13:21
In reply to Interestingly, if I re-run by cjvanlissa
Starting values are not feasible
If I use `mxOption(NULL,"Default optimizer","NPSOL")` to switch optimizers to NPSOL, and then run the model, NPSOL returns status code 10 ("Starting values are not feasible"). Looking at the gradient
with `res$output$gradient` reveals that some of the gradient elements are `NA` at the start values.
I'm not sure why there's no warning about the infeasible start when running the model with SLSQP, which is the on-load default optimizer. I think there ought to be such a warning. It's a useful
Replied on Fri, 02/03/2023 - 13:40
In reply to Starting values are not feasible by AdminRobK
Thank you, AdminRobK! I
Thank you, AdminRobK! I understand what you're saying, but I do not know what I should change to avoid the gradients being NA at the start values. Could you point me in the right direction please?
Replied on Fri, 02/03/2023 - 14:40
In reply to Thank you, AdminRobK! I by cjvanlissa
start values
I don't have any specific advice, beyond adjusting the start values. It looks like the set of start values you chose initialized optimization near the boundary of the feasible space, so that
numerically differentiating the fitfunction led to non-finite values of some of the gradient elements.
As far as finding good start values is concerned, mhunter has already in this thread mentioned `mxTryHard()` and `mxAutoStart()`.
Replied on Fri, 02/03/2023 - 13:58
mhunter Joined: 07/31/2009
Starting values matter
Ahh! I missed that you already had the variances for `RIx` and `RIy`. The problem seems to have been bad starting values.
As a general rule, starting values of zero are not a good idea for many parameters in many models. The initial problem was that you started a covariance matrix at all zeros. OpenMx probably
(depending on mxOptions) moved these starting values to all 0.1. However, a covariance matrix of all 0.1 is also nonsense. So, you gave OpenMx nonsense, implausible starting values, and OpenMx could
not recover from them.
When I changed the starting value of the variances of `RIx` and `RIy` to 1.5, OpenMx did not find a reasonable solution, but at least it was able to start. `mxTryHard()` tries several "random"
starting values, and at least one of those worked to get a reasonable solution.
OpenMx requires users to think about plausible starting values for their parameters. lavaan generally does not. One option, if you're really struggling to think of plausible starting values is to use
`mxAutoStart()` which uses unweighted (or optionally weighted) least squares to find starting values on many models.
ss <- mxAutoStart(riclpm)
res <- mxRun(ss)
Replied on Fri, 02/03/2023 - 14:20
In reply to Starting values matter by mhunter
Ahh, of course. I somehow
Ahh, of course. I somehow misremembered that mxRun wrapped mxAutoStart. Thank you!
Replied on Mon, 04/17/2023 - 10:02
lf-araujo Joined: 11/25/2020
Exact Hamaker2015 specs
I came across this discussion this morning after trying to specify the Hamaker (2015) model exactly, and I reached the same specs as you. However, there aren't variances associated with the ps and qs
in hers (p. 115 of the paper)! But I couldn't get this to work in OpenMx.
The relevant lines in your code are:
mxPath(from=xminone, arrows=2, free=TRUE, values=c(.1), labels = "vx"),
mxPath(from=yminone, arrows=2, free=TRUE, values=c(.1), labels = "vy"),
Also, there are no covariances between xminone and yminone; it should be between vx and vy (or I am missing something). The line below would need to be edited too.
mxPath(from=xminone, to = yminone, arrows=2, free=TRUE, values=0, labels = "rxy"),
Anyway, if I make both changes so as to match the paper's spec exactly, I always end up with "All fit attempts resulted in errors - check starting values or model specification".
- Have you tried to reproduce Hamaker's exact specification?
|
{"url":"https://openmx.ssri.psu.edu/comment/9713","timestamp":"2024-11-14T15:00:50Z","content_type":"text/html","content_length":"50817","record_id":"<urn:uuid:c7348887-47d4-4526-bab7-5eb346015b12>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00539.warc.gz"}
|
Mastering Excel Formula Copying Techniques - D2ArmorPicker
Mastering Excel Formula Copying Techniques. Microsoft Excel is a powerful tool used by millions worldwide for data analysis, financial modeling, and various other applications. One of its key
features is the ability to use and manipulate formulas, which can perform calculations and automate tasks. In this article, we will explore the implications of copying a formula from cell D49 to E49
in Excel. We will cover the nature of relative and absolute cell references, how Excel handles copied formulas, and practical examples to illustrate these concepts.
The Basics of Excel Formulas
Before diving into the specifics of copying a formula, it’s essential to understand the basics of how formulas work in Excel. Formulas in Excel are
expressions that operate on values in a range of cells or a single cell. They can perform operations such as addition, subtraction, multiplication, and division, as well as more complex functions
like SUM, AVERAGE, VLOOKUP, and many others.
Relative and Absolute References
In Excel, cell references within formulas can be relative, absolute, or mixed. These references determine how a formula behaves when copied from one cell to another.
• Relative References: These adjust based on the relative position of rows and columns. For example, if a formula in D49 is =B49 + C49 and is copied to E49, it will adjust to =C49 + D49.
• Absolute References: These remain constant regardless of where they are copied. An absolute reference uses dollar signs ($) to fix the column and/or row. For example, if a formula in D49 is =
$B$49 + $C$49, it will remain =$B$49 + $C$49 when copied to E49.
• Mixed References: These are a combination of relative and absolute references. For example, $B49 + C$49 will adjust partially when copied.
Understanding these references is crucial for predicting how formulas will change when copied to different cells.
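The adjustment rules above are mechanical enough to simulate. The sketch below (my own illustration, not Excel's actual engine) shifts the relative parts of A1-style references when a formula is copied a given number of columns and rows:

```python
# Simulate how Excel rewrites cell references when a formula is copied.
# Relative parts shift by the copy offset; parts anchored with $ do not.
import re

REF = re.compile(r"(\$?)([A-Z]+)(\$?)([0-9]+)")

def col_to_num(col):
    n = 0
    for ch in col:
        n = n * 26 + (ord(ch) - ord("A") + 1)
    return n

def num_to_col(n):
    col = ""
    while n:
        n, rem = divmod(n - 1, 26)
        col = chr(ord("A") + rem) + col
    return col

def copy_formula(formula, d_cols, d_rows):
    def shift(m):
        col_abs, col, row_abs, row = m.groups()
        if not col_abs:
            col = num_to_col(col_to_num(col) + d_cols)
        if not row_abs:
            row = str(int(row) + d_rows)
        return f"{col_abs}{col}{row_abs}{row}"
    return REF.sub(shift, formula)

# Copying one column to the right (D49 -> E49), as in the examples:
relative = copy_formula("=B49 + C49", 1, 0)       # becomes "=C49 + D49"
absolute = copy_formula("=$B$49 + $C$49", 1, 0)   # unchanged
mixed = copy_formula("=$B49 + C$49", 1, 0)        # becomes "=$B49 + D$49"
```

Running it reproduces all three of the D49-to-E49 examples discussed in this article.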
What Happens When You Copy a Formula from D49 to E49?
Example 1: Simple Addition with Relative References
Let’s start with a straightforward example. Suppose the formula in cell D49 is =B49 + C49. This formula adds the values in cells B49 and C49. When you copy
this formula from D49 to E49, Excel will adjust the cell references based on their relative positions. The new formula in E49 will be =C49 + D49.
• Original Formula in D49: =B49 + C49
• Copied Formula in E49: =C49 + D49
This adjustment occurs because the original formula uses relative references. Excel maintains the same relative distance between the cells referenced in the formula and the new cell (E49) where the
formula is copied.
Example 2: Using Absolute References
Now, consider a formula with absolute references. Suppose the formula in cell D49 is =$B$49 + $C$49. This formula adds the values in cells B49 and C49 but uses absolute references to ensure that the
specific cells are always referenced, regardless of where the formula is copied.
• Original Formula in D49: =$B$49 + $C$49
• Copied Formula in E49: =$B$49 + $C$49
Since the formula uses absolute references, the copied formula in E49 remains exactly the same, pointing to the fixed cells B49 and C49.
Example 3: Mixed References
Consider a formula with mixed references. Suppose the formula in cell D49 is =$B49 + C$49. This formula adds the value in the fixed column B of the same row (49) to the value in the fixed row 49 of
the adjacent column C.
• Original Formula in D49: =$B49 + C$49
• Copied Formula in E49: =$B49 + D$49
In this case, the column reference for B remains fixed, but the column reference for C adjusts to D. The row references happen not to change here because the copy moves only one column to the
right; the relative row in $B49 would adjust if the formula were copied down, while the anchored row in C$49 would not.
Practical Applications and Examples
Understanding how formulas adjust when copied is not just theoretical—it has practical applications in various scenarios, such as data analysis, budgeting, and report generation.
Practical Example 1: Sales Report
Imagine you are creating a sales report. In column D, you calculate the total sales by adding the base price (column B) and the additional charges (column C).
• Original Formula in D49: =B49 + C49
• Copied Formula in E49: =C49 + D49
If you want to calculate the percentage increase in sales, you need to adjust your references accordingly. For instance:
• Percentage Increase Formula in E49: =(D49 - B49) / B49
When copying this formula, ensure the references correctly reflect the cells needed for each calculation.
Practical Example 2: Budget Planning
In budget planning, you might use absolute references to ensure consistency. Suppose you have a fixed cost in cell B1 that applies to multiple calculations in your spreadsheet.
• Original Formula in D49: =$B$1 + C49
• Copied Formula in E49: =$B$1 + D49
Using absolute references ensures that the fixed cost from cell B1 is consistently added across different rows and columns.
Tips for Managing Formulas in Excel
Tip 1: Use the Fill Handle
Excel’s fill handle allows you to quickly copy formulas across a range of cells. Click on the bottom-right corner of the cell containing your formula and drag it across the cells where you want to
copy the formula. Excel will automatically adjust the references based on the type of reference (relative, absolute, or mixed).
Tip 2: Verify Adjustments with F2
After copying a formula, press F2 to enter edit mode. This allows you to see which cells are being referenced in the copied formula, ensuring that it adjusts correctly.
Tip 3: Use Named Ranges
Named ranges can simplify formulas and make them easier to understand and manage. Instead of using cell references like B49, you can define a name for that range (e.g., BasePrice) and use it in your
formula. This approach can reduce errors when copying formulas.
Tip 4: Utilize Excel’s Formula Auditing Tools
Excel offers tools to help you trace and audit formulas, ensuring accuracy and identifying errors. Use features like Trace Precedents and Trace Dependents to understand how formulas are connected
across your spreadsheet.
Copying formulas in Excel is a fundamental skill that enhances your ability to analyze and manipulate data efficiently. Understanding the behavior of relative, absolute, and mixed references is
crucial for ensuring your formulas work correctly when copied to new locations. Whether you’re creating a sales report, planning a budget, or performing complex data analysis, mastering these
concepts will improve your proficiency with Excel and help you achieve accurate results.
By leveraging the power of Excel’s formulas and understanding how they adjust when copied, you can save time, reduce errors, and streamline your data management tasks. Remember to practice using
different types of references and utilize Excel’s built-in tools to verify and audit your formulas. With these skills, you’ll be well-equipped to handle a wide range of data-driven challenges in your
professional and personal projects.
|
{"url":"https://d2armorpicker.org/if-the-formula-in-cell-d49-is-copied-to-cells-e49/","timestamp":"2024-11-12T06:43:50Z","content_type":"text/html","content_length":"59399","record_id":"<urn:uuid:eac4b5d6-06e6-4a5c-b4ed-55ef23dafede>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00837.warc.gz"}
|
Time and Quantum Mechanics
“… there are known knowns: there are things we know we know. We also know there are known unknowns: that is to say we know there are some things we do not know. But there are also unknown
unknowns — the ones we don’t know we don’t know.” — Donald Rumsfeld.
I’m doing a popular talk on “The Nature of Time & Quantum Mechanics” tomorrow at Balticon. I’m deliberately not including anything from my paper “Quantum Time“.
Instead I look at a couple of areas at the intersection of time & quantum mechanics. There are too many such areas for one talk. In accordance with my father’s rule of three (you can only get three
points across in any one talk) I selected three of them, one from each of Donald Rumsfeld’s categories.
1. The delayed choice quantum eraser. I find this amazing: if you try to see which slit the particle went thru in the double slit experiment, it becomes a single slit experiment. But if you do
something that should tell you which slit it went thru — and then deliberately erase your knowledge — the single slit experiment turns back to a double slit experiment & we recover the
interference pattern. And this is the case even if we do the probe/erase after the particle has gone thru the two slits! Weird but well understood & tested.
2. The time symmetric formalism of Aharonov, Bergmann, & Lebowitz. They formulated quantum mechanics in a time symmetric way, demonstrating that it is not essentially asymmetric in time. It’s just
usually drawn that way, as Jessica Qubit might put it. There has been some speculation that their formalism could imply retro causation. I doubt it myself but this would be a known unknown.
3. The competition between the inflationary universe model & the ekpyrotic (cyclic) model of the universe. The inflationary model now has a bit of competition in the ekpyrotic model of Steinhardt &
Turok (see their book Endless Universe for a popular treatment). Colliding branes, bouncing universes, & decaying dark energy oh my! We have no idea what about the start, expansion, & finish of
the universe we don’t know. We don’t even know if the terms start & finish make sense, universe-wise.
I’ve put the slides for the talk up as a pdf & as html.
I can no other answer make, but, thanks, and thanks.
Lately it appears to me what a long, strange trip it’s been.
— Robert Hunter of the Grateful Dead
We are all travellers in the wilderness of the world, and the best we can find in our travels is an honest friend.
— Robert Louis Stevenson
I thank my long time friend Jonathan Smith for invaluable encouragement, guidance, and practical assistance.
I thank the anonymous reviewer who pointed out that I was using time used in multiple senses in an earlier work.
I thank Ferne Cohen Welch for extraordinary moral and practical support.
I thank Linda Marie Kalb and Diane Dugan for their long and ongoing moral and practical support.
I thank my brothers Graham and Gaylord Ashmead and my brother-in-law Steve Robinson for continued encouragement.
I thank Oz Fontecchio, Bruce Bloom, Shelley Handin, and Lee and Diane Weinstein for listening to a perhaps baroque take on free will and determinism. I thank Arthur Tansky for many helpful
conversations and some proofreading. I thank Chris Kalb for suggesting the title.
I thank John Cramer, Robert Forward, and Catherine Asaro for helpful conversations (and for writing some fine SF novels). I thank Connie Willis for several entertaining conversations about wormhole
physics, closed causal loops and the like (and also for writing several fine SF stories).
I thank Stewart Personick for many constructive discussions. I thank Matt Riesen for suggesting the use of Rydberg atoms. I thank Terry the Physicist for useful thoughts on tunneling and for
generally hammering the ideas here. I thank Andy Love for some useful experimental suggestions, especially the frame mixing idea. I thank Dave Kratz for helpful conversations. I thank Paul Nahin for
some useful email. I thank Jay Wile for some necessary sarcasm.
I thank John Myers and others at QUIST and DARPA for useful conversations.I thank the participants at the third Feynman festival for many good discussions, including Gary Bowson, Fred Herz, Y. S.
Kim, Marilyn Noz, A. Vourdas, and others. I thank Howard Brandt for his suggestion of internal decoherence.
I thank the participants at The Clock and The Quantum Conference at the Perimeter Institute for many good discussions, including J. Barbour, L. Vaidman, R. Tumulka, S. Weinstein, J. Vaccaro, R.
Penrose, H. Price, and L. Smolin.
I thank the participants at the Third International Conference on the Nature and Ontology of Spacetime for many good discussions, including V. Petkov, W. Unruh, J. Ferret, H. Brown, and O. Maroney.
I thank the participants at the fourth Feynman festival for many good discussions, including N. Gisin, J. Peřina, Y. S. Kim, L. Skála, A. Vourdas, A. Khrennikov, A Zeilinger, J. H. Samson, and H.
I thank the librarians of Bryn Mawr College, Haverford College, and the University of Pennsylvania for their unflagging helpfulness. I thank Mark West and Ashleigh Thomas for help getting set up at
the University of Pennsylvania.
I thank countless other friends and acquaintances, not otherwise acknowledged, for listening to and often contributing to the ideas here.
I acknowledge a considerable intellectual debt to Yakir Aharonov, Julian Barbour, Paul Nahin, Huw Price, L. S. Schulman, Victor J. Stenger, and Dieter Zeh.
I thank Balticon for having me speak on this. And I thank Chris Heimark and the other members of my Macintosh Programming SIG for inviting a talk on quantum time.
Finally, I thank the six German students at the Cafe Destiny in Olomouc who over a round of excellent Czech beer helped push this to its final form.
And of course, none of the above are in any way responsible for any errors of commission or omission in this work.
Quantum Time now up on the physics archive.
|
{"url":"https://timeandquantummechanics.com/2010/05/","timestamp":"2024-11-09T17:03:58Z","content_type":"application/xhtml+xml","content_length":"57352","record_id":"<urn:uuid:8ce8850e-7af2-4e03-ba27-4964365e49cc>","cc-path":"CC-MAIN-2024-46/segments/1730477028125.59/warc/CC-MAIN-20241109151915-20241109181915-00626.warc.gz"}
|
Mathematical Christmas Cracker Jokes
Rummaging around on my computer today, I found a set of mathematical Christmas cracker jokes I wrote for a party thrown for a group of mathematicians a couple of years ago, where I hacked apart a set
of crackers and replaced the toys with tiny slide rules, the paper hats with ones cut into fractal curves along the top, and the existing awful jokes with awful mathematical ones. I thought I’d share
them with you all, since they’re more likely to be appreciated by maths fans.
I should add, these were designed to fit the Christmas cracker format of being terrible jokes, and I regret nothing. If you have any Christmas maths jokes of your own, feel free to share them in the
comments. Merry Christmas!
At Christmas, how will you perform the inverse operation to exponentiation?
Yule log.
What’s Santa Claus’s favourite graph with no loops?
The Christmas Tree.
What do algebraic geometers study at Christmas?
What’s purple and won’t get much for Christmas?
A finitely presented grape.
Where do all of Santa’s maps go to?
The Ho-Ho-Hodomain.
What do group theorists buy to hang on their doors at Christmas?
Wreath products.
Why does Father Christmas equal minus Christmas Father?
How does Santa solve systems of simultaneous congruences?
Using the Chinese Reindeer Theorem.
Why doesn’t Gödel’s constructible universe exist at Christmas?
Because there’s Nöel.
Why isn’t every man in a red suit with a beard Father Christmas?
Because correlation doesn’t imply Claus-ality.
Engineering Mechanics: Statics
Chapter 3: Rigid Body Basics
A couple is a set of equal and opposite forces that exerts a net moment on an object but no net force. Because the couple exerts a net moment without exerting a net force, couples are also sometimes
called pure moments.
The moment exerted by a couple also differs from the moment exerted by a single force in that it is independent of the location you are taking the moment about. In the example below we have a couple
acting on a beam. Each force has a magnitude F and the distance between the two forces is d.
Now we have some point A, which is distance x from the first of the two forces. If we take the moment of each force about point A, and then add these moments together for the net moment about point A
we are left with the following formula.
$$M=-(F\ast x)+(F\ast(x+d))$$
If we rearrange and simplify the formula above, we can see that the variable x actually disappears from the equation, leaving the net moment equal to the magnitude of the forces (F) times the
distance between the two forces (d).
$$M=-(F\ast x)+(F\ast x)+(F\ast d)$$
$$M=(F\ast d)$$
This means that no matter what value of x we have, the magnitude of the moment exerted by the couple will be the same. The magnitude of the moment due to the couple is independent of the location we
are taking the moment about. This will also work in two or three dimensions as well. The magnitude of the moment due to a couple will always be equal to the magnitude of the forces times the
perpendicular distance between the two forces.
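A quick numerical sketch confirms the derivation above: the moment of the couple about any point A reduces to F*d, no matter where A sits. The values of F and d below are arbitrary, chosen only for illustration.

```python
F = 10.0   # force magnitude (N), arbitrary
d = 0.5    # perpendicular distance between the two forces (m), arbitrary

def moment_about_A(x):
    # Sum of the moments of the two opposite forces about a point
    # located a distance x from the first force
    return -(F * x) + F * (x + d)

# The result is the same for every choice of x
for x in [0.0, 1.0, 2.5, 100.0]:
    assert abs(moment_about_A(x) - F * d) < 1e-9

print(F * d)  # 5.0
```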
Source: Engineering Mechanics, Jacob Moore et al., http://mechanicsmap.psu.edu/websites/3_equilibrium_rigid_body/3-3_couples/couples.html
Basically: Couples are made from two forces in opposite directions that create a moment around an axis
Application: Turning the steering wheel of your car, you push one hand up and the other down to turn the wheel. To calculate the size of the couple, you multiply the force exerted by the distance
between your hands (the diameter of the wheel).
Looking Ahead: While moments are more common in Ch 4 rigid body equations, it’s important to know what couples are and how to find them.
About Nemo · Nemo.jl
Nemo is a library for fast basic arithmetic in various commonly used rings, for the Julia programming language. Our aim is to provide a highly performant package covering
• Commutative Algebra
• Number Theory
• Group Theory
Nemo consists of wrappers of specialised C/C++ libraries:
Nemo also uses AbstractAlgebra.jl to provide generic constructions over the basic rings provided by the above packages.
Julia is a sophisticated, modern programming language which is designed to be both performant and flexible. It was written by mathematicians, for mathematicians.
The benefits of Julia include
• Familiar imperative syntax
• JIT compilation (provides near native performance, even for highly generic code)
• REPL console (cuts down on development time)
• Parametric types (allows for fast generic constructions over other data types)
• Powerful metaprogramming facilities
• Operator overloading
• Multiple dispatch (dispatch on every argument of a function)
• Efficient native C interface (little or no wrapper overhead)
• Experimental C++ interface
• Dynamic type inference
• Built-in bignums
• Able to be embedded in C programs
• High performance collection types (dictionaries, iterators, arrays, etc.)
• Jupyter support (for web based notebooks)
The main benefits for Nemo are the parametric type system and JIT compilation. The former allows us to model many mathematical types, e.g. generic polynomial rings over an arbitrary base ring. The
latter speeds up the runtime performance, even of highly generic mathematical procedures.
Epi Vignettes: Case-Control Study
Jul 1, 2015
A brief synopsis of epidemiologic study design and methods with sample analytic code in R.
In this second installment in the series, I discuss case-control sampling and analytic techniques. As before, the intention of this series is to: 1) briefly describe the study design, 2) qualitatively discuss the analysis strategy, and 3) quantitatively demonstrate the analysis and provide sample R code.
• Data description: Assume a binary outcome (case or control), an exposure and several covariates.
• Study design: Case-Control. Sampling in a case-control study is done by the outcome, with the exposure ascertained by retrospective analysis (e.g., asking the participants to recall). While a
case-control study can be prospective, those are infrequent; therefore this post will be concerned only with historic exposure assessment. A case, and one or more controls, are sampled from a
population with the goal of creating the counter-factual occurrence: the same person had and did not have the outcome. If the controls are as similar as possible to the cases on all other
characteristics, in theory the risk factors for disease can be elucidated. Case-control studies are frequently "matched" on one or more covariates to balance the groups on potential confounding
factors, in attempt to achieve exchangeability. The matching may be individual, exact matching, or by matching within a range of allowable values, termed frequency matching. Matched designs may
require special techniques in the analysis.
• Goal of analysis: Describe the odds of outcome given an exposure. More specifically, we are describing the odds of exposure conditioned on case status (case or control), which mirror the odds of
outcome given an exposure. Case-control studies cannot be used to describe the incidence of an outcome, as this is fixed by study design. That is, the researcher controls the ratio of cases to controls.
• Statistical techniques: Logistic regression techniques are appropriate when adjusting or controlling for potential confounding. Otherwise, a standard contingency table can be used to directly
calculate the odds of exposure. The measure of association in a logistic regression analysis will be on the log-odds scale, which is then converted to an odds ratio for presentation. The odds
ratio is not an intuitive concept; this post will assume the reader is familiar with odds. However, frequently the odds ratio is used synonymously with relative risk, which is incorrect. The odds
ratio is a biased estimate of the relative risk, which may approximate the true risk in certain situations (the rare outcome assumption). Consult an epidemiologist if you are uncomfortable with
the underlying assumptions of the odds ratio.
□ Unconditional logistic regression: This technique is appropriate in an unmatched case-control study, or one where frequency matching was used and multiple controls overlap multiple cases
(i.e., a control could be matched to more than one case). The beta estimates for each coefficient in the regression equation are interpreted as the log odds of outcome, given a unit change
(or presence of, in the case of categorical variables), and the exponentiated coefficients are interpreted as the corresponding change in odds.
□ Conditional logistic regression: This technique is appropriate in a matched study design. It is common to also include the matched variables in the model specification to control for possible
residual confounding from the matching process. In essence, the intercept (baseline log odds) is estimated for each matched pair. By failing to specify the matched-pairs, the estimation of
parameters would be incorrect (and possibly fail to converge). For each observation (subject) in the dataset, a matched pairs identifier specifies which case (or control) was matched to a
given subject. The interpretation of the coefficient estimates is the same as in unconditional logistic regression.
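The "standard contingency table" route mentioned above can be illustrated numerically. The sketch below uses Python (the cross-product odds ratio is language-agnostic) with entirely made-up cell counts, and adds a Woolf-method 95% confidence interval on the log-odds scale.

```python
import math

# Hypothetical 2x2 table (counts invented for illustration):
#                cases   controls
# exposed          40        60
# unexposed        20        80
a, b, c, d = 40, 60, 20, 80

odds_ratio = (a * d) / (b * c)            # cross-product ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)     # SE of log(OR), Woolf method
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)

print(round(odds_ratio, 2), round(lo, 2), round(hi, 2))  # 2.67 1.42 5.02
```

An unadjusted logistic regression on the same data would return an identical odds ratio; the regression machinery earns its keep once covariates are added.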
Sample codes in R
Unconditional logistic regression
model = glm(outcome~ exposure+covariates, data=casecontrol, family=binomial(link="logit"))
summary(model) #summary of the model
round(exp(coef(model)),2) #coefficient estimates: odds ratios
round(exp(confint(model)),2) #confidence intervals
Conditional logistic regression (package: survival)
library(survival) #provides the clogit() function
model = clogit(outcome~ exposure+covariates+matched_covariates+strata(matched_pairs_identifier), data=casecontrol)
summary(model) #summary of the model
round(exp(coef(model)),2) #coefficient estimates: odds ratios
round(exp(confint(model)),2) #confidence intervals
Cite: Goldstein ND. Epi Vignettes: Case-Control Study. Jul 1, 2015. DOI: 10.17918/goldsteinepi.
Mathematics Category
4th Grade Entrance Exam (7 words): remember to read the instructions at the bottom of the test
Example: Jane baked 115 muffins. Lisa baked 12 times as many. How many muffins did Lisa bake? [m] 1,380
There are 14,240 supplies in a classroom. They are arranged on shelves that hold 8 supplies each. How many shelves are in the classroom? [m] 1,780
Joe has 1,850 crayons. Ted has 739 crayons. How many more crayons does Joe have than Ted? [m] 1,111
Karen and Christopher made egg rolls to share at the school potluck. Christopher rolled 219 egg rolls. Karen rolled 229 egg rolls. What is the total number of egg rolls Karen and Christopher rolled? [m] 448
Created 2017-11-30
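The four listed answers can be checked with one line of arithmetic each; a quick Python sanity check:

```python
# Verify each quiz answer with plain arithmetic
assert 115 * 12 == 1380     # Lisa's muffins: 12 times Jane's 115
assert 14240 // 8 == 1780   # shelves: 14,240 supplies, 8 per shelf
assert 1850 - 739 == 1111   # how many more crayons Joe has than Ted
assert 219 + 229 == 448     # total egg rolls rolled

print("all four listed answers are arithmetically correct")
```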
The sector of the ring is the part of the circle that is bounded by the inner and outer arc of this ring and the two outer radii of this ring.
Formula of the area of the ring sector
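The formula itself did not survive extraction. Assuming the usual notation (inner radius $r$, outer radius $R$, central angle $\theta$ measured in radians), the standard result is

$$A=\frac{\theta}{2}\left(R^{2}-r^{2}\right)$$

or, when $\theta$ is given in degrees,

$$A=\frac{\pi\,\theta}{360}\left(R^{2}-r^{2}\right)$$

This is simply the difference of two circular-sector areas, $\frac{\theta}{2}R^{2}-\frac{\theta}{2}r^{2}$.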
Search: [complex] - Shaarli -- Adrien Brochier
Kontsevich's formality theorem and the consequent star-product formula rely on the construction of an $L_\infty$-morphism between the DGLA of polyvector fields and the DGLA of polydifferential
operators. This construction uses a version of graphical calculus. In this article we present the details of this graphical calculus with emphasis on its algebraic features. It is a morphism of
differential graded Lie algebras between the Kontsevich DGLA of admissible graphs and the Chevalley-Eilenberg DGLA of linear homomorphisms between polyvector fields and polydifferential operators.
Kontsevich's proof of the formality morphism is reexamined in this light and an algebraic framework for discussing the tree-level reduction of Kontsevich's star-product is described.
Mastering Array Math with PHP’s array_sum() Function: A Comprehensive Guide
If you’re working with PHP arrays, you might find yourself needing to calculate the sum of all array elements frequently. That’s where the built-in array_sum() function comes in handy! In this blog
post, we’ll dive deep into understanding what array_sum() is, its use cases, technical reference, and practical examples using popular PHP frameworks like Laravel, CakePHP, CodeIgniter, Blade,
Smarty, Drupal, and Symfony. Let’s get started!
What is array_sum()?
array_sum() is a PHP function that returns the sum of all elements in an array. This is particularly useful when you have an associative or numeric array where you want to find the total sum of its
values. It saves time and effort compared to manually iterating through the array and calculating each sum individually.
Use Cases
Here are some common scenarios where using array_sum() can simplify your PHP code:
1. Calculate the total price of an order array containing item prices.
2. Find the average of an array's values by dividing array_sum() by count(), the number of elements.
3. Sum up numerical data from a database query result to get statistics.
Availability and Compatibility
array_sum() is an essential part of PHP’s core library, meaning it’s available in all versions starting from PHP 4.0.1. It can be used with various popular PHP frameworks like Laravel, CakePHP,
CodeIgniter, Blade, Smarty, Drupal, and Symfony to perform array calculations more efficiently.
Technical Reference
array_sum() accepts an array as its only argument:
$sum = array_sum($yourArray);
This function returns the sum of all the values in the given array; array keys are ignored, so numeric and associative arrays behave the same. Reindexing with array_values() first is therefore optional and yields the same result:
$sum = array_sum(array_values($yourArray));
Practical Examples
Now that we’ve covered the basics let’s dive into some practical examples using popular PHP frameworks.
1. Laravel
Suppose you have an array of item prices in a Laravel controller:
$itemPrices = [3.5, 4.2, 7.8];
$totalPrice = array_sum($itemPrices);
2. CakePHP
In CakePHP, you can use array_sum() within your controller:
$items = [
    ['price' => 3.5],
    ['price' => 4.2],
    ['price' => 7.8]
];
$totalPrice = array_sum(array_column($items, 'price'));
3. CodeIgniter
Here’s how to use array_sum() in a CodeIgniter controller:
$itemPrices = [3.5, 4.2, 7.8];
$totalPrice = array_sum($itemPrices);
4. Blade (Laravel’s Templating Engine)
You can even use array_sum() in your Blade templates:
$prices = [3.5, 4.2, 7.8];
$totalPrice = array_sum($prices);
Total Price: {{ $totalPrice }}
5. Smarty (Template Engine)
In Smarty, you can use PHP functions like array_sum() directly within your templates:
{assign var="prices" value=[3.5, 4.2, 7.8]}
{assign var="totalPrice" value=`array_sum($prices)`}
Total Price: {$totalPrice}
6. Drupal
In Drupal, use array_sum() within your custom module’s PHP code:
$items = [3.5, 4.2, 7.8];
$totalPrice = array_sum($items);
7. Symfony
With Symfony, you can use array_sum() within your controller:
$items = [3.5, 4.2, 7.8];
$totalPrice = array_sum($items);
8. Yii (Framework)
In Yii, you can use array_sum() within your controller:
$itemPrices = [3.5, 4.2, 7.8];
$totalPrice = array_sum($itemPrices);
In conclusion, array_sum() is a powerful PHP function that can save you time and effort when calculating the sum of elements in an array. Its compatibility with various popular frameworks like
Laravel, CakePHP, CodeIgniter, Blade, Smarty, Drupal, Symfony, and Yii further enhances its versatility. By following this comprehensive guide, you now have a solid understanding of what array_sum()
is, its use cases, technical reference, and practical examples to help you implement it in your PHP projects more efficiently. Happy coding!
Shown alongside is a plot of binding energy per nucleon, Eb, against the nuclear mass M; A, B, C, D, E, F correspond to different nuclei. Consider four reactions:
(i) A + B → C + ε (ii) C → A + B + ε
(iii) D + E → F + ε (iv) F → D + E + ε
where ε is the energy released. In which reactions is ε positive?
Answer: ε is positive in reactions (i) and (iv).
A + B → C + ε is the fusion of two light nuclei, and F → D + E + ε is the fission of a heavy nucleus; both move the products toward higher binding energy per nucleon and therefore release energy.
The Beta Spectrum of He⁶: Limits on the Axial Vector and Pseudoscalar Coupling Constants of Beta Decay
1957 Doctoral Thesis
We have performed a careful measurement of the shape of the beta spectrum of He⁶. A detailed study of the phenomenon of electron scattering in our thin lens magnetic spectrometer enabled us to
interpret the spectrum shape from the end point at Wₒ = 3.50 ± .02 Mev. down to 1/14 Wₒ = 0.250 Mev. The experimental shape has been compared with the theoretically predicted shape for allowed
spectra. The influence of the pseudoscalar interaction on the shape of the He⁶ spectrum has also been considered. From these measurements we have been able to set limits on the Fierz interference in
the Gamow-Teller interaction as well as on the magnitude of the pseudoscalar coupling constants. These limits have been interpreted in terms of the relative magnitudes of the axial vector,
pseudoscalar, and tensor coupling constants using the two component theory of the neutrino and assuming that the complete beta decay Hamiltonian proposed by Lee and Yang is or is not invariant under
time reversal. We have also calculated the effect on the spectrum shape of the production of inner Bremsstrahlung in beta decay and have shown this effect to be at the limit of experimental
PDF: 1957 Arthur Schwarzschild PhD Dissertation - THE BETA-SPECTRUM OF HELIUM-6 (8.42 MB)
Thesis advisor: Wu, Chien-Shiung
Degree: Ph.D., Columbia University
Published here: January 3, 2020
How to Turn Off the Axes for Subplots in Matplotlib
How to Turn Off the Axes for Subplots in Matplotlib is an essential skill for data visualization enthusiasts and professionals alike. This article will delve deep into the various methods and
techniques to achieve this effect, providing you with a thorough understanding of the process. We’ll explore different scenarios, offer practical examples, and discuss best practices to help you
master the art of turning off axes for subplots in Matplotlib.
Understanding the Importance of Turning Off Axes in Matplotlib
Before we dive into the specifics of how to turn off the axes for subplots in Matplotlib, it’s crucial to understand why this technique is valuable. In many data visualization scenarios, you may want
to create clean, minimalist plots that focus solely on the data without the distraction of axes. This can be particularly useful when:
1. Creating heatmaps or image plots
2. Displaying multiple small plots in a grid layout
3. Emphasizing the overall pattern or trend rather than specific values
4. Designing infographics or artistic visualizations
By learning how to turn off the axes for subplots in Matplotlib, you'll have greater control over the appearance of your plots and be able to create more visually appealing and effective data visualizations.
Basic Techniques for Turning Off Axes in Matplotlib
Let’s start with some basic techniques for turning off axes in Matplotlib. These methods will work for both single plots and subplots.
Using set_axis_off()
One of the simplest ways to turn off the axes for a subplot in Matplotlib is by using the set_axis_off() method. This method completely removes the axes, including the axis lines, ticks, and labels.
Here’s a simple example:
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3, 4], [1, 4, 2, 3])
ax.set_axis_off()
plt.title("How to Turn Off the Axes for Subplots in Matplotlib - how2matplotlib.com")
plt.show()
In this example, we create a simple line plot and then use ax.set_axis_off() to remove the axes completely. The resulting plot will show only the line, without any axes or ticks.
Using set_visible(False)
Another approach to turn off the axes is by using the set_visible(False) method. This method allows you to hide specific elements of the axes, giving you more control over what is displayed.
Here’s an example:
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([1, 2, 3, 4], [1, 4, 2, 3])
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['bottom'].set_visible(False)
ax.spines['left'].set_visible(False)
ax.set_xticks([])
ax.set_yticks([])
plt.title("How to Turn Off the Axes for Subplots in Matplotlib - how2matplotlib.com")
plt.show()
In this example, we hide each spine (the lines that form the box around the plot) individually using set_visible(False). We also remove the ticks by setting them to empty lists. This approach gives
you more flexibility in choosing which elements to hide.
Advanced Techniques for Turning Off Axes in Subplots
Now that we’ve covered the basics, let’s explore some more advanced techniques for turning off axes in subplots. These methods are particularly useful when working with multiple subplots or when you
need more fine-grained control over the appearance of your plots.
Using plt.subplots_adjust()
When working with multiple subplots, you may want to adjust the spacing between them to create a more compact layout. The plt.subplots_adjust() function allows you to control the spacing around the
subplots, which can be useful when turning off axes.
Here’s an example:
import matplotlib.pyplot as plt
import numpy as np

fig, axs = plt.subplots(2, 2, figsize=(8, 8))
fig.suptitle("How to Turn Off the Axes for Subplots in Matplotlib - how2matplotlib.com")
for ax in axs.flat:
    ax.plot(np.random.rand(10))
    ax.set_axis_off()
plt.subplots_adjust(wspace=0, hspace=0)
plt.show()
In this example, we create a 2×2 grid of subplots, turn off the axes for each subplot, and then use plt.subplots_adjust() to remove the spacing between the subplots. This creates a more compact and
seamless appearance.
Using fig.tight_layout()
The tight_layout() function is another useful tool for adjusting the layout of your subplots. It automatically adjusts the spacing between subplots to minimize overlaps and maximize the use of the
figure area.
Here’s an example:
import matplotlib.pyplot as plt
import numpy as np

fig, axs = plt.subplots(2, 2, figsize=(8, 8))
fig.suptitle("How to Turn Off the Axes for Subplots in Matplotlib - how2matplotlib.com")
for ax in axs.flat:
    ax.plot(np.random.rand(10))
    ax.set_axis_off()
fig.tight_layout()
plt.show()
In this example, we create a similar 2×2 grid of subplots, turn off the axes, and then use fig.tight_layout() to automatically adjust the spacing. This can be particularly useful when working with
subplots of different sizes or when adding titles and labels.
Turning Off Axes for Specific Types of Plots
Different types of plots may require different approaches to turning off axes. Let’s explore some common plot types and how to effectively turn off their axes.
Heatmaps
Heatmaps are a great example of when you might want to turn off the axes to create a cleaner visualization. Here's how you can do it:
import matplotlib.pyplot as plt
import numpy as np

data = np.random.rand(10, 10)
fig, ax = plt.subplots()
im = ax.imshow(data, cmap='viridis')
ax.set_axis_off()
plt.colorbar(im)
plt.title("How to Turn Off the Axes for Subplots in Matplotlib - how2matplotlib.com")
plt.show()
In this example, we create a heatmap using imshow(), turn off the axes, and add a colorbar. The result is a clean heatmap visualization without distracting axes.
Image Plots
When displaying images, you often want to turn off the axes to show the image without any additional elements. Here’s how:
import matplotlib.pyplot as plt
import numpy as np

image = np.random.rand(100, 100)
fig, ax = plt.subplots()
ax.imshow(image, cmap='gray')
ax.set_axis_off()
plt.title("How to Turn Off the Axes for Subplots in Matplotlib - how2matplotlib.com")
plt.show()
This example creates a grayscale image plot and turns off the axes, resulting in a clean image display.
Polar Plots
Polar plots can also benefit from turning off axes in certain situations. Here’s an example:
import matplotlib.pyplot as plt
import numpy as np

r = np.arange(0, 2, 0.01)
theta = 2 * np.pi * r
fig, ax = plt.subplots(subplot_kw={'projection': 'polar'})
ax.plot(theta, r)
ax.set_axis_off()
plt.title("How to Turn Off the Axes for Subplots in Matplotlib - how2matplotlib.com")
plt.show()
In this example, we create a polar plot and turn off the axes, resulting in a clean spiral visualization.
Customizing Axis Visibility for Subplots
Sometimes, you may want more fine-grained control over which parts of the axes to display or hide. Let’s explore some techniques for customizing axis visibility in subplots.
Selectively Hiding Spines
You can hide specific spines (the lines that form the box around the plot) while keeping others visible. This can be useful for creating plots with a minimalist aesthetic.
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
x = np.linspace(0, 10, 100)
y = np.sin(x)
ax.plot(x, y)
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
ax.spines['left'].set_position(('data', 0))
ax.spines['bottom'].set_position(('data', 0))
plt.title("How to Turn Off the Axes for Subplots in Matplotlib - how2matplotlib.com")
plt.show()
In this example, we hide the top and right spines while keeping the left and bottom spines. We also position the remaining spines at the origin (0, 0) to create a minimalist plot with axes only at
the center.
Customizing Tick Visibility
You can also customize the visibility of ticks and tick labels independently of the axes themselves. This allows for more flexibility in your plot design.
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots()
x = np.linspace(0, 10, 100)
y = np.cos(x)
ax.plot(x, y)
ax.set_xticks([])
ax.set_yticks([])
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
plt.title("How to Turn Off the Axes for Subplots in Matplotlib - how2matplotlib.com")
plt.show()
In this example, we remove the ticks using set_xticks([]) and set_yticks([]), while keeping the bottom and left spines visible. This creates a plot with axis lines but no tick marks or labels.
Turning Off Axes for Multiple Subplots
When working with multiple subplots, you may want to turn off axes for some subplots while keeping them for others. Let’s explore some techniques for handling this scenario.
Using a Loop to Turn Off Axes
You can use a loop to iterate through your subplots and selectively turn off axes based on certain conditions.
import matplotlib.pyplot as plt
import numpy as np

fig, axs = plt.subplots(2, 2, figsize=(10, 10))
fig.suptitle("How to Turn Off the Axes for Subplots in Matplotlib - how2matplotlib.com")
for i, ax in enumerate(axs.flat):
    x = np.linspace(0, 10, 100)
    y = np.sin(x + i)
    ax.plot(x, y)
    if i % 2 == 0:  # Turn off axes for even-numbered subplots
        ax.set_axis_off()
    else:
        ax.set_title(f"Subplot {i+1}")
plt.tight_layout()
plt.show()
In this example, we create a 2×2 grid of subplots and use a loop to plot different sine waves in each subplot. We then use a condition to turn off the axes for even-numbered subplots while keeping
them for odd-numbered subplots.
Using GridSpec for Complex Layouts
For more complex layouts where you want fine-grained control over subplot sizes and axis visibility, you can use GridSpec.
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import numpy as np

fig = plt.figure(figsize=(12, 8))
gs = gridspec.GridSpec(3, 3)
ax1 = fig.add_subplot(gs[0, :])
ax2 = fig.add_subplot(gs[1, :-1])
ax3 = fig.add_subplot(gs[1:, -1])
ax4 = fig.add_subplot(gs[-1, 0])
ax5 = fig.add_subplot(gs[-1, -2])
axes = [ax1, ax2, ax3, ax4, ax5]
for i, ax in enumerate(axes):
    x = np.linspace(0, 10, 100)
    y = np.sin(x + i)
    ax.plot(x, y)
    if i in [1, 3]:  # Turn off axes for specific subplots
        ax.set_axis_off()
    else:
        ax.set_title(f"Subplot {i+1}")
fig.suptitle("How to Turn Off the Axes for Subplots in Matplotlib - how2matplotlib.com")
plt.tight_layout()
plt.show()
In this example, we use GridSpec to create a complex layout with subplots of different sizes. We then selectively turn off axes for specific subplots while keeping them for others.
Best Practices for Turning Off Axes in Matplotlib
As you become more proficient in turning off axes for subplots in Matplotlib, it’s important to keep some best practices in mind to ensure your visualizations are effective and professional.
Consistency Across Subplots
When working with multiple subplots, it’s important to maintain consistency in how you handle axes visibility. If you’re turning off axes for some subplots, consider whether it makes sense to do so
for all of them or if there’s a logical pattern to follow.
import matplotlib.pyplot as plt
import numpy as np

fig, axs = plt.subplots(2, 2, figsize=(10, 10))
fig.suptitle("How to Turn Off the Axes for Subplots in Matplotlib - how2matplotlib.com")
for ax in axs.flat:
    x = np.linspace(0, 10, 100)
    y = np.random.rand(100)
    ax.plot(x, y)
    ax.set_axis_off()
plt.tight_layout()
plt.show()
In this example, we maintain consistency by turning off the axes for all subplots in a 2×2 grid.
Providing Context Without Axes
When you turn off axes, it’s important to consider how you’ll provide context for your data. This might involve adding annotations, text labels, or other visual cues to help viewers understand the
information being presented.
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots(figsize=(8, 6))
x = np.linspace(0, 10, 100)
y = np.sin(x)
ax.plot(x, y)
ax.set_axis_off()
ax.text(5, 0, "Sine Wave", ha='center', va='center', fontsize=16)
ax.annotate("Peak", xy=(np.pi/2, 1), xytext=(np.pi/2, 1.2),
            arrowprops=dict(arrowstyle="->"))
ax.annotate("Trough", xy=(3*np.pi/2, -1), xytext=(3*np.pi/2, -1.2),
            arrowprops=dict(arrowstyle="->"))
plt.title("How to Turn Off the Axes for Subplots in Matplotlib - how2matplotlib.com")
plt.show()
In this example, we turn off the axes but add text labels and annotations to provide context for the sine wave plot.
Balancing Aesthetics and Information
While turning off axes can create cleaner, more visually appealing plots, it’s important to balance aesthetics with the need to convey information. Consider whether removing axes might make it harder
for viewers to understand your data.
import matplotlib.pyplot as plt
import numpy as np

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))
fig.suptitle("How to Turn Off the Axes for Subplots in Matplotlib - how2matplotlib.com")
x = np.linspace(0, 10, 100)
y1 = np.sin(x)
y2 = np.cos(x)
ax1.plot(x, y1, label='Sine')
ax1.plot(x, y2, label='Cosine')
ax1.legend()
ax1.set_title("With Axes")
ax2.plot(x, y1, label='Sine')
ax2.plot(x, y2, label='Cosine')
ax2.legend()
ax2.set_axis_off()
ax2.set_title("Without Axes")
plt.tight_layout()
plt.show()
In this example, we create two subplots side by side, one with axes and one without, to demonstrate how turning off axes affects the readability of the plot.
Advanced Techniques for Axis Manipulation in Matplotlib
As you become more comfortable with turning off axes for subplots in Matplotlib, you may want to explore more advanced techniques for manipulating axes. These techniques can help you create even more
customized and visually appealing plots.
Using Axis Artists
Axis artists provide a more flexible way to customize individual axis elements. While they’re more complex to use than standard axes, they offer greater control over the appearance of your plots.
import matplotlib.pyplot as plt
from mpl_toolkits.axisartist.axislines import SubplotZero
import numpy as np

fig = plt.figure(figsize=(8, 6))
ax = SubplotZero(fig, 111)
fig.add_subplot(ax)
ax.axis["xzero"].set_visible(True)
ax.axis["xzero"].label.set_text("X axis")
ax.axis["yzero"].set_visible(True)
ax.axis["yzero"].label.set_text("Y axis")
ax.axis["top"].set_visible(False)
ax.axis["right"].set_visible(False)
x = np.linspace(-5, 5, 100)
y = x**2
ax.plot(x, y)
plt.title("How to Turn Off the Axes for Subplots in Matplotlib - how2matplotlib.com")
plt.show()
In this example, we use axis artists to create a plot with only the x and y axes visible, positioned at zero. This creates a minimalist plot with axes only where they intersect.
Creating Inset Axes
Inset axes allow you to create smaller plots within your main plot. This can be useful for showing detailed views or related information without cluttering your main plot.
import matplotlib.pyplot as plt
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
import numpy as np

fig, ax = plt.subplots(figsize=(8, 6))
x = np.linspace(0, 10, 100)
y = np.sin(x)
ax.plot(x, y)
axins = inset_axes(ax, width="40%", height="30%", loc=1)
axins.plot(x, y)
axins.set_xlim(2, 4)
axins.set_ylim(0.5, 1)
axins.set_axis_off()
ax.indicate_inset_zoom(axins)
plt.title("How to Turn Off the Axes for Subplots in Matplotlib - how2matplotlib.com")
plt.show()
In this example, we create a main plot with a sine wave and an inset plot showing a zoomed-in view of a specific region. We turn off the axes for the inset plot to create a clean, focused view.
Handling Special Cases When Turning Off Axes
There are some special cases where turning off axes requires additional consideration or techniques. Let’s explore a few of these scenarios.
3D Plots
When working with 3D plots, turning off axes can be a bit more complex. Here’s an example of how to handle this:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import numpy as np

fig = plt.figure(figsize=(8, 6))
ax = fig.add_subplot(111, projection='3d')
x = np.linspace(-5, 5, 100)
y = np.linspace(-5, 5, 100)
X, Y = np.meshgrid(x, y)
Z = np.sin(np.sqrt(X**2 + Y**2))
surf = ax.plot_surface(X, Y, Z, cmap='viridis')
ax.set_axis_off()
plt.title("How to Turn Off the Axes for Subplots in Matplotlib - how2matplotlib.com")
plt.show()
In this example, we create a 3D surface plot and turn off the axes using set_axis_off(). This removes all axis lines, ticks, and labels, creating a clean 3D visualization.
Plots with Colorbars
When working with plots that include colorbars, you may want to turn off the axes for the main plot while keeping the colorbar visible. Here’s how you can achieve this:
import matplotlib.pyplot as plt
import numpy as np

fig, ax = plt.subplots(figsize=(8, 6))
data = np.random.rand(20, 20)
im = ax.imshow(data, cmap='viridis')
ax.set_axis_off()
cbar = plt.colorbar(im)
cbar.set_label('Value')
plt.title("How to Turn Off the Axes for Subplots in Matplotlib - how2matplotlib.com")
plt.show()
In this example, we create a heatmap using imshow(), turn off the axes for the main plot, and add a colorbar. The colorbar remains visible and provides context for the color scale used in the heatmap.
Troubleshooting Common Issues
When turning off axes for subplots in Matplotlib, you may encounter some common issues. Let’s address a few of these and provide solutions.
Unexpected Whitespace
Sometimes, turning off axes can lead to unexpected whitespace around your plots. This can often be resolved by adjusting the figure size or using tight_layout().
import matplotlib.pyplot as plt
import numpy as np

fig, axs = plt.subplots(2, 2, figsize=(8, 8))
fig.suptitle("How to Turn Off the Axes for Subplots in Matplotlib - how2matplotlib.com")
for ax in axs.flat:
    ax.plot(np.random.rand(10))
    ax.set_axis_off()
plt.tight_layout()
plt.show()
In this example, we use tight_layout() to automatically adjust the spacing between subplots and reduce unwanted whitespace.
Overlapping Elements
When turning off axes, you may find that plot elements overlap in unexpected ways. This can often be resolved by manually adjusting the plot limits or using plt.subplots_adjust().
import matplotlib.pyplot as plt
import numpy as np

fig, axs = plt.subplots(2, 2, figsize=(8, 8))
fig.suptitle("How to Turn Off the Axes for Subplots in Matplotlib - how2matplotlib.com")
for ax in axs.flat:
    ax.plot(np.random.rand(10))
    ax.set_axis_off()
plt.subplots_adjust(wspace=0, hspace=0)
plt.show()
In this example, we use plt.subplots_adjust() to remove the space between subplots, preventing overlap and creating a more compact layout.
Mastering how to turn off the axes for subplots in Matplotlib is a valuable skill that can significantly enhance your data visualizations. By removing unnecessary axes, you can create cleaner, more
focused plots that effectively communicate your data.
Throughout this article, we’ve explored various techniques for turning off axes, from basic methods like set_axis_off() to more advanced approaches using axis artists and inset axes. We’ve also
covered best practices, special cases, and troubleshooting tips to help you handle a wide range of visualization scenarios.
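As a quick side-by-side reference, the common ways to hide axes can be compared in one figure. This sketch is my own summary rather than an example from the article, but all three calls are standard Matplotlib API:

```python
import matplotlib.pyplot as plt

fig, (ax1, ax2, ax3) = plt.subplots(1, 3, figsize=(9, 3))

ax1.set_axis_off()            # the method used throughout this article
ax2.axis('off')               # equivalent convenience call
ax3.xaxis.set_visible(False)  # hide one axis object at a time instead,
ax3.yaxis.set_visible(False)  # which keeps the axes frame drawn

# plt.show() would display the comparison; omitted in this sketch
```

The first two calls remove everything (ticks, labels, and frame), while hiding the individual axis objects leaves the frame in place.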
A ball of mass M is thrown upward against a constant air resistance R. What will be the times of ascent and descent? Give mathematical proof.
Calculate the net acceleration of the ball in the presence of constant air resistance. Use the kinematic expressions relating displacement, initial velocity, final velocity, acceleration and time.
Formulae used:
The expression for Newton’s second law of motion is
\[\Rightarrow {F_{net}} = Ma\] …… (1)
Here, \[{F_{net}}\] is the net force on the object, \[M\] is the mass of the object and \[a\] is the acceleration of the object.
The kinematic expression relating initial velocity \[u\], final velocity \[v\], acceleration \[a\] and displacement \[s\] in a free fall is
\[\Rightarrow {v^2} = {u^2} - 2as\] …… (2)
The kinematic expression relating displacement \[s\], initial velocity \[u\], time \[t\] and acceleration \[a\] in a free fall is
\[\Rightarrow s = ut - \dfrac{1}{2}a{t^2}\] …… (3)
Complete step by step answer:
Calculate the time of ascent of the ball while going upward.
Calculate the net acceleration \[{a_1}\] on the ball when thrown upward.
When the ball of mass \[M\] is going upward, the weight of the ball and the air resistance both act in the downward direction.
Apply Newton’s second law to the ball.
\[ \Rightarrow - R - Mg = - M{a_1}\]
\[ \Rightarrow {a_1} = g + \dfrac{R}{M}\]
Hence, net acceleration on the ball going upward is \[g + \dfrac{R}{M}\].
The final velocity of the ball when it reaches its maximum height is zero.
Let \[{u_1}\] and \[{v_1}\] be the initial and final velocities of the ball while going upward and rewrite equation (2).
\[ \Rightarrow {v_1}^2 = {u_1}^2 - 2{a_1}s\]
Substitute \[g + \dfrac{R}{M}\] for \[{a_1}\] and \[0\,{\text{m/s}}\] for \[{v_1}\] in the above equation and rearrange it for the displacement \[s\] of the ball.
\[{\left( {0\,{\text{m/s}}} \right)^2} = u_1^2 - 2\left( {g + \dfrac{R}{M}} \right)s\] …… (4)
\[ \Rightarrow s = \dfrac{{u_1^2}}{{2\left( {g + \dfrac{R}{M}} \right)}}\]
Rewrite equation (3) for the displacement of the ball while going upward.
\[\Rightarrow s = {u_1}{t_a} - \dfrac{1}{2}{a_1}t_a^2\]
Here, \[{t_a}\] is the time of ascent of the ball.
Substitute \[g + \dfrac{R}{M}\] for \[{a_1}\] in the above equation and rearrange it for \[{t_a}\].
\[\Rightarrow s = {u_1}{t_a} - \dfrac{1}{2}\left( {g + \dfrac{R}{M}} \right)t_a^2\]
\[ \Rightarrow \dfrac{1}{2}\left( {g + \dfrac{R}{M}} \right)t_a^2 - {u_1}{t_a} + s = 0\]
\[ \Rightarrow {t_a} = \dfrac{{{u_1} \pm \sqrt {u_1^2 - 2\left( {g + \dfrac{R}{M}} \right)s} }}{{g + \dfrac{R}{M}}}\]
Substitute \[0\] for \[u_1^2 - 2\left( {g + \dfrac{R}{M}} \right)s\] in the above equation.
\[ \Rightarrow {t_a} = \dfrac{{{u_1} \pm \sqrt 0 }}{{g + \dfrac{R}{M}}}\] …… (from equation (4))
\[ \Rightarrow {t_a} = \dfrac{{{u_1}}}{{g + \dfrac{R}{M}}}\]
Hence, the expression for the time of ascent of the ball is \[\dfrac{{u_1}}{{g + \dfrac{R}{M}}}\].
Calculate the time of descent of the ball while going downward.
Calculate the net acceleration \[{a_2}\] on the ball when coming downward.
When the ball of mass \[M\] is coming downward, the weight of the ball acts in downward direction and the air resistance acts in the upward direction.
Apply Newton’s second law to the ball.
\[\Rightarrow R - Mg = - M{a_2}\]
\[ \Rightarrow {a_2} = g - \dfrac{R}{M}\]
Hence, net acceleration on the ball coming downward is \[g - \dfrac{R}{M}\].
The velocity of the ball at the start of its descent, i.e., at the maximum height, is zero.
Let \[{v_1}\] and \[{v_2}\] be the initial and final velocities of the ball while coming downward and rewrite equation (3).
\[\Rightarrow s = {v_1}{t_d} + \dfrac{1}{2}{a_2}t_d^2\]
Here, \[{t_d}\] is the time of descent of the ball, and displacement is measured downward during the descent.
Substitute \[g - \dfrac{R}{M}\] for \[{a_2}\] and \[0\,{\text{m/s}}\] for \[{v_1}\] in the above equation.
\[\Rightarrow s = \left( {0\,{\text{m/s}}} \right){t_d} + \dfrac{1}{2}\left( {g - \dfrac{R}{M}} \right)t_d^2\]
\[ \Rightarrow s = \dfrac{1}{2}\left( {g - \dfrac{R}{M}} \right)t_d^2\]
The displacement of the ball while moving upward and coming downward is the same.
Substitute \[\dfrac{{u_1^2}}{{2\left( {g + \dfrac{R}{M}} \right)}}\] for \[s\] in the above equation and solve for \[{t_d}\].
\[ \Rightarrow \dfrac{{u_1^2}}{{2\left( {g + \dfrac{R}{M}} \right)}} = \dfrac{1}{2}\left( {g - \dfrac{R}{M}} \right)t_d^2\]
\[ \Rightarrow u_1^2 = \left( {g - \dfrac{R}{M}} \right)\left( {g + \dfrac{R}{M}} \right)t_d^2\]
\[ \Rightarrow t_d^2 = \dfrac{{u_1^2}}{{\left( {g - \dfrac{R}{M}} \right)\left( {g + \dfrac{R}{M}} \right)}}\]
\[ \Rightarrow {t_d} = \sqrt {\dfrac{{u_1^2}}{{{g^2} - {{\left( {\dfrac{R}{M}} \right)}^2}}}} \qquad \because {a^2} - {b^2} = \left( {a + b} \right)\left( {a - b} \right)\]
\[ \Rightarrow {t_d} = \dfrac{{{u_1}}}{{\sqrt {{g^2} - {{\left( {\dfrac{R}{M}} \right)}^2}} }}\]
Hence, the expression for the time of descent of the ball is \[\dfrac{{{u_1}}}{{\sqrt {{g^2} - {{\left( {\dfrac{R}{M}} \right)}^2}} }}\].
The time of ascent for the ball moving upward and the time of descent for the ball coming downward are different when a constant air resistance is considered, and the same when air resistance is neglected.
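As a numerical sanity check on the two expressions derived above, the values below (my own choice, not part of the original problem) confirm that with nonzero air resistance the descent takes longer than the ascent:

```python
import math

# Sample values, assumed purely for illustration
M = 2.0    # mass of the ball, kg
R = 4.0    # constant air resistance, N
g = 9.8    # acceleration due to gravity, m/s^2
u1 = 20.0  # initial (launch) speed, m/s

t_a = u1 / (g + R / M)                   # time of ascent
t_d = u1 / math.sqrt(g**2 - (R / M)**2)  # time of descent

assert t_a < t_d  # ascent is quicker than descent when R > 0
```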
Course title in Estonian
Ülevaade algebrast ja arvuteooriast
Course title in English
An Overview of Algebra and Number Theory
Assessment form
lecturer of 2024/2025 Autumn semester
Alar Leibak (language of instruction: Estonian)
lecturer of 2024/2025 Spring semester
Not opened for teaching. Click the study programme link below to see the nominal division schedule.
Course aims
The idea of this course is to give students an overview of the basics of modern (discrete) mathematics and to demonstrate how to apply them in practice.
Brief description of the course
Fundamentals of propositional logic: truth table, logical operations, logical equivalence, tautology, contradiction, contingent propositions. Definitions in
mathematics. Fundamentals of set theory: axioms, operations with sets, mappings, binary relations (incl. partially ordered sets), cardinality. Basics in number
theory: divisibility, modular arithmetics, fundamental theorem of arithmetics, primes, arithmetical functions. Basics in algebra: algebraic operation and its Cayley table, algebraic systems with one
binary operation (quasigroup, semigroup, monoid, group) and two binary operations (ring and field).
Learning outcomes in the course
Upon completing the course the student:
- is able to define mathematical notions;
- is able to check the logical equivalence of given propositions;
- applies Euclidean algorithm to calculating the greatest common divisor of given two integers;
- is able to construct simple mathematical proofs.
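One of the outcomes above, the Euclidean algorithm, is short enough to sketch. The Python below is my own illustration; the course itself does not prescribe a programming language:

```python
def gcd(a, b):
    """Greatest common divisor via the Euclidean algorithm:
    repeatedly replace (a, b) with (b, a mod b) until b is 0."""
    while b:
        a, b = b, a % b
    return abs(a)

# 252 = 2^2 * 3^2 * 7 and 198 = 2 * 3^2 * 11, so gcd = 2 * 3^2 = 18
assert gcd(252, 198) == 18
```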
Study programmes containing that course
CIT 214 - MySQL Homework 2 solved
Colonial Adventure Tours
Single Table Queries
Use Workbench/Command Line to create the commands that will run the
following queries/problem scenarios.
Use MySQL and the Colonial Adventure Tours database to complete the following exercises.
1. List the last name of each guide that does not live in Massachusetts (MA).
2. List the trip name of each trip that has the type Biking.
3. List the trip name of each trip that has the season Summer.
4. List the trip name of each trip that has the type Hiking and that has a distance longer
than 10 miles.
5. List the customer number, customer last name, and customer first name of each
customer that lives in New Jersey (NJ), New York (NY) or Pennsylvania (PA). Use the
IN operator in your command.
6. Repeat Exercise 5 and sort the records by state in descending order and then by
customer last name in ascending order.
7. How many trips are in the states of Maine (ME) or Massachusetts (MA)?
8. How many trips originate in each state?
9. How many reservations include a trip price that is greater than $20 but less than $75?
10. How many trips of each type are there?
11. Colonial Adventure Tours calculates the total price of a trip by adding the trip price
plus other fees and multiplying the result by the number of persons included in the
reservation. List the reservation ID, trip ID, customer number, and total price for all
reservations where the number of persons is greater than four. Use the column name
TOTAL_PRICE for the calculated field.
12. Find the name of each trip containing the word “Pond.”
13. List the guide’s last name and guide’s first name for all guides that were hired before
June 10, 2013.
14. What is the average distance and the average maximum group size for each type of trip?
15. Display the different seasons in which trips are offered. List each season only once.
16. List the reservation IDs for reservations that are for a paddling trip. (Hint: Use a subquery.)
17. What is the longest distance for a biking trip?
18. For each trip in the RESERVATION table that has more than one reservation, group
by trip ID and sum the trip price. (Hint: Use the COUNT function and a HAVING clause.)
19. How many current reservations does Colonial Adventure Tours have and what is the
total number of persons for all reservations?
Note: If you answer any 17 queries correctly, you will get full points.
When you have all of your commands ready to run, test them in MySQL and save it as a
sql script.
What to Hand In
Create a sql script and upload it back to the Assignments link.
Fundamental problems of algorithmic algebra
No physical items for this record
Publisher's description: Computer algebra systems represent a rapidly growing application of computer science to all areas of scientific research and computation. Well-known computer algebra systems
such as Maple, Macsyma, Mathematica and Reduce are now a basic tool on most computers. Underlying these systems are efficient algorithms for various algebraic operations. The field of Computer
Algebra, or Algorithmic Algebra, constitutes the study of these algorithms and their properties, and represents a rich intersection of theoretical computer science with very classical mathematics.
This book focuses on a collection of core problems in this area; in effect, they are the computational versions of the classical Fundamental Problem of Algebra and its derivatives. It attempts to be
self-contained in its mathematical development while addressing the algorithmic aspects of problems. General prerequisites for the book, beyond some mathematical sophistication, is a course in modern
algebra. A course in the analysis of algorithms would also increase the appreciation of some of the themes on efficiency. The book is intended for a first course in algorithmic algebra (or computer
algebra) for advanced undergraduates or beginning graduate students in computer science. Additionally, it will be a useful reference for professionals in this field. Examples in pseudo-code make the
text usable with any computer mathematics system.
Radio Navigation
Here you need to know what information each navaid (navigation aid) is providing you with.
• If it is a bearing (from an NDB, non-directional beacon; a VOR, VHF omnidirectional range; or VDF, VHF direction finding) then that is an angle, expressed in mathematical terms as the angle "theta" from an axis (in this case, north).
• If the beacon is giving you a distance (like DME, distance measuring equipment) then that is expressed in mathematical terms as the distance "rho" from the origin.
To get a fix, you need two position lines, be they distances or bearings. VDF bearings can be obtained automatically by any two stations listening out on frequency, but for the remainder you need two separate beacons. Combining the terms for the two beacons (e.g. VOR = "theta", DME = "rho", so VOR/DME = rho/theta) will give the answer.
Performance Bounds on Sparse Representations Using Redundant Frames
Publication , Journal Article
Akçakaya, M; Tarokh, V
We consider approximations of signals by the elements of a frame in a complex vector space of dimension $N$ and formulate both the noiseless and the noisy sparse representation problems. The
noiseless representation problem is to find sparse representations of a signal $\mathbf{r}$ given that such representations exist. In this case, we explicitly construct a frame, referred to as the
Vandermonde frame, for which the noiseless sparse representation problem can be solved uniquely using $O(N^2)$ operations, as long as the number of non-zero coefficients in the sparse representation
of $\mathbf{r}$ is $\epsilon N$ for some $0 \le \epsilon \le 0.5$, thus improving on a result of Candes and Tao \cite{Candes-Tao}. We also show that $\epsilon \le 0.5$ cannot be relaxed without
violating uniqueness. The noisy sparse representation problem is to find sparse representations of a signal $\mathbf{r}$ satisfying a distortion criterion. In this case, we establish a lower bound on
the trade-off between the sparsity of the representation, the underlying distortion and the redundancy of any given frame.
Duke Scholars
Balancing Sets of Vectors for IEEE Trans. Inf. Theory
IEEE Trans. Inf. Theory
Balancing Sets of Vectors
View publication
For n > 0, d ≥ 0, n ≡ d (mod 2), let K(n,d) denote the minimal cardinality of a family V of ±1 vectors of dimension n, such that for any ±1 vector w of dimension n there is a v ∈ V such that v · w ≤ d, where v · w is the usual scalar product of v and w. A generalization of a simple construction due to Knuth shows that K(n,d) ≤ [n/(d + 1)]. A linear algebra proof is given here that this construction is optimal, so that K(n,d) = [n/(d + 1)] for all n ≡ d (mod 2). This construction and its extensions have applications to communication theory, especially to the construction of signal sets for optical data links. © 1988 IEEE
In geometry, a square is a regular quadrilateral, which means that it has four equal sides and four equal angles (90-degree angles, 100-gradian angles, or right angles). It can also be defined as a rectangle in which two adjacent sides have equal length. A square with vertices ABCD would be denoted ◻ABCD.
They give a bit more context in this video. (from 2017)
By the way, I got that link from an article in The Guardian, and I can't find anything in either of those two articles that really adds on top of what was known in 2017. It could just be hard for a
layperson to understand, and so was oversimplified?
TLDW is that researchers have known for decades that this tablet showed the Babylonians knew the Pythagorean Theorem for 1000 years before Pythagoras was born. So, that part isn't new.
They seem to be saying that what's new is that they understand each line of this tablet describes a different right triangle, and that due to the Babylonians counting in base 60, they can describe
many more right triangles for a unit length than we can in base 10.
They feel like this can have many uses in things like surveying, computing, and in understanding trigonometry.
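For reference, the standard modern way to generate right triangles with integer sides is Euclid's formula; this is general number theory, not something specific to the tablet, and the code is my own sketch:

```python
def euclid_triple(m, n):
    """Euclid's formula: for integers m > n > 0, the sides
    (m^2 - n^2, 2*m*n, m^2 + n^2) form a Pythagorean triple."""
    return m * m - n * n, 2 * m * n, m * m + n * n

a, b, c = euclid_triple(2, 1)  # the familiar (3, 4, 5) triangle
assert a * a + b * b == c * c
```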
My take is that this was a very interesting discovery, but that they probably felt pressure to figure out a way to describe it as useful in the modern world. But we've known about the useful parts of
this discovery for forever. Our clocks are all base 60. And our computers are binary, not base 10, just to start with.
We overvalue trying to make every advance in knowledge immediately useful. Knowledge can be good for its own sake.
"Having many more right triangles for a unit length" would have an incredible benefit in constructing enormous triangly things.
Instead of becoming more acute about triangly things... we got more obtuse and went base ten
Well yeah, who's got 60 fingers? I mean sure, there's Fingers Georg, but that guy's weird.
People used to count 12 knuckles times 5 fingers for a total base 60.
Using only 5+5 fingers is the dumbed down version.
Wasn't it the Sumerians that did use base 60 and just went to counting knuckles and joints to get to the base 60 system ... never fully understood it when I read about it either
Here is a demonstration
Sumerians and Babylonians used the same cuneiform writing system with a base of 6×10, but it seems like they also used to count to 60 as 12×5... and what we're left with is the simplified 5+5=10.
Now Iβ m wondering why the Babylonians didnβ t have giant triangle shaped orbital habitats.
Base 60 is based.
They can math.
Base 12 is a good compromise between math and meat imo
One, two, three, four, five, six, seven, eight, nine, to market, stayed home.
Some days I wonder what would be different if weβ d evolved with six fingers on each hand.
We've evolved with 14 knuckles on each hand... and a brain that struggles to keep 7 elements at once in operating memory. You can also count up to 1023 with just 10 fingers (in binary). It's not a
lack of fingers problem.
I'm not sure what problem you're referring to. I mean if we naturally leant towards base 12, I wonder what would be different, if anything?
The problem is our brains have a limited operating memory. People can (unless disabled) easily track 1 o 2 items at once, even 3, 4, 5... and start losing track somewhere around 6 or 7; 8 is
considered exceptional.
That's why kids don't generally use their fingers to count 2+2, but start using them for "harder" operations like 4+4.
Base 10 is already past our brain's limits... but we're kind of fine with it because we can use our fingers (think of it as evolving at a time before formal education, when most people were uneducated).
Base 60 is also past our brain's limits, but it's easily divisible into easy-to-track 1, 2, 3, 4, 5, or 6 pieces (aka $lcm(1..6)$), which makes it highly useful. The Babylonians still used to write it down as base 6×10, and it was common to count on knuckles and fingers as 12×5.
The uneducated populace picked up the easiest part of the two: 5+5.
if we naturally leant towards base 12
If we had 12 fingers, we could've as easily ended up using base 12, only thing different would be 1/3 would equal exactly 0.4, while 1/5 would equal 0.24972497... oh well, we'd manage.
If our brains could track 12 items at once however, then we could benefit from base $lcm(1..12)$ or 27720. That... is hard to imagine, because we can't track 11 items at once; otherwise 27720 would
jump out as "obviously" divisible by 11, 9, or 7.
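The base-12 expansions mentioned above are easy to verify mechanically. Here is a small long-division sketch (my own code) that produces the fractional digits of num/den in any base:

```python
def fraction_digits(num, den, base, ndigits=8):
    """First ndigits fractional digits of num/den (0 < num < den) in the
    given base, computed by repeated long division."""
    digits = []
    rem = num
    for _ in range(ndigits):
        rem *= base          # shift one place left in the target base
        digits.append(rem // den)
        rem %= den
    return digits

# 1/3 terminates in base 12: 0.4
assert fraction_digits(1, 3, 12) == [4, 0, 0, 0, 0, 0, 0, 0]
# 1/5 repeats with period 4 in base 12: 0.24972497...
assert fraction_digits(1, 5, 12) == [2, 4, 9, 7, 2, 4, 9, 7]
```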
That's very interesting. Thank you for giving us your insight on this.
Computational Methods in Reactor Shielding
• 1st Edition - October 22, 2013
• eBook ISBN: 978-1-4831-4813-7
Computational Methods in Reactor Shielding deals with the mathematical processes involved in how to effectively control the dangerous effect of nuclear radiation. Reactor shielding is considered an
important aspect in the operation of reactor systems to ensure the safety of personnel and others that can be directly or indirectly affected. Composed of seven chapters, the book discusses ionizing
radiation and how it aids in the control and containment of radioactive substances that are considered harmful to all living things. The text also outlines the necessary radiation quantities and
units that are needed for a systemic control of shielding and presents an examination of the main sources of nuclear radiation. A discussion of the gamma photon cross sections and an introduction to
BMIX, a computer program used in illustrating a technique in identifying the gamma ray build-up factor for a reactor shield, are added. The selection also discusses various mathematical
representations and areas of shielding theory that are being used in radiation shielding. The book is of great value to those involved in the development and implementation of systems to minimize and
control the dangerous and lethal effect of radiation.
Chapter 1 Introduction
1.1 The Shielding Problem
1.2 Scope of the Book
1.3 Background Knowledge
1.4 The Computer Programs
1.5 References
Chapter 2 Radiation Quantities and Units
2.1 Some Preliminary Considerations Relating to Radiological Protection
2.2 Recommended Radiation Levels
2.3 General Environmental Radiation Levels
2.4 Radiation Quantities and Units
2.5 Conversion of Radiation Intensity to Dose Equivalent Rate
2.6 A more Mathematical Treatment of the Basic Transport Quantities
2.7 The Albedo Concept
2.8 References
Chapter 3 Radiation Sources
3.1 Nuclear Reactors
Primary Radiation
Secondary Radiation
3.2 Radioactive Sources
3.3 Particle Accelerators
3.4 Reactor Coolant Activation
3.5 Miscellaneous Topics
3.6 References
Chapter 4 The Attenuation of Gamma Rays
4.1 Narrow Beam Attenuation
Photon Atomic Cross Sections
4.2 Broad Beam Attenuation
Buildup Factor: The Basic Idea
Empirical Formula for B(µr)
4.3 The Computer Program BMIX
4.4 Exercises for Program BMIX
4.5 References
Chapter 5 Applications of the Point Kernel Technique
5.1 The Mathematical Representation of Detector Response
5.2 Geometrical Transformations
5.3 Examples in the Use of the Point Kernel Technique
5.4 CASK: A Simple Shielding Program for Spherical Sources of Nuclear Radiation
5.5 Modification of Program CASK to Include a Line Source
5.6 Exercises for Program CASK
5.7 Exercises for Modified Form of CASK
5.8 References
Chapter 6 Neutron Attenuation
6.1 The Basic Strategy
6.2 Neutron Removal Cross Section
6.3 Theoretical Treatment of Fast Neutron Attenuation
Empirical Neutron Point Kernels
6.4 Removal Diffusion Theory
Spinney Removal Method
Multigroup Diffusion Equations
6.5 The Computer Program CADRE
6.6 Exercises for Program CADRE
6.7 Shield Optimization
6.8 References
Chapter 7 Transport Theory Methods
The Derivation of the Boltzmann Transport Equation
7.1 The Monte Carlo Method
Techniques for Random Sampling
The Estimation of Monte Carlo Error
Variance Reduction Techniques
Generating Random Numbers
The Computer Program TESR
Buffon's Needle Experiment
The Computer Program KLEIN
Monte Carlo Computer Program Monteray Mark I
Extended Version of Program Monteray
The Application of the Monte Carlo Method to Neutron Problems
The Computer Program ELSCAT
7.2 The Moments Method
The Boltzmann Equation for the Energy Flux
The P1 Equations
The Kernel for Compton Scattering
The Dimensionless Form of the Equations
The Moments of the Flux
Construction of Flux Distributions from the Moments
The Computer Program DBUF
7.3 References
Appendix A The Dirac Delta-Function
Appendix B Coordinate Systems, the Gradient Operator V, and the Laplacian Operator V2
Appendix C Selected Nuclear Data
Appendix D SI Units in Radiation and Radioactivity
• Published: October 22, 2013
• eBook ISBN: 9781483148137
Approximation Methods Coursera Week 3 Answers
Approximation Methods | Week 3
Course Name: Approximation Methods
Course Link: Approximation Methods
These are answers of Approximation Methods Coursera Week 3 Quiz
Question 1
Variational solution of 1D harmonic oscillator
Use the following information for Questions 1-2:
Pretend that we don’t know the solution of 1D harmonic oscillator. From the profile of the potential, we expect the ground state wavefunction is likely to have a maximum at x=0 and approaches zero at
x=±∞. Based on this observation, we choose (quite luckily!) a gaussian function as the trial wavefunction,
where w is a real, positive number that specifies the width of our gaussian function and A is a complex normalization constant.
Normalize the trial wavefunction and find the constant A.
Enter your answer in terms of mass m, oscillator resonance frequency omega, width parameter w, constant pi and reduced Planck’s constant hbar.
Answer: (1/w)^(1/2)*(2/pi)^(1/4)
Question 2
Variational solution of 1D harmonic oscillator
Use the following information for Questions 1-2:
Using the normalized trial wavefunction obtained in Question 1, calculate the energy expectation value,
⟨ϕ∣H∣ϕ⟩ and find the expression for w that minimizes the energy expectation value. Here, H is the Hamiltonian of the 1D harmonic oscillator.
Enter your answer in terms of mass m, oscillator resonance frequency omega, constant pi and reduced Planck’s constant hbar.
Answer: sqrt(2)*sqrt((hbar)/(m*omega))
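These two answers can be checked symbolically. The sketch below assumes the trial wavefunction is ψ(x) = A·exp(−x²/w²), which is consistent with the normalization constant given in Question 1, and uses sympy:

```python
import sympy as sp

x, w, m, omega, hbar = sp.symbols('x w m omega hbar', positive=True)

# Trial wavefunction with the normalization constant from Question 1
A = (2 / sp.pi) ** sp.Rational(1, 4) / sp.sqrt(w)
psi = A * sp.exp(-x**2 / w**2)

# Check normalization: the integral of |psi|^2 over all x must equal 1
norm = sp.integrate(psi**2, (x, -sp.oo, sp.oo))
print(sp.simplify(norm))  # 1

# Energy expectation value <H> = <T> + <V> for the 1D harmonic oscillator
T = sp.integrate(-hbar**2 / (2 * m) * psi * sp.diff(psi, x, 2), (x, -sp.oo, sp.oo))
V = sp.integrate(sp.Rational(1, 2) * m * omega**2 * x**2 * psi**2, (x, -sp.oo, sp.oo))
E = sp.simplify(T + V)

# Minimize <H> over the width parameter w
w_opt = sp.solve(sp.diff(E, w), w)
print(w_opt)  # the positive root satisfies w^2 = 2*hbar/(m*omega)
```

Substituting the optimal w back into E gives ℏω/2, the exact ground-state energy, which is expected here since the Gaussian trial function happens to have the exact functional form of the true ground state.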
Question 3
Coupled delta-function potential well
Use the following information for Questions 3-x:
Consider an infinitely deep and vanishingly narrow potential well, whose potential profile is described by a delta function. Explicitly, the Hamiltonian can be written as
H^ = -ℏ^2/(2m) d^2/dx^2 - V[0]δ(x)
where V[0] is a parameter that specifies the depth of the potential.
This delta-function potential well supports one bound state. Find the energy of the bound state.
Hint: It is convenient to define a parameter κ = √(2m|E|/ℏ^2). Also, note that the derivative of the wavefunction dψ/dx is not continuous at x=0 because the potential is infinite at that point. The
discontinuity of dψ/dx can be obtained by integrating the Schrödinger equation over an infinitesimal interval across x=0.
Enter your answer in terms of mass m, depth parameter V[0], constant pi and reduced Planck’s constant hbar.
Answer: -m*V0^2/(2*hbar^2)
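A sketch of the derivation behind this answer (the standard treatment of the single delta-function well, following the hint above):

```latex
% Integrate the Schrodinger equation over (-\epsilon, \epsilon) and let \epsilon \to 0:
\psi'(0^{+}) - \psi'(0^{-}) = -\frac{2mV_0}{\hbar^{2}}\,\psi(0)
% The bound-state ansatz \psi(x) = \sqrt{\kappa}\, e^{-\kappa |x|} gives -2\kappa\,\psi(0) on the left, so
-2\kappa = -\frac{2mV_0}{\hbar^{2}}
\;\Rightarrow\;
\kappa = \frac{mV_0}{\hbar^{2}},
\qquad
E = -\frac{\hbar^{2}\kappa^{2}}{2m} = -\frac{mV_0^{2}}{2\hbar^{2}}.
```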
Question 4
Coupled delta-function potential well
Find the normalized wavefunction in the region x<0.
Enter your answer in terms of mass m, depth parameter V[0], parameter kappa, constant pi and reduced Planck’s constant hbar.
Answer: sqrt(V0*m/(hbar^2))*exp(kappa*x)
Question 5
Coupled delta-function potential well
Find the normalized wavefunction in the region x>0.
Enter your answer in terms of mass m, depth parameter V[0], parameter kappa, constant pi and reduced Planck’s constant hbar.
Answer: sqrt(V0*m/(hbar^2))*exp(-kappa*x)
Question 6
Coupled delta-function potential well
Let us now consider two delta-function potential wells separated by a distance d. The Hamiltonian is given by
H^ = -ℏ^2/(2m) d^2/dx^2 - V[0][δ(x+d/2)+δ(x-d/2)]
Use the tight binding method and obtain the two energy eigenvalues and their corresponding eigenfunctions.
What is the larger of the two energy eigenvalues?
Enter your answer in terms of mass m, depth parameter V[0], parameter kappa, separation distance d, constant pi and reduced Planck’s constant hbar.
Question 7
Coupled delta-function potential well
Express the wavefunction ϕ[l] corresponding to the larger of the two energy eigenvalues in terms of the unperturbed wavefunctions, ψ[L] and ψ[R], which are the energy eigenfunctions of the left and
right well, respectively, i.e.,
ϕ[l] = aψ[L] + bψ[R]
What is the constant a?
Enter your answer in terms of mass m, depth parameter V[0], parameter kappa, separation distance d, constant pi and reduced Planck’s constant hbar.
Answer: 1/sqrt(2)
Question 8
Coupled delta-function potential well
Continuing Question 7, what is the constant b?
Enter your answer in terms of mass m, depth parameter V[0], parameter kappa, separation distance d, constant pi and reduced Planck’s constant hbar.
Answer: -1/sqrt(2)
Question 9
Coupled delta-function potential well
Continuing from Question 6, what is the smaller of the two energy eigenvalues?
Enter your answer in terms of mass m, depth parameter V[0], parameter kappa, separation distance d, constant pi and reduced Planck’s constant hbar.
Question 10
Coupled delta-function potential well
Express the wavefunction ϕ[s] corresponding to the smaller of the two energy eigenvalues in terms of the unperturbed wavefunctions, ψ[L] and ψ[R], which are the energy eigenfunctions of the left and
right well, respectively, i.e.,
ϕ[s] = cψ[L] + dψ[R]
What is the constant c?
Enter your answer in terms of mass m, depth parameter V[0], parameter kappa, separation distance d, constant pi and reduced Planck’s constant hbar.
Answer: 1/sqrt(2)
Question 11
Coupled delta-function potential well
Continuing Question 10, what is the constant d?
Enter your answer in terms of mass m, depth parameter V[0], parameter kappa, separation distance d, constant pi and reduced Planck’s constant hbar.
Answer: 1/sqrt(2)
More Weeks of this course: Click Here
More Coursera Courses: http://progiez.com/coursera
Using pre-trained word2vec with LSTM for word generation
LSTM/RNN can be used for text generation. This post shows the way to use pre-trained GloVe word embeddings with a Keras model.
How to use pre-trained Word2Vec word embeddings with a Keras LSTM model? This post did help.
How to predict / generate the next word when the model is provided with a sequence of words as its input?
Sample approach tried:
# Sample code to prepare word2vec word embeddings
import gensim
documents = ["Human machine interface for lab abc computer applications",
"A survey of user opinion of computer system response time",
"The EPS user interface management system",
"System and human system engineering testing of EPS",
"Relation of user perceived response time to error measurement",
"The generation of random binary unordered trees",
"The intersection graph of paths in trees",
"Graph minors IV Widths of trees and well quasi ordering",
"Graph minors A survey"]
sentences = [[word for word in document.lower().split()] for document in documents]
word_model = gensim.models.Word2Vec(sentences, size=200, min_count = 1, window = 5)
# Code tried to prepare LSTM model for word generation
from keras.layers.recurrent import LSTM
from keras.layers.embeddings import Embedding
from keras.models import Model, Sequential
from keras.layers import Dense, Activation
embedding_layer = Embedding(input_dim=word_model.syn0.shape[0], output_dim=word_model.syn0.shape[1], weights=[word_model.syn0])
model = Sequential()
model.compile(optimizer='sgd', loss='mse')
Sample code / pseudocode to train the LSTM and predict the next word would be appreciated.
You can use a simple generator implemented on top of your initial idea: an LSTM network wired to the pre-trained word2vec embeddings, trained to predict the next word in a sentence.
Gensim Word2Vec
Your code syntax is fine, but you should increase the number of iterations to train the model well.
The default iter = 5 is really low for training a machine learning model; even 100 iterations give far better results than 5.
For example:
word_model = gensim.models.Word2Vec(sentences, size=100, min_count=1,
window=5, iter=100)
pretrained_weights = word_model.wv.syn0
vocab_size, emdedding_size = pretrained_weights.shape
print('Result embedding shape:', pretrained_weights.shape)
print('Checking similar words:')
for word in ['model', 'network', 'train', 'learn']:
    most_similar = ', '.join('%s (%.2f)' % (similar, dist)
                             for similar, dist in word_model.most_similar(word)[:8])
    print('  %s -> %s' % (word, most_similar))
def word2idx(word):
    return word_model.wv.vocab[word].index

def idx2word(idx):
    return word_model.wv.index2word[idx]
The resultant embedding matrix is saved into a pretrained_weights array which has a shape (vocab_size, emdedding_size).
Keras model
The loss function in your code seems invalid. When the model predicts the next word, then its a classification task.
However, the loss should be categorical_crossentropy or sparse_categorical_crossentropy. This method avoids one-hot encoding, which is pretty expensive for a big vocabulary.
For example:
model = Sequential()
model.add(Embedding(input_dim=vocab_size, output_dim=emdedding_size,
                    weights=[pretrained_weights]))
model.add(LSTM(units=emdedding_size))
model.add(Dense(units=vocab_size))
model.add(Activation('softmax'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
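To illustrate why the sparse variant is equivalent but avoids materializing one-hot vectors, here is a small numpy sketch (not part of the original answer; the toy numbers are arbitrary):

```python
import numpy as np

# Toy predicted distribution over a 5-word vocabulary for one sample
probs = np.array([0.1, 0.2, 0.05, 0.6, 0.05])
label = 3  # sparse integer label: index of the true next word

# sparse_categorical_crossentropy: index directly into the probabilities
sparse_loss = -np.log(probs[label])

# categorical_crossentropy: same value, but requires a one-hot target vector
one_hot = np.eye(len(probs))[label]
dense_loss = -np.sum(one_hot * np.log(probs))

assert np.isclose(sparse_loss, dense_loss)
```

For a vocabulary of tens of thousands of words, skipping the one-hot encoding saves a large amount of memory and compute per training example.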
Data preparation
If you use sparse_categorical_crossentropy loss, then both the sentences and labels must be word indices. Short sentences must be padded with zeros to the common length.
import numpy as np

# max_sentence_len is the length of the longest sentence; shorter ones are zero-padded
train_x = np.zeros([len(sentences), max_sentence_len], dtype=np.int32)
train_y = np.zeros([len(sentences)], dtype=np.int32)
for i, sentence in enumerate(sentences):
    for t, word in enumerate(sentence[:-1]):
        train_x[i, t] = word2idx(word)
    train_y[i] = word2idx(sentence[-1])
Sample generation
The trained model outputs a vector of probabilities, from which the next word is sampled and appended to the input. The generated text is better and more diverse if the next word is sampled from this distribution rather than picked as the argmax.
Here is an example of temperature based random sampling:
def sample(preds, temperature=1.0):
    if temperature <= 0:
        return np.argmax(preds)
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds) / temperature
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)
def generate_next(text, num_generated=10):
    word_idxs = [word2idx(word) for word in text.lower().split()]
    for i in range(num_generated):
        prediction = model.predict(x=np.array(word_idxs))
        idx = sample(prediction[-1], temperature=0.7)
        word_idxs.append(idx)
    return ' '.join(idx2word(idx) for idx in word_idxs)
deep convolutional... -> deep convolutional arithmetic initialization step unbiased effectiveness
simple and effective... -> simple and effective family of variables preventing compute automatically
a nonconvex... -> a nonconvex technique compared layer converges so independent onehidden markov
a... -> a function parameterization necessary both both intuitions with technique valpola utilizes
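The effect of the temperature parameter can also be checked deterministically, without the Keras model (a standalone numpy sketch, not from the original answer):

```python
import numpy as np

def temperature_scale(preds, temperature):
    # Re-weight a probability vector: log, divide by T, exponentiate, renormalize
    logp = np.log(np.asarray(preds, dtype='float64')) / temperature
    expp = np.exp(logp)
    return expp / expp.sum()

preds = np.array([0.1, 0.2, 0.7])
sharp = temperature_scale(preds, 0.5)  # T < 1 concentrates mass on likely words
flat = temperature_scale(preds, 2.0)   # T > 1 flattens the distribution
assert sharp[2] > preds[2] > flat[2]
```

At temperature 1.0 the distribution is unchanged; values around 0.7, as used in generate_next above, strike a balance between coherence and diversity.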
Hope this answer helps.
s - Lubna Burki
« on: October 01, 2020, 09:37:19 PM »
You can also use the different properties to argue that both the empty set and the complex plane are open, and by complementarity both are closed, referring specifically to the properties listed in JB's lecture:
(1) says that a set S is open iff the intersection between S and its boundary is the empty set. For the empty set itself, the intersection between the empty set and its boundary (also the empty set) is the empty set. So the empty set is open.
By (2), the complex plane is a neighborhood of every element of the complex plane by definition, so (2) is satisfied and the complex plane is open.
By (4), since the complement of the empty set is the entire plane, the entire complex plane must also be closed; and vice versa. We conclude that both the empty set and the complex plane are open and closed at the same time.
Re: Unexpected calculation result with Lua53
[Date Prev][Date Next][Thread Prev][Thread Next] [Date Index] [Thread Index]
• Subject: Re: Unexpected calculation result with Lua53
• From: Dirk Laurie <dirk.laurie@...>
• Date: Sun, 3 May 2015 06:39:52 +0200
2015-05-02 22:13 GMT+02:00 <tonyp@acm.org>:
> I noticed a problem (which I suspect is related to the introduction of
> integers in Lua53), and I’d like to know if this is ‘official’ behavior, or
> some sort of bug. The behavior can be seen in this small example:
> function fact(n)
> if n == 0 then return 1 end
> return fact(n-1) * n
> end
> print(fact(66),fact(66.))
> Using fp parameter, return correct (?) result
> Using integer parameter, return 0 (zero) result!!!!
> I don’t know what the root of this problem but if it happens to be related
> to integer overflow, should it be converted to floating point, and continue
> ‘crunching’ rather than give a completely wrong result?
The mathematical number 66! would need 309 bits to represent
without loss, which is equally impossible in integer or floating-point
in Lua.
Since 66! is divisible by 2^n, where n=33+16+8+4+2+1 = 64, its last
64 bits are all zeros. So the result using integers is not "completely
wrong", it has 64 correct bits.
If you did the work in double precision, you would have got the exponent
right, and the mantissa nearly right (the last bit comes out wrong in IEEE
arithmetic), so it has 52 correct bits, but those are at the other end. Just
saying where the other end is uses up 10 bits.
Moral of the story: if bits at the high-order end are important to you, use
floating-point. If bits at the low-order end are important, use integers.
Lua 5.3 gives you that choice (just change 1 to 1. on the second line),
Lua 5.2 did not.
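The bit counts quoted above can be verified directly; Python is used here since its integers are arbitrary-precision, so the exact value of 66! is available (this check is not from the original thread):

```python
import math

exact = math.factorial(66)

# 66! needs 309 bits to represent without loss
assert exact.bit_length() == 309

# Number of factors of 2 in 66!: floor(66/2) + floor(66/4) + ... = 64
twos = sum(66 // 2**k for k in range(1, 7))
assert twos == 33 + 16 + 8 + 4 + 2 + 1 == 64

# So the low 64 bits are all zero (exactly 64: 2**65 does not divide 66!)...
assert exact % 2**64 == 0 and exact % 2**65 != 0

# ...and a wrapping 64-bit integer computation, as in Lua 5.3, keeps exactly those bits
print(exact % 2**64)  # 0
```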
What is Inductor in Electronics?, Types of Inductors and Applications
What is Inductor in Electronics?
An inductor has been defined as a physical device which is capable of storing energy by virtue of a current flowing through it. An Inductor in Electronics is a circuit component which opposes the
change of current flowing through it and induces a voltage when the current flowing through it varies in magnitude and/or direction.
The circuit element used to represent the energy stored in a magnetic field is defined by the relation

v = L (di/dt)    …(1.3)
The above expression describes a situation in which the voltage across the element is proportional to the time rate of change of current through it. The constant of proportionality L is the
self-inductance or simply the inductance of the element, and is measured in henrys (abbreviated H).
The voltage v in Eq. (1.3) is a voltage drop in the direction of current and can be considered to oppose an increase in current. Fig. 1.15 depicts the schematic representation of an inductance and
its associated reference direction for current and voltÂage polarity.
Integrating Eq. (1.3) we have

i(t) = (1/L) ∫₀ᵗ v dt + i(0)    …(1.4)

where i(0) = inductance current at t = 0.
According to Eq. (1.4) current through an inductance cannot change instantly (compared with capacitance voltage) as it would require infinite voltage.
Because the effect of inductance is to oppose the change in the magnitude of current, inductance is analogous to mass or inertia in a mechanical system and to the mass of liquid in hydraulics.
Inductance prevents the current from changing instantly as it requires infinite voltage to cause an instantaneous change in current, just as the mass of an automobile prevents it from stopping or
starting instantaneously.
The power associated with the inductive effect in a circuit is

p = vi = Li (di/dt)    …(1.5)

and the energy stored is

W = (1/2) L i²    …(1.6)
Unlike the resistive energy, which is transformed into heat, the inductive energy is stored in the same sense that kinetic energy is stored in a moving mass. Eq. (1.6) reveals that the magnitude of
stored energy depends on the magnitude of current and not in the manner of attaining that magnitude. The stored inductive energy reappears in the circuit as the current is reduced to zero. For
example, if a switch is opened in a current carrying inductive circuit, the current decays rapidly, but not instantaneously. In accordance with Eq. (1.3), a relatively high voltage appears across the
separating contacts of the switch, and an arc may form. The arc makes it possible for the stored energy to be dissipated as heat in the arc and the circuit resistances.
In case of an inductor, current does not change instantaneously. It offers high impedance to ac but very low impedance to dc i.e. it blocks ac signal but passes dc signal.
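This blocking behaviour can be made concrete with the inductive reactance formula X_L = 2πfL, which is not given explicitly in the text above but follows from Eq. (1.3) for sinusoidal currents:

```python
import math

def inductive_reactance(freq_hz, inductance_h):
    # X_L = 2*pi*f*L: opposition to sinusoidal current, in ohms
    return 2 * math.pi * freq_hz * inductance_h

choke = 0.05  # a 50 mH choke
print(inductive_reactance(0, choke))    # 0 ohms at dc: the inductor passes dc freely
print(inductive_reactance(1e4, choke))  # ~3141.6 ohms at 10 kHz: ac is impeded
```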
A piece of wire, or a conductor of any type, has inductance i.e. a property of opposing the change of current through it. By coiling the wire the inductance is increased as the square of the number
of turns. The inductance is represented by English capital letter L and measured in henrys.
Specially made components consisting of coiled copper wire are called the inductors. Inductors are of two types viz., air-core (wound on non-ferrous materials) and iron-core (wound on ferrite cores).
Inductors range in value from the tiny (few turn air-core coils of 0.1 μH used in high-frequency systems) to iron-core choke coils of 50 H or more for low-frequency applications. The symbols for
air-core and iron-core inductors are given in Figs. 1.16 (a) and 1.16 (b) respectively.
The Inductor in Electronics can be classified into filter chokes, audio-frequency chokes and radio-frequency chokes.
Filter choke has many turns of fine wire wound on an iron core made of laminated sheets of E- and I-shapes and is used in smoothing the pulsating current produced by rectifying ac into dc. Generally
power supplies use filter chokes having inductance ranging from about 1 H to 50 H, capable of carrying current up to 500 mA.
Audio frequency chokes (AFCs) are used to provide high impedance to audio frequencies (say 50 Hz to 5 kHz). These are smaller in size and have lower inductance in comparison to filter chokes.
Radio frequency chokes (RFCs) are employed to tune the radio frequencies (say, above 10 kHz). They are smaller than AFCs even. They have many turns of wire wound on an air core and very small value
of inductance (about 2 mH).
For inductors, an approximate value of an inductance can be calculated from the following equation (as long as current is not so large that the linear region of the B-H curve is exceeded)

L = μ[0] μ[r] N² a / l
where μ[0] is the permeability of free space. μ[r] is the relative permeability of the core material, N is the number of turns, a is the area of the core, and l is the length of the core.
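As a quick numerical illustration of this formula (the coil dimensions below are invented for the example):

```python
import math

MU_0 = 4 * math.pi * 1e-7  # permeability of free space, in H/m

def inductance(mu_r, turns, core_area_m2, core_length_m):
    # L = mu_0 * mu_r * N^2 * a / l, valid in the linear region of the B-H curve
    return MU_0 * mu_r * turns**2 * core_area_m2 / core_length_m

# A 100-turn air-core coil (mu_r = 1) with 1 cm^2 core area and 5 cm length
L = inductance(1.0, 100, 1e-4, 0.05)
print(L)  # about 2.51e-05 H, i.e. roughly 25 microhenrys
```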
A typical audio-frequency inductor is shown in Fig. 1.17.
Variable Inductors: Some applications call for variable rather than fixed inductors. Tuning circuits, phase shifting, and switching of bands in amplifiers sometimes require a variable inductance.
Such Inductor in Electronics can be made in different ways. Figure 1.18 illustrates how inductance is varied in several commercial elements. The inductor shown in Fig. 1.18 (a) can be varied by
switching from one tap on the coil to another. In Fig. 1.18 (b) a movable core is used. As more of the core is inserted into the coil, the inductance increases. By appropriately varying the spacing
of the coil windings a relatively linear variation of inductance with core insertion can be obtained.
The most common trouble in inductors is an open circuit, which can be checked with the help of ohmmeter. When the inductor is open, it cannot conduct current and, therefore, has no inductance. The
ohmmeter reads infinite resistance for an open circuit. The inductor should be disconnected from the circuit while checking otherwise any parallel path may affect the ohmmeter resistance reading.
Colour Coding of Inductors: Colour codes are also used for small capacity moulded inductors for easy identification when they are mounted on printed circuit boards, known as PCBs. Colour coding
scheme used is the same as for resistors.
University Physics Volume 1
Learning Objectives
By the end of this section, you will be able to:
• Explain the meaning and usefulness of the concept of center of mass
• Calculate the center of mass of a given system
• Apply the center of mass concept in two and three dimensions
• Calculate the velocity and acceleration of the center of mass
We have been avoiding an important issue up to now: When we say that an object moves (more correctly, accelerates) in a way that obeys Newton’s second law, we have been ignoring the fact that all
objects are actually made of many constituent particles. A car has an engine, steering wheel, seats, passengers; a football is leather and rubber surrounding air; a brick is made of atoms. There are
many different types of particles, and they are generally not distributed uniformly in the object. How do we include these facts into our calculations?
Then too, an extended object might change shape as it moves, such as a water balloon or a cat falling ((Figure)). This implies that the constituent particles are applying internal forces on each
other, in addition to the external force that is acting on the object as a whole. We want to be able to handle this, as well.
The problem before us, then, is to determine what part of an extended object is obeying Newton’s second law when an external force is applied and to determine how the motion of the object as a whole
is affected by both the internal and external forces.
Be warned: To treat this new situation correctly, we must be rigorous and completely general. We won’t make any assumptions about the nature of the object, or of its constituent particles, or either
the internal or external forces. Thus, the arguments will be complex.
Internal and External Forces
Suppose we have an extended object of mass M, made of N interacting particles. Let’s label their masses as [latex] {m}_{j} [/latex], where [latex] j=1,2,3,\text{…},N [/latex]. Note that
[latex] M=\sum _{j=1}^{N}{m}_{j}. [/latex]
If we apply some net external force [latex] {\overset{\to }{F}}_{\text{ext}} [/latex] on the object, every particle experiences some “share” or some fraction of that external force. Let:
[latex] {\overset{\to }{f}}_{j}^{\text{ext}}=\,\text{the fraction of the external force that the}\,j\text{th particle experiences.} [/latex]
Notice that these fractions of the total force are not necessarily equal; indeed, they virtually never are. (They can be, but they usually aren’t.) In general, therefore,
[latex] {\overset{\to }{f}}_{1}^{\text{ext}}\ne {\overset{\to }{f}}_{2}^{\text{ext}}\ne \cdots \ne {\overset{\to }{f}}_{N}^{\text{ext}}. [/latex]
Next, we assume that each of the particles making up our object can interact (apply forces on) every other particle of the object. We won’t try to guess what kind of forces they are; but since these
forces are the result of particles of the object acting on other particles of the same object, we refer to them as internal forces [latex] {\overset{\to }{f}}_{j}^{\text{int}} [/latex]; thus:
[latex] {\overset{\to }{f}}_{j}^{\text{int}}= [/latex] the net internal force that the jth particle experiences from all the other particles that make up the object.
Now, the net force, internal plus external, on the jth particle is the vector sum of these:
[latex] {\overset{\to }{f}}_{j}={\overset{\to }{f}}_{j}^{\text{int}}+{\overset{\to }{f}}_{j}^{\text{ext}}. [/latex]
where again, this is for all N particles; [latex] j=1,2,3,\dots ,N [/latex].
As a result of this fractional force, the momentum of each particle gets changed:
[latex] \begin{array}{ccc}\hfill {\overset{\to }{f}}_{j}& =\hfill & \frac{d{\overset{\to }{p}}_{j}}{dt}\hfill \\ \hfill {\overset{\to }{f}}_{j}^{\text{int}}+{\overset{\to }{f}}_{j}^{\text{ext}}& =\hfill & \frac{d{\overset{\to }{p}}_{j}}{dt}.\hfill \end{array} [/latex]
The net force [latex] \overset{\to }{F} [/latex] on the object is the vector sum of these forces:
[latex] \begin{array}{cc}\hfill {\overset{\to }{F}}_{\text{net}}& =\sum _{j=1}^{N}({\overset{\to }{f}}_{j}^{int}+{\overset{\to }{f}}_{j}^{ext})\hfill \\ & =\sum _{j=1}^{N}{\overset{\to }{f}}_{j}^{int}+\sum _{j=1}^{N}{\overset{\to }{f}}_{j}^{ext}.\hfill \end{array} [/latex]
This net force changes the momentum of the object as a whole, and the net change of momentum of the object must be the vector sum of all the individual changes of momentum of all of the particles:
[latex] {\overset{\to }{F}}_{\text{net}}=\sum _{j=1}^{N}\frac{d{\overset{\to }{p}}_{j}}{dt}. [/latex]
Combining (Figure) and (Figure) gives
[latex] \sum _{j=1}^{N}{\overset{\to }{f}}_{j}^{\text{int}}+\sum _{j=1}^{N}{\overset{\to }{f}}_{j}^{\text{ext}}=\sum _{j=1}^{N}\frac{d{\overset{\to }{p}}_{j}}{dt}. [/latex]
Let’s now think about these summations. First consider the internal forces term; remember that each [latex] {\overset{\to }{f}}_{j}^{\text{int}} [/latex] is the force on the jth particle from the
other particles in the object. But by Newton’s third law, for every one of these forces, there must be another force that has the same magnitude, but the opposite sign (points in the opposite
direction). These forces do not cancel; however, that’s not what we’re doing in the summation. Rather, we’re simply mathematically adding up all the internal force vectors. That is, in general, the
internal forces for any individual part of the object won’t cancel, but when all the internal forces are added up, the internal forces must cancel in pairs. It follows, therefore, that the sum of all
the internal forces must be zero:
[latex] \sum _{j=1}^{N}{\overset{\to }{f}}_{j}^{\text{int}}=0. [/latex]
(This argument is subtle, but crucial; take plenty of time to completely understand it.)
For the external forces, this summation is simply the total external force that was applied to the whole object:
[latex] \sum _{j=1}^{N}{\overset{\to }{f}}_{j}^{\text{ext}}={\overset{\to }{F}}_{\text{ext}}. [/latex]
As a result,
[latex] {\overset{\to }{F}}_{\text{ext}}=\sum _{j=1}^{N}\frac{d{\overset{\to }{p}}_{j}}{dt}. [/latex]
This is an important result. (Figure) tells us that the total change of momentum of the entire object (all N particles) is due only to the external forces; the internal forces do not change the
momentum of the object as a whole. This is why you can’t lift yourself in the air by standing in a basket and pulling up on the handles: For the system of you + basket, your upward pulling force is
an internal force.
Force and Momentum
Remember that our actual goal is to determine the equation of motion for the entire object (the entire system of particles). To that end, let’s define:
[latex] {\overset{\to }{p}}_{\text{CM}}= [/latex] the total momentum of the system of N particles (the reason for the subscript will become clear shortly)
Then we have
[latex] {\overset{\to }{p}}_{\text{CM}}\equiv \sum _{j=1}^{N}{\overset{\to }{p}}_{j}, [/latex]
and therefore (Figure) can be written simply as
[latex] \overset{\to }{F}=\frac{d{\overset{\to }{p}}_{\text{CM}}}{dt}. [/latex]
Since this change of momentum is caused by only the net external force, we have dropped the “ext” subscript.
This is Newton’s second law, but now for the entire extended object. If this feels a bit anticlimactic, remember what is hiding inside it: [latex] {\overset{\to }{p}}_{\text{CM}} [/latex] is the
vector sum of the momentum of (in principle) hundreds of thousands of billions of billions of particles [latex] (6.02\,×\,{10}^{23}) [/latex], all caused by one simple net external force—a force that
you can calculate.
Center of Mass
Our next task is to determine what part of the extended object, if any, is obeying (Figure).
It’s tempting to take the next step; does the following equation mean anything?
[latex] \overset{\to }{F}=M\overset{\to }{a} [/latex]
If it does mean something (acceleration of what, exactly?), then we could write
[latex] M\overset{\to }{a}=\frac{d{\overset{\to }{p}}_{\text{CM}}}{dt} [/latex]
and thus
[latex] M\overset{\to }{a}=\sum _{j=1}^{N}\frac{d{\overset{\to }{p}}_{j}}{dt}=\frac{d}{dt}\sum _{j=1}^{N}{\overset{\to }{p}}_{j}. [/latex]
which follows because the derivative of a sum is equal to the sum of the derivatives.
Now, [latex] {\overset{\to }{p}}_{j} [/latex] is the momentum of the jth particle. Defining the positions of the constituent particles (relative to some coordinate system) as [latex] {\overset{\to }{r}}_{j}=({x}_{j},{y}_{j},{z}_{j}) [/latex], we thus have
[latex] {\overset{\to }{p}}_{j}={m}_{j}{\overset{\to }{v}}_{j}={m}_{j}\frac{d{\overset{\to }{r}}_{j}}{dt}. [/latex]
Substituting back, we obtain
[latex] \begin{array}{cc}\hfill M\overset{\to }{a}& =\frac{d}{dt}\sum _{j=1}^{N}{m}_{j}\frac{d{\overset{\to }{r}}_{j}}{dt}\hfill \\ & =\frac{{d}^{2}}{d{t}^{2}}\sum _{j=1}^{N}{m}_{j}{\overset{\to }
{r}}_{j}.\hfill \end{array} [/latex]
Dividing both sides by M (the total mass of the extended object) gives us
[latex] \overset{\to }{a}=\frac{{d}^{2}}{d{t}^{2}}(\frac{1}{M}\sum _{j=1}^{N}{m}_{j}{\overset{\to }{r}}_{j}). [/latex]
Thus, the point in the object that traces out the trajectory dictated by the applied force in (Figure) is inside the parentheses in (Figure).
Looking at this calculation, notice that (inside the parentheses) we are calculating the product of each particle’s mass with its position, adding all N of these up, and dividing this sum by the
total mass of particles we summed. This is reminiscent of an average; inspired by this, we’ll (loosely) interpret it to be the weighted average position of the mass of the extended object. It’s
actually called the center of mass of the object. Notice that the position of the center of mass has units of meters; that suggests a definition:
[latex] {\overset{\to }{r}}_{\text{CM}}\equiv \frac{1}{M}\sum _{j=1}^{N}{m}_{j}{\overset{\to }{r}}_{j}. [/latex]
So, the point that obeys (Figure) (and therefore (Figure) as well) is the center of mass of the object, which is located at the position vector [latex] {\overset{\to }{r}}_{\text{CM}} [/latex].
It may surprise you to learn that there does not have to be any actual mass at the center of mass of an object. For example, a hollow steel sphere with a vacuum inside it is spherically symmetrical
(meaning its mass is uniformly distributed about the center of the sphere); all of the sphere’s mass is out on its surface, with no mass inside. But it can be shown that the center of mass of the
sphere is at its geometric center, which seems reasonable. Thus, there is no mass at the position of the center of mass of the sphere. (Another example is a doughnut.) The procedure to find the
center of mass is illustrated in (Figure).
Since [latex] {\overset{\to }{r}}_{j}={x}_{j}\hat{i}+{y}_{j}\hat{j}+{z}_{j}\hat{k} [/latex], it follows that:
[latex] {r}_{\text{CM,}x}=\frac{1}{M}\sum _{j=1}^{N}{m}_{j}{x}_{j} [/latex]
[latex] {r}_{\text{CM},y}=\frac{1}{M}\sum _{j=1}^{N}{m}_{j}{y}_{j} [/latex]
[latex] {r}_{\text{CM},z}=\frac{1}{M}\sum _{j=1}^{N}{m}_{j}{z}_{j} [/latex]
and thus
[latex] \begin{array}{cc} {\overset{\to }{r}}_{\text{CM}}={r}_{\text{CM,}x}\hat{i}+{r}_{\text{CM,}y}\hat{j}+{r}_{\text{CM,}z}\hat{k}\hfill \\ {r}_{\text{CM}}=|{\overset{\to }{r}}_{\text{CM}}|={({r}_
{\text{CM,}x}^{2}+{r}_{\text{CM,}y}^{2}+{r}_{\text{CM,}z}^{2})}^{1\text{/}2}.\hfill \end{array} [/latex]
Therefore, you can calculate the components of the center of mass vector individually.
Finally, to complete the kinematics, the instantaneous velocity of the center of mass is calculated exactly as you might suspect:
[latex] {\overset{\to }{v}}_{\text{CM}}=\frac{d}{dt}(\frac{1}{M}\sum _{j=1}^{N}{m}_{j}{\overset{\to }{r}}_{j})=\frac{1}{M}\sum _{j=1}^{N}{m}_{j}{\overset{\to }{v}}_{j} [/latex]
and this, like the position, has x-, y-, and z-components.
To calculate the center of mass in actual situations, we recommend the following procedure:
Problem-Solving Strategy: Calculating the Center of Mass
The center of mass of an object is a position vector. Thus, to calculate it, do these steps:
1. Define your coordinate system. Typically, the origin is placed at the location of one of the particles. This is not required, however.
2. Determine the x, y, z-coordinates of each particle that makes up the object.
3. Determine the mass of each particle, and sum them to obtain the total mass of the object. Note that the mass of the object at the origin must be included in the total mass.
4. Calculate the x-, y-, and z-components of the center of mass vector, using (Figure), (Figure), and (Figure).
5. If required, use the Pythagorean theorem to determine its magnitude.
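For a discrete system, the five steps above can be sketched directly in code. This is an illustrative sketch only: the three-particle system, its masses, and its coordinates below are made-up values, not taken from the text.

```python
import numpy as np

# Steps 1-2: coordinates of each particle (origin placed at particle 1), in meters
r = np.array([[0.0, 0.0, 0.0],   # particle 1 (at the origin)
              [1.0, 0.0, 0.0],   # particle 2
              [0.0, 2.0, 0.0]])  # particle 3

# Step 3: masses in kg; the mass at the origin IS included in the total
m = np.array([3.0, 1.0, 2.0])
M = m.sum()

# Step 4: components of the center-of-mass vector, r_CM = (1/M) * sum(m_j * r_j)
r_cm = (m[:, None] * r).sum(axis=0) / M

# Step 5: magnitude via the Pythagorean theorem
r_cm_mag = np.linalg.norm(r_cm)

print(r_cm)      # approximately [0.167, 0.667, 0]
print(r_cm_mag)
```

Note that because particle 1 sits at the origin, it contributes nothing to the sum in step 4, but its mass still appears in M.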
Here are two examples that will give you a feel for what the center of mass is.
Center of Mass of the Earth-Moon System
Using data from the text appendix, determine how far the center of mass of the Earth-moon system is from the center of Earth. Compare this distance to the radius of Earth, and comment on the result.
Ignore the other objects in the solar system.
We get the masses and separation distance of the Earth and moon, impose a coordinate system, and use (Figure) with just [latex] N=2 [/latex] objects. We use a subscript “e” to refer to Earth, and
subscript “m” to refer to the moon.
Define the origin of the coordinate system as the center of Earth. Then, with just two objects, (Figure) becomes
[latex] R=\frac{{m}_{\text{e}}{r}_{\text{e}}+{m}_{\text{m}}{r}_{\text{m}}}{{m}_{\text{e}}+{m}_{\text{m}}}. [/latex]
From Appendix D,
[latex] {m}_{\text{e}}=5.97\,×\,{10}^{24}\,\text{kg} [/latex]
[latex] {m}_{\text{m}}=7.36\,×\,{10}^{22}\,\text{kg} [/latex]
[latex] {r}_{\text{m}}=3.82\,×\,{10}^{8}\,\text{m}. [/latex]
We defined the center of Earth as the origin, so [latex] {r}_{\text{e}}=\text{0 m} [/latex]. Inserting these into the equation for R gives
[latex] \begin{array}{cc}\hfill R& =\frac{(5.97\,×\,{10}^{24}\,\text{kg})(0\,\text{m})+(7.36\,×\,{10}^{22}\,\text{kg})(3.82\,×\,{10}^{8}\,\text{m})}{5.97\,×\,{10}^{24}\,\text{kg}+7.36\,×\,{10}^{22}\,\text{kg}}\hfill \\ & =4.65\,×\,{10}^{6}\,\text{m.}\hfill \end{array} [/latex]
The radius of Earth is [latex] 6.37\,×\,{10}^{6}\,\text{m} [/latex], so the center of mass of the Earth-moon system is (6.37 − 4.65) [latex] ×\,{10}^{6}\,\text{m}=1.72\,×\,{10}^{6}\,\text{m}=1720\,\text{km} [/latex] (roughly 1070 miles) below the surface of Earth. The location of the center of mass is shown (not to scale).
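The two-body calculation is easy to verify numerically with the appendix values quoted above:

```python
m_e = 5.97e24   # mass of Earth, kg
m_m = 7.36e22   # mass of the Moon, kg
r_e = 0.0       # Earth's center is the chosen origin, m
r_m = 3.82e8    # Earth-Moon center-to-center distance, m

# Two-object form of the center-of-mass equation
R = (m_e * r_e + m_m * r_m) / (m_e + m_m)
print(R)        # roughly 4.6e6 m, well inside Earth's radius of 6.37e6 m
```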
Check Your Understanding
Suppose we included the sun in the system. Approximately where would the center of mass of the Earth-moon-sun system be located? (Feel free to actually calculate it.)
Center of Mass of a Salt Crystal
(Figure) shows a single crystal of sodium chloride—ordinary table salt. The sodium and chloride ions form a single unit, NaCl. When multiple NaCl units group together,
they form a cubic lattice. The smallest possible cube (called the unit cell) consists of four sodium ions and four chloride ions, alternating. The length of one edge of this cube (i.e., the bond
length) is [latex] 2.36\,×\,{10}^{-10}\,\text{m} [/latex]. Find the location of the center of mass of the unit cell. Specify it either by its coordinates [latex] ({r}_{\text{CM,}x},{r}_{\text{CM,}y},
{r}_{\text{CM,}z}) [/latex], or by [latex] {r}_{\text{CM}} [/latex] and two angles.
We can look up all the ion masses. If we impose a coordinate system on the unit cell, this will give us the positions of the ions. We can then apply (Figure), (Figure), and (Figure) (along with the
Pythagorean theorem).
Define the origin to be at the location of the chloride ion at the bottom left of the unit cell. (Figure) shows the coordinate system.
There are eight ions in this unit cell, so N = 8:
[latex] {\overset{\to }{r}}_{\text{CM}}=\frac{1}{M}\sum _{j=1}^{8}{m}_{j}{\overset{\to }{r}}_{j}. [/latex]
The mass of each of the chloride ions is
[latex] 35.453\text{u}\,×\,\frac{1.660\,×\,{10}^{-27}\,\text{kg}}{\text{u}}=5.885\,×\,{10}^{-26}\,\text{kg} [/latex]
so we have
[latex] {m}_{1}={m}_{3}={m}_{6}={m}_{8}=5.885\,×\,{10}^{-26}\,\text{kg}. [/latex]
For the sodium ions,
[latex] {m}_{2}={m}_{4}={m}_{5}={m}_{7}=3.816\,×\,{10}^{-26}\,\text{kg}. [/latex]
The total mass of the unit cell is therefore
[latex] M=(4)(5.885\,×\,{10}^{-26}\,\text{kg})+(4)(3.816\,×\,{10}^{-26}\,\text{kg})=3.880\,×\,{10}^{-25}\,\text{kg}. [/latex]
From the geometry, the locations are
[latex] \begin{array}{c}{\overset{\to }{r}}_{1}=0\hfill \\ {\overset{\to }{r}}_{2}=(2.36\,×\,{10}^{-10}\,\text{m})\hat{i}\hfill \\ {\overset{\to }{r}}_{3}={r}_{3x}\hat{i}+{r}_{3y}\hat{j}=(2.36\,×\,
{10}^{-10}\,\text{m})\hat{i}+(2.36\,×\,{10}^{-10}\,\text{m})\hat{j}\hfill \\ {\overset{\to }{r}}_{4}=(2.36\,×\,{10}^{-10}\,\text{m})\hat{j}\hfill \\ {\overset{\to }{r}}_{5}=(2.36\,×\,{10}^{-10}\,\
text{m})\hat{k}\hfill \\ {\overset{\to }{r}}_{6}={r}_{6x}\hat{i}+{r}_{6z}\hat{k}=(2.36\,×\,{10}^{-10}\,\text{m})\hat{i}+(2.36\,×\,{10}^{-10}\,\text{m})\hat{k}\hfill \\ {\overset{\to }{r}}_
{7}={r}_{7x}\hat{i}+{r}_{7y}\hat{j}+{r}_{7z}\hat{k}=(2.36\,×\,{10}^{-10}\,\text{m})\hat{i}+(2.36\,×\,{10}^{-10}\,\text{m})\hat{j}+(2.36\,×\,{10}^{-10}\,\text{m})\hat{k}\hfill \\ {\overset{\to }{r}}_
{8}={r}_{8y}\hat{j}+{r}_{8z}\hat{k}=(2.36\,×\,{10}^{-10}\,\text{m})\hat{j}+(2.36\,×\,{10}^{-10}\,\text{m})\hat{k}.\hfill \end{array} [/latex]
[latex] \begin{array}{cc}\hfill {r}_{\text{CM,}x}& =\frac{1}{M}\sum _{j=1}^{8}{m}_{j}{({r}_{x})}_{j}\hfill \\ & =\frac{1}{M}({m}_{1}{r}_{1x}+{m}_{2}{r}_{2x}+{m}_{3}{r}_{3x}+{m}_{4}{r}_{4x}+{m}_{5}{r}_{5x}+{m}_{6}{r}_{6x}+{m}_{7}{r}_{7x}+{m}_{8}{r}_{8x})\hfill \\ & =\frac{1}{3.8804\,×\,{10}^
{-25}\,\text{kg}}[(5.885\,×\,{10}^{-26}\,\text{kg})(0\,\text{m})+(3.816\,×\,{10}^{-26}\,\text{kg})(2.36\,×\,{10}^{-10}\,\text{m})\hfill \\ & \enspace+(5.885\,×\,{10}^{-26}\,\text{kg})(2.36\,×\,{10}^
{-10}\,\text{m})\hfill \\ & \enspace+(3.816\,×\,{10}^{-26}\,\text{kg})(2.36\,×\,{10}^{-10}\,\text{m})+0+0\hfill \\ & \enspace+(3.816\,×\,{10}^{-26}\,\text{kg})(2.36\,×\,{10}^{-10}\,\text{m})+0]\hfill
\\ & =1.18\,×\,{10}^{-10}\,\text{m.}\hfill \end{array} [/latex]
Similar calculations give [latex] {r}_{\text{CM,}y}={r}_{\text{CM,}z}=1.18\,×\,{10}^{-10}\,\text{m} [/latex] (you could argue that this must be true, by symmetry, but it’s a good idea to check).
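As a sanity check, the full three-component calculation can be reproduced in a few lines (a sketch in Python; the ion indices follow the list above, with chloride ions at positions 1, 3, 6, 8 and sodium ions at 2, 4, 5, 7):

```python
d = 2.36e-10      # edge length of the unit cell, m
m_cl = 5.885e-26  # chloride ion mass, kg
m_na = 3.816e-26  # sodium ion mass, kg

# (mass, position) for the eight ions, in the order used in the text
ions = [
    (m_cl, (0, 0, 0)), (m_na, (d, 0, 0)), (m_cl, (d, d, 0)), (m_na, (0, d, 0)),
    (m_na, (0, 0, d)), (m_cl, (d, 0, d)), (m_na, (d, d, d)), (m_cl, (0, d, d)),
]

M = sum(m for m, _ in ions)
r_cm = tuple(sum(m * r[k] for m, r in ions) / M for k in range(3))
print(r_cm)  # each component is d/2 = 1.18e-10 m: the cube's geometric center
```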
Significance
Although this is a good exercise in determining the center of mass given a chloride ion at the origin, in fact the origin could have been chosen at any location; the center of mass of a single unit cell is therefore mainly of interest as an exercise.
Check Your Understanding
Suppose you have a macroscopic salt crystal (that is, a crystal that is large enough to be visible with your unaided eye). It is made up of a huge number of unit cells. Is the center of mass of this
crystal necessarily at the geometric center of the crystal?
Two crucial concepts come out of these examples:
1. As with all problems, you must define your coordinate system and origin. For center-of-mass calculations, it often makes sense to choose your origin to be located at one of the masses of your
system. That choice automatically defines its distance in (Figure) to be zero. However, you must still include the mass of the object at your origin in your calculation of M, the total mass
(Figure). In the Earth-moon system example, this means including the mass of Earth. If you hadn’t, you’d have ended up with the center of mass of the system being at the center of the moon, which
is clearly wrong.
2. In the second example (the salt crystal), notice that there is no mass at all at the location of the center of mass. This is an example of what we stated above, that there does not have to be any
actual mass at the center of mass of an object.
Center of Mass of Continuous Objects
If the object in question has its mass distributed uniformly in space, rather than as a collection of discrete particles, then [latex] {m}_{j}\to dm [/latex], and the summation becomes an integral:
[latex] {\overset{\to }{r}}_{\text{CM}}=\frac{1}{M}\int \overset{\to }{r}dm. [/latex]
In this context, r is a characteristic dimension of the object (the radius of a sphere, the length of a long rod). To generate an integrand that can actually be calculated, you need to express the
differential mass element dm as a function of the mass density of the continuous object, and the dimension r. An example will clarify this.
CM of a Uniform Thin Hoop
Find the center of mass of a uniform thin hoop (or ring) of mass M and radius r.
First, the hoop’s symmetry suggests the center of mass should be at its geometric center. If we define our coordinate system such that the origin is located at the center of the hoop, the integral
should evaluate to zero.
We replace dm with an expression involving the density of the hoop and the radius of the hoop. We then have an expression we can actually integrate. Since the hoop is described as “thin,” we treat it
as a one-dimensional object, neglecting the thickness of the hoop. Therefore, its density is expressed as the number of kilograms of material per meter. Such a density is called a linear mass density
, and is given the symbol [latex] \lambda [/latex]; this is the Greek letter “lambda,” which is the equivalent of the English letter “l” (for “linear”).
Since the hoop is described as uniform, this means that the linear mass density [latex] \lambda [/latex] is constant. Thus, to get our expression for the differential mass element dm, we multiply
[latex] \lambda [/latex] by a differential length of the hoop, substitute, and integrate (with appropriate limits for the definite integral).
First, define our coordinate system and the relevant variables ((Figure)).
The center of mass is calculated with (Figure):
[latex] {\overset{\to }{r}}_{\text{CM}}=\frac{1}{M}{\int }_{a}^{b}\overset{\to }{r}dm. [/latex]
We have to determine the limits of integration a and b. Expressing [latex] \overset{\to }{r} [/latex] in component form gives us
[latex] {\overset{\to }{r}}_{\text{CM}}=\frac{1}{M}{\int }_{a}^{b}[(r\text{cos}\theta )\hat{i}+(r\text{sin}\theta )\hat{j}]dm. [/latex]
In the diagram, we highlighted a piece of the hoop that is of differential length ds; it therefore has a differential mass [latex] dm=\lambda ds [/latex]. Substituting:
[latex] {\overset{\to }{r}}_{\text{CM}}=\frac{1}{M}{\int }_{a}^{b}[(r\text{cos}\theta )\hat{i}+(r\text{sin}\theta )\hat{j}]\lambda ds. [/latex]
However, the arc length ds subtends a differential angle [latex] d\theta [/latex], so we have
[latex] ds=rd\theta [/latex]
and thus
[latex] {\overset{\to }{r}}_{\text{CM}}=\frac{1}{M}{\int }_{a}^{b}[(r\text{cos}\theta )\hat{i}+(r\text{sin}\theta )\hat{j}]\lambda rd\theta . [/latex]
One more step: Since [latex] \lambda [/latex] is the linear mass density, it is computed by dividing the total mass by the length of the hoop:
[latex] \lambda =\frac{M}{2\pi r} [/latex]
giving us
[latex] \begin{array}{cc}\hfill {\overset{\to }{r}}_{\text{CM}}& =\frac{1}{M}{\int }_{a}^{b}[(r\text{cos}\theta )\hat{i}+(r\text{sin}\theta )\hat{j}](\frac{M}{2\pi r})rd\theta \hfill \\ & =\frac{1}{2
\pi }{\int }_{a}^{b}[(r\text{cos}\theta )\hat{i}+(r\text{sin}\theta )\hat{j}]d\theta .\hfill \end{array} [/latex]
Notice that the variable of integration is now the angle [latex] \theta [/latex]. This tells us that the limits of integration (around the circular hoop) are [latex] \theta =\text{0 to}\,\theta =2\pi
[/latex], so [latex] a=0 [/latex] and [latex] b=2\pi [/latex]. Also, for convenience, we separate the integral into the x– and y-components of [latex] {\overset{\to }{r}}_{\text{CM}} [/latex]. The
final integral expression is
[latex] \begin{array}{cc}\hfill {\overset{\to }{r}}_{\text{CM}}& ={r}_{\text{CM,}x}\hat{i}+{r}_{\text{CM,}y}\hat{j}\hfill \\ & =[\frac{1}{2\pi }{\int }_{0}^{2\pi }(r\text{cos}\theta )d\theta ]\hat{i}
+[\frac{1}{2\pi }{\int }_{0}^{2\pi }(r\text{sin}\theta )d\theta ]\hat{j}\hfill \\ & =0\hat{i}+0\hat{j}=\overset{\to }{0}\hfill \end{array} [/latex]
as expected.
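The same integral can be checked numerically. The sketch below uses a midpoint-rule sum over the angle (r = 1 is an arbitrary choice of hoop radius):

```python
import math

# Numerically evaluate r_CM = (1/2π) ∫ (r cosθ, r sinθ) dθ over θ in [0, 2π]
# with a midpoint-rule sum of n slices.
r = 1.0
n = 100_000
dtheta = 2 * math.pi / n

x_cm = sum(r * math.cos((k + 0.5) * dtheta) for k in range(n)) * dtheta / (2 * math.pi)
y_cm = sum(r * math.sin((k + 0.5) * dtheta) for k in range(n)) * dtheta / (2 * math.pi)

print(x_cm, y_cm)  # both vanish to within floating-point noise
```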
Center of Mass and Conservation of Momentum
How does all this connect to conservation of momentum?
Suppose you have N objects with masses [latex] {m}_{1},{m}_{2},{m}_{3},…{m}_{N} [/latex] and initial velocities [latex] {\overset{\to }{v}}_{1},{\overset{\to }{v}}_{2},{\overset{\to }{v}}_{3},…\text
{,}{\overset{\to }{v}}_{N} [/latex]. The center of mass of the objects is
[latex] {\overset{\to }{r}}_{\text{CM}}=\frac{1}{M}\sum _{j=1}^{N}{m}_{j}{\overset{\to }{r}}_{j}. [/latex]
Its velocity is
[latex] {\overset{\to }{v}}_{\text{CM}}=\frac{d{\overset{\to }{r}}_{\text{CM}}}{dt}=\frac{1}{M}\sum _{j=1}^{N}{m}_{j}\frac{d{\overset{\to }{r}}_{j}}{dt} [/latex]
and thus the initial momentum of the center of mass is
[latex] \begin{array}{ccc}\hfill {[M\frac{d{\overset{\to }{r}}_{\text{CM}}}{dt}]}_{\text{i}}& =\hfill & \sum _{j=1}^{N}{m}_{j}\frac{d{\overset{\to }{r}}_{j,\text{i}}}{dt}\hfill \\ \hfill M{\overset{\
to }{v}}_{\text{CM,i}}& =\hfill & \sum _{j=1}^{N}{m}_{j}{\overset{\to }{v}}_{j,\text{i}}.\hfill \end{array} [/latex]
After these masses move and interact with each other, the momentum of the center of mass is
[latex] M{\overset{\to }{v}}_{\text{CM,f}}=\sum _{j=1}^{N}{m}_{j}{\overset{\to }{v}}_{j,\text{f}}. [/latex]
But conservation of momentum tells us that the right-hand side of both equations must be equal, which says
[latex] M{\overset{\to }{v}}_{\text{CM,f}}=M{\overset{\to }{v}}_{\text{CM,i}}. [/latex]
This result implies that conservation of momentum is expressed in terms of the center of mass of the system. Notice that as an object moves through space with no net external force acting on it, an
individual particle of the object may accelerate in various directions, with various magnitudes, depending on the net internal force acting on that object at any time. (Remember, it is only the
vector sum of all the internal forces that vanishes, not the internal force on a single particle.) Thus, such a particle’s momentum will not be constant—but the momentum of the entire extended object
will be, in accord with (Figure).
(Figure) implies another important result: Since M represents the mass of the entire system of particles, it is necessarily constant. (If it isn’t, we don’t have a closed system, so we can’t expect
the system’s momentum to be conserved.) As a result, (Figure) implies that, for a closed system,
[latex] {\overset{\to }{v}}_{\text{CM,f}}={\overset{\to }{v}}_{\text{CM,i}}. [/latex]
That is to say, in the absence of an external force, the velocity of the center of mass never changes.
You might be tempted to shrug and say, “Well yes, that’s just Newton’s first law,” but remember that Newton’s first law discusses the constant velocity of a particle, whereas (Figure) applies to the
center of mass of a (possibly vast) collection of interacting particles, and that there may not be any particle at the center of mass at all! So, this really is a remarkable result.
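A toy one-dimensional simulation makes this concrete (the masses, initial velocities, and force magnitudes below are arbitrary choices): two particles exchange equal-and-opposite internal forces, each particle's velocity wanders, yet the center-of-mass velocity stays fixed.

```python
import random

# Two particles in one dimension; no external force acts on the pair.
m1, m2 = 2.0, 3.0
v1, v2 = 1.0, -0.5
dt = 0.01

v_cm_initial = (m1 * v1 + m2 * v2) / (m1 + m2)

for _ in range(1000):
    f = random.uniform(-5.0, 5.0)  # random internal force on particle 1
    v1 += (f / m1) * dt            # Newton's second law for particle 1
    v2 += (-f / m2) * dt           # equal-and-opposite reaction on particle 2

v_cm_final = (m1 * v1 + m2 * v2) / (m1 + m2)
print(v_cm_initial, v_cm_final)    # the two values agree to within rounding
```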
Fireworks Display
When a fireworks rocket explodes, thousands of glowing fragments fly outward in all directions, and fall to Earth in an elegant and beautiful display ((Figure)). Describe what happens, in terms of
conservation of momentum and center of mass.
The picture shows radial symmetry about the central points of the explosions; this suggests the idea of center of mass. We can also see the parabolic motion of the glowing particles; this brings to
mind projectile motion ideas.
Initially, the fireworks rocket is launched and flies more or less straight upward; this is the cause of the more-or-less-straight, white trail going high into the sky below the explosion in the
upper-right of the picture (the yellow explosion). This trail is not parabolic because the explosive shell, during its launch phase, is actually a rocket; the impulse applied to it by the ejection of
the burning fuel applies a force on the shell during the rise-time interval. (This is a phenomenon we will study in the next section.) The shell has multiple forces on it; thus, it is not in
free-fall prior to the explosion.
At the instant of the explosion, the thousands of glowing fragments fly outward in a radially symmetrical pattern. The symmetry of the explosion is the result of all the internal forces summing to
zero [latex] (\sum _{j}{\overset{\to }{f}}_{j}^{\text{int}}=0); [/latex] for every internal force, there is another that is equal in magnitude and opposite in direction.
However, as we learned above, these internal forces cannot change the momentum of the center of mass of the (now exploded) shell. Since the rocket force has now vanished, the center of mass of the
shell is now a projectile (the only force on it is gravity), so its trajectory does become parabolic. The two red explosions on the left show the path of their centers of mass at a slightly longer
time after explosion compared to the yellow explosion on the upper right.
In fact, if you look carefully at all three explosions, you can see that the glowing trails are not truly radially symmetric; rather, they are somewhat denser on one side than the other.
Specifically, the yellow explosion and the lower middle explosion are slightly denser on their right sides, and the upper-left explosion is denser on its left side. This is because of the momentum of
their centers of mass; the differing trail densities are due to the momentum each piece of the shell had at the moment of its explosion. The fragment for the explosion on the upper left of the
picture had a momentum that pointed upward and to the left; the middle fragment’s momentum pointed upward and slightly to the right; and the right-side explosion clearly upward and to the right (as
evidenced by the white rocket exhaust trail visible below the yellow explosion).
Finally, each fragment is a projectile on its own, thus tracing out thousands of glowing parabolas.
In the discussion above, we said, “…the center of mass of the shell is now a projectile (the only force on it is gravity)….” This is not quite accurate, for there may not be any mass at all at the
center of mass; in which case, there could not be a force acting on it. This is actually just verbal shorthand for describing the fact that the gravitational forces on all the particles act so that
the center of mass changes position exactly as if all the mass of the shell were always located at the position of the center of mass.
Check Your Understanding
How would the firework display change in deep space, far away from any source of gravity?
You may sometimes hear someone describe an explosion by saying something like, “the fragments of the exploded object always move in a way that makes sure that the center of mass continues to move on
its original trajectory.” This makes it sound as if the process is somewhat magical: how can it be that, in every explosion, it always works out that the fragments move in just the right way so that
the center of mass’ motion is unchanged? Phrased this way, it would be hard to believe no explosion ever does anything differently.
The explanation of this apparently astonishing coincidence is: We defined the center of mass precisely so this is exactly what we would get. Recall that first we defined the momentum of the system:
[latex] {\overset{\to }{p}}_{\text{CM}}=\sum _{j=1}^{N}{\overset{\to }{p}}_{j}. [/latex]
We then concluded that the net external force on the system (if any) changed this momentum:
[latex] \overset{\to }{F}=\frac{d{\overset{\to }{p}}_{\text{CM}}}{dt} [/latex]
and then—and here’s the point—we defined an acceleration that would obey Newton’s second law. That is, we demanded that we should be able to write
[latex] \overset{\to }{a}=\frac{\overset{\to }{F}}{M} [/latex]
which requires that
[latex] \overset{\to }{a}=\frac{{d}^{2}}{d{t}^{2}}(\frac{1}{M}\sum _{j=1}^{N}{m}_{j}{\overset{\to }{r}}_{j}). [/latex]
where the quantity inside the parentheses is the center of mass of our system. So, it’s not astonishing that the center of mass obeys Newton’s second law; we defined it so that it would.
• An extended object (made up of many objects) has a defined position vector called the center of mass.
• The center of mass can be thought of, loosely, as the average location of the total mass of the object.
• The center of mass of an object traces out the trajectory dictated by Newton’s second law, due to the net external force.
• The internal forces within an extended object cannot alter the momentum of the extended object as a whole.
Conceptual Questions
Suppose a fireworks shell explodes, breaking into three large pieces for which air resistance is negligible. How does the explosion affect the motion of the center of mass? How would it be affected
if the pieces experienced significantly more air resistance than the intact shell?
Three point masses are placed at the corners of a triangle as shown in the figure below.
Find the center of mass of the three-mass system.
With the origin defined to be at the position of the 150-g mass, [latex] {x}_{\text{CM}}=-1.23\text{cm} [/latex] and [latex] {y}_{\text{CM}}=0.69\text{cm} [/latex]
Two particles of masses [latex] {m}_{1} [/latex] and [latex] {m}_{2} [/latex] separated by a horizontal distance D are released from the same height h at the same time. Find the vertical position of
the center of mass of these two particles at a time before the two particles strike the ground. Assume no air resistance.
Two particles of masses [latex] {m}_{1} [/latex] and [latex] {m}_{2} [/latex] separated by a horizontal distance D are let go from the same height h at different times. Particle 1 starts at [latex] t
=0 [/latex], and particle 2 is let go at [latex] t=T [/latex]. Find the vertical position of the center of mass at a time before the first particle strikes the ground. Assume no air resistance.
Two particles of masses [latex] {m}_{1} [/latex] and [latex] {m}_{2} [/latex] move uniformly in different circles of radii [latex] {R}_{1} [/latex] and [latex] {R}_{2} [/latex] about the origin in the x,
y-plane. The x– and y-coordinates of the center of mass and that of particle 1 are given as follows (where length is in meters and t in seconds):
[latex] {x}_{1}(t)=4\text{cos}(2t),{y}_{1}(t)=4\text{sin}(2t) [/latex]
[latex] {x}_{\text{CM}}(t)=3\text{cos}(2t),{y}_{\text{CM}}(t)=3\text{sin}(2t). [/latex]
a. Find the radius of the circle in which particle 1 moves.
b. Find the x– and y-coordinates of particle 2 and the radius of the circle in which this particle moves.
Two particles of masses [latex] {m}_{1} [/latex] and [latex] {m}_{2} [/latex] move uniformly in different circles of radii [latex] {R}_{1} [/latex] and [latex] {R}_{2} [/latex] about the origin in the
x, y-plane. The coordinates of the two particles in meters are given as follows ([latex] z=0 [/latex] for both). Here t is in seconds:
[latex] \begin{array}{ccc}\hfill {x}_{1}(t)& =\hfill & 4\,\text{cos}(2t)\hfill \\ \hfill {y}_{1}(t)& =\hfill & 4\,\text{sin}(2t)\hfill \\ \hfill {x}_{2}(t)& =\hfill & 2\,\text{cos}(3t-\frac{\pi }{2})
\hfill \\ \hfill {y}_{2}(t)& =\hfill & 2\,\text{sin}(3t-\frac{\pi }{2})\hfill \end{array} [/latex]
a. Find the radii of the circles of motion of both particles.
b. Find the x– and y-coordinates of the center of mass.
c. Decide if the center of mass moves in a circle by plotting its trajectory.
a. [latex] {R}_{1}=4\,\text{m} [/latex], [latex] {R}_{2}=2\,\text{m} [/latex]; b. [latex] {X}_{\text{CM}}=\frac{{m}_{1}{x}_{1}+{m}_{2}{x}_{2}}{{m}_{1}+{m}_{2}},{Y}_{\text{CM}}=\frac{{m}_{1}{y}_{1}+
{m}_{2}{y}_{2}}{{m}_{1}+{m}_{2}} [/latex]; c. yes, with [latex] R=\frac{1}{{m}_{1}+{m}_{2}}\sqrt{16{m}_{1}^{2}+4{m}_{2}^{2}} [/latex]
Find the center of mass of a one-meter long rod, made of 50 cm of iron (density [latex] 8\,\frac{\text{g}}{{\text{cm}}^{3}} [/latex]) and 50 cm of aluminum (density [latex] 2.7\,\frac{\text{g}}{{\
text{cm}}^{3}} [/latex]).
Find the center of mass of a rod of length L whose mass density changes from one end to the other quadratically. That is, if the rod is laid out along the x-axis with one end at the origin and the
other end at [latex] x=L [/latex], the density is given by [latex] \rho (x)={\rho }_{0}+({\rho }_{1}-{\rho }_{0}){(\frac{x}{L})}^{2} [/latex], where [latex] {\rho }_{0} [/latex] and [latex] {\rho }_
{1} [/latex] are constant values.
Find the center of mass of a rectangular block of length a and width b that has a nonuniform density such that when the rectangle is placed in the x,y-plane with one corner at the origin and the
block placed in the first quadrant with the two edges along the x– and y-axes, the density is given by[latex] \rho (x,y)={\rho }_{0}x [/latex], where [latex] {\rho }_{0} [/latex] is a constant.
Find the center of mass of a rectangular material of length a and width b made up of a material of nonuniform density. The density is such that when the rectangle is placed in the xy-plane, the
density is given by [latex] \rho (x,y)={\rho }_{0}xy [/latex].
A cube of side a is cut out of another cube of side b as shown in the figure below.
Find the location of the center of mass of the structure. (Hint: Think of the missing part as a negative mass overlapping a positive mass.)
Find the center of mass of a cone of uniform density that has a radius R at the base, height h, and mass M. Let the origin be at the center of the base of the cone and have the +z-axis pass through the vertex of the cone.
Find the center of mass of a thin wire of mass m and length L bent in a semicircular shape. Let the origin be at the center of the semicircle and have the wire arc from the +x axis, cross the +y
axis, and terminate at the −x axis.
Find the center of mass of a uniform thin semicircular plate of radius R. Let the origin be at the center of the semicircle, the plate arc from the +x axis to the −x axis, and the z axis be
perpendicular to the plate.
Find the center of mass of a sphere of mass M and radius R and a cylinder of mass m, radius r, and height h arranged as shown below.
Express your answers in a coordinate system that has the origin at the center of the cylinder.
center of mass
weighted average position of the mass
external force
force applied to an extended object that changes the momentum of the extended object as a whole
internal force
force that the simple particles that make up an extended object exert on each other. Internal forces can be attractive or repulsive
linear mass density
[latex] \lambda [/latex], expressed as the number of kilograms of material per meter
What is i^i?
There is a very interesting question: what is the value of $i^i$, where $i$ is the imaginary unit?
The answer is more interesting: $i^i = 0.2078795763507619...$. It is a REAL NUMBER!
How could we derive it?
First you need to know Euler's formula, which states
$$e^{ix} = \cos(x) + i\sin(x)$$
Constructing $i$ is really simple: just set $x = \pi/2$, and we have
$$e^{i\pi/2} = i$$
Now comes the magic part. Raise both sides to the power of $i$. The right side is $i^i$. The left side is now
$$e^{i \cdot i \cdot \pi/2} = e^{-\pi/2} = 0.2078795763507619...$$
However, this is not the end of the story. Notice that $i$ has infinitely many representations:
$$e^{i(2\pi n+\pi/2)} = i, \quad n \in \mathbf{Z}$$
As a result, $i^i$ takes infinitely many different values:
$$e^{-2\pi n-\pi/2}, \quad n \in \mathbf{Z}$$
So $i^i$ is not a single number; it represents a whole set of real values, with $e^{-\pi/2}$ as the principal one. And I can't believe the crazy world any more!
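You can check the principal value directly in Python, whose complex exponentiation uses the principal branch (the loop just lists a few members of the set):

```python
import cmath
import math

z = 1j ** 1j                      # Python evaluates this on the principal branch
print(z)                          # ≈ 0.2079, and purely real
print(math.exp(-math.pi / 2))     # matches the closed form e^{-π/2}

# A few of the other values in the set e^{-2πn - π/2}, n an integer
for n in (-1, 0, 1):
    print(cmath.exp(1j * 1j * (2 * math.pi * n + math.pi / 2)).real)
```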
Regression Example with Keras LSTM Networks in R
The LSTM (Long Short-Term Memory) network is a type of Recurrent Neural Network (RNN). An RNN processes sequential data: it iterates over the elements of a sequence while maintaining state information about the part of the sequence it has observed so far, and uses what it has learned to predict the next item in the sequence.
An LSTM network adds memory units to retain information across time steps. Each memory unit contains gates that manage the flow of information, with the importance of each piece of information decided by weights the algorithm learns. The forget gate discards the output if it is useless, the input gate allows the state to be updated, and the output gate sends the output. In this post, we'll learn how to fit and predict regression data with a Keras LSTM model in R.
This tutorial covers:
• Generating sample dataset
• Reshaping input data
• Building Keras LSTM model
• Predicting and plotting the result
We'll start by loading the 'keras' library for R.
Generating sample dataset
We need regression data, so we'll create a simple vector as the target regression dataset for this tutorial.
N = 400
n = seq(1:N)
a = n/10+4*sin(n/10)+sample(-1:6,N,replace=T)+rnorm(N)
[1] 3.698144 7.307090 3.216936 8.500867 8.003362 1.382323 5.488268
[8] 9.074807 8.684215 6.311856 10.784075 7.171844 10.386709 7.825735
[15] 3.497473 13.273991 5.225496 3.972325 5.448927 10.352474
Reshaping input data
Next, we'll create 'x' and 'y' training sequence data. Here, we apply a window method with a window size given by the 'step' value. The result (the y value) comes after the sequence of window elements (the x values);
then the window shifts to the next elements of x, the next y value is collected, and so on.
step = 2 # step is a window size
To cover all the elements in the vector, we'll pad the end of the 'a' vector by replicating its last element 'step' times.
a = c(a, replicate(step, tail(a, 1)))
Creating x - input, and y - output data.
x = NULL
y = NULL
for(i in 1:N){
  s = i - 1 + step
  x = rbind(x, a[i:s])
  y = rbind(y, a[s + 1])
}
cbind(head(x), head(y))
[,1] [,2] [,3]
[1,] 3.698144 7.307090 3.216936
[2,] 7.307090 3.216936 8.500867
[3,] 3.216936 8.500867 8.003362
[4,] 8.500867 8.003362 1.382323
[5,] 8.003362 1.382323 5.488268
[6,] 1.382323 5.488268 9.074807
Input data should be an array type, so we'll reshape it.
X = array(x, dim=c(N, step,1))
Building Keras LSTM model
Next, we'll create Keras sequential model, add an LSTM layer, and compile it with defined metrics.
model = keras_model_sequential() %>%
layer_lstm(units=128, input_shape=c(step, 1), activation="relu") %>%
layer_dense(units=64, activation = "relu") %>%
layer_dense(units=32) %>%
layer_dense(units=1, activation = "linear")
model %>% compile(loss = 'mse',
 optimizer = 'adam',
 metrics = list("mean_absolute_error"))
model %>% summary()
Layer (type) Output Shape Param #
lstm_16 (LSTM) (None, 128) 66560
dense_36 (Dense) (None, 64) 8256
dense_37 (Dense) (None, 32) 2080
dense_38 (Dense) (None, 1) 33
Total params: 76,929
Trainable params: 76,929
Non-trainable params: 0
Predicting and plotting the result
Next, we'll train the model with X and y input data, predict X data, and check the errors.
model %>% fit(X,y, epochs=50, batch_size=32, shuffle = FALSE)
y_pred = model %>% predict(X)
scores = model %>% evaluate(X, y, verbose = 0)
[1] 11.84502
[1] 2.810479
Finally, we'll plot the results.
x_axes = seq(1:length(y_pred))
plot(x_axes, y, type="l", col="red", lwd=2)
lines(x_axes, y_pred, col="blue",lwd=2)
legend("topleft", legend=c("y-original", "y-predicted"),
col=c("red", "blue"), lty=1,cex=0.8)
You may change the step size and observe the prediction results.
In this tutorial, we've briefly learned how to use Keras LSTM to predict regression data in R. The full source code is listed below.
N = 400
step = 2
n = seq(1:N)
a = n/10+4*sin(n/10)+sample(-1:6,N,replace=T)+rnorm(N)
a = c(a,replicate(step,tail(a,1)))
x = NULL
y = NULL
for(i in 1:N){
  s = i - 1 + step
  x = rbind(x, a[i:s])
  y = rbind(y, a[s + 1])
}
X = array(x, dim=c(N,step,1))
model = keras_model_sequential() %>%
layer_lstm(units=128, input_shape=c(step, 1), activation="relu") %>%
layer_dense(units=64, activation = "relu") %>%
layer_dense(units=32) %>%
layer_dense(units=1, activation = "linear")
model %>% compile(loss = 'mse',
 optimizer = 'adam',
 metrics = list("mean_absolute_error"))
model %>% summary()
model %>% fit(X,y, epochs=50, batch_size=32, shuffle = FALSE, verbose=0)
y_pred = model %>% predict(X)
scores = model %>% evaluate(X, y, verbose = 0)
x_axes = seq(1:length(y_pred))
plot(x_axes, y, type="l", col="red", lwd=2)
lines(x_axes, y_pred, col="blue",lwd=2)
legend("topleft", legend=c("y-original", "y-predicted"),
col=c("red", "blue"), lty=1,cex=0.8)
10 comments:
1. This model only looks good because it probably overfits the data. You did not include any test/validation data to see if the model generalizes out of the training sample. Additionally, with only
400 data points but almost 80,000 learnable parameters, the memory capacity of the net is likely too large for this task. This means that the net was probably able to memorize the test data's
specific input-output mappings, and will thus lack predictive power.
1. Good point! But, here I did not intend to build a perfect predictive model. The purpose of this post is to show a simple, workable example with a random data for beginners. Readers should
consider every aspect of the model building when they work with real problems.
2. Hi, you cannot use all the data to train the net and then predict on that same data; of course it performs perfectly. Your model should be tested with another data set, on which it will
perform much worse.
2. Hello, excellent post. I'm in a project using this algorithm and I have one question: if I have more predictors, should the model fit be ###fit(x1+x2,y,....) and the prediction ###predict
(x1+x2)? Or am I wrong?
Thanks for your help. Great post.
1. You are welcome! You need to create a combined X array (containing all the features x1, x2, ...) for your training and prediction. It goes like this:

x1, x2, y
2, 3, 3
3, 4, 4
2, 4, => 4
3, 5, => 5
4, 6, => 6

Here, each window contains 3 elements of both the x1 and x2 series:

2, 3
3, 4
2, 4 => 4

3, 4
2, 4
3, 5 => 5

2, 4
3, 5
4, 6 => 6
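If it helps to see the multi-feature windowing outside R, here is a plain-Python sketch of the same idea (the helper name `make_windows` is made up for illustration):

```python
def make_windows(features, target, step):
    """features: list of rows, each row holding all predictors for one
    time step; target: list of y values. Returns (X, y) where each X[i]
    is a window of `step` consecutive rows and y[i] is the value that
    follows that window."""
    X, y = [], []
    for i in range(len(features) - step):
        X.append(features[i:i + step])   # window of `step` rows, all features
        y.append(target[i + step])       # the value right after the window
    return X, y

# Two predictor series x1, x2 and a target y, as in the example above
feats = [[2, 3], [3, 4], [2, 4], [3, 5], [4, 6]]
targ = [3, 4, 4, 5, 6]
X, y = make_windows(feats, targ, step=3)
print(len(X), y)   # 2 [5, 6]
```

Each `X[i]` corresponds to one `step x n_features` slice of the 3-D array the Keras LSTM layer expects.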
2. Thanks, I made an X array with all the predictors and it works. I got an MSE of 15.9 (nice) with the default parameters, then I tuned the epochs parameter in the fit and got a better prediction.
I've been tuning epochs and batch_size, but I don't know very well how I should change the sequential Keras model (dense layers and units). I have 37 observations and 19 predictors. Can you give
me advice on this tuning? Thanks for your time and post; my model's predictions are great. In fact, I could stop now with my results, but I want to improve and learn more about this model.
3. Good! Your data is too small to evaluate your model and improve the performance. To check the improvement in your model;
1) Use bigger data,
2) Change the units number,
3) Add dense layer,
4) Add dropout layer, layer_dropout()
5) Change optimizer (rmsprop etc.)
4. Hi, I followed your advice and my model has improved, thanks. But (another doubt) my steps (the number of samples before a new period starts) have different lengths: at the beginning of the
sample the period changes every 4 samples, and at the end it changes every 5. How should I attack this problem? Two models? How can they be ensembled?
Thanks for your time, really.
3. How do I tune this model?
4. Hi, I tried this method on time series data with the last 4 years of monthly values. The predicted values are vague and I'm not sure what I did wrong. I also tried changing the step size, but
that didn't work out either. Can you please help me out with it?
|
{"url":"https://www.datatechnotes.com/2019/01/regression-example-with-lstm-networks.html","timestamp":"2024-11-13T07:26:49Z","content_type":"application/xhtml+xml","content_length":"118522","record_id":"<urn:uuid:43fa4c0a-a58f-4946-97b0-28d9b591e9de>","cc-path":"CC-MAIN-2024-46/segments/1730477028342.51/warc/CC-MAIN-20241113071746-20241113101746-00274.warc.gz"}
|
My Life, My Project, My Occupation: Exactly How 7 Simple Dice Activity Helped Me Succeed
Dice games are fantastic for children because they help build math, critical thinking, and problem-solving skills. They are also a good alternative to
screen time and promote social skills and sportsmanship.
Players roll the dice and draw the resulting numbers on graph paper. They keep a running total, and the player with the highest score wins.
A dice game is a type of tabletop game that uses one or more dice as a random device. The most popular dice game is Craps, in which players place bets on the outcome of the
roll. The number that appears is determined by the faces of the dice and their positions after being thrown. Many dice games require the use of multiple
dice, and some even involve stacking the dice in a particular arrangement before throwing them. Although some casinos discourage this practice to speed up the game, it is
an important element of the gaming experience for many players. lightning-dice-game.com
Dice are a simple, reliable way to generate randomness for a wide range of applications. They can be used to produce a variety of outcomes, including numbers, letters, directions
(for Boggle or Warhammer Fantasy Battle), playing-card symbols for poker dice, and even actions for sex dice. Dice are also fairly easy
to handle, making them a good choice for generating randomness in games where the players want to control their own luck.
In addition to the obvious, the more dice you add to a game, the less likely it is that any single outcome will be very high or very low. This is
because the total of many dice tends toward the mean, so extreme results, such as a long streak of sixes, become increasingly unlikely.
Dice games are games of chance in which you roll numbers on each face to try to make a combination. Dice are a common element of many games, and are often used to
determine the outcome of battles. However, some dice are designed to favor certain outcomes over others. These special dice are usually called “loaded” and may be made
to have specific numbers on the faces or different shapes. They may also have a different color or pattern on each face. Tragamonedas de frutas
The probability of rolling a particular outcome on dice is found by dividing the number of favorable combinations by the size of the sample space. For example, the probability of rolling a pair of sixes
on two dice is 1 in 36. Moreover, you can find the odds of compound outcomes by multiplying the individual probabilities of each die.
Another important aspect of dice probability is that it can be calculated without knowing what has happened before. This is because dice rolls are independent events, and so the
likelihood of one roll doesn't depend on what happened in the previous roll.
This can be hard to grasp because it doesn't always seem intuitive. For example, when you look at a table and see that there are only two cells where the total of the
dice is 3, it's easy to misjudge how likely that outcome is. In reality, you can compute the probability of any specific combination of faces by multiplying the
chances of each individual die. pin-up-casino.com.mx
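These counting arguments are easy to verify by brute force. A small Python sketch (illustrative only) enumerates all 36 outcomes of two dice:

```python
from itertools import product
from fractions import Fraction

# All 36 equally likely outcomes of rolling two fair dice
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    """Probability of an event: favorable outcomes / size of the sample space."""
    favorable = sum(1 for o in outcomes if event(o))
    return Fraction(favorable, len(outcomes))

print(prob(lambda o: o == (6, 6)))      # 1/36  (a pair of sixes)
print(prob(lambda o: sum(o) == 3))      # 1/18  (only (1,2) and (2,1) total 3)
# Independence: P(first die is 6) * P(second die is 6) = 1/6 * 1/6
print(Fraction(1, 6) * Fraction(1, 6))  # 1/36
```

The last line shows the multiplication rule for independent dice agreeing with the direct count.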
Using dice in the classroom is a great way to introduce children to basic math concepts. Students can play to a set number of rounds, or see who can
reach a designated number, such as 100, first. Regardless of the age or number of players, these games can help children strengthen their addition and multiplication skills.
When rolling two or three dice, it is easy to write a table (or chart) of the probability of rolling particular totals. However, this becomes tedious for larger numbers of dice.
One solution is to use long dice, such as teetotums and long dice, which are based on the infinite family of prisms with all faces equally face-transitive.
Standard dice have six faces, arranged so that opposite faces add up to one more than the number of faces. Each face carries a number, and the pips are
usually painted or otherwise marked to distinguish them from the pips on the other faces.
There are a lot of different variations of this fast-paced dice game, which makes it a great classroom activity for indoor recess. For example, you could add a round where children predict
which dice combinations are easiest or hardest to roll. Then they can roll and reroll all the dice until they hit a combination that adds up to 10.
The rules for dice games vary depending on the type of game, and it is important to know them before you play. Some games
involve numbers, while others use pips or other symbols. The most popular dice game is Craps, in which two dice are rolled and players place bets
on the total value of the roll. Some games, such as Boggle and Warhammer Fantasy Battle, use letters on the dice instead of numbers. Other games, such as sex dice, allow players
to roll several dice in an attempt to score big points.
In the case of a double trip, one of the six dice must show a number other than a 1 or a 4. The player with this combination wins. The rest of the players may continue to roll
the dice until they bust. If there are no more wins, the player with the highest score wins.
There are several variations of the game, including a “Crowning” rule, which allows the player to keep rolling dice until they hit three of a kind or better.
This rule is a great way to add excitement to the game without risking your entire bankroll.
The game also has a “Blisters” rule, which prevents a player from scoring any dice that don't match a number. This is especially handy if you're playing with a friend and
can't decide who will go first.
|
{"url":"https://oil-rig-explosions.com/2023/12/27/my-life-my-project-my-occupation-exactly-how-7-simple-dice-activity-helped-me-succeed/","timestamp":"2024-11-04T02:31:06Z","content_type":"text/html","content_length":"57586","record_id":"<urn:uuid:79673890-2862-4b07-8c0b-51a9ba9ce6c3>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00796.warc.gz"}
|
FACULTY CIVIL ENGINEERING 2020 SESSION 1 - DEGREE ECW422 401 (1)
EC/DEC 2019/ECW422/401
DECEMBER 2019
3 HOURS
This question paper consists of five (5) questions.
Answer ALL questions in the Answer Booklet. Start each answer on a new page.
Do not bring any material into the examination room unless permission is given by the
Please check to make sure that this examination pack consists of:
the Question Paper
an Answer Booklet - provided by the Faculty
Answer ALL questions in English.
DO NOT TURN THIS PAGE UNTIL YOU ARE TOLD TO DO SO
This examination paper consists of 9 printed pages
© Hak Cipta Universiti Teknologi MARA
a) The force, P, that is exerted on a spherical particle moving slowly through a liquid is given
by the equation below:
P = 3πμDV
where μ is the fluid viscosity [ML^-1T^-1], D is the particle diameter, and V is the particle
velocity. What are the dimensions of the constant 3π? Determine if the equation is
homogeneous. (C01-P01)(C2)
(5 marks)
b) A liquid, when poured into a graduated cylinder, is found to weigh 6 N while occupying a volume
of 500 ml (millilitres). Determine its specific weight, density, and specific gravity.
(5 marks)
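A quick numerical check of part (b), assuming g = 9.81 m/s² and a reference water density of 1000 kg/m³ (an illustrative sketch, not the official marking scheme):

```python
# Q1(b): specific weight gamma = W/V, density rho = gamma/g, SG = rho/1000
W = 6.0        # weight, N
V = 500e-6     # volume, m^3 (500 ml)
g = 9.81       # gravitational acceleration, m/s^2

specific_weight = W / V               # N/m^3
density = specific_weight / g         # kg/m^3
specific_gravity = density / 1000.0   # relative to water

print(round(specific_weight, 1))      # 12000.0 N/m^3
print(round(density, 1))              # 1223.2 kg/m^3
print(round(specific_gravity, 3))     # 1.223
```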
c) Water flows down an inclined fixed surface with the velocity profile shown in
Figure Q1 (c). Determine the magnitude and direction of the shearing stress on the fixed
surface for U = 3 m/s and h = 0.1 m. The dynamic viscosity of water is μ = 1.12 x 10^-3 N.s/m^2,
and the equation for the velocity profile is given as: (C01-P01)(C2)
u/U = 2(y/h) - (y/h)^2
Figure Q1 (c)
(5 marks)
a) An inclined manometer is attached to a pipe A filled with water and a pipe B which is full of
oil, as shown in Figure Q2 (a). If the angle of inclination, θ, is 40° and the pressure difference
between pipe A and pipe B is 10 kPa, determine the length, ℓ, given SG mercury = 13.6
and SG water = 1.0. (C01-P01)(C2)
Figure Q2 (a) (dimensions shown: 10 cm, 7 cm)
(5 marks)
b) In reference to Figure Q2 (a), determine the pressure in pipe A and pipe B, if the pressure
in pipe A is twice the pressure in pipe B and the length, ℓ, is maintained. (C01-P01)(C3)
(5 marks)
c) A 1.2 m tall concrete retaining wall is built as shown in Figure Q2 (c), with a gap of 0.4 m
from the earth. During heavy rain, water fills the space between the wall and the earth to
a depth of 0.7 m. Determine the force acting on the retaining wall and its point of location
(centre of pressure) if the wall is 3 m wide (into the plane). (C02-P02)(C4)
Figure Q2 (c) (water depth shown: 0.7 m)
(10 marks)
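For readers checking their work on part (c), the standard result for hydrostatic force on a vertical plane surface can be sketched numerically (assuming a specific weight of water of 9810 N/m³; an illustrative check, not the marking scheme):

```python
# Q2(c): F = (1/2) * gamma * h^2 * w for a vertical wall wetted to depth h;
# the centre of pressure sits h/3 above the base.
gamma = 9810.0  # specific weight of water, N/m^3
h = 0.7         # water depth against the wall, m
w = 3.0         # wall width into the plane, m

F = 0.5 * gamma * h**2 * w  # resultant force on the wall, N
y_cp = h / 3                # centre of pressure, measured up from the base, m

print(round(F / 1000, 2))   # 7.21 kN
print(round(y_cp, 3))       # 0.233 m above the base
```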
a) A 60 cm solid cube floats in water with a 15 cm thick oil layer on top, as shown in
Figure Q3 (a). Determine the weight of the cube. (C01-P01)(C2)
Oil (SG = 0.98)
Figure Q3 (a)
(5 marks)
b) A rectangular floating pontoon bridge has a depth of 7.6 m, a width of 12 m, a length of 10
m, and a total weight of 140 kN, as shown in Figure Q3 (b). (C02-P02)(C5)
i) Determine the maximum load the pontoon bridge can sustain before it becomes totally
submerged.
ii) Check the stability of the pontoon bridge if a point load of 2800 kN is placed in the
middle of the top surface of the pontoon.
7.6 m
Figure Q3 (b)
(10 marks)
a) Explain the principles of the venturi meter for flow measurement. (C01-P01)(C2)
(5 marks)
b) A vertical venturi meter with a mercury manometer is used to measure the flow rate of water,
as shown in Figure Q4 (b). (C01-P01)(C4)
i) Determine the theoretical volumetric flow rate of water through the venturi meter.
ii) If the discharge coefficient is 0.97, calculate the actual volumetric flow rate.
ho = 30 cm
h7 = 2 cm
hx = 14 cm
Dt = 12 cm
Figure Q4 (b)
(10 marks)
c) A 3.5 m high large tank, as shown in Figure Q4 (c), is initially filled with water to a depth of
3 m. The tank water surface is open to the atmosphere, and a sharp-edged 10 cm diameter
orifice at the bottom drains to the atmosphere through a horizontal 80 m long pipe. The
total head loss of the system from point A to point B is 1.5 m. (C02-P02)(C3)
i) Determine the velocity of flow at the exit point.
ii) In order to drain the tank faster, a pump is installed near the tank exit. Determine the
pump head input necessary to establish an average water velocity of 6.5 m/s when
the tank is full.
10 cm
80 m
Figure Q4 (c)
(10 marks)
d) For the tank configuration shown in Figure Q4 (d), determine the water depth, hA, if the
flow, Q, is steady and the water surface elevation of both tank A and tank B is unchanged.
Ignore all losses. (C02-P02)(C4)
0.03 m diameter
0.05 m diameter
hB = 2 m
Figure Q4 (d)
(10 marks)
a) With the aid of a diagram, explain the kinematics of the forces exerted by flowing fluids on a
horizontal pipe bend, based on Newton's second law of motion. (C01-P01)(C2)
(5 marks)
b) A free jet of fluid strikes a wedge as shown in Figure Q5 (b) and is split into two
streams at an angle of 30°. The horizontal and vertical components of the force needed to
hold the wedge stationary are Fh and Fv, respectively. Given that the fluid is incompressible
and the mass of the water is negligible, determine the force ratio Fh/Fv. (C02-P02)(C5)
Figure Q5(b)
(10 marks)
|
{"url":"https://studylib.net/doc/26308564/faculty-civil-engineering-2020-session-1---degree-ecw422-...","timestamp":"2024-11-13T19:41:44Z","content_type":"text/html","content_length":"60346","record_id":"<urn:uuid:0c912c0e-ef08-479d-8ef8-b4497c8288a8>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00474.warc.gz"}
|
Excelchat - Live Excel Solver for Your Problems
I'm trying to do some data analysis in Excel and having some trouble. The problem: I need to keep only the last number of each consecutive run in a list and set the rest to 0. For example,
if column A begins with a single run of one number, "1", then column B's corresponding value stands and is carried into column C. But if the next numbers in A are 1 and 2, only the last
number of the run (in this case "2") stands, so column B's value for that row is carried into column C, while the row for "1" gets 0 regardless of what is in column B. Another way of putting this
is that if column A is 1, 2, 3 and column B is 1, 1, 1, then column C should be 0, 0, 1, as only the last number of a consecutive chain is valid and the others are converted to 0. Similarly,
if column A is 1, 2 and column B is 0, 0, then column C will be 0, 0, because only the last number is taken; coincidentally, this is also 0. Thanks for your help in advance!
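One way to read the rule: a row keeps its column-B value only if it is the last of a consecutive run in column A, and every other row becomes 0. A Python sketch of that interpretation (the function name is made up, and this is outside Excel, but it reproduces both of the examples above):

```python
def keep_run_ends(col_a, col_b):
    """Copy col_b into col_c only where col_a's consecutive run ends
    (i.e. the next A value is not the current value plus 1); 0 elsewhere."""
    col_c = []
    for i in range(len(col_a)):
        last_of_run = (i == len(col_a) - 1) or (col_a[i + 1] != col_a[i] + 1)
        col_c.append(col_b[i] if last_of_run else 0)
    return col_c

print(keep_run_ends([1, 2, 3], [1, 1, 1]))  # [0, 0, 1]
print(keep_run_ends([1, 2], [0, 0]))        # [0, 0]
```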
Solved by I. J. in 22 mins
|
{"url":"https://www.got-it.ai/solutions/excel-chat/excel-help?page=7","timestamp":"2024-11-07T17:24:58Z","content_type":"text/html","content_length":"360917","record_id":"<urn:uuid:b915479b-fc8e-45be-af3a-446be8ecd8ad>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00289.warc.gz"}
|
Present law trust fund investment holdings
These provisions invest a portion of the Social Security trust funds in marketable securities (e.g., equities, corporate bonds), rather than in special-issue government bonds as under current law. We
provide a summary list of all options in this category. For each provision listed below, we provide an estimate of the financial effect on the OASDI program over the long-range period (the next 75
years) and for the 75th year. In addition, we provide graphs and detailed single year tables. We base all estimates on the intermediate assumptions described in the 2011 Trustees Report.
The selections G3 and G5 provide a low-yield or risk-adjusted perspective where equity yields equal the average real yield on long-term Treasury bonds. Thus, these selections have no effect on the
actuarial balance of the OASDI program. Many analysts believe the higher expected return for equities should not be included in valuations because the tendency for higher average returns is
compensation for the higher volatility in equities. The low or risk-adjusted yield assumption reflects this perspective.
Choose the type of estimates (summary or detailed) from the list of provisions.
|
{"url":"https://www-origin.ssa.gov/OACT/solvency/provisions_tr2011/investequities.html","timestamp":"2024-11-12T03:41:37Z","content_type":"text/html","content_length":"13445","record_id":"<urn:uuid:279901d4-3bd4-4776-8354-f8bcc4925193>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00525.warc.gz"}
|
Savas Dimopoulos | Academic Influence
Savas Dimopoulos
American physicist
Why Is Savas Dimopoulos Influential?
According to Wikipedia, “Savas Dimopoulos is a particle physicist at Stanford University. He worked at CERN from 1994 to 1997. Dimopoulos is well known for his work on constructing theories beyond
the Standard Model. Life He was born an ethnic Greek in Istanbul, Turkey and later moved to Athens due to ethnic tensions in Turkey during the 1950s and 1960s.”
|
{"url":"https://academicinfluence.com/people/savas-dimopoulos","timestamp":"2024-11-07T02:56:46Z","content_type":"text/html","content_length":"72701","record_id":"<urn:uuid:d08da82e-5855-49b3-a046-3bc5fec64f79>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00706.warc.gz"}
|
ON LINE, Jan 27, 2025 - May 9, 2025, Open

Note (applies to every section listed below): This is a 14-wk class, so the content typically covered in 16 wks will be covered in 14 wks. Please log in to Moodle the first day of class, Monday, January 27, and complete the required enrollment assignment (REA). This assignment must be completed by 11:50 pm on Tuesday, February 4 or you will be dropped from the course.

Section        Title                          Cr  Location / Times                        Dates                 Instructor   Status
ACC-120-1440   Prin of Financial Accounting   4   ON LINE                                 Jan 27 - May 9, 2025  Hatchett     Open
ACC-121-1440   Prin of Managerial Accounting  4   ON LINE                                 Jan 27 - May 9, 2025  Turpin       Open
ANT-210-1440   General Anthropology           3   ON LINE                                 Jan 27 - May 9, 2025  Bowman       Open
BUS-110-1440   Introduction to Business       3   ON LINE                                 Jan 27 - May 9, 2025  Clark        Open
BUS-139-1440   Entrepreneurship I             3   ON LINE                                 Jan 27 - May 9, 2025  Clark        Open
CIS-110-1440   Introduction to Computers      3   ON LINE                                 Jan 27 - May 9, 2025  Merritt      Open
CJC-113-1440   Juvenile Justice               3   ON LINE                                 Jan 27 - May 9, 2025  Sweatt       Open
CJC-113-1450   Juvenile Justice               3   T,TH ADT 209 8:25-9:55 AM + ON LINE     Jan 27 - May 9, 2025  Bowers       Open
CJC-131-1450   Criminal Law                   3   M,W ADT 209 8:25-9:55 AM + ON LINE      Jan 27 - May 9, 2025  Bowers       Open
CJC-132-1450   Court Procedure & Evidence     3   W ADT 209 11:00 AM-12:50 PM + ON LINE   Jan 27 - May 9, 2025  Bowers       Open
CJC-141-1450   Corrections                    3   M ADT 209 11:00 AM-12:50 PM + ON LINE   Jan 27 - May 9, 2025  Bowers       Open
COM-120-1440   Intro Interpersonal Com        3   ON LINE                                 Jan 27 - May 9, 2025  Wright       Open
COM-231-1440   Public Speaking                3   ON LINE                                 Jan 27 - May 9, 2025  Bayer        Open
ECO-251-1440   Prin of Microeconomics         3   ON LINE                                 Jan 27 - May 9, 2025  Sudano       Open
ECO-252-1440   Prin of Macroeconomics         3   ON LINE                                 Jan 27 - May 9, 2025  Sudano       Open
ENG-111-1440   Writing and Inquiry            3   ON LINE                                 Jan 27 - May 9, 2025  Rutledge     Open
ENG-112-1440   Writing/Research in the Disc   3   ON LINE                                 Jan 27 - May 9, 2025  Russell      Open
ENG-242-1450   British Literature II          3   T,TH HUM 201 1:30-2:45 PM + ON LINE     Jan 27 - May 9, 2025  McCollum     Open
HIS-131-1440   American History I             3   ON LINE                                 Jan 27 - May 9, 2025  Kearney      Open
MAT-152-1440   Statistical Methods I          4   ON LINE                                 Jan 27 - May 9, 2025  Jansen       Open
MAT-272-1440   Calculus II                    4   ON LINE                                 Jan 27 - May 9, 2025  Jansen       Open
MUS-112-1440   Introduction to Jazz           3   ON LINE                                 Jan 27 - May 9, 2025  Casamassima  Open
MUS-210-1440   History of Rock Music          3   ON LINE                                 Jan 27 - May 9, 2025               Open
of class, Monday, January 27 and complete the required enrollment
assignment (REA). This assignment must be completed by 11:50 pm
on Tuesday, February 4 or you will be dropped from the course.PSY-150-1440General Psychology3M
SUON LINEJan 27, 2025May 9, 2025WordOpenThis is a 14-wk class so the content typically covered in 16-wks
will be covered in 14-wks. Please log in to Moodle the first day
of class, Monday, January 27 and complete the required enrollment
assignment (REA). This assignment must be completed by 11:50 pm
on Tuesday, February 4 or you will be dropped from the course.SOC-210-1440Introduction to Sociology3M
SUON LINEJan 27, 2025May 9, 2025BowmanOpenThis is a 14-wk class so the content typically covered in 16-wks
will be covered in 14-wks. Please log in to Moodle the first day
of class, Monday, January 27 and complete the required enrollment
assignment (REA). This assignment must be completed by 11:50 pm
on Tuesday, February 4 or you will be dropped from the course.SPA-111-1440Elementary Spanish I3M
SUON LINEJan 27, 2025May 9, 2025RichardsonOpenThis is a 14-wk class so the content typically covered in 16-wks
will be covered in 14-wks. Please log in to Moodle the first day
of class, Monday, January 27 and complete the required enrollment
assignment (REA). This assignment must be completed by 11:50 pm
on Tuesday, February 4 or you will be dropped from the course.SPA-111-1441Elementary Spanish I3M
SUON LINEJan 27, 2025May 9, 2025RichardsonOpenThis is a 14-wk class so the content typically covered in 16-wks
will be covered in 14-wks. Please log in to Moodle the first day
of class, Monday, January 27 and complete the required enrollment
assignment (REA). This assignment must be completed by 11:50 pm
on Tuesday, February 4 or you will be dropped from the course.SPA-112-1440Elementary Spanish II3M
SUON LINEJan 27, 2025May 9, 2025RosenbergerOpenThis is a 14-wk class so the content typically covered in 16-wks
will be covered in 14-wks. Please log in to Moodle the first day
of class, Monday, January 27 and complete the required enrollment
assignment (REA). This assignment must be completed by 11:50 pm
on Tuesday, February 4 or you will be dropped from the course.SPA-112-1441Elementary Spanish II3M
SUON LINEJan 27, 2025May 9, 2025RosenbergerOpenThis is a 14-wk class so the content typically covered in 16-wks
will be covered in 14-wks. Please log in to Moodle the first day
of class, Monday, January 27 and complete the required enrollment
assignment (REA). This assignment must be completed by 11:50 pm
on Tuesday, February 4 or you will be dropped from the course.WEB-210-1440Web Design3M,T,W,TH,F,S,SU
M,T,W,TH,F,S,SUON LINE
Jan 27, 2025May 9, 2025OpenThis is a 14-wk class so the content typically covered in 16-wks
will be covered in 14-wks. Please log in to Moodle the first day
of class, Monday, January 27 and complete the required enrollment
assignment (REA). This assignment must be completed by 11:50 pm
on Tuesday, February 4 or you will be dropped from the course.WEB-213-1440Internet Mkt & Analytics3M,T,W,TH,F,S,SU
M,T,W,TH,F,S,SUON LINE
Jan 27, 2025May 9, 2025OpenThis is a 14-wk class so the content typically covered in 16-wks
will be covered in 14-wks. Please log in to Moodle the first day
of class, Monday, January 27 and complete the required enrollment
assignment (REA). This assignment must be completed by 11:50 pm
on Tuesday, February 4 or you will be dropped from the course.
Paul Francis Mendler
Robert L. Constable, N. Paul Francis Mendler, Recursive definitions in type theory, in Logic of Programs 1985, Lecture Notes in Computer Science 193, Springer (1985) [doi:10.1007/3-540-15648-8_5]
Paul Francis Mendler, Inductive Definition in Type Theory, Cornell (1987) [hdl:1813/6710]
Last revised on January 17, 2023 at 12:24:56.
NCERT Solutions for Class 8 maths Chapter 15 Introduction to Graphs - Free PDF Download
The graph is a mathematical representation of a network: it describes the relationship between points and the lines that join them. A graph consists of points and lines between them; the lengths of the lines and the positions of the points do not matter. Each point in a graph is called a node, and the points often represent the relationship between two or more things. We can then illustrate the data using a bar graph.
CBSE - Class 11 - Mathematics
Class 11 Mathematics model papers, guess papers and practice tests with videos and online MCQ quiz are available for free download in myCBSEguide. The topics covered are Sets, Relations and
Functions, Trigonometric Functions, Principle of Mathematical Induction, Complex Numbers and Quadratic Equations, Linear Inequalities, Permutations and Combinations, Binomial Theorem, Sequences and
Series, Straight Lines, Conic Sections, Introduction to Three Dimensional Geometry, Limits and Derivatives, Mathematical Reasoning, Statistics, Probability. CBSE class 11 Mathematics important
questions and HOTS questions are given with solution.
MCQ tests help users practice each chapter quickly. There are hundreds of MCQ quizzes for class 11 Mathematics. These online tests are framed in such a way that all concepts and maths formulas given in the CBSE NCERT textbooks are included. The topics in CBSE class 11 Mathematics are Sets, Relations and Functions, Trigonometric Functions, Principle of Mathematical Induction, Complex Numbers and Quadratic Equations, Linear Inequalities, Permutations and Combinations, Binomial Theorem, Sequences and Series, Straight Lines, Conic Sections, Introduction to Three Dimensional Geometry, Limits and Derivatives, Mathematical Reasoning, Statistics, and Probability.
Cohen Sutherland Algorithm
The Cohen-Sutherland algorithm is an algorithm for clipping a line to fit a viewport (the area displayed onscreen). Drawing the parts of a line that fall outside the viewport would be wasteful of system resources.
Therefore, the Cohen-Sutherland clipping algorithm determines which parts of a line are onscreen, and so should be drawn.
The virtual canvas is divided into five portions; the viewport, above the viewport, below the viewport, to the left of it and to the right.
If the beginning coordinate and ending coordinate both fall inside the viewport, then the line is automatically drawn in its entirety. If both fall in the same region outside the viewport, the line is disregarded and not drawn.
If a line's coordinates fall in different regions, the line is divided in two, with a new coordinate in the middle. The algorithm is repeated for each section: one will be drawn completely or rejected completely, and the other will need to be divided again, until the remaining section is only one pixel long.
This algorithm is the most popular clipping algorithm, despite not being the most efficient. It is good if a large percentage of the data lies entirely inside or outside the viewport, and so it
doesn't have to do a lot of line division.
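The trivial-accept and trivial-reject tests above are usually implemented with four-bit region "outcodes". Note that the standard formulation slides an outside endpoint to the exact viewport boundary rather than repeatedly bisecting; the following is a minimal sketch under that formulation, with an assumed 10×8 viewport:

```python
# Region outcodes: one bit per side of the viewport that a point lies beyond.
INSIDE, LEFT, RIGHT, BOTTOM, TOP = 0, 1, 2, 4, 8
XMIN, XMAX, YMIN, YMAX = 0.0, 10.0, 0.0, 8.0  # assumed viewport

def outcode(x, y):
    code = INSIDE
    if x < XMIN:
        code |= LEFT
    elif x > XMAX:
        code |= RIGHT
    if y < YMIN:
        code |= BOTTOM
    elif y > YMAX:
        code |= TOP
    return code

def clip(x0, y0, x1, y1):
    """Return the visible part of the segment, or None if fully offscreen."""
    c0, c1 = outcode(x0, y0), outcode(x1, y1)
    while True:
        if not (c0 | c1):      # both endpoints inside: trivially accept
            return (x0, y0, x1, y1)
        if c0 & c1:            # both beyond the same side: trivially reject
            return None
        c = c0 or c1           # pick an endpoint that is outside
        if c & TOP:            # move it onto the crossed boundary
            x, y = x0 + (x1 - x0) * (YMAX - y0) / (y1 - y0), YMAX
        elif c & BOTTOM:
            x, y = x0 + (x1 - x0) * (YMIN - y0) / (y1 - y0), YMIN
        elif c & RIGHT:
            x, y = XMAX, y0 + (y1 - y0) * (XMAX - x0) / (x1 - x0)
        else:                  # LEFT
            x, y = XMIN, y0 + (y1 - y0) * (XMIN - x0) / (x1 - x0)
        if c == c0:
            x0, y0, c0 = x, y, outcode(x, y)
        else:
            x1, y1, c1 = x, y, outcode(x, y)
```

For example, `clip(-5, 4, 5, 4)` trims the segment to start at the left edge, while a segment lying entirely to the lower-left of the viewport is rejected without any division.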
Differential equations
Wolfram|Alpha brings expert-level knowledge and capabilities to the broadest possible range of people, spanning all professions and education levels.
The Mastering Differential Equations with Wolfram Alpha Course will teach you how to use Wolfram Alpha to solve Differential Equations problems for all your STEM courses and more! Whether taking via
Self-Study or our Live Online Workshop, you will learn to master Wolfram Alpha for all differential equation applications, giving you an even bigger competitive advantage over other STEM students!
Stay on top of important topics and build connections by joining Wolfram Community groups relevant to your interests. Learn to harness Wolfram Language features for solving ordinary differential
equations (ODEs) and differential-algebraic equations (DAEs). Course covers the built-in function NDSolve, its options and related functionality. Wolfram Community forum discussion about Solve a
system of differential equations in a loop. Stay on top of important topics and build connections by joining Wolfram Community groups relevant to … Wolfram Knowledgebase Curated computable knowledge
powering Wolfram|Alpha. Partial Differential Equations: Interactively Solve and Visualize PDEs. Interactively manipulate a Poisson equation over a rectangle by modifying a cutout.
2013-09-10: I stumbled upon a differential equation which I do not know how to solve but would love to know the answer. I tried plugging it into Wolfram Alpha but it didn't help. For some reason WA wasn't interpreting it right.
Does anyone know if Wolfram Alpha has step-by-step solutions for systems of differential equations? When I input them, it comes up with an answer but it does not give me the step-by-step solution. I would like it just for practicing purposes.
But I think that the smart part lies elsewhere. You may type in any expression; then the parser kicks in and tries to establish which type of ordinary differential equation it is solving.
Wolfram Community forum discussion about checking the answer of a differential equation when you put it in Wolfram Alpha. Stay on top of important topics and build connections by joining Wolfram Community groups relevant to your interests.
I'm solving a differential equation in Mathematica. Here is what I'm solving:

DSolve[{-(r V[w]) + u V'[w] + s V''[w] == -E^(g w)}, V[w], w]

When I use Wolfram Alpha to solve it, it gives me a nice solution:

solve u*V'(w) + s*V''(w) - r*V = -exp(g*w)
V(w) = c_1 e^((w (-sqrt(4 r s + u^2) - u))/(2 s)) + c_2 e^((w (sqrt(4 r s + u^2) - u))/(2 s)) + e^(g w)/(r - g (g s + u))

In this video you see how to check your answers to first-order differential equations using Wolfram Alpha (follow twitter @xmajs).
Get the free "General Differential Equation Solver" widget for your website, blog, Wordpress, Blogger, or iGoogle. Find more Mathematics widgets in Wolfram|Alpha.
Wolfram|Alpha can also solve differential equations, or even bring up information about general math topics or mathematicians. All UW students have access to Wolfram Alpha Pro.
One such class is partial differential equations (PDEs). NDSolve[eqns, u, {x, xmin, xmax}] finds a numerical solution to the ordinary differential equations eqns for the function u with the independent variable x in the range xmin to xmax. NDSolve[eqns, u, {x, xmin, xmax}, {y, ymin, ymax}] solves the partial differential equations eqns over a rectangular region. The Wolfram Language can find solutions to ordinary, partial and delay differential equations (ODEs, PDEs and DDEs).
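The quoted Wolfram answer can also be sanity-checked offline. The sketch below uses SymPy (a substitute tool, not part of Wolfram) with concrete coefficients r = s = u = 1, g = 2 that I chose for illustration; for these values the quoted particular term e^(g w)/(r - g (g s + u)) reduces to -e^(2w)/5:

```python
import sympy as sp

w = sp.symbols('w')
V = sp.Function('V')

# u*V' + s*V'' - r*V = -exp(g*w), with r = s = u = 1 and g = 2.
ode = sp.Eq(V(w).diff(w) + V(w).diff(w, 2) - V(w), -sp.exp(2 * w))
sol = sp.dsolve(ode, V(w))

# checkodesol substitutes the solution back into the ODE;
# (True, 0) means the equation is satisfied identically.
ok, residual = sp.checkodesol(ode, sol)
```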
Directed Graph
A Graph in which each Edge is replaced by a directed Edge, also called a Digraph or Reflexive Graph. A Complete directed graph is called a Tournament. If G is an undirected connected Graph, then one can always direct the circuit Edges of G and leave the Separating Edges undirected so that there is a directed path from any node to another. Such a Graph is said to be transitive if the adjacency relation is transitive. The numbers of directed graphs on n = 1, 2, ... nodes are 1, 3, 16, 218, 9608, ... (Sloane's A000273).
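As a small illustration of the transitivity condition, a digraph stored as a successor-set dictionary (a representation of my own choosing, not from the entry) can be checked directly:

```python
from itertools import product

def is_transitive(adj):
    """adj maps each node to its set of successors.
    Transitive: whenever a -> b and b -> c, also a -> c."""
    nodes = adj.keys()
    return all(c in adj[a]
               for a, b, c in product(nodes, repeat=3)
               if b in adj[a] and c in adj[b])

# The successor relation of a total order is transitive:
assert is_transitive({1: {2, 3}, 2: {3}, 3: set()})
# A directed 3-cycle is not (1 -> 2 -> 3 but not 1 -> 3):
assert not is_transitive({1: {2}, 2: {3}, 3: {1}})
```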
See also Arborescence, Cayley Graph, Indegree, Network, Outdegree, Sink (Directed Graph), Source, Tournament
Sloane, N. J. A. Sequence A000273/M3032 in "An On-Line Version of the Encyclopedia of Integer Sequences," http://www.research.att.com/~njas/sequences/eisonline.html; and Sloane, N. J. A. and Plouffe, S. The Encyclopedia of Integer Sequences. San Diego: Academic Press, 1995.
© 1996-9 Eric W. Weisstein
Small non-convex MINLP: Pyomo vs GAMS
In [1], the following Pyomo model (Python fragment) is presented:
model.x = Var(name="Number of batches", domain=NonNegativeIntegers, initialize=10)
model.a = Var(name="Batch Size", domain=NonNegativeIntegers, bounds=(5, 20))

# Objective function
def total_production(model):
    return model.x * model.a

model.total_production = Objective(rule=total_production, sense=minimize)

# Constraints
# Minimum production of the two output products
def first_material_constraint_rule(model):
    return sum(0.2 * model.a * i for i in range(1, value(model.x) + 1)) >= 70

model.first_material_constraint = Constraint(rule=first_material_constraint_rule)

def second_material_constraint_rule(model):
    return sum(0.8 * model.a * i for i in range(1, value(model.x) + 1)) >= 90

model.second_material_constraint = Constraint(rule=second_material_constraint_rule)

# At least one production run
def min_production_rule(model):
    return model.x >= 1

model.min_production = Constraint(rule=min_production_rule)
This is a little bit incomplete: we miss imports, creating the model and a solve. However, we can see some real problematic issues here. The main problem is the use of value(model.x) inside a constraint. This is almost never intended, as it evaluates the initial point and not the current value during optimization. In GAMS terms, this is using x.l inside a constraint instead of x. I have seen this Pyomo mistake several times, and I think this is not very well explained and emphasized in the documentation. Pyomo constraints are generated before the solver gets involved, and some constructs are evaluated during that phase, instead of inside the solver. A difficult concept. A good way to see what the model passed on to the solver looks like is to print the scalar version:
2 Var Declarations
a : Size=1, Index=None
Key : Lower : Value : Upper : Fixed : Stale : Domain
None : 5 : None : 20 : False : True : NonNegativeIntegers
x : Size=1, Index=None
Key : Lower : Value : Upper : Fixed : Stale : Domain
None : 0 : 10 : None : False : False : NonNegativeIntegers
1 Objective Declarations
total_production : Size=1, Index=None, Active=True
Key : Active : Sense : Expression
None : True : minimize : x*a
3 Constraint Declarations
first_material_constraint : Size=1, Index=None, Active=True
Key : Lower : Body : Upper : Active
None : 70.0 : 0.2*a + 0.4*a + 0.6000000000000001*a + 0.8*a + a + 1.2000000000000002*a + 1.4000000000000001*a + 1.6*a + 1.8*a + 2.0*a : +Inf : True
min_production : Size=1, Index=None, Active=True
Key : Lower : Body : Upper : Active
None : 1.0 : x : +Inf : True
second_material_constraint : Size=1, Index=None, Active=True
Key : Lower : Body : Upper : Active
None : 90.0 : 0.8*a + 1.6*a + 2.4000000000000004*a + 3.2*a + 4.0*a + 4.800000000000001*a + 5.6000000000000005*a + 6.4*a + 7.2*a + 8.0*a : +Inf : True
6 Declarations: x a total_production first_material_constraint second_material_constraint min_production
We can see that these summations in the constraints are based on the initial value \(\color{darkred}x^{\mathrm{init}} = 10\): ten terms are being generated. We also see that the \(\color{darkred}x\)
variable itself has disappeared. Note that range(p, q) in Python translates to something like \(p,\dots,q-1\).
As Pyomo models are often somewhat difficult to read due to the large amount of "noise" (all kinds of syntactic stuff causing a low signal-to-noise ratio), it is a good idea to look at a mathematical formulation.
The first constraint in Pyomo can be read as: \[\sum_{i=1}^{\color{darkred}x} 0.2\cdot\color{darkred}a \cdot i \ge 70\] A slight rewrite is: \[0.2\cdot\color{darkred}a \cdot (1+\dots+\color{darkred}x) \ge 70\] Summations with a variable upper bound are not that easy to handle with modeling tools. Often, we use a series of binary variables. Below I use a different technique.
Original Pyomo Model
\[\begin{aligned} \min\> & \color{darkred}a\cdot \color{darkred}x \\ & 0.2\cdot \color{darkred}a\cdot (1+\dots+\color{darkred}x) \ge 70\\ & 0.8\cdot \color{darkred}a\cdot (1+\dots+\color{darkred}x) \ge 90 \\ & \color{darkred}a \in \{5,\dots,20\} \\ & \color{darkred}x \in \{1,2,\dots\} \end{aligned}\]
The constraints look a bit strange. I have never seen production limits like that. It says the \(i^\mathrm{th}\) batch has a yield proportional to \(i\). I really suspect this model is just not
properly formulated. The logic being buried in Python code may have contributed to this. But anyway, we can substantially simplify this:
Cleaned-up Model
\[\begin{aligned} \min\> & \color{darkred}a\cdot \color{darkred}x \\ & \color{darkred}a\cdot \color{darkred}x \cdot (\color{darkred}x+1) \ge 2\cdot \max\{70/0.2,90/0.8\}\\ & \color{darkred}a \in \{5,\dots,20\} \\ & \color{darkred}x \in \{1,2,\dots\} \end{aligned}\]
We use here \[1+\dots+n = \frac{n\cdot(n+1)}{2}\] We replaced the summation with a variable upper bound to something that is much easier to handle inside a constraint. This is now easy to write down
in GAMS:
limit 'minimum production'
limit.. a*x*(x+1) =g= 2*max(70/0.2,90/0.8);
solve m minimizing z using minlp;
When dealing with a real model, I would store the numbers in the limit equation in parameters. That would give them a (meaningful) name.
We solve with a global MINLP solver as the problem is non-convex. This gives:
LOWER LEVEL UPPER MARGINAL
---- EQU obj . . . 1.0000
---- EQU limit 700.0000 780.0000 +INF .
obj objective
limit minimum production
LOWER LEVEL UPPER MARGINAL
---- VAR z -INF 60.0000 +INF .
---- VAR x 1.0000 12.0000 +INF 5.0000
---- VAR a 5.0000 5.0000 20.0000 12.0000
z objective
x Number of batches
a Batch size
This is a tiny model, so it does not take any time.
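Because the integer domain is so small, the global optimum reported above can be confirmed by brute force (a check I added, not part of the original post):

```python
# Enumerate the feasible region of the cleaned-up model and minimize a*x.
rhs = 2 * max(70 / 0.2, 90 / 0.8)      # right-hand side: 700
best = min((a * x, a, x)
           for a in range(5, 21)        # batch size a in {5,...,20}
           for x in range(1, 101)       # batches x >= 1; 100 is a safe cap here
           if a * x * (x + 1) >= rhs)
obj, a, x = best
# Matches the GAMS solution: x = 12, a = 5, objective 60, a*x*(x+1) = 780.
```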
The original Pyomo model was a bit of a mess. There is a bit of a dichotomy between the Python language and math, and between what Python sees and what the underlying solvers see. What makes sense in Python may not make sense in a Pyomo constraint. We can fix this easily by taking a step back from Python and looking at the math. Sometimes Python code is just not at the right abstraction level for proper reasoning. As a result, new users may be totally lost even if they are proficient Python programmers.
Models, whether implemented in a modeling language or a programming language, have two intended audiences: (1) the machine, or the solver if you want, and (2) human readers such as colleagues, successors, or yourself. The latter group is arguably more important. But often, a readable model also has a better chance of being correct and of solving faster, just because it is easier to reason about. This makes it all the more important that model logic is written in a way that is structure-revealing. Hopefully, for a well-written model, we don't need to re-engineer the math, like we did in this example.
Commutator between MPO
Is there a systematic way to compute the commutator between two MPOs, or between an MPO and an ITensor?
For example, given two ITensors A and B, the commutator is just A*B - B*A, where * is the contraction operator in ITensor.
If, say, A is an ITensor and B is an MPO, then we can compute their product using the apply function, but we can't go the other way.
I would like to calculate the Frobenius norm of the commutator between a time-evolved MPO and a single-site operator.
Any feedback is much appreciated.
Good question. While you can use the apply function to apply a single-site operator to an MPO, there is a more "manual" way to do this also just using ITensor contractions and the fact that an MPO
has an interface like an array of tensors.
A very analogous case is the manual method for applying a single-site operator to an MPS, which is given in detail here:
Following very similar steps, you can apply a single-site operator to an MPO, by paying careful attention to the prime levels of the MPO site indices and of the operator you are applying, and
adjusting these prime levels appropriately.
Then by adjusting the prime levels in a different way, you can make the operator act on the "bra" indices of the MPO tensor versus the "ket" indices, which would be like doing BA instead of AB.
Finally, if you take the resulting two MPOs and subtract them, you will have the commutator.
Please try that out, first on paper then in code, and let us know if you have more questions along the way.
Yeah I figured that was the way to go. Here is what I have been trying most of the day.
function initial(N, index)
    ampo = OpSum()
    for j = 1:N
        if j <= Int(0.5 * N)
            ampo += 1, "Sz", j
        elseif j > Int(0.5 * N)
            ampo += -1, "Sz", j
        end
    end
    return MPO(ampo, index)
end
This defines an MPO that is a sum of Sz on each site with a weight. Now suppose I want to find the commutator of this mpo with a single-site operator.
N = 2;
s = siteinds("S=1/2", N)
rho0 = initial(N, s)
O = op("Sz", s[1])
Suppose now we want to find the commutator on the first site.
rho = copy(rho0)
newrho = dag(prime(rho, "Site"))
newO = dag(prime(O, "Site"))
Orho = newO * rho[1]
rho[1] = Orho
rhoO = newrho[1] * O
newrho[1] = rhoO
comm = norm(rho - newrho,2)
But this still doesn't work correctly. Hmmm
Great. So this would be a case where you should draw out on paper the diagrams for each object (the MPO and the operator O) and/or print them out using println or @show and then verify that the prime
levels and indices and things are matching the way you expect and accomplishing the contractions that you want.
One detail here is that you don’t need to prime the whole MPO, since you are just going to act on one of the sites. So it could be easier to just work only with that MPO tensor, only altering its
prime levels (if at all) and then when it’s “done” and back into the expected form (one site with prime level zero, one site with prime level one) then you can put it back into the MPO.
I think I found one approach, but it is not the manual way, which I would like to understand. Since I want to contract a single site at a time: if I have my MPO called A and my single-site operator B, I can do:
C = A[site]
B = op("Sz", index[site])
BA = apply([B],C)
AB = apply([C],B)
com = BA - AB
val = sqrt(tr(dag(com)*com)) #frobenius norm
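For tiny systems, the MPO result can be cross-checked against dense matrices. The sketch below is my own NumPy analogue (not ITensor code): it builds the N = 2 weighted-Sz operator from the `initial` function above and confirms that a single-site Sz commutes with it, so the Frobenius norm of the commutator is zero:

```python
import numpy as np

Sz = np.diag([0.5, -0.5])    # spin-1/2 Sz matrix
I2 = np.eye(2)

# N = 2 version of the 'initial' MPO: +Sz on site 1, -Sz on site 2.
A = np.kron(Sz, I2) - np.kron(I2, Sz)
B = np.kron(Sz, I2)          # single-site Sz acting on site 1

C = B @ A - A @ B            # commutator [B, A]
fro = np.sqrt(np.trace(C.conj().T @ C).real)   # Frobenius norm
```

All four matrices here are diagonal, so the commutator vanishes exactly; a non-commuting pair (e.g. Sx on site 1) would give a nonzero norm to compare against the MPO calculation.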
Hey Miles,
Is this the appropriate way to compute the norm of commutator at the prime level between an MPO and single-site operator?
function MPO_commutation(B::MPO, x::String, index, n::Int)
    # orthogonalize the input MPO to the site for commutation
    rho_MPO = orthogonalize(B, n)
    # define the single-site operator to perform commutation
    Ops = op(x, index, n)
    # act operator on MPO
    New_rho = dag(prime(Ops)) * rho_MPO[n]
    # restore prime levels
    New_rho = swapprime(New_rho, 2 => 1)
    # act MPO on operator
    New_op = dag(prime(rho_MPO[n], "Site")) * Ops
    # restore prime levels
    New_op = swapprime(New_op, 2 => 1)
    return norm(New_rho - New_op)
end
Aggregated Loads and "Voltage Dependency"
☆ When many different loads are running in a distribution system, their combined usage of electricity is called the Aggregate Load.
☆ Aggregated Loads will obey Ohm's Law to a greater or lesser extent. In the Australian suburbs it is impossible for Ohm's Law to be negligible at times of non-peak demand, especially for customers on low incomes: these people have the most primitive appliances and are the most vulnerable to having their electricity bills severely influenced by high voltage.
☆ Recalling that P (power) is proportional to V^2 for resistance (impedance) loads, the mathematical index here is 2. By convention, this is called a Voltage Dependency index of 2. See
Karlsson & Hill^(1)
(1) Karlsson & Hill, IEEE Transactions on Power Systems Vol. 9, No. 1, February 1994
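The voltage-dependency index can be written as the standard exponential load model P = P0 * (V / V0)^n. The sketch below is my own illustration (the 230 V nominal voltage is an assumption) of how a constant-impedance load, index n = 2, responds to overvoltage:

```python
def load_power(p0, v, v0=230.0, n=2.0):
    """Exponential load model: P = P0 * (V / V0)**n.
    n = 2 -> constant impedance, n = 1 -> constant current,
    n = 0 -> constant power (voltage-independent)."""
    return p0 * (v / v0) ** n

# A 10% overvoltage (253 V on a 230 V system) raises a constant-impedance
# load's draw by 21%: a nominal 1 kW appliance pulls about 1210 W.
p = load_power(1000.0, 253.0)
```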
Michael Gunter http://www.suburbia.com.au/~mickgg
I am preparing an introductory article on monads for a Dutch software magazine. It is intended for .NET developers. I have to restrict myself, so I am looking for the Unique Selling Point (USP) of the monad.
This is the list of selling points I could find. I left out selling points of specific monads and used the formulation of Wikipedia:
1. Monads are a kind of abstract data type constructor that encapsulate program logic instead of data in the domain model.
2. Control structure/inspection.
3. Hiding complexity with syntactic sugar.
4. Composition : monads chain actions together to build a pipeline.
5. To express input/output (I/O) operations and changes in state without using language features that introduce side effects.
I put them in the order of most favorite to least favorite.
My remarks:
1. Monads are a kind of abstract data type constructor that encapsulate program logic instead of data in the domain model.
This point can be split into two aspects of monads:
1. The monadic type is a wrapper of a data type (If M is the name of the monad and t is a data type, then "M t" is the corresponding type in the monad).
2. The functions: return (unit) and bind.
I think that the ability to have access both to the functionality of the wrapper and to the original data type, together with a way to transform solutions between the two, is the most valuable selling point.
2. Control structure/inspection.
The monad is an excellent way to encapsulate inspection logic. Most monad tutorials start with the maybe monad because it is a simple example. It also shows the value of the monadic approach by
hiding inspection plumbing.
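To make the inspection-plumbing point concrete outside of any particular .NET library, here is a minimal Maybe-style bind sketched in Python (the function names are illustrative only, not from any framework):

```python
def bind(value, func):
    # Maybe-style bind: short-circuit on None so callers don't
    # have to check for failure after every step.
    return None if value is None else func(value)

def parse_int(s):
    try:
        return int(s)
    except ValueError:
        return None  # failure is represented by None

def reciprocal(n):
    return None if n == 0 else 1 / n

# The pipeline hides all of the None-checking plumbing.
ok = bind(bind("4", parse_int), reciprocal)       # 0.25
bad = bind(bind("oops", parse_int), reciprocal)   # None, no exception raised
```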
3. Hiding complexity with syntactic sugar.
I think that the popularity of LINQ in C# and VB.NET is a proof that hiding complexity with syntactic sugar has its value.
4. Composition: monads chain actions together to build a pipeline.
In F#, computation expressions (monads) are also described as workflows. So this could be the USP. Sequence expressions are proof of the value of composition.
5. To express input/output (I/O) operations and changes in state without using language features that introduce side effects.
This is the least relevant point for a .NET developer. I do understand that this is the most relevant one for a developer in a pure language, or one that wants to reduce the number of side effects.
Please feel free to add a comment in case I missed a selling point or you have a better way to order them.
|
{"url":"https://ps-a.blogspot.com/2011/07/","timestamp":"2024-11-02T11:30:50Z","content_type":"text/html","content_length":"40033","record_id":"<urn:uuid:526fe8f7-e9f7-47a3-8678-f49520e86367>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00252.warc.gz"}
|
Discalculia: 10 Great Articles on Math Disabilities
This is the 4th in the draft purge series where I’m throwing stuff out over a three week period.
Discalculia (pronounced dis-kal-COOL-yu) is the official word meaning a learning disability in math. Some think that math anxiety is just another way of saying discalculia. Maybe because people with
this learning disability often have math anxiety.
But you can certainly have math anxiety without having discalculia. I did once. So it’s not exactly the same thing.
Here are some resources and descriptions you might find helpful. If you have 8 extra minutes, watch the video at the bottom – it’s a super great intro!
What it is…
What Can Stand in the Way of a Student’s Mathematical Development? from PBS.org
What is a math disability? from Michigan State University
Understanding Discalculia by The National Center for Learning Disabilities
Some tactics on how to teach a student with discalculia…
Math Learning Disability Strategies at eHow.com
Math Learning Disabilities at LDOnline
Infosheet About Mathematics Disabilities from the Council for Learning Disabilities
Simple list of tactics from Daniel Daley, Assistant Professor at Lyndon State College
Encouraging Students with Learning Problems from TeacherVision
Printables from SEN Teacher
Understanding students with discalculia…
Letter to My Math Teacher (written in 1985) on Discalculia.org
A great explanation…
Got any more articles about discalculia or math anxiety to share? Post them in the comments. And share this list on twitter.
Thanks a bunch to my cousin Vijay who provided many of the links (or links that got me to these).
|
{"url":"http://mathfour.com/resources/discalculia-10-great-articles-on-math-disabilities","timestamp":"2024-11-14T04:08:14Z","content_type":"text/html","content_length":"37177","record_id":"<urn:uuid:31a1bd48-ce73-4ac9-8f3d-47f6d50f9bc7>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00209.warc.gz"}
|
Lists – Sets and functions – Mathigon
Sets are data containers with very little structure: you can check membership (and perform membership-checking-related operations like unions or complements), but that's all. We will define various
other types of collections which provide additional structure.
For example, suppose you do care about the order in which the items appear on your grocery list, perhaps because you want to be able to pick the items up in a certain order as you move across the store. Also, you might want to list an item multiple times as a way of reminding yourself that you should pick up more than one. Lists can handle both of these extra requirements:
Definition (List)
A list is an ordered collection of finitely many elements.
For example, if we regard and as lists, then they are unequal because the orders in which the elements appear are different. Also, the list has three elements, since repeated elements are not
considered redundant.
We don't distinguish sets and lists notationally, so we will rely on context to make it clear whether order matters and repetitions count.
How many sets have the property that ?
How many lists of length 4 have all of their elements in ?
Solution. There are 8 subsets of :
There are length-4 lists with elements in , because the set of such lists is equal to , and the cardinality of a Cartesian product of sets is the product of the cardinalities of the sets.
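Both counts can be checked directly; the concrete three-element set below is an assumption standing in for the set elided above:

```python
from itertools import product

S = {1, 2, 3}

# Each element is either in or out of a subset, so there are 2^|S| subsets.
num_subsets = 2 ** len(S)                     # 8

# Length-4 lists with elements in S form the Cartesian product S x S x S x S,
# whose cardinality is the product of the factors' cardinalities.
num_lists = len(list(product(S, repeat=4)))   # 3^4 = 81
```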
|
{"url":"https://es.mathigon.org/course/sets-and-functions/lists","timestamp":"2024-11-08T08:56:13Z","content_type":"text/html","content_length":"81916","record_id":"<urn:uuid:ddf720d9-ea68-4b3c-851b-29bf03edb1c0>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00707.warc.gz"}
|
BORE: Bayesian Optimization by Density-Ratio Estimation | Louis Tiao
Bayesian optimization (BO) is among the most effective and widely-used blackbox optimization methods. BO proposes solutions according to an explore-exploit trade-off criterion encoded in an
acquisition function, many of which are derived from the posterior predictive of a probabilistic surrogate model. Prevalent among these is the expected improvement (EI). Naturally, the need to ensure
analytical tractability in the model poses limitations that can ultimately hinder the efficiency and applicability of BO. In this paper, we cast the computation of EI as a binary classification
problem, building on the well-known link between class probability estimation (CPE) and density-ratio estimation (DRE), and the lesser-known link between density-ratios and EI. By circumventing the
tractability constraints imposed on the model, this reformulation provides several natural advantages, not least in scalability, increased flexibility, and greater representational capacity.
Bayesian Optimization (BO) by Density-Ratio Estimation (DRE), or BORE, is a simple, yet effective framework for the optimization of blackbox functions. BORE is built upon the correspondence between
expected improvement (EI)—arguably the predominant acquisition functions used in BO—and the density-ratio between two unknown distributions.
One of the far-reaching consequences of this correspondence is that we can reduce the computation of EI to a probabilistic classification problem—a problem we are well-equipped to tackle, as
evidenced by the broad range of streamlined, easy-to-use and, perhaps most importantly, battle-tested tools and frameworks available at our disposal for applying a variety of approaches. Notable
among these are Keras / TensorFlow and PyTorch Lightning / PyTorch for Deep Learning, XGBoost for Gradient Tree Boosting, not to mention scikit-learn for just about everything else. The BORE
framework lets us take direct advantage of these tools.
Code Example
We provide a simple example with Keras to give you a taste of how BORE can be implemented using a feed-forward neural network (NN) classifier. A useful class that the bore package provides is
MaximizableSequential, a subclass of Sequential from Keras that inherits all of its existing functionalities, and provides just one additional method. We can build and compile a feed-forward NN
classifier as usual:
from bore.models import MaximizableSequential
from tensorflow.keras.layers import Dense
# build model
classifier = MaximizableSequential()
classifier.add(Dense(16, activation="relu"))
classifier.add(Dense(16, activation="relu"))
classifier.add(Dense(1, activation="sigmoid"))
# compile model
classifier.compile(optimizer="adam", loss="binary_crossentropy")
See First contact with Keras from the Keras documentation if this seems unfamiliar to you.
The additional method provided is argmax, which returns the maximizer of the network, i.e. the input $\mathbf{x}$ that maximizes the final output of the network:
x_argmax = classifier.argmax(bounds=bounds, method="L-BFGS-B", num_start_points=3)
Since the network is differentiable end-to-end with respect to the input $\mathbf{x}$, this method can be implemented efficiently using a multi-started quasi-Newton hill-climber such as L-BFGS. We will see the
pivotal role this method plays in the next section.
Using this classifier, the BO loop in BORE looks as follows:
import numpy as np

features = []
targets = []

# initialize design
for i in range(num_iterations):
    # construct classification problem
    X = np.vstack(features)
    y = np.hstack(targets)
    tau = np.quantile(y, q=0.25)
    z = np.less(y, tau)
    # update classifier
    classifier.fit(X, z, epochs=200, batch_size=64)
    # suggest new candidate
    x_next = classifier.argmax(bounds=bounds, method="L-BFGS-B", num_start_points=3)
    # evaluate blackbox
    y_next = blackbox.evaluate(x_next)
    # update dataset
    features.append(x_next)
    targets.append(y_next)
Let’s break this down a bit:
1. At the start of the loop, we construct the classification problem—by labeling instances $\mathbf{x}$ whose corresponding target value $y$ is in the top q=0.25 quantile of all target values as
positive, and the rest as negative.
2. Next, we train the classifier to discriminate between these instances. This classifier should converge towards $$ \pi^{*}(\mathbf{x}) = \frac{\gamma \ell(\mathbf{x})}{\gamma \ell(\mathbf{x}) +
(1-\gamma) g(\mathbf{x})}, $$ where $\ell(\mathbf{x})$ and $g(\mathbf{x})$ are the unknown distributions of instances belonging to the positive and negative classes, respectively, and $\gamma$ is
the class balance-rate and, by construction, simply the quantile we specified (i.e. $\gamma=0.25$).
3. Once the classifier is a decent approximation to $\pi^{*}(\mathbf{x})$, we propose the maximizer of this classifier as the next input to evaluate. In other words, we are now using the classifier
itself as the acquisition function.
How is it justifiable to use this in lieu of EI, or some other acquisition function we’re used to? And what is so special about $\pi^{*}(\mathbf{x})$?
Well, as it turns out, $\pi^{*}(\mathbf{x})$ is equivalent to EI, up to some constant factors.
The remainder of the loop should now be self-explanatory. Namely, we
4. evaluate the blackbox function at the suggested point, and
5. update the dataset.
Step-by-step Illustration
Here is a step-by-step animation of six iterations of this loop in action, using the Forrester synthetic function as an example. The noise-free function is shown as the solid gray curve in the main
pane. This procedure is warm-started with four random initial designs.
The right pane shows the empirical CDF (ECDF) of the observed $y$ values. The vertical dashed black line in this pane is located at $\Phi(y) = \gamma$, where $\gamma = 0.25$. The horizontal dashed
black line is located at $\tau$, the value of $y$ such that $\Phi(y) = 0.25$, i.e. $\tau = \Phi^{-1}(0.25)$.
The instances below this horizontal line are assigned binary label $z=1$, while those above are assigned $z=0$. This is visualized in the bottom pane, alongside the probabilistic classifier $\pi_{\
boldsymbol{\theta}}(\mathbf{x})$ represented by the solid gray curve, which is trained to discriminate between these instances.
Finally, the maximizer of the classifier is represented by the vertical solid green line. This is the location at which the BO procedure suggests be evaluated next.
We see that the procedure converges toward to global minimum of the blackbox function after half a dozen iterations.
To understand how and why this works in more detail, please read our paper! If you only have 15 minutes to spare, please watch the video recording of our talk!
|
{"url":"https://tiao.io/publication/bore-2/","timestamp":"2024-11-11T22:51:05Z","content_type":"text/html","content_length":"34202","record_id":"<urn:uuid:2040381c-4eed-47ac-9af2-f8a6dfb14691>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00697.warc.gz"}
|
Math Homework
• Linda is 3 years older than her baby brother, Liam. The table shows the relationship between Linda's and Liam's ages. Which equation relates Linda's age to Liam's age?
Waiting for answer
• abby has 230 scoops of bean seed left in the bag. She thinks she will have enough bean seed left to cover her field that is 100 yds long and 60 yds wide. Does Gabby have enough scoops of bean
seed lef
Waiting for answer
• A real estate agent sold a house for $347,483 one week and another house for $982,100 the next week. Estimate the combined price of the houses the agent sold by rounding to the nearest ten
thousand fi
Waiting for answer
• The price of a visit to the dentist is $50. If the dentist fills any cavities, an additional charge of $100 per cavity gets added to the bill.If the dentist finds n cavities, what will the cost
of the
Waiting for answer
• I need help with math
Waiting for answer
• I need help with math answers
Waiting for answer
• Have you ever wondered about the likelihood of an event occurring? Whether it’s the odds of your favorite football team winning on Sunday or how much you pay for car insurance, probability
concepts ca
• This week you will explore the wonderful world of descriptive statistics. You may not have noticed how often you are presented with statistics in the media and in everyday conversations. It is
• Now, write one or two paragraphs that compare how Washington and Du Bois felt about the legacy of slavery. Use examples from the text and the lesson to support your comparisons.
Waiting for answer
• I need help with writing each fraction as a terminal decimal.
Waiting for answer
• This week we are going to focus on basic logic and how you can use logic outside of the classroom. Unfortunately, many of our daily interactions include logical errors. Respond to the following
• Liam mixed 3 parts of lemon juice and 4 parts of water to make a drink. How many parts of lemon juice did Liam mix with each part of water?
Waiting for answer
• Complete an observation of a 2nd grade classroom while math is going on. Use the attached template Look for the following: What to look for: Note the classroom organization, procedures and
Waiting for answer
• The point representing Minna's land elevation on a horizontal number line should be plotted to the right of the point representing Westa'sKalore'sSalle's land elevation. The point representing
Waiting for answer
• This week we learn about simple spreadsheet functions and data visualization strategies. Respond to the following questions in a minimum of 175 words: What is one new feature related to
spreadsheets t
• files attached (10) total of 30 questions
• Solve problems
• Quantitative Reasoning!! is the class We start this course by investigating a variety of different concepts related to functions and how they can be used to represent everyday situations. Respond
to t
• files attached
• Files attached.
• help please
Waiting for answer
• Continued files.
Waiting for answer
• 10 files attached. Total of 12. Sending separately
• Three different golfers played a different number of holes today. Shanika played 9 holes and had a total of 43 strokes. Alicia played 18 holes and had a total of 79 strokes. Rickie
Waiting for answer
• Gabriel determined that his total cost would be represented by 2.5x + 2y – 2. His sister states that the expression should be x + x + 0.5x + y + y – 2. Who is correct? Explain.
Waiting for answer
• its 30 math question and its due in 3 hours
Waiting for answer
• its 30 question of math and its due in 3 hours
Waiting for answer
• its 30 question about pre calculus and its due in 3 hours
Waiting for answer
• its 30 question about pre calculus and its due in 3 hours
Waiting for answer
• Discussion: Extra Credit Final Exam Please answer the following 3 questions:
Waiting for answer
• All the questions are on the attachment
• MATH Homework
• Gretchen can buy 17.4 ounces of Brand A toothpaste for $5.22, or she can buy 26.6 ounces of Brand B toothpaste for $6.65. Which of the following explains which is the better deal? A. Brand A is
the be
Waiting for answer
• Camillo needs 2,400 oz of jelly for the food challenge. If 48 oz of jelly cost $3.84, how much will Camillo spend on jelly? Explain how you can find your answer.
Waiting for answer
• Math Assignment
• MATHEMATICS
Waiting for answer
• 1. Argo Computers Inc. restores and resells notebook computers. It originally acquires the notebook computers from corporations upgrading their computer systems, and it backs each notebook it s
• Gabriel determined that his total cost would be represented by 2.5x + 2y - 2. His sister states that the expression should be x + x + 0.5x + y + y - 2. Who is correct? Explain.
Waiting for answer
• Kate and Mike are discussing the hourly earnings at their summer jobs. The graph represents the amount of money Kate earns every hour at her summer job. How much does Kate earn per hour, in
Waiting for answer
• Three different golfers played a different number of holes today. Shanika played 9 holes and had a total of 43 strokes. Alicia played 18 holes and had a total of 79 strokes. Rickie
Waiting for answer
• You want to buy a car that has a price of $25,000. You will trade in a vehicle that has payments of $277.84 per month. You have 3.0 years left at an APR of 6%. They will give you $6000 in trade
in. If
Waiting for answer
• 2007 Federal Income Tax Table Single: OverBut not overThe tax is$0$7,82510% of the amount over $0$7,825$31,850$788 + 15% of the amount over $7,825$31,850$77,100$4,386 + 25% of the amount over
Waiting for answer
• Can someone please do this for me
Waiting for answer
• I need a simulation run using crystal ball application. The excel and instructions are included.
Waiting for answer
• I need a simulation run using crystal ball application. The excel and instructions are included.
Waiting for answer
• Need help with topology questions. Urgent. 3 questions. Please help. See attachment for questions
• What is the most important contribution of Mathematics in humankind?
Waiting for answer
• Mara has unit cubes that have a length of 1/3 centimeter. She uses the unit cubes to build a rectangular prism. The base of the prism has a total of 12 cubes and the prism is 5 cubes in height.
What is
Waiting for answer
• URGENT! Can someone please help with my maths and statistics assignment? Total 6 questions in saved attachment. Need urgently! Thank you. 1. For each of the statements below determine whether it
is t
Waiting for answer
• The questions in attached files are about complex variables and triangle inequality, they are very simple, but you need to show the solutions step by step
|
{"url":"https://studydaddy.com/math-homework-help?page=4","timestamp":"2024-11-08T20:30:17Z","content_type":"text/html","content_length":"54886","record_id":"<urn:uuid:4476f897-b4eb-4539-8501-bf8d4e4efb91>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00851.warc.gz"}
|
Power Stack
When you stack powers, how do you evaluate them?
Kimberly wants to define $3^{3^3}$ as $(3^3)^3$ but Nermeen thinks that such a stack of powers should be defined as $3^{(3^3)}$ .
Do their definitions lead to the same numerical value? Is the same true if $3$ is replaced with some other number?
How would Kimberly's and Nermeen's definitions most naturally extend to the definition of $3^{3^{3^3}}$? Do their definitions lead to the same numerical value? Is the same true if $3$ is replaced
with some other number?
Extension: Try to compute the approximate size of the numbers as powers of 10.
Did you know ... ?
Both definitions of powers are equally valid, and in mathematics it should be clear from the context as to which to apply: mathematicians often include the brackets to avoid ambiguity. Kimberly's
definition of powers is often relevant in mathematics problems whereas Nermeen's definition of powers is often relevant in computer science problems.
Student Solutions
$$ 3^{(3^3)} = 3^{(27)} = 7625597484987\quad\quad (3^3)^3 = 27^3 = 19683 $$
The difference rapidly grows for larger values:
$$ 4^{(4^4)} = 4^{(256)} \sim 10^{154} \quad\quad (4^4)^4 = 256^4\sim 10^9 $$
However, for $2$ the values are the same
$$ 2^{(2^2)} = 2^{(4)} = 16\quad\quad (2^2)^2 =4^2 =16 $$
The extension of the definitions are naturally either 'powers evaluated from the right' or 'powers evaluated from the left'. The difference for a stack of four powers is gigantic
$$ (((3^3)^3)^3) = (((27)^3)^3) = (19683)^3\sim 10^{12} $$
$$ (3^{(3^{(3^{(3)})})}) = (3^{(3^{27})}) = (3^{(7.6\times 10^{12})})\sim 10^{3.6\times 10^{12}} $$
Using a spreadsheet we found that both definition of stacking four numbers leads to the same value when the base is $1.02092370325178$
|
{"url":"https://nrich.maths.org/problems/power-stack","timestamp":"2024-11-14T23:41:22Z","content_type":"text/html","content_length":"37726","record_id":"<urn:uuid:d15df084-eda2-490b-90dc-36a31a922e6a>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00147.warc.gz"}
|
How many days will it take A, B and C working together to complete the same amount of work?
B alone can complete the work in 45 days.
A is 35% more efficient than B.
If A and B together started the work but after 18 days A and B left.
Then C alone completed the remaining work in 3 days.
If they get Rs. 9750 as wages for the whole work, then what is the share of C in wages for the remaining work?
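Under one standard reading of the puzzle (wages split in proportion to work done), the numbers can be worked out exactly; this sketch is not an official solution:

```python
from fractions import Fraction

rate_B = Fraction(1, 45)                    # B alone finishes in 45 days
rate_A = rate_B * Fraction(135, 100)        # A is 35% more efficient than B
work_AB = (rate_A + rate_B) * 18            # work done before A and B leave
remaining = 1 - work_AB                     # 3/50 of the whole job
rate_C = remaining / 3                      # C finishes it in 3 days -> 1/50
share_C = remaining * 9750                  # C's wages for the remaining work
days_ABC = 1 / (rate_A + rate_B + rate_C)   # all three together: 180/13 days
print(share_C, float(days_ABC))             # Rs. 585, about 13.85 days
```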
|
{"url":"https://www.queryhome.com/puzzle/1879/many-days-will-combined-together-complete-same-amount-work?show=5681","timestamp":"2024-11-06T11:58:07Z","content_type":"text/html","content_length":"125231","record_id":"<urn:uuid:8e0ed565-2f0e-49ea-84db-94795e8b7d43>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00757.warc.gz"}
|
Back-End Ratio
What is the Back-End Ratio?
The back-end ratio is a measure that signifies the portion of monthly income used to settle debts. The ratio is used by lenders, including bondholders and mortgage issuers, to assess a borrower's
capacity to control and pay off monthly debt. As a result, the back-end ratio evaluates the risk of the borrower.
Why Back-End Ratio?
The back-end ratio is significant because it shows the percentage of the borrower's income that is owing to third parties or to a different business. A borrower who has a high back-end ratio is
deemed to be at high risk since it suggests that a significant portion of their monthly income is being used to pay off debt. Individuals who produce a low ratio, however, will be regarded as
low-risk borrowers.
The formula for Back-End Ratio
The back-end ratio can be calculated by summing the borrower’s total monthly debt expenses and dividing it by their monthly gross income. The formula is shown below:
Back-End Ratio = (Total monthly debt expense / Gross monthly income) x 100
Total monthly debt expenses include but are not exclusive to:
• Credit card bills
• Mortgages
• Insurance
• Other loans
How to Calculate Back-End Ratio
To calculate the back-end ratio, follow these steps:
1. Add up all monthly debt payments.
2. Divide the total monthly debt payments by the monthly gross income.
3. Multiply the value by 100 to get the percentage amount.
For example, Johnny earns $6,000 per month and owes $1,000 in credit card bills, a $600 mortgage payment, and $500 in other various loans. In aggregate, his total monthly debt payments are $2,100.
Johnny’s back-end ratio is 35% [ ($2,100 / $6,000) * 100].
Examples of Back-End Ratio
Let's look at some more examples of back-end ratio calculations.
• Betty earns $5,000 and owes $1,500 per month. Her back-end ratio is 30% [ ($1,500 / $5,000) * 100].
• Sam earns $4,000 and owes $800 per month. His back-end ratio is 20% [ ($800 / $4,000) * 100].
• Lisa earns $3,000 and owes $1,200 per month. Her back-end ratio is 40% [ ($1,200 / $3,000) * 100].
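The examples above can be reproduced with a few lines of Python (a sketch, not part of the original article):

```python
def back_end_ratio(monthly_debt_payments, gross_monthly_income):
    """Total monthly debt divided by gross monthly income, as a percentage."""
    return sum(monthly_debt_payments) / gross_monthly_income * 100

# Johnny: $1,000 credit cards + $600 mortgage + $500 other loans on $6,000 income
print(back_end_ratio([1000, 600, 500], 6000))   # 35.0
```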
Limitations of Back-End Ratio
The back-end ratio has some limitations as a measure of borrower's risk. Some of them are:
• Other expenses like groceries, utilities, and transportation that are not classified as debt are not included in the back-end ratio. The borrower's capacity to repay their debts may also be
impacted by these costs.
• The terms and interest rates of the loans are not considered by the back-end ratio. The cash flow and debt load of the borrower may be affected differently over time by different loans.
• The borrower's credit score and credit history are not reflected in the back-end ratio. The choice of the lender to accept or deny a loan application may also be influenced by these variables.
The back-end ratio is a measure that signifies the portion of monthly income used to settle debts. Lenders use it to assess a borrower's capacity to pay off monthly debt, and hence the risk the borrower poses.
The back-end ratio can be calculated by dividing the borrower’s monthly debt expenses by their monthly gross income.
The formula is: Back-End Ratio = (Total monthly debt expense / Gross monthly income) x 100
Monthly debt expenses include but are not exclusive to credit card bills, mortgages, insurance, and other loans.
Lenders often prefer to see a back-end ratio of no more than 36%. For borrowers with good credit, certain lenders do, nevertheless, make exceptions for ratios of up to 50%.
Reducing your monthly loan payments or raising your gross monthly income are the two strategies to lower your back-end ratio. You may, for instance, refinance your loans with longer terms or cheaper
interest rates, pay off part of your bills, or look for other sources of income.
The main distinction between the front-end and back-end ratios is that the former takes into account only housing expenses (such as the mortgage payment) as a debt expense. The housing expense ratio is another name for the front-end ratio.
|
{"url":"https://www.moneybestpal.com/2023/11/back-end-ratio.html","timestamp":"2024-11-11T14:58:51Z","content_type":"application/xhtml+xml","content_length":"231184","record_id":"<urn:uuid:1664a876-ac92-43aa-8041-0489d47fb102>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00553.warc.gz"}
|
Tor Onion Farming in 2021
Around four years ago, I wrote a blog post about creating vanity .onion domains for Tor. To recap, .onion domains are special domains understood by Tor, and when visiting these sites, traffic never
leaves the Tor network and is end-to-end encrypted, just as if you were to use HTTPS. Furthermore, the server’s identity is completely protected, if you are into that sort of thing. Also, the domain
name is linked to the server's public key fingerprint, and so knowing the .onion domain is sufficient to authenticate the server, preventing any MITM attacks or certificate-authority compromises that could happen with HTTPS.
Recently, I decided that my password generator correcthorse.pw (GitHub) should also have a vanity onion domain, and naturally, I decided to generate some onions.
The most important change since 2017 is probably proposal 224, which added v3 onions that are 56 characters long, instead of 16. When v3 onions first came out, I actually created one for my website,
although, given the excellent write-up by Tudor, I didn’t bother writing my own blog post.
While the 16-character-long v2 onions are now deprecated, I still decided to generate one for compatibility and also out of interest. For this, I used the same tool as before: scallion. Having access
to a GTX 1060 6GB compared to the mobile GeForce 940MX that I had in 2017 allowed me to be far more ambitious with my plans, however. Instead of spending a few hours generating an onion matching the
seven-character prefix followed by a number (quantum2l7xnxwtb.onion) as I had in 2017, I decided to generate a nine-character prefix followed by a number (correctpw[234567]). This took me about 24
hours in total at 2.7 GH/s, and I finally obtained correctpw3wmw7mw.onion.
For the v3 onion, I used the tried-and-true tool mkp224o, just as Tudor had. This is a CPU-based tool, and could in fact be run simultaneously with scallion.
However, things have changed quite a bit since 2018. In 2019, a batched mode was introduced to mkp224o, making it more than 10× faster than it had been. On my 12-core Ryzen 9 3900X, I was able to hit
75 MH/s, compared to the paltry 4.8 MH/s Tudor managed with his grand fleet of cloud servers. However, despite the much-increased capacity, I still only attempted a 7 character prefix followed by a
number (correct[234567]), as 75 MH/s was two orders of magnitude slower than the gigahashes per second I managed with my GPU. Still, it took only around 10 hours for me to get a good collection of
onions matching the desired prefix, and from this list, I picked correct2qlofpg4tjz5m7zh73lxtl7xrt2eqj27m6vzoyoqyw4d4pgyd.onion.
In short, v2 onions hadn’t changed at all, while v3 vanity onions could be generated an order of magnitude faster thanks to significant improvements to mkp224o.
Statistical Analysis
When generating onions, it is always helpful to be able to estimate how long the process should take before you commit to it. This estimate cannot be very precise though, as the process is
probabilistic and memoryless. This is a bit sad: if you have a 50% chance of getting the onion you want after 1 trillion hashes, and you have already done 1 trillion hashes, you still only have a 50%
chance of getting the onion after another trillion hashes. In this situation, there is no progress.
In Tudor’s post, he modelled this as a Poisson process and used the exponential distribution to calculate the probability of finding a match after a certain time. This is, strictly speaking, not
correct, as the hashing process is discrete: each hash either produces a match, or it doesn’t. Thus, each hash is best modelled as a Bernoulli trial, and the geometric distribution models this exact
situation: the number of trials (hashes) it takes before we get one success (produces a match). The exponential distribution, on the other hand, is the continuous analogue of the geometric
distribution and is best used to model continuous processes, although, in this situation, it is a good approximation.
In any case, let us move onto the derivation.
Probability of Single Hash
First, we need to look at the probability of a single hash matching our desired prefix.
For simple prefixes, such as quantum, this is simple. Onion domain names are base32-encoded, and so there are 32 different possibilities for each character. Therefore, there is a $1/32$ chance that
any given character will be what we want. For a seven character prefix, seven characters need to match simultaneously, and so the total probability is $1/32^7$. In general, for an $x$ character long
prefix, the probability of a match is $1/32^x$.
However, simple prefixes can be hard to read, and it’s generally better to end the prefix with a number so that it is easy to tell where the prefix ends. This is the form that I used for the onions
described in this post. In base32, there are six digits used: 234567. Therefore, the probability that any given character is a digit is $6/32$, or $3/16$. Therefore, for an $x$ character long prefix
followed by a number, the probability of a match is $6/32^{x+1}$.
For the prefix correctpw[234567], there are 9 characters followed by a number, so the probability is
$p = \frac{6}{32^{x+1}} = \frac{6}{32^{10}} = 5.329 \times 10^{-15}.$
Hashes Required
Let $p$ be the probability that an individual hash will match our desired pattern, which we computed in the previous step. As described before, the number of hashes required is best modelled with the
geometric distribution. Therefore, we define the random variable $X \sim \text{Geo}(p)$, where $X$ is the number of hashes required.
The expected value, i.e. the mean number of hashes required, is $1/p$. (Note that by memorylessness, after $1/p$ hashes the chance of having found a match is about $1-1/e \approx 63\%$, not 50%.)
To know the probability that after $x$ hashes, the desired onion is generated, i.e. $\Pr(X \le x)$, we can use the cumulative distribution function (CDF), which describes this exact quantity. For the
geometric distribution, the CDF is $1-(1-p)^x$.
Continuing with the example of correctpw[234567], the expected value would be
$E[X] = \frac{1}{p} = \frac{32^{10}}{6} = 1.876 \times 10^{14}.$
This is around 188 trillion hashes (terahashes). Here is the plot for the CDF:
Conversion to Time
Now, we can just divide the number of hashes by the hash rate. Let $T$ be the time it takes for $X$ hashes and $H$ be the hash rate. Using my GTX 1060, which was able to do 2.7 GH/s, as an example:
$E[T] = \frac{E[X]}{H} = \frac{1.876 \times 10^{14}~\text{hashes}}{2.7 \times 10^9~\text{hashes/s}} \allowbreak = 69\,481~\text{s} = 19.3~\text{h}$
From this we can see that it should have taken me around 19 hours.
Simplified Equations
For convenience, here are the expressions for the time required directly, using the convention that $T$ is the random variable for the time taken, and $H$ is the hash rate:
\begin{align*} E[T] &= \frac{E[X]}{H} = \frac{1}{pH} \\ \Pr(T \le t) &= 1-(1-p)^{tH} \end{align*}
For simple prefixes of length $x$, these reduce to:
\begin{align*} E[T] &= \frac{32^x}{H} \\ \Pr(T \le t) &= 1-\left(1-\frac{1}{32^x}\right)^{tH} \end{align*}
For prefixes of length $x$ followed by a number, these reduce to:
\begin{align*} E[T] &= \frac{32^{x+1}}{6H} \\ \Pr(T \le t) &= 1-\left(1-\frac{6}{32^{x+1}}\right)^{tH} \end{align*}
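These expressions are easy to turn into a quick estimator before committing hardware to a search. Below is a minimal sketch in plain Python (no dependencies); the hash rate is whatever your own hardware benchmarks at, and the function name is just illustrative:

```python
def onion_prefix_stats(x, hash_rate, ends_with_digit=False, t_hours=None):
    """Estimate the work needed for an x-character vanity prefix.

    Returns (p, expected_hashes, expected_seconds, prob_within_t),
    where prob_within_t is None unless t_hours is given.
    """
    # Probability that a single hash matches the desired pattern
    p = 6 / 32 ** (x + 1) if ends_with_digit else 1 / 32 ** x
    expected_hashes = 1 / p                 # E[X] for X ~ Geo(p)
    expected_seconds = expected_hashes / hash_rate
    prob = None
    if t_hours is not None:
        trials = t_hours * 3600 * hash_rate
        prob = 1 - (1 - p) ** trials        # geometric CDF, Pr(X <= trials)
    return p, expected_hashes, expected_seconds, prob

# "correctpw" (9 characters) followed by a digit, at 2.7 GH/s
p, ex, es, _ = onion_prefix_stats(9, 2.7e9, ends_with_digit=True)
print(f"p = {p:.3e}, E[X] = {ex:.3e} hashes, E[T] = {es / 3600:.1f} h")
```

Running it reproduces the numbers from the derivation above: $p \approx 5.329 \times 10^{-15}$ and an expected time of roughly 19 hours.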
Cubed root of -1 plus 1
Related topics: graphs of second order differential equations
dynamics and control ii
algabra square s=5.5 inches
divide rational expressions calculator
Least Common Multiple-algebra
+"algebra i" +midterm +"high school" +examples
algebra answers to questions
functions 10 class maths bits
some tricks for integration
Author Message
TivErDeda Posted: Wednesday 17th of Oct 09:45
Hello Friends, can someone out there show me a way out? My algebra teacher gave us a cubed root of -1 plus 1 problem today. Normally I am good at unlike denominators but
somehow I am just stuck on this one problem. I have to turn it in by this weekend but it looks like I will not be able to complete it in time. So I thought of coming online to
find help. I will really be grateful if a math master can help me work this out in time.
Vofj Timidrov Posted: Wednesday 17th of Oct 17:25
Hi friend , I was in a similar situation a couple of weeks back and my friend recommended me to have a look at this site, https://softmath.com/ordering-algebra.html.
Algebrator was very useful since it offered all the fundamentals that I needed to solve my homework problem in Pre Algebra. Just have a look at it and let me know if you need
further information on Algebrator so that I can offer assistance on Algebrator based on the knowledge that I currently posses.
nedslictis Posted: Thursday 18th of Oct 10:11
I too have had problems in difference of squares, equivalent fractions and trigonometric functions. I was told that there are a number of programs that I could try out. I
tried out many but the best that I found was Algebrator. I simply typed in the problem and hit 'solve'. I got the answer at once. Additionally, I was directed through
to the answer by an easily understandable step-by-step process. I have relied on this program for my difficulties with Algebra 2, College Algebra and Algebra 2. If I were
you, I would without doubt go for this Algebrator.
Amtxomy_O Posted: Friday 19th of Oct 09:07
Thanks, people. There is no harm in trying it once. Please give me the link to the software.
Dolknankey Posted: Saturday 20th of Oct 21:18
You should check out https://softmath.com/comparison-algebra-homework.html. Your algebra will get better in no time, you shall see! Good luck!
pcaDFX Posted: Sunday 21st of Oct 10:58
I remember having often faced problems with linear equations, evaluating formulas and graphing. A truly great piece of math software is Algebrator. By simply typing in
a problem from the workbook, a step-by-step solution appears at a click on Solve. I have used it through many math classes – Algebra 1, Intermediate algebra and Algebra 1. I
greatly recommend the program.
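For reference, the expression in the thread's title can be checked directly. The cube root of -1 is ambiguous: the real cube root is -1, so the sum is 0, while the principal complex root is e^(iπ/3). A quick check in plain Python (standard library only):

```python
import cmath

# Real cube root of -1 is -1, so (-1)^(1/3) + 1 = 0
real_root = -abs(-1) ** (1 / 3)   # = -1.0
print(real_root + 1)              # 0.0

# Python's ** returns the principal complex root for a negative base:
# exp(i*pi/3) = 0.5 + (sqrt(3)/2)i, so adding 1 gives about 1.5 + 0.866i
principal = (-1) ** (1 / 3)
print(principal + 1)
```

Which answer is "correct" depends on whether the class is working over the reals or the complex numbers.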
Kirchhoff’s Laws in context of sequence current
Kirchhoff's Laws in context of sequence current
07 Sep 2024
Kirchhoff’s Laws and Sequence Current: A Theoretical Analysis
This article provides a theoretical analysis of Kirchhoff’s Laws in the context of sequence current, which is a fundamental concept in electrical engineering. We derive the mathematical expressions
for sequence currents using Kirchhoff’s Laws and discuss their implications.
Kirchhoff’s Laws are two fundamental principles in electrical engineering that describe the behavior of electric circuits. The first law, also known as Kirchhoff’s Current Law (KCL), states that the
algebraic sum of currents at a node is zero. The second law, also known as Kirchhoff’s Voltage Law (KVL), states that the algebraic sum of voltages around a closed loop is zero.
In this article, we focus on the application of Kirchhoff’s Laws to sequence current, which is an important concept in power systems analysis.
Sequence Currents
Sequence currents are the symmetrical components of the currents in a three-phase system; they are widely used to analyze unbalanced conditions such as short circuits. There are three types of sequence currents:
• Positive-sequence current (I1)
• Negative-sequence current (I2)
• Zero-sequence current (I0)
The positive-sequence component has the same phase rotation as the system, the negative-sequence component has the opposite rotation, and the zero-sequence components of the three phases are equal and in phase with each other.
Kirchhoff’s Laws and Sequence Currents
Using Kirchhoff’s Current Law together with the method of symmetrical components, we can derive the following expressions for the sequence currents in terms of the line currents Ia, Ib and Ic, where a = e^(j120°) is the complex rotation operator:
Positive-Sequence Current (I1)
I1 = (Ia + a·Ib + a²·Ic) / 3
Negative-Sequence Current (I2)
I2 = (Ia + a²·Ib + a·Ic) / 3
Zero-Sequence Current (I0)
I0 = (Ia + Ib + Ic) / 3
These expressions can be used to analyze the behavior of sequence currents in a three-phase system.
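As an illustration, the standard symmetrical-component (Fortescue) transformation can be sketched in a few lines of Python using the standard library's `cmath`; the example phase currents are made-up values for a balanced system:

```python
import cmath

def sequence_currents(Ia, Ib, Ic):
    """Fortescue transformation: phase currents -> (I0, I1, I2)."""
    a = cmath.exp(2j * cmath.pi / 3)     # 120-degree rotation operator
    I0 = (Ia + Ib + Ic) / 3              # zero sequence
    I1 = (Ia + a * Ib + a**2 * Ic) / 3   # positive sequence
    I2 = (Ia + a**2 * Ib + a * Ic) / 3   # negative sequence
    return I0, I1, I2

# A balanced set (1 per-unit, phases at 0, -120 and +120 degrees)
a = cmath.exp(2j * cmath.pi / 3)
I0, I1, I2 = sequence_currents(1.0, a**2, a)
print(abs(I0), abs(I1), abs(I2))  # only the positive sequence survives
```

For a perfectly balanced system, I1 equals the phase-a current while I0 and I2 vanish; any imbalance shows up as non-zero negative- or zero-sequence components.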
In this article, we have provided a theoretical analysis of Kirchhoff’s Laws in the context of sequence current. We have derived the mathematical expressions for sequence currents using Kirchhoff’s
Laws and discussed their implications. These expressions can be used as a starting point for further research on sequence currents and their applications in power systems analysis.
• [1] Kirchhoff, G. (1845). “On the Conduction of Electric Currents.” Poggendorff’s Annalen der Physik und Chemie, 56(12), 497-514.
• [2] Grainger, J., & Stevenson, W. D. (1994). Power System Analysis. McGraw-Hill.
• [3] IEEE Standard for Calculations of the Currents in Three-Phase Systems (IEEE Std 1459-2000).
Note: The references provided are a selection of relevant sources and do not constitute an exhaustive list.
Related articles for ‘sequence current’ :
• Reading: Kirchhoff’s Laws in context of sequence current
Calculators for ‘sequence current’
World Applied Sciences Journal 24 (Information Technologies in Modern Industry, Education & Society): 171-176, 2013
ISSN 1818-4952
© IDOSI Publications, 2013
DOI: 10.5829/idosi.wasj.2013.24.itmies.80032
Corresponding Author: Shcherbakov, Volgograd State Technical University, Lenin avenue, 28, 400005, Volgograd, Russia.
A Survey of Forecast Error Measures
Maxim Vladimirovich Shcherbakov, Adriaan Brebels, Nataliya Lvovna Shcherbakova, Anton Pavlovich Tyukov, Timur Alexandrovich Janovsky and Valeriy Anatol'evich Kamaev
Volgograd State Technical University, Volgograd, Russia
Submitted: Aug 7, 2013; Accepted: Sep 18, 2013; Published: Sep 25, 2013
Abstract: This article reviews the commonly used forecast error measurements. All error measurements have been grouped into seven categories: absolute forecasting errors, measures based on percentage errors, symmetric errors, measures based on relative errors, scaled errors, relative measures and other error measures. The formulas are presented and the drawbacks are discussed for every accuracy measurement. To reduce the impact of outliers, an Integral Normalized Mean Square Error has been proposed. Since each error measure has disadvantages that can lead to an inaccurate evaluation of the forecasting results, it is impossible to choose only one measure; recommendations for selecting the appropriate error measurements are therefore given.
Key words: Forecasting, Forecast accuracy, Forecast error measurements
Different criteria such as forecast error measurements, the speed of calculation, interpretability and others have been used to assess the quality of forecasting [1-6]. Forecast error measures, or forecast accuracy, are the most important in solving practical problems [6]. Typically, the commonly used forecast error measurements are applied for estimating the quality of forecasting methods and for choosing the best forecasting mechanism in the case of multiple objects. A set of "traditional" error measurements is applied in every domain despite their drawbacks.
This paper provides an analysis of existing and quite common forecast error measures that are used in forecasting [4, 7-10]. Measures are divided into groups according to the method of calculating the value of the error for a certain time t. The calculation formula, the description of the drawbacks and the names of the assessments are considered for each error measure.
A Review
Absolute Forecasting Error: The first group is based on the absolute error calculation. It includes estimates based on the calculation of the value
e_t = y_t − f_t (1)
where y_t is the measured value at time t and f_t is the predicted value at time t, obtained from the use of the forecast model m. Hereinafter the index of the model (m) is omitted.
Mean Absolute Error, MAE, is given by:
MAE = mean(|e_t|) (2)
where n is the forecast horizon and mean(•) is the mean operation.
Median Absolute Error, MdAE, is obtained using the following formula:
MdAE = median(|e_t|) (3)
where median(•) is the operation for calculating a median.
Mean Square Error, MSE, is calculated by the formula:
MSE = mean(e_t²) (4)
Hence, Root Mean Square Error, RMSE, is calculated as:
RMSE = sqrt(mean(e_t²)) (5)
These error measures are the most popular in various domains [8, 9]. However, absolute error measures have the following shortcomings.
The main drawback is the scale dependency [9]. Therefore, if the forecast task includes objects with different scales or magnitudes, then absolute error measures cannot be applied.
The next drawback is the high influence of outliers in the data on the forecast performance evaluation [11]. So, if the data contain outliers with maximal value (a common case in real-world tasks), then absolute error measures provide conservative values.
RMSE and MSE have a low reliability: the results can be different depending on the fraction of data used [4].
Measures Based on Percentage Errors: Percentage errors are calculated based on the value
p_t = 100 · e_t / y_t
These errors are also among the most common in the forecasting domain. The group of percentage-based errors includes the following.
Mean Absolute Percentage Error, MAPE:
MAPE = mean(|p_t|) (7)
Median Absolute Percentage Error, MdAPE, is more resistant to outliers and is calculated according to:
MdAPE = median(|p_t|)
Root Mean Square Percentage Error, RMSPE, is calculated according to:
RMSPE = sqrt(mean(p_t²))
and its median analogue, the Root Median Square Percentage Error, RMdSPE:
RMdSPE = sqrt(median(p_t²))
We note the following shortcomings:
• Division by zero appears when the actual value is equal to zero.
• Non-symmetry: the error values differ depending on whether the predicted value is bigger or smaller than the actual one [12-14].
• Outliers have a significant impact on the result, particularly if an outlier has a value much bigger than the maximal value of the "normal" cases [4].
• The error measures are biased. This can lead to an incorrect evaluation of the performance of forecasting models [15].
Symmetric Errors: The criteria included in this group are calculated based on the value:
s_t = 200 · |y_t − f_t| / (|y_t| + |f_t|) (11)
The group includes the following measures. The Symmetric Mean Absolute Percentage Error, sMAPE, is calculated as
sMAPE = mean(s_t)
and the Symmetric Median Absolute Percentage Error, sMdAPE, as
sMdAPE = median(s_t)
To avoid the problems associated with division by zero, a modified sMAPE (Modified sMAPE, msMAPE) has been proposed, whose denominator contains an additional member:
msMAPE = (1/n) Σ_{i=1..n} |y_i − f_i| / ( (|y_i| + |f_i|)/2 + S_i )
where S_i = (1/i) Σ_{k=1..i} |y_k − ȳ_i| is a running mean absolute deviation of the observed series.
Developing the idea of including additional terms, more sophisticated measures were presented in [16]:
KL-N, KL-N1, KL-N2, KL-DE1, KL-DE2, IQR
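To make the percentage-based and symmetric definitions above concrete, here is a short sketch in plain Python (the example numbers are arbitrary):

```python
def mape(y, f):
    """Mean Absolute Percentage Error, in percent."""
    return 100 * sum(abs((yt - ft) / yt) for yt, ft in zip(y, f)) / len(y)

def smape(y, f):
    """Symmetric MAPE: |error| scaled by the mean of |y| and |f|, in percent."""
    return 100 * sum(2 * abs(yt - ft) / (abs(yt) + abs(ft))
                     for yt, ft in zip(y, f)) / len(y)

y = [100, 200]
f = [110, 180]
print(mape(y, f))   # 10.0: both errors are exactly 10% of the actuals
print(smape(y, f))  # slightly different, illustrating sMAPE's own asymmetry
```

Note how sMAPE does not return exactly 10 here even though both absolute percentage errors are 10%: over- and under-forecasts of the same relative size are weighted differently, which is the non-symmetry discussed above.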
The following disadvantages should be noted:
• Despite its name, this error is also non-symmetric.
• Furthermore, if the actual value is equal to the forecasted value but with the opposite sign, or both of these values are zero, then a divide-by-zero error occurs.
• These criteria are affected by outliers, analogously to the percentage errors.
• If more complex estimations are used, the problem of interpretability of the results occurs, and this fact slows their spread in practice [4].
Measures Based on Relative Errors: The basis for the calculation of errors in this group is the value determined as follows:
r_t = e_t / e_t* (15)
where e_t* is the forecast error obtained using a reference prediction model (benchmark model). The main practice is to use a naive model as the reference model:
f_t = y_{t−l} (16)
where l is the value of the lag and l = 1.
The group includes the following measures. Mean Relative Absolute Error, MRAE, is given by the formula
MRAE = mean(|r_t|) (17)
Median Relative Absolute Error, MdRAE, is calculated according to
MdRAE = median(|r_t|) (18)
and the Geometric Mean Relative Absolute Error, GMRAE, is calculated similarly to (17), but with the geometric mean gmean(•) instead of mean(•).
The following shortcomings should be noted:
• Based on formulas (15-18), a division-by-zero error occurs if the predictive value obtained by the reference model is equal to the actual value.
• If a naive model has been chosen, a division-by-zero error occurs in the case of a continuous sequence of identical values in the time series.
Scaled Error: The basis for calculating the scaled errors is the value q_t, given by
q_t = e_t / ( (1/(n−1)) Σ_{i=2..n} |y_i − y_{i−1}| )
This group contains the Mean Absolute Scaled Error, MASE, proposed in [9]. It is calculated according to:
MASE = mean(|q_t|)
Another evaluation in this group is the Root Mean Square Scaled Error, RMSSE, calculated by the formula
RMSSE = sqrt(mean(q_t²))
These measures are symmetrical and resistant to outliers. However, we can point to two drawbacks:
• If the real values on the forecast horizon are equal to each other, then division by zero occurs.
• Besides, it is possible to observe a weak bias of the estimates in analogous experiments.
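A sketch of MASE following the scaled-error definition above, in plain Python (the toy series is illustrative):

```python
def mase(y, f):
    """Mean Absolute Scaled Error: forecast errors scaled by the mean
    absolute one-step naive difference of the actual series."""
    n = len(y)
    scale = sum(abs(y[i] - y[i - 1]) for i in range(1, n)) / (n - 1)
    return sum(abs(yt - ft) for yt, ft in zip(y, f)) / n / scale

y = [1, 2, 3, 4]
f = [1.5, 2.5, 2.5, 4.5]
print(mase(y, f))  # 0.5: on average, half the naive one-step error
```

A MASE below 1 means the forecast beats the naive one-step forecast on average; note the division by zero the text warns about if all actual values are identical (scale becomes 0).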
Relative Measures: This group contains measures calculated as a ratio of the above-mentioned error measures obtained by the estimated forecasting model and a reference model. Relative Mean Absolute Error, RelMAE, is calculated by the formula
RelMAE = MAE / MAE* (23)
where MAE and MAE* are the mean absolute errors for the analyzed forecasting model and the reference model respectively, calculated using formula (2).
Relative Root Mean Square Error, RelRMSE, is calculated similarly to (23), except that both terms are calculated by (5).
In some situations it is reasonable to calculate the logarithm of the ratio (23). In this case, the measure is called the Log Mean Squared Error Ratio (LMR).
Syntetos et al. proposed a more complex assessment, the Relative Geometric Root Mean Square Error, RGRMSE [17].
The next group of measures counts the number of cases where the prediction error of the model is greater than that of the reference model. For instance, PB(MAE) — Percentage Better (MAE) — is calculated by the formula:
PB(MAE) = 100 · mean( I{ MAE < MAE* } )
where I{•} is the operator that yields the value of zero or one, in accordance with the expression:
I{MAE < MAE*} = 1 if MAE < MAE*, and 0 otherwise.
By analogy with PB(MAE), Percentage Better (MSE) can be defined.
The disadvantages of these measures are the following:
• A division-by-zero error occurs if the reference forecast error is equal to zero.
• These criteria determine the number of cases when the analyzed forecasting model is superior to the base one, but do not evaluate the magnitude of the difference.
Other Error Measures: This group includes measures proposed in various studies to avoid the shortcomings of existing and common measures.
To avoid the scale dependency, the Normalized Root Mean Square Error (nRMSE) has been proposed, calculated by the formula:
nRMSE = sqrt(mean(e_t²)) / N
where N is the normalization factor, which is usually equal either to the maximum measured value on the forecast horizon, or to the difference between the maximum and minimum values. The normalization factor can be calculated over the entire interval or time horizon, or over a defined short interval of observation [18]. However, this estimate is affected by the influence of outliers, if an outlier has a value much bigger than the maximal "normal" value. To reduce the impact of outliers, the Integral Normalized Mean Square Error (inRSE) [19] has been proposed.
Some research describes other ways of NRMSE calculation, as in [16].
Other measures, called std_APE and std_MAPE [20, 21], are calculated as the standard deviations of the absolute errors and the absolute percentage errors respectively.
As a drawback, one can note a division-by-zero error if the normalization factor is equal to zero.
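The normalized measure above can be sketched in plain Python; here the normalization factor is the max–min range of the actuals, one of the two conventions mentioned:

```python
def nrmse(y, f):
    """RMSE divided by the range of the actual values."""
    n = len(y)
    rmse = (sum((yt - ft) ** 2 for yt, ft in zip(y, f)) / n) ** 0.5
    norm = max(y) - min(y)  # normalization factor; must be non-zero
    return rmse / norm

y = [10, 20, 30, 40]
f = [12, 18, 33, 39]
print(nrmse(y, f))  # a scale-free number, comparable across series
```

Because the result is dimensionless, nRMSE values computed on series with very different magnitudes can be compared directly, which is exactly the scale-dependency problem it was introduced to fix.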
Recommendations How to Choose Error Measures:
One of the most difficult issues is the question of choosing the most appropriate measures out of the groups. Due to the fact that each error measure has disadvantages that can lead to an inaccurate evaluation of the forecasting results, it is impossible to choose only one measure [5].
We provide the following guidelines for choosing the error measures.
• If forecast performance is evaluated for time series with the same scale and the data preprocessing procedures were performed (data cleaning, anomaly detection), it is reasonable to choose MAE, MdAE or RMSE. In case of different scales, these error measures are not applicable. The following recommendations are provided for multi-scale cases.
• In spite of the fact that percentage errors are commonly used in real-world forecast tasks, due to their non-symmetry they are not recommended.
• If the range of the values lies in the positive half-plane and there are no outliers in the data, it is advisable to use symmetric error measures.
• If the data are "dirty", i.e. contain outliers, it is advisable to apply the scaled measures such as MASE or inRSE. In this case (i) the horizon should be large enough, (ii) there should be no identical values, and (iii) the normalization factor should not be equal to zero.
• If the predicted data have seasonal or cyclical patterns, it is advisable to use the normalized error measures, wherein the normalization factors could be calculated within the interval equal to the cycle or season.
• If there are no results of prior analysis and no a-priori information about the quality of the data, it is reasonable to use a defined set of error measures. After calculating, the results are analyzed with respect to division-by-zero errors and contradiction cases:
  - For the same time series, the results for model m1 are better than for model m2 based on one error measure, but opposite for another one;
  - For different time series, the results for model m are better in most cases, but worse for a few cases.
CONCLUSION
The review contains the error measures for time series forecasting models. All these measures are grouped into seven groups: absolute forecasting error, percentage forecasting error, symmetrical forecasting error, measures based on relative errors, scaled errors, relative errors and other (modified). For each error measure the way of calculation is presented. Also shortcomings are defined for each group.
ACKNOWLEDGMENTS
Authors would like to thank RFBR for support of the research (Grants #12-07-31017, 12-01-00684).
REFERENCES
1. Tyukov, A., A. Brebels, M. Shcherbakov and V. Kamaev, 2012. A concept of web-based energy data quality assurance and control system. In the Proceedings of the 14th International Conference on Information Integration and Web-based Applications & Services, pp: 267-271.
2. Kamaev, V.A., M.V. Shcherbakov, D.P. Panchenko, N.L. Shcherbakova and A. Brebels, 2012. Using Connectionist Systems for Electric Energy Consumption Forecasting in Shopping Centers. Automation and Remote Control, 73(6): 1075-1084.
3. Owoeye, D., M. Shcherbakov and V. Kamaev, 2013. A photovoltaic output backcast and forecast method based on cloud cover and historical data. In the Proceedings of the Sixth IASTED Asian Conference on Power and Energy Systems (AsiaPES 2013), pp: 28-31.
4. Armstrong, J.S. and F. Collopy, 1992. Error measures for generalizing about forecasting methods: Empirical comparisons. International Journal of Forecasting, 8(1): 69-80.
5. Mahmoud, E., 1984. Accuracy in forecasting: A survey. Journal of Forecasting, 3(2): 139-159.
6. Yokuma, J.T. and J.S. Armstrong, 1995. Beyond accuracy: Comparison of criteria used to select forecasting methods. International Journal of Forecasting, 11(4): 591-597.
7. Armstrong, J.S., 2001. Evaluating forecasting methods. In Principles of forecasting: a handbook for researchers and practitioners. Norwell, MA: Kluwer Academic Publishers, pp: 443-512.
8. Gooijer, J.G.D. and R.J. Hyndman, 2006. 25 years of time series forecasting. International Journal of Forecasting, 22(3): 443-473.
9. Hyndman, R.J. and A.B. Koehler, 2006. Another look at measures of forecast accuracy. International Journal of Forecasting, 22(4): 679-688.
10. Theodosiou, M., 2011. Forecasting monthly and quarterly time series using STL decomposition. International Journal of Forecasting, 27(4): 1178-1195.
11. Shcherbakov, M.V. and A. Brebels, 2011. Outliers and anomalies detection based on neural networks forecast procedure. In the Proceedings of the 31st Annual International Symposium on Forecasting, ISF-2011, pp: 21-22.
12. Goodwin, P. and R. Lawton, 1999. On the asymmetry of the symmetric MAPE. International Journal of Forecasting, 15(4): 405-408.
13. Koehler, A.B., 2001. The asymmetry of the sAPE measure and other comments on the M3-competition. International Journal of Forecasting, 17: 570-574.
14. Makridakis, S., 1993. Accuracy measures: Theoretical and practical concerns. International Journal of Forecasting, 9: 527-529.
15. Kolassa, S. and R. Martin, 2011. Percentage errors can ruin your day (and rolling the dice shows how). Foresight, (Fall): 21-27.
16. Assessing Forecast Accuracy Measures. Date View 01.08.2013 http://www.stat.iastate.edu/preprint/
17. Syntetos, A.A. and J.E. Boylan, 2005. The accuracy of intermittent demand estimates. International Journal of Forecasting, 21(2): 303-314.
18. Tyukov, A., M. Shcherbakov and A. Brebels, 2011. Automatic two way synchronization between server and multiple clients for HVAC system. In the Proceedings of the 13th International Conference on Information Integration and Web-based Applications & Services, pp: 467-470.
19. Brebels, A., M.V. Shcherbakov and V.A. Kamaev, 2010. Mathematical and statistical framework for comparison of neural network models with other algorithms for prediction of Energy consumption in shopping centres. In the Proceedings of the 37th Int. Conf. Information Technology in Science Education Telecommunication and Business, suppl. to Journal Open Education, pp: 96-97.
20. Casella, G. and R. Berger, 1990. Statistical inference. 2nd ed. Duxbury Press, pp: 686.
21. Kusiak, A., M. Li and Z. Zhang, 2010. A data-driven approach for steam load prediction in buildings. Applied Energy, 87(3): 925-933.
Restricted Boltzmann Machines
Boltzmann Machines are bidirectionally connected networks of stochastic processing units, i.e. units that carry out randomly determined processes.
A Boltzmann Machine can be used to learn important aspects of an unknown probability distribution based on samples from the distribution. Generally, this learning problem is quite difficult and time
consuming. However, the learning problem can be simplified by introducing restrictions on a Boltzmann Machine, hence why, it is called a Restricted Boltzmann Machine.
Energy Based Model
Consider a room filled with gas that is homogenously spread out inside it.
Statistically, it is possible for the gas to cluster up in one specific area of the room. However, the probability for the gas to exist in that state is low since the energy associated with that
state is very high. The gas tends to exist in the lowest possible energy state, i.e. being spread out throughout the room.
In a Boltzmann Machine, energy is defined through weights in the synapses (connections between the nodes) and once the weights are set, the system tries to find the lowest energy state for itself by
minimising the weights (and in case of an RBM, the biases as well).
Restrictions in a Boltzmann Machine
In general, a Boltzmann Machine has a number of visible nodes, hidden nodes and synapses connecting them. However, in a Restricted Boltzmann Machine (henceforth RBM), a visible node is connected to
all the hidden nodes and none of the other visible nodes, and vice versa. This is essentially the restriction in an RBM.
Each node is a centre of computation that processes its input and makes randomly determined or stochastic decisions about whether to transmit the result or not. Each visible node takes a low-level
feature from the dataset to learn. E.g. in the case of a picture, each visible node represents a pixel (say x) of the picture. Each value in the visible layer is processed (i.e. multiplied by the
corresponding weights and all the products added) and transferred to the hidden layer. In the hidden layer, a bias b is added to the sum of products of weights and inputs, and the result is put into
an activation function. This result is the output of the hidden node.
This entire process is referred to as the forward pass. Once the forward pass is over, the RBM tries to reconstruct the visible layer.
During the backward pass or the reconstruction phase, the outputs of the hidden layer become the inputs of the visible layer. Each value in the hidden node is weight adjusted according to the
corresponding synapse weight (i.e. hidden node values are multiplied by their corresponding weights and the products are added) and the result is added to a visible layer bias at each visible node.
This output is the reconstruction.
The error generated (difference between the reconstructed visible layer and the input values) is backpropagated many times until a minimum error is reached.
Gibbs Sampling
Since all operations in the RBM are stochastic, we randomly sample values when computing the values of the visible and hidden layers. For RBMs we use a sampling method called Gibbs Sampling.
1. Take the value of input vector x and set it as the value for input (visible) layer.
2. Sample the value of the hidden nodes conditioned on observing the value of the visible layer i.e. p(h|x).
Since each node is conditionally independent, we can carry out Bernoulli Sampling i.e. if the probability of hidden node being 1 given the visible node is greater than a random value sampled from
a uniform distribution between 0 and 1, then the hidden node can be assigned the value 1, else 0.
Mathematically, 1 { p(h = 1|x) > U[0, 1] }.
3. Reconstruct the visible layer by sampling from p(x|h).
4. Repeat steps 2 and 3, k times.
Contrastive Divergence
In the case of an RBM, we take the cost function or the error as the average negative log likelihood. To minimise the average negative log likelihood, we proceed through the Stochastic Gradient
Descent method and first find the slope of the cost function:
1. For each training example x, follow steps 2 and 3.
2. Generate x(k) using k steps of Gibbs Sampling starting at x(0).
3. Update the parameters as shown in the derivation.
4. Repeat the above steps until a stopping criterion is satisfied (e.g., the change in parameters is no longer significant).
For feature extraction and pre-training k = 1 works well.
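A hedged sketch of a CD-1 update implementing steps 1 through 4 on toy binary data; the learning rate, dimensions, and data below are invented for illustration, and the update rule is the standard contrastive divergence approximation (positive-phase statistics minus negative-phase statistics).

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def cd1_update(x0, W, b_v, b_h, lr=0.1):
    # positive phase: hidden probabilities given the data
    ph0 = sigmoid(x0 @ W + b_h)
    h0 = (ph0 > rng.uniform(size=ph0.shape)).astype(float)
    # negative phase: one reconstruction step (k = 1)
    px1 = sigmoid(h0 @ W.T + b_v)
    x1 = (px1 > rng.uniform(size=px1.shape)).astype(float)
    ph1 = sigmoid(x1 @ W + b_h)
    # approximate gradient step on the negative log likelihood
    W += lr * (np.outer(x0, ph0) - np.outer(x1, ph1))
    b_v += lr * (x0 - x1)
    b_h += lr * (ph0 - ph1)
    return np.mean((x0 - px1) ** 2)  # reconstruction error

# invented toy setup: 6 visible units, 3 hidden units, 20 binary examples
W = rng.normal(scale=0.1, size=(6, 3))
b_v, b_h = np.zeros(6), np.zeros(3)
data = rng.integers(0, 2, size=(20, 6)).astype(float)
for epoch in range(50):
    errs = [cd1_update(x, W, b_v, b_h) for x in data]
print(f"mean reconstruction error: {np.mean(errs):.3f}")
```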
RBMs have applications in many fields like:
• dimensionality reduction
• classification
• feature learning
• topic modelling
• recommender systems
• music generation
and others
More recently, Boltzmann Machines have found applications in quantum computing.
In computer vision, there are the Boltzmann Encoded Adversarial Machines which integrate RBMs and convolutional neural networks as a generative model.
Question 1
What is the main algorithm that updates the parameters in an RBM?
Contrastive Divergence
Gibbs Sampling
Back Propagation
Stochastic Gradient Descent
Question 2
What does an RBM estimate in its Reconstruction phase?
Average Negative Log Likelihood
Do Greek letters have numerical value?
Greek numerals are a system of representing numbers using letters of the Greek alphabet. They are also known by the names Milesian numerals, Alexandrian numerals, or alphabetic numerals.
Decimal | Symbol | Greek numeral
1 | Ι | ένα (éna)
5 | Π | πέντε (pénde)
10 | Δ | δέκα (déka)
100 | Η | ἑκατόν (ekatón)
What is the numerical value of the alphabet?
It can be easily observed that the alphabetical letters A, I, Q, J, Y, all have the numerical value of 1, the letters B, K, R, the numerical value of 2, the letters S, C, G, L the numerical value of
3, and so on right up to the numerical value of 8. There is no numerical value more than 8.
What are the 24 Greek letters and numbers?
The letters of the Greek alphabet are: alpha, beta, gamma, delta, epsilon, zeta, eta, theta, iota, kappa, lambda, mu, nu1, xi, omicron, pi1, rho, sigma, tau, upsilon, phi, chi1, psi1, omega.
What is the total number of letters in the Greek alphabet?
24 letters
The Classical alphabet had 24 letters, 7 of which were vowels, and consisted of capital letters, ideal for monuments and inscriptions.
What is the numerical number of Jesus?
In some Christian numerology, the number 888 represents Jesus, or sometimes more specifically Christ the Redeemer. This representation may be justified either through gematria, by counting the letter
values of the Greek transliteration of Jesus’ name, or as an opposing value to 666, the number of the beast.
What is a numerical value?
Definitions of numerical value. a real number regardless of its sign. synonyms: absolute value. types: modulus. the absolute value of a complex number.
Are there 27 letters in the Greek alphabet?
The Greek numeral system consists of three sets of nine letters representing the numbers 1-9, 10-90, and 100-900. So, 27 letters altogether (3 x 9 = 27), which includes three archaic letters in addition to the standard 24. As such, omega is not the last symbol of the numeral system, because it represents only the number 800 (sampi, representing 900, comes after it).
How do you read Greek numerals?
The Greek numbers from 0 to 9 are demonstrated below, accompanied by their pronunciation.
1. 0 – μηδέν (midén)
2. 1 – ένα (éna)
3. 2 – δύο (dío)
4. 3 – τρία (tría)
5. 4 – τέσσερα (tésera)
6. 5 – πέντε (pénde)
7. 6 – έξι (éxi)
8. 7 – επτά (eptá)
Does the Greek alphabet have 24 letters?
Greek Letters, Symbols, English Alphabet Equivalents and Pronunciation. This article identifies and summarises the many Greek letters that have entered the English language. There are 24 letters in
the Greek alphabet.
Are there 26 letters in the Greek alphabet?
The Greek alphabet has 24 characters, as opposed to 26 letters in the Roman alphabet. However, Greek has seven vowels, as opposed to the standard five (and sometimes six) of the Roman alphabet.
What does 888 mean in Greek?
888 (number)
← 887 888 889 →
Cardinal eight hundred eighty-eight
Ordinal 888th (eight hundred eighty-eighth)
Factorization 23 × 3 × 37
Greek numeral ΩΠΗ´
How do you write 1993 in Greek numerals?
1993 in Roman Numerals is MCMXCIII.
How do you write 2022 in Greek numerals?
Therefore, the value of 2022 in roman numerals is MMXXII.
What is a numerical value example?
Examples: 1, .2, 3.4, -5, -6.78, +9.10. Approximate-value numeric literals are represented in scientific notation with a mantissa and exponent.
How do you find the numerical value of a word?
A text is given in a cell. The task is to calculate a numerical value for each word. The letter “a” equals 1, the letter “b” equals 2, the letter “c” equals 3, etc. The numerical value of a word equals
the sum of the numerical values of its letters (i.e. the word “add” has a numerical value of 9 (1+4+4)).
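The scheme described above (a = 1, b = 2, ..., z = 26) is straightforward to compute:

```python
def word_value(word):
    # a=1, b=2, ..., z=26; non-letter characters are ignored
    return sum(ord(c) - ord('a') + 1 for c in word.lower() if c.isalpha())

print(word_value("add"))  # 1 + 4 + 4 = 9
```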
Does the Greek alphabet have 24 or 27 letters?
The Greek alphabet was the first alphabet to use vowels. There are 24 letters in the Greek alphabet.
What is the correct order of the Greek alphabet?
What are the 24 Greek letters names?
The Greek alphabet has 24 letters, some of them representing sounds that are not part of the English language. To create sounds not included in the alphabet, two letters are combined. For example:
the hard d sound is made using “nt,” the b sound is created by putting together “m” and “p,” and the j sound is created with a combination of “t” and “z.”
What are the letters of the Greek alphabet in order?
Even if you never plan to learn Greek, there are good reasons to familiarize yourself with the alphabet.
Greek letters are used to designate fraternities, sororities, and philanthropic organizations.
Some books in English are numbered using the letters of the Greek alphabet. Sometimes, both lower case and capitals are employed for simplification.
What is the fifth letter in Greek?
alpha – the 1st letter of the Greek alphabet
beta – the 2nd letter of the Greek alphabet
gamma – the 3rd letter of the Greek alphabet
delta – the 4th letter of the Greek alphabet
epsilon – the 5th letter of the Greek alphabet
zeta – the 6th letter of the Greek alphabet
eta – the 7th letter of the Greek alphabet
theta – the 8th letter of the Greek alphabet
Tail Indexing and Bitcoin
John Nash lecturing on probability theory.
Tail Indexing and Bitcoin
In February 2010, Satoshi Nakamoto contextualised the finite limit of bitcoins by saying that as the block subsidy reduces, transaction fees will provide compensation for nodes. He also qualified this by saying that in 20 years' time, there will either be high bitcoin transaction volume or no volume at all.
If the latter prediction is true, it means in approximately 5 or 6 years bitcoins will have gone out of fashion. In this article, we look at this possibility by utilising tail-indexation of a
probability distribution.
Tail Distributions
In probability theory, tail distributions are probability distributions whose tails are not exponentially bounded. In many applications, it is the right side of the tail which is of interest. We can
see this in the bitcoin supply growth rate which fits classically with long tailed distributions of a high-frequency or high-amplitude population (such as the initial bitcoin block rewards) followed
by a low-frequency or low-amplitude population which gradually “tails off” asymptotically (i.e. because of future and subsequent halvings).
Source: Bashco (2020)
Long tail indexation is standard analysis for businesses to understand rank-size and rank-frequency distributions and is associated with the Pareto 80:20 Power Law, but the way in which Bitcoin used
this in its protocol to predetermine monetary supply represented something new when the first coins were mined in 2009. We can see the close relationship in the cumulative and supply density of
bitcoins following a Pareto distribution:
Source: Wikipedia
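As an illustrative aside (not from the article), the 80:20 behaviour of a Pareto tail can be checked numerically; a shape parameter of roughly 1.16 is the value classically associated with the 80:20 rule.

```python
import numpy as np

rng = np.random.default_rng(42)
alpha = 1.16  # Pareto shape classically associated with the 80:20 rule
# numpy's pareto() is the Lomax form; shifting by 1 gives support on [1, inf)
samples = rng.pareto(alpha, size=200_000) + 1.0

samples.sort()
top_20_share = samples[int(0.8 * len(samples)):].sum() / samples.sum()
print(f"share of total held by the top 20%: {top_20_share:.2f}")
```

Because the tail is heavy, the sample estimate fluctuates around the theoretical 0.80 even with many draws.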
Satoshi’s Familiarity with Tail Indexation
Satoshi’s familiarity with probability is first laid out in the original Bitcoin Whitepaper (most notably Section 11, Calculations). He then goes on to explain how coin generation works according to the 80:20
rule, and also explains the application of long tail theory in relation to bitcoin mining, where the greater combined value lies in the longer part of the tail. In the whitepaper too, Satoshi
designs the code to avoid an infinite tail.
There is however another insight which indicates Satoshi’s idea as to how his coins would obtain value: he admits to not knowing how software can reflect the real-world value of things, other than to
keep the supply of his bitcoins predetermined, so that as the value of coins increases so does the number of new users, creating a positive feedback loop — this fits with the power law idea that a functional relationship
between two quantities (in this case, supply and demand for bitcoins) varies as a power of another.
We can see how this has played out in the right side of the tail of the American dollar against the value of bitcoin:
Source: Google currency converter
Future Bitcoin Volumes
This then leads to some concluding reflections on the future of bitcoin transaction volumes. If it comes to pass that there will be no bitcoin transactions in five years, then how will that be the case?
The idea here is that if tail indexing in the form of a probability distribution is a contributing axiom for bitcoins to appreciate against sovereign fiat issuances, then it’s possible for some sort of
external and official currency board or coalition to come along and index on bitcoins in this same way — because if bitcoins follow a long tail distribution, then according to the theory, future coins have a gradually lowering statistical probability of being mined.
I’ve written previously on how this might be worked up from an axiomatic specification to be run at governmental level (either unilaterally or multilaterally), but as yet — to the best of my knowledge — this remains an experimental conjecture.
Why you should choose sequential testing and not bayesian
Published on September 18, 2023
Why you should choose sequential testing to accelerate your experimentation program
When reaching statistical significance in digital experiments, the statistical method chosen can influence the interpretation, speed, and robustness of results. The best approach will depend on the
specifics of the experiment, available info, and the desired balance between speed and certainty.
Full Bayesian methods are about as useful for large-scale online experiments as a chemical dumpster fire. Bayesian methods offer a probability-based perspective, integrating prior knowledge and
current data. If you make the wrong choice, like poorly selecting in the critical first step which statistical distribution you should set as your prior, your online experiment is going to be as slow
as molasses.
Sequential methods can accelerate decision-making by allowing early stopping. The statistical power of sequential testing thrives on discovering difficult-to-find minuscule effects, and it is
lightning-fast at detecting blockbuster effects. Learn more below about which test method you should pick and how that choice affects your test results.
Why sequential testing is superior to fully Bayesian statistics
Fully Bayesian statistics have a different aim than the frequentist statistics that underlie Stats Engine, so they cannot be directly compared.
Bayesian experiments are all about combining two sources of information: one is what you thought about the situation before you observed the data at hand, and the other is what the data themselves have to
say about the situation.
The first source is expressed as a prior probability distribution. Prior means what you understood before you observed the data. The second source is expressed through the likelihood. The same
likelihood is used in fixed-horizon, frequentist statistics.
Therefore, an experimenter starting with a good guess can reach a decision much faster compared to an experimenter using a frequentist method. However, if that initial guess was poorly chosen then
the test can either take an extremely long time or yield very high error rates.
The flawed Bayesian claim about "No mistakes"
A fully Bayesian testing procedure does not claim to control the frequency of false positives. It instead sets a goal about the expected loss and quantifies the risk of choosing one variant over the other.
Bayesian methods are prone to detecting more winners simply because they do not incorporate strong false discovery rate control, particularly for experiments with continuous monitoring.
Although the Frequentist methods underlying Stats Engine are less flexible for directly incorporating prior information, they offer error guarantees that hold no matter the situation or prior
knowledge of the experimenter. For example, Stats Engine offers strong control of the false discovery rate for any experiment, whereas Bayesian methods may perform better or worse depending on the
exact choice of the prior distribution.
It takes more evidence to produce a significant result with Stats Engine, which allows experimenters to peek as many times as they desire over the life of an experiment. Further, the Stats Engine is
designed to compute statistical significance continuously so an experiment can be concluded as soon as enough evidence has accumulated.
How much power is gained or lost when using a sequential approach with Stats Engine versus an experiment based on traditional fixed-horizon testing depends on the performance of the specific experiment.
Traditional test statistics require a pre-determined sample size. You are not allowed to deviate from the sample size collected. You cannot call the test early or let it run longer. With traditional
statistics detecting smaller, more subtle true effects requires running a longer experiment.
The real role of sequential testing's blockbuster boost and sensitive signal superpower
There are several major time-saving advantages of sequential analysis.
First, an experiment requires fewer samples with group sequential tests when the difference between the treatment and control groups is large, meaning when the actual uplift is larger than the
minimum detectable effect (MDE) set at the initiation of the experiment. In such cases, the experiment can be stopped early before reaching the pre-specified sample size.
For example, if the lift of your test is 5 percentage points larger than your chosen MDE, Stats Engine will run as fast as fixed-horizon statistics. As soon as the improvement
exceeds the MDE by as much as 7.5 percentage points, Stats Engine is almost 75% faster than a test run with traditional methods. For larger experiments (>50,000 visitors), the gains are even larger,
where Stats Engine can determine a conclusive experiment up to 2.5 times as fast.
Another scenario that requires fewer samples in sequential experimentation is when the conversion rate of the control group is less than 10%. In this case, sequential testing can reduce the number of
observations required for a successful experiment by 50% or more.
Want to see if sequential testing can fail? Don’t give it a lift
Sequential testing is faster when there are tiny effects to find and when there are giant uplifts to find. If there's no effect, then it's going to take a sequential test longer to conclude (reach
statistical significance).
So, there are times when you can choose a traditional statistical test over a sequential design. First, recall that Type 1 error means you conclude there is a surprising difference between the test
variant and the control/baseline when there is no real difference between the two.
Doing statistics old school means you get Type-1 error guarantees at a single point in time. What that means is you get a specific sample size. That's literally it. That's the only prize you win.
Now, that's super useful for clinical trials. Ask yourself: Are we designing orphan drugs for a Phase 1 clinical trial that needs to meet hyper-stringent mandates by the FDA? Last I checked we are
not in that business. We have different, but no less complex, scientific rigors to contend with.
Sequential tests offer you Type-1 protections for the entire length of the experiment for any amount of traffic. Doesn't that sound a lot more flexible?
Then, what in the world is so great about the "sequential" aspect of any sequential method?
Size of sample
• A fixed sample size is difficult to adhere to in practice. Resources and schedules rapidly change and shift the analysis timeline.
• Sample size calculations are often completely unmanageable for complex models and depend on many unknown parameters (Noordzij et al, 2010)
• A workaround is to wait a week or more. But then the analyst doesn't know the expected power of the test or what the confidence interval should be. The analyst could keep waiting longer, but then
their chances of making a Type I error (i.e., crying wolf) inflate like a hot air balloon (Armitage et al, 1969, 1991, 1993).
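The peeking problem described in the bullets above can be demonstrated with a short simulation. This is an A/A test, so the null hypothesis is true by construction; the sample sizes, number of peeks, and simulation count below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
alpha, z_crit = 0.05, 1.96           # two-sided fixed-horizon threshold
n_sims, n_max, peeks = 2000, 1000, 10

false_positives = 0
for _ in range(n_sims):
    # A/A test: both arms draw from the same N(0, 1), so H0 is true
    a = rng.normal(size=n_max)
    b = rng.normal(size=n_max)
    for n in np.linspace(n_max // peeks, n_max, peeks, dtype=int):
        z = (a[:n].mean() - b[:n].mean()) / np.sqrt(2.0 / n)
        if abs(z) > z_crit:          # declared "significant" at this peek
            false_positives += 1
            break

fpr = false_positives / n_sims
print(f"false positive rate with {peeks} peeks: {fpr:.3f}")
```

The observed rate lands well above the nominal 5%, which is exactly the inflation the bullets warn about.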
Size of effect
• If the effect size of the test variation is large, then we can detect it with less data than we initially thought necessary. (Bojinov & Gupta, 2022)
• At the same time, companies want the ability to precisely estimate the effect even if it is relatively small.
• It is impossible to satisfy both objectives with traditional, fixed sample-size statistical tests.
• If the sample size is small, then the experiment identifies large negative effects early but is underpowered to identify small, interesting distinctions between the test variation and the control.
• If the sample size is large so that the experiment can detect small effects, then there is a high risk of exposure to large negative effects for a dangerous length of time where the user
experience deteriorates beyond repair.
The Burger King factor -- Have it your way
• Conveniently, the experimenter doesn't have to commit to a fixed sample size ahead of time. Instead, the experimenter can collect data until they are satisfied
• Sequential tests let you monitor your experiments AND stop the experiment when the uncertainty around the projected performance of the test variations stabilizes (for example, the confidence
interval narrows, and interpretation is easier).
The Taco Bell factor -- Live más
• Sequential tests allow continuous monitoring. With continuous monitoring, you can manage your experiments automatically and algorithmically. A/B tests are used as quality control gatekeepers for
controlled rollouts of new features and changes. This allows for experimentation to be scaled as part of your culture of experimentation.
What makes Optimizely's sequential method interesting?
Stats Engine deploys a novel algorithm called the mixture sequential probability ratio test (mSPRT).
It compares after every visitor how much more indicative the data is of any improvement / non-zero improvement, compared to zero / no improvement at all. This is the relative plausibility of the
variation(s), compared to the baseline.
The mSPRT is a special type of statistical test that improves upon the sequential probability ratio test (SPRT), originally developed by Abraham Wald in the 1940s and studied in depth by theoretical
statistician David Siegmund at Stanford in his 1985 book Sequential Analysis. The original SPRT was designed to test exact, specific values of the lift from a single variation in comparison to a single
control, by comparing the likelihood that there is a non-zero improvement in performance from the variation versus zero improvement in performance from the baseline.
Specifically, Optimizely's mSPRT algorithm averages the ordinary SPRT across a range of all possible improvements (for example, alternative lift values).
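This is not Optimizely's actual implementation, but a textbook mSPRT for normal data with known variance and a normal mixing distribution gives the flavour; the variance sigma2, mixing parameter tau2, and data below are assumptions for illustration. The test statistic is the mixture likelihood ratio, and the test concludes once it exceeds 1/alpha.

```python
import numpy as np

def msprt_two_sample(x, y, sigma2=1.0, tau2=1.0, alpha=0.05):
    """Mixture SPRT for H0: no difference in means, assuming normal
    data with known variance sigma2 and a N(0, tau2) mixing density.
    Returns the first sample count at which the test concludes,
    or None if it never does."""
    n = np.arange(1, len(x) + 1)
    diff = np.cumsum(y - x) / n  # running mean difference
    # log of the mixture likelihood ratio (avoids overflow in exp)
    log_lam = 0.5 * np.log(2 * sigma2 / (2 * sigma2 + n * tau2)) + (
        n**2 * tau2 * diff**2 / (4 * sigma2 * (2 * sigma2 + n * tau2))
    )
    hits = np.nonzero(log_lam >= np.log(1 / alpha))[0]
    return int(hits[0]) + 1 if hits.size else None

rng = np.random.default_rng(3)
x = rng.normal(0.0, 1.0, size=5000)  # control
y = rng.normal(0.5, 1.0, size=5000)  # variation with a genuine lift
stop_at = msprt_two_sample(x, y)
print(f"concluded after {stop_at} paired observations")
```

Because a real lift is present, the mixture likelihood ratio crosses the 1/alpha boundary long before the full 5,000 samples are used.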
Optimizely’s stats engine also employs a flavor of the Empirical Bayesian technique. It blends the best of frequentist and Bayesian methods while maintaining the always valid guarantee for continuous
monitoring of experiment results.
Stats Engine takes more evidence to produce a significant result, which allows experimenters to peek as many times as they desire over the life of an experiment. Stats Engine also controls your
false-positive rates at all times regardless of when or how often you peek, and further adjusts for situations where your experiment has multiple comparisons (i.e., multiple metrics and variations).
Controlling the False Discovery Rate offers a way to increase power while maintaining a principled bound on error. Said another way, the False Discovery Rate is the chance of crying wolf over an
innocuous finding. Therefore, the Stats Engine permits continuous monitoring of results with always valid outcomes by controlling the false positive rate at all times regardless of when or how often
the experimenter peeks at the results.
When choosing between a Bayesian test and a sequential test, it is important to consider the specific needs of your situation.
Bayesian a/b methods are well-suited for situations where you have prior information about the parameters of interest and want to update your beliefs about those parameters in light of new data.
Stopping a Bayesian test early means you’ll likely accept a null or negative result.
Sequential testing can help you evaluate the consistency and dominance of a variation's performance over the other (or lack thereof).
Where else can you read more about the different testing methods? Well, here are some relevant posts to get you started:
A Step By Step Guide For The Student To Calculate Standard Form – Get Education
A Step By Step Guide For The Student To Calculate Standard Form
We all encounter the term standard form repeatedly, from the start of academic life through higher secondary education and on to the graduate level. Arithmetic is one of the most difficult and intimidating
subjects taught every year in schools and colleges.
Whether you are a student of physics, chemistry, meteorology, geology, statistics, or any other branch of science, you will at some point rely extensively on statistical
instruments and work with either very large or very small numbers. Standard form is the easiest and quickest method for writing very large or very small numbers.
Standard form is also known as scientific notation; "standard form" is the usual British term.
It is often necessary to convert large numbers into scientific notation, but first you need to know what scientific notation is, why this form is used, and how to use it.
What is the standard form?
The basic standard form system refers to representing numbers as a multiple of a power of ten. If you are concerned with a very small number (such as 0.000025), it can be represented as a power of 10 with a negative exponent. If you are concerned with a very large number (such as 4000000), it will be expressed with a positive power of 10.
In the power of ten, the number 0.000025 will be written as 2.5 x 10^-5
In the power of ten, the number 4000000 will be written as 4 x 10 ^6
Small decimal numbers have minimal values; hence the power of ten is negative when you convert them into standard form. Likewise, the power of ten will be positive if you convert a big number to
standard form (scientific notation). For learners, a basic standard form converter or calculator is very helpful for converting any number into standard form.
Standard form calculator
A standard form calculator is a computational tool used when numbers are written in scientific notation; it can help in evaluating arithmetic computations quickly. It can be
used by people who do not have a mathematical background, which alleviates needless stress.
To use this software, you input the digits in standard form and pick the operation to be performed; the calculator then produces the correct results. It is an alternative that spares you the trouble of noting down the solution steps before arriving at the answer.
Choosing the appropriate calculator is important
Without a comprehensive understanding of the principles of standard form, there is no certainty that a hand-computed result will be accurate. Using a standard form calculator can help you
understand and solve math or science problems quickly.
Care should be taken when choosing a scientific or standard form calculator to suit your requirements, as the market is crowded with options. Many calculators offer the same features,
though their displays vary. Prices, in general, are also fair.
But now, in the digital technological world, we rely too heavily on the internet, even for our small work. The best way to find a standard form calculator, therefore, is a great online choice.
Through offering a wide range of resources that are useful in struggling with everyday life errands, there are a variety of platforms that provide too many tools. The standard form calculator method
is also one of them that, without any doubt, converts the numbers into a standardized form.
Many online websites have too many calculator resources and one of them is a standard form calculator that can solve the standard form conversion mathematical operations.
Online Calculators
The standard form calculator is a simple-to-use calculator offered by many websites that gives instant results when converting a number in linear form, or any other number, to scientific or standard notation. This calculator is ideal for university students as well as teachers who want to avoid the tedious way of solving problems. Students can save a lot of time by using this online standard form calculator.
How to use it?
These websites provide a standard form calculator that evaluates the standard form of any number. You can convert any large or small number into standard form using this calculator in no time. It is pretty simple to use, and by following the simple steps given below, you can get the results immediately.
1) Put the number in the provided input field.
2) To get the results, click the Convert button.
3) You can reset the values using the Reset key to perform further conversions.
Is it easier to use this conversion to evaluate the standard form?
With this digital calculator resource, it is no wonder that users find the process much less stressful. They do not have to waste time writing out very large and very small numbers, adding values,
and then extracting the outcomes. Without a good statistical or mathematical grounding, it would be hard to carry out any of these steps by hand. This is a trustworthy tool with a
simple layout.
Determining mathematical questions in standard form
It may not be that complicated to show a number in standard form when mathematical operations have to be carried out. Keep in mind that the following numbers must be multiplied after converting them
to the standard form.
Consider this multiplication:
3000 x 6500
Converting 3000 into standard form gives = 3 x 10^3
Converting 6500 into standard form gives = 6.5 x 10^3
Now, to get the result, we multiply both of the standard values: 3 x 6.5 x 10^3 x 10^3 = 19.5 x 10^6 = 1.95 x 10^7
Some more quick calculations for standard form
80000 in standard form = 8 x 10^4
0.00004 in standard form = 4 x 10^-5
90000 in standard form = 9 x 10^4
If the question is how do you switch 0.000082 in standard form? The answer is: 8.2 x 10 ^-5
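The conversions above can be reproduced with a few lines of code; this is a minimal sketch, not a full-featured converter:

```python
import math

def to_standard_form(x):
    """Return (mantissa, exponent) with 1 <= |mantissa| < 10,
    so that x == mantissa * 10**exponent."""
    if x == 0:
        return 0.0, 0
    exponent = math.floor(math.log10(abs(x)))
    mantissa = round(x / 10**exponent, 10)  # round away float noise
    return mantissa, exponent

for value in (4000000, 0.000025, 6500, 0.000082):
    m, e = to_standard_form(value)
    print(f"{value} = {m} x 10^{e}")
```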
Popular Machine Learning Code Snippets (Demo)
Machine learning is changing the way we live and work. It is giving people new ways to analyze and develop solutions. But machine learning algorithms can be complex, and they often require a lot of
data and computing power. That’s why it’s important to understand how machine learning algorithms work so that they can be implemented correctly in your projects. Here are some helpful code snippets:
Training a Simple Neural Network
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
• define the neural network:
N = 10 # number of inputs (neurons) in layer 1
K = 3 # number of hidden layers; K=2 would give a standard neural network, but we're adding an extra one here because it's fun! We'll also be using ReLU activation functions instead of tanh by default.
Since we don't have multiple target variables (outputs), we only need one output layer with one neuron in it. Finally, we'll use L2 regularization during training because our data is noisy and L1 won't
work well enough on its own; this helps keep our parameters from drifting off toward infinity (which could happen if no regularization were applied). We don't need to worry about batch normalization or
dropout here; those techniques matter most for deeper or recurrent networks such as LSTMs or GRUs, and they aren't needed for this model either way, since this isn't a recurrent network; it's just an ordinary
feedforward NN trained using gradient descent via backpropagation.
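A minimal end-to-end version of such a feedforward network (one hidden ReLU layer, L2 regularization, trained by gradient descent with backpropagation) can be sketched as follows; the toy data set, layer sizes, and hyperparameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# invented toy regression data: 200 examples with N = 10 input features
X = rng.normal(size=(200, 10))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# one hidden layer of 16 ReLU units, one scalar output neuron
W1, b1 = rng.normal(scale=0.3, size=(10, 16)), np.zeros(16)
W2, b2 = rng.normal(scale=0.3, size=(16, 1)), np.zeros(1)
lr, lam = 0.05, 1e-4  # learning rate, L2 regularization strength

for step in range(1000):
    # forward pass
    z1 = X @ W1 + b1
    a1 = np.maximum(z1, 0)               # ReLU activation
    pred = (a1 @ W2 + b2).ravel()
    err = pred - y
    loss = np.mean(err**2) + lam * (np.sum(W1**2) + np.sum(W2**2))
    # backpropagation
    d_pred = (2 * err / len(y))[:, None]
    dW2 = a1.T @ d_pred + 2 * lam * W2
    db2 = d_pred.sum(axis=0)
    d_z1 = (d_pred @ W2.T) * (z1 > 0)    # ReLU gradient mask
    dW1 = X.T @ d_z1 + 2 * lam * W1
    db1 = d_z1.sum(axis=0)
    # gradient descent updates
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final MSE: {np.mean(err**2):.3f}")
```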
Introduction to NLTK
NLTK is a toolkit for building Python programs to work with human language data. It provides easy-to-use interfaces to over 50 corpora and lexical resources such as WordNet, along with a suite of
text processing libraries for classification, tokenization, stemming and tagging.
NLTK also includes modules for statistical natural language processing (NLP), including part-of-speech tagging; parsing; sentiment analysis; syntactic analysis and generation; entity recognition.
Introduction to Scikit-Learn
Scikit-Learn is a Python module for machine learning built on top of SciPy. It’s open source, BSD licensed and free to use.
Scikit-learn provides several supervised and unsupervised learning algorithms including classification, regression and clustering. These algorithms can be combined with each other or used separately
depending on your data set.
Example of Classification with Naive Bayes
• Naive Bayes is a simple and powerful machine learning algorithm that can be used for classification, regression and feature selection. It’s based on the Bayes theorem and has been used in text
classification problems such as spam filtering or sentiment analysis.
• In this example we will use it to classify documents into categories based on their content (books vs newspaper articles).
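A minimal sketch of such a classifier using scikit-learn; the four-document corpus below is invented purely for illustration:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# tiny made-up corpus: 0 = book blurb, 1 = newspaper article
texts = [
    "a sweeping novel of love and loss",
    "the author's debut fantasy trilogy concludes",
    "city council votes on new budget today",
    "stocks fall as markets react to report",
]
labels = [0, 0, 1, 1]

# bag-of-words counts feed the multinomial Naive Bayes model
vec = CountVectorizer()
X = vec.fit_transform(texts)
clf = MultinomialNB().fit(X, labels)

print(clf.predict(vec.transform(["markets react to the budget report"])))
```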
Example of Regression with Lasso Penalized Linear Regression
Lasso penalized linear regression is a technique that can be used to predict house prices. It involves fitting a model to the data using lasso regularization and then predicting new values for houses
based on their features. The following code snippet shows how this could be done:
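One possible version, sketched on synthetic data: since no real housing dataset is given here, we generate toy "house" features in which only the first two actually drive the price, and let the L1 penalty suppress the rest.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(42)

# Toy "house" features: size, bedrooms, plus two irrelevant columns
X = rng.normal(size=(200, 4))
y = 3.0 * X[:, 0] + 1.5 * X[:, 1] + 0.1 * rng.normal(size=200)  # synthetic prices

model = Lasso(alpha=0.1)   # L1-penalized linear regression
model.fit(X, y)

print(model.coef_)         # the lasso drives the irrelevant coefficients toward 0

# Predict the price of a new (synthetic) house from its features
new_house = np.array([[1.0, 0.5, -0.2, 0.3]])
print(model.predict(new_house))
```

Increasing `alpha` strengthens the penalty: more coefficients are set exactly to zero, at the cost of extra shrinkage on the genuine ones.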
Cross-Entropy Optimization for Logistic Regression
In this section, we’ll walk through the code for a neural network that uses cross-entropy as its loss function. Cross-entropy is a popular choice among machine learning practitioners because it’s
easy to understand and implement.
To start, let’s look at how we’d actually use this function in practice:
from sklearn import datasets, linear_model

# Load the digits data into memory: X has one row per example, y the target values
X, y = datasets.load_digits(return_X_y=True)

# Create a Logistic Regression classifier (its training loss is the cross-entropy)
clf = linear_model.LogisticRegression(C=0.001, max_iter=1000)
clf.fit(X, y)  # train; passing y tells the estimator which values are the targets
LSTM for Language Modeling in TensorFlow
In this section, you’ll learn how to implement LSTM for language modeling in TensorFlow.
LSTM is a type of Recurrent Neural Network (RNN) and can be used for many tasks such as time series analysis, sequence classification and labeling.
Machine learning is giving people new ways to analyze and develop solutions.
Machine learning is a subset of artificial intelligence (AI), which is a field that focuses on developing systems that can perform tasks that require human intelligence.
Machine learning has many applications, including computer vision, natural language processing and speech recognition. Machine learning can also be used for automated decision making or prediction
based on historical data analysis.
Machine learning problems are often difficult to solve with traditional programming because they require high-level abstractions from raw data before they can be analyzed by algorithms or statistical models.
It’s clear that machine learning is a powerful tool, and it’s only going to get more useful as we continue to explore its potential. The snippets in this article are just the beginning of what’s
possible with these tools–I hope they inspire you to try out some new techniques on your own data!
|
{"url":"https://cornelldolbin.my.id/popular-machine-learning-code-snippets-demo.html","timestamp":"2024-11-07T03:56:10Z","content_type":"text/html","content_length":"103864","record_id":"<urn:uuid:9ba14ef5-0c8a-4ceb-b05b-7a8a684f2a33>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00315.warc.gz"}
|
Math Contest Repository
Euclid 2024 Question 3, CEMC UWaterloo
(Euclid 2024, Question 3, CEMC - UWaterloo)
(a) The graph of the equation $y=r(x-3)(x-r)$ intersects the $y$-axis at $(0, 48)$. What are the two possible values of $r$?
(b) A bicycle costs $\$B$ before taxes. If the sales tax were $13\%$, Annemiek would pay a total that is $\$24$ higher than if the sales tax were $5\%$. What is the value of $B$?
(c) The function $f$ has the following three properties:
• $f(1)=3$.
• $f(2n)=(f(n))^2$ for all positive integers $n$.
• $f(2m+1)=3f(2m)$ for all positive integers $m$.
Determine the value of $f(2)+f(3)+f(4)$.
Answer Submission Note(s)
In part (a), sort your answers in ascending order and separate them with a comma.
Separate the answers for each part with a single space.
For example: "a1,a2 b c"
|
{"url":"https://mathcontestrepository.pythonanywhere.com/problem/euclid24q3/","timestamp":"2024-11-13T15:14:12Z","content_type":"text/html","content_length":"10892","record_id":"<urn:uuid:e6813a3e-c13e-47c4-b6c0-2dfb369f4dd0>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00762.warc.gz"}
|
ECCC - Anup Rao
All reports by Author Anup Rao:
TR24-111 | 1st July 2024
Siddharth Iyer, Anup Rao
An XOR Lemma for Deterministic Communication Complexity
We prove a lower bound on the communication complexity of computing the $n$-fold xor of an arbitrary function $f$, in terms of the communication complexity and rank of $f$. We prove that $D(f^{\oplus n}) \geq n \cdot \Big(\frac{\Omega(D(f))}{\log rk(f)} - \log rk(f)\Big)$, where here $D(f), D(f^{\oplus n})$ represent the ... more >>>
TR23-194 | 5th December 2023
Siddharth Iyer, Anup Rao
XOR Lemmas for Communication via Marginal Information
Revisions: 2
We define the marginal information of a communication protocol, and use it to prove XOR lemmas for communication complexity. We show that if every $C$-bit protocol has bounded advantage for computing
a Boolean function $f$, then every $\tilde \Omega(C \sqrt{n})$-bit protocol has advantage $\exp(-\Omega(n))$ for computing the $n$-fold xor $f^{\oplus ... more >>>
TR22-012 | 2nd February 2022
Anup Rao, Oscar Sprumont
On List Decoding Transitive Codes From Random Errors
We study the error resilience of transitive linear codes over $F_2$. We give tight bounds on the weight distribution of every such code $C$, and we show how these bounds can be used to infer bounds
on the error rates that $C$ can tolerate on the binary symmetric channel. Using ... more >>>
TR22-005 | 11th January 2022
Anup Rao
Sunflowers: from soil to oil
Revisions: 3
A \emph{sunflower} is a collection of sets whose pairwise intersections are identical. In this article, we shall go sunflower-picking. We find sunflowers in several seemingly unrelated fields, before
turning to discuss recent progress on the famous sunflower conjecture of Erd\H{o}s and Rado, made by Alweiss, Lovett, Wu and Zhang.
more >>>
TR21-102 | 13th July 2021
Siddharth Iyer, Anup Rao, Victor Reis, Thomas Rothvoss, Amir Yehudayoff
Tight bounds on the Fourier growth of bounded functions on the hypercube
Revisions: 1
We give tight bounds on the degree $\ell$ homogeneous parts $f_\ell$ of a bounded function $f$ on the cube. We show that if $f: \{\pm 1\}^n \rightarrow [-1,1]$ has degree $d$, then $\| f_\ell \|_\infty$ is bounded by $d^\ell/\ell!$, and $\| \hat{f}_\ell \|_1$ is bounded by $d^\ell e^{{\ell+1 \choose 2}} ... more >>>
TR20-006 | 22nd January 2020
Anup Rao, Amir Yehudayoff
The Communication Complexity of the Exact Gap-Hamming Problem
We prove a sharp lower bound on the distributional communication complexity of the exact gap-hamming problem.
more >>>
TR17-174 | 13th November 2017
Christian Engels, Mohit Garg, Kazuhisa Makino, Anup Rao
On Expressing Majority as a Majority of Majorities
If $k<n$, can one express the majority of $n$ bits as the majority of at most $k$ majorities, each of at most $k$ bits? We prove that such an expression is possible only if $k = \Omega(n^{4/5})$.
This improves on a bound proved by Kulikov and Podolskii, who showed that ... more >>>
TR17-040 | 4th March 2017
Sivaramakrishnan Natarajan Ramamoorthy, Anup Rao
Non-Adaptive Data Structure Lower Bounds for Median and Predecessor Search from Sunflowers
Revisions: 2
We prove new cell-probe lower bounds for data structures that maintain a subset of $\{1,2,...,n\}$, and compute the median of the set. The data structure is said to handle insertions non-adaptively
if the locations of memory accessed depend only on the element being inserted, and not on the contents of ... more >>>
TR16-167 | 1st November 2016
Sivaramakrishnan Natarajan Ramamoorthy, Anup Rao
New Randomized Data Structure Lower Bounds for Dynamic Graph Connectivity
Revisions: 1
The problem of dynamic connectivity in graphs has been extensively studied in the cell probe model. The task is to design a data structure that supports addition of edges and checks connectivity
between arbitrary pair of vertices. Let $w, t_q, t_u$ denote the cell width, expected query time and worst ... more >>>
TR15-057 | 13th April 2015
Anup Rao, Makrand Sinha
Simplified Separation of Information and Communication
Revisions: 3
We give an example of a boolean function whose information complexity is exponentially
smaller than its communication complexity. Our result simplifies recent work of Ganor, Kol and
Raz (FOCS'14, STOC'15).
more >>>
TR15-055 | 13th April 2015
Sivaramakrishnan Natarajan Ramamoorthy, Anup Rao
How to Compress Asymmetric Communication
We study the relationship between communication and information in 2-party communication protocols when the information is asymmetric. If $I^A$ denotes the number of bits of information revealed by
the first party, $I^B$ denotes the information revealed by the second party, and $C$ is the number of bits of communication in ... more >>>
TR15-039 | 16th March 2015
Anup Rao, Makrand Sinha
On Parallelizing Streaming Algorithms
We study the complexity of parallelizing streaming algorithms (or equivalently, branching programs). If $M(f)$ denotes the minimum average memory required to compute a function $f(x_1,x_2, \dots,
x_n)$ how much memory is required to compute $f$ on $k$ independent streams that arrive in parallel? We show that when the inputs (updates) ... more >>>
TR14-060 | 21st April 2014
Anup Rao, Amir Yehudayoff
Simplified Lower Bounds on the Multiparty Communication Complexity of Disjointness
Revisions: 1
We show that the deterministic multiparty communication complexity of set disjointness for $k$ parties on a universe of size $n$ is $\Omega(n/4^k)$. We also simplify Sherstov's proof
showing an $\Omega(\sqrt{n}/(k2^k))$ lower bound for the randomized communication complexity of set disjointness.
more >>>
TR14-020 | 18th February 2014
Pavel Hrubes, Anup Rao
Circuits with Medium Fan-In
Revisions: 1
We consider boolean circuits in which every gate may compute an arbitrary boolean function of $k$ other gates, for a parameter $k$. We give an explicit function $f:\bits^n \rightarrow \bits$ that
requires at least $\Omega(\log^2 n)$ non-input gates when $k = 2n/3$. When the circuit is restricted to being depth ... more >>>
TR13-035 | 6th March 2013
Mark Braverman, Anup Rao, Omri Weinstein, Amir Yehudayoff
Direct product via round-preserving compression
Revisions: 1
We obtain a strong direct product theorem for two-party bounded round communication complexity.
Let suc_r(\mu,f,C) denote the maximum success probability of an r-round communication protocol that uses
at most C bits of communication in computing f(x,y) when (x,y)~\mu.
Jain et al. [JPY12] have recently showed that if
more >>>
TR12-143 | 5th November 2012
Mark Braverman, Anup Rao, Omri Weinstein, Amir Yehudayoff
Direct Products in Communication Complexity
Revisions: 2
We give exponentially small upper bounds on the success probability for computing the direct product of any function over any distribution using a communication protocol. Let suc(\mu,f,C) denote the
maximum success probability of a 2-party communication protocol for computing f(x,y) with C bits of communication, when the inputs (x,y) are ... more >>>
TR11-160 | 1st December 2011
Zeev Dvir, Anup Rao, Avi Wigderson, Amir Yehudayoff
Restriction Access
We introduce a notion of non-black-box access to computational devices (such as circuits, formulas, decision trees, and so forth) that we call \emph{restriction access}. Restrictions are partial
assignments to input variables. Each restriction simplifies the device, and yields a new device for the restricted function on the unassigned variables. On ... more >>>
TR10-166 | 5th November 2010
Mark Braverman, Anup Rao
Towards Coding for Maximum Errors in Interactive Communication
We show that it is possible to encode any communication protocol
between two parties so that the protocol succeeds even if a $(1/4 -
\epsilon)$ fraction of all symbols transmitted by the parties are
corrupted adversarially, at a cost of increasing the communication in
the protocol by a constant factor ... more >>>
TR10-083 | 13th May 2010
Mark Braverman, Anup Rao
Efficient Communication Using Partial Information
Revisions: 1
We show how to efficiently simulate the sending of a message M to a receiver who has partial information about the message, so that the expected number of bits communicated in the simulation is close
to the amount of additional information that the message reveals to the receiver.
We ... more >>>
TR10-035 | 7th March 2010
Mark Braverman, Anup Rao, Ran Raz, Amir Yehudayoff
Pseudorandom Generators for Regular Branching Programs
We give new pseudorandom generators for \emph{regular} read-once branching programs of small width.
A branching program is regular if the in-degree of every vertex in it is (0 or) $2$.
For every width $d$ and length $n$,
our pseudorandom generator uses a seed of length $O((\log d + \log\log n ... more >>>
TR09-044 | 6th May 2009
Boaz Barak, Mark Braverman, Xi Chen, Anup Rao
Direct Sums in Randomized Communication Complexity
Does computing n copies of a function require n times the computational effort? In this work, we
give the first non-trivial answer to this question for the model of randomized communication complexity.
We show that:
1. Computing n copies of a function requires sqrt{n} times the ... more >>>
TR08-015 | 23rd January 2008
Anup Rao
Extractors for Low-Weight Affine Sources
We give polynomial time computable extractors for low-weight affine sources. A distribution is affine if it samples a random point from some unknown low dimensional subspace of F^n_2 . A distribution
is low weight affine if the corresponding linear space has a basis of low-weight vectors. Low-weight ane sources are ... more >>>
TR08-013 | 16th January 2008
Anup Rao
Parallel Repetition in Projection Games and a Concentration Bound
In a two player game, a referee asks two cooperating players (who are
not allowed to communicate) questions sampled from some distribution
and decides whether they win or not based on some predicate of the
questions and their answers. The parallel repetition of the game is
the game in which ... more >>>
TR07-034 | 29th March 2007
Anup Rao
An Exposition of Bourgain's 2-Source Extractor
A construction of Bourgain gave the first 2-source
extractor to break the min-entropy rate 1/2 barrier. In this note,
we write an exposition of his result, giving a high level way to view
his extractor construction.
We also include a proof of a generalization of Vazirani's XOR lemma
that seems ... more >>>
TR05-106 | 26th September 2005
Anup Rao
Extractors for a Constant Number of Polynomial Min-Entropy Independent Sources
Revisions: 1
We consider the problem of bit extraction from independent sources. We
construct an extractor that can extract from a constant number of
independent sources of length $n$, each of which have min-entropy
$n^\gamma$ for an arbitrarily small constant $\gamma > 0$. Our
constructions are different from recent extractor constructions
more >>>
|
{"url":"https://eccc.weizmann.ac.il/author/159/","timestamp":"2024-11-12T01:14:42Z","content_type":"application/xhtml+xml","content_length":"38433","record_id":"<urn:uuid:151abd5f-f6f3-490f-8393-feba8ee90665>","cc-path":"CC-MAIN-2024-46/segments/1730477028240.82/warc/CC-MAIN-20241111222353-20241112012353-00415.warc.gz"}
|
Module Specifications.
Current Academic Year 2024 - 2025
All Module information is indicative, and this portal is an interim interface pending the full upgrade of Coursebuilder and subsequent integration to the new DCU Student Information System (DCU Key).
As such, this is a point in time view of data which will be refreshed periodically. Some fields/data may not yet be available pending the completion of the full Coursebuilder upgrade and integration
project. We will post status updates as they become available. Thank you for your patience and understanding.
Date posted: September 2024
Module Title
Module Code (ITS)
Faculty School
Semester 1: Nina Snigireva
Module Co-ordinator Semester 2: Nina Snigireva
Autumn: Nina Snigireva
Module Teachers Ronan Egan
Nina Snigireva
NFQ level 8 Credit Rating
Pre-requisite Not Available
Co-requisite Not Available
Compatibles Not Available
Incompatibles Not Available
This module will introduce students to the notions of vectors, matrices and linear maps in (finite dimensional) Euclidean Spaces. Infinite dimensional vector spaces, in the context of Fourier Series,
will be considered also. The module aims to give students a working knowledge of the methods and applications of linear algebra. Applications will be chosen with their significance to the students' disciplines in mind. Students will attend lectures on the course material and will work, independently, to solve problems on topics related to the course material. The students will have an
opportunity to review their solutions, with guidance, at weekly tutorials.
Learning Outcomes
1. solve systems of linear equations.
2. perform various operations with vectors and matrices, in particular, be able to calculate eigenspaces and apply such calculations to the diagonalization of matrices.
3. apply linear algebraic methods to geometric problems in 2 and 3 dimensions.
4. calculate trigonometric Fourier Series of elementary functions defined on finite intervals and sketch the periodic extensions of such functions
5. demonstrate an understanding of concepts by use of examples or counterexamples.
Workload Full-time hours per semester
Type Hours Description
Lecture 36 Students will attend lectures where new material will be presented and explained. Also attention will be drawn to various supporting material and tutorials as the course progresses.
Tutorial 12 Students will show their solutions to homework questions and will receive help with and feed-back on these solutions.
Independent Study 139.5 Corresponding to each lecture students will devote approximately two additional hours of independent study to the material discussed in that lecture, or to work on support material when attention is drawn to such in lectures. Before each tutorial students will devote approximately four hours to solving homework problems which are to be discussed in that tutorial.
Total Workload: 187.5
All module information is indicative and subject to change. For further information,students are advised to refer to the University's Marks and Standards and Programme Specific Regulations at: http:/
Indicative Content and Learning Activities
• Systems of Linear Equations: Introduction to Systems of Linear Equations, Gaussian Elimination, Consistent and Inconsistent Systems.
• Vectors: Vectors in the plane, Vectors in space, Applications to Geometry, n-component vectors, linear independence and bases, Gram-Schmidt Process.
• Linear transformations.
• Matrices: Matrices and Matrix Operations, Square Matrices, Determinants, Inverses, More Systems of Linear Equations.
• Eigenvectors: Eigenvalues, Eigenvectors and Diagonalization.
• Fourier Series: Function spaces, Orthogonal projections onto finite dimensional spaces. Calculation of trigonometric Fourier Series, Bessel's Inequality, Parseval's Identity.
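As a small numerical illustration of the Fourier Series topic above (not part of the module specification), the sine coefficients of f(x) = x on [-π, π] can be approximated with NumPy and checked against the closed form b_n = 2(−1)^(n+1)/n:

```python
import numpy as np

# f(x) = x on [-pi, pi] is odd, so its Fourier series has only sine terms:
# b_n = (1/pi) * integral_{-pi}^{pi} x sin(n x) dx = 2 (-1)^(n+1) / n
N = 200_000
dx = 2 * np.pi / N
x = -np.pi + (np.arange(N) + 0.5) * dx   # midpoint grid on [-pi, pi]

def b(n):
    # midpoint-rule approximation of the n-th sine coefficient
    return np.sum(x * np.sin(n * x)) * dx / np.pi

for n in range(1, 4):
    print(n, round(b(n), 4), 2 * (-1) ** (n + 1) / n)
```

The numerical and closed-form values agree to several decimal places, which is exactly the kind of sanity check Parseval's Identity also provides at the level of the whole series.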
Assessment Breakdown
Continuous Assessment % Examination Weight %
Course Work Breakdown
Type Description % of total Assessment Date
In Class Test n/a 20%
Indicative Reading List
• Howard Anton and Chris Rorres: 2000, Elementary Linear Algebra, (Applications Version)., 8, Wiley, 0471170526
• Howard Anton: 2005, Elementary Linear Algebra., 9, Wiley, 0471669601
• Noble, B. and Daniels, J.: 1988, Applied Linear Algebra, 3, Prentice Hall, 0130412600
Other Resources
|
{"url":"https://modspec.dcu.ie/registry/module_contents.php?function=2&subcode=MS200A","timestamp":"2024-11-01T22:16:47Z","content_type":"application/xhtml+xml","content_length":"42839","record_id":"<urn:uuid:c214ed50-6ff9-452d-bd4a-73265dc0050e>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00614.warc.gz"}
|
A260744 - OEIS
A juggling pattern is prime if the closed walk corresponding to the pattern in the juggling state graph is a cycle.
Esther Banaian, Steve Butler, Christopher Cox, Jeffrey Davis, Jacob Landgraf, Scarlitte Ponce,
Counting prime juggling patterns
, arXiv:1508.05296 [math.CO], 2015.
In siteswap notation, the prime juggling pattern(s) of length one is 2; of length two are 31 and 40; of length three are 330, 411, 420, 501, 600.
|
{"url":"https://oeis.org/A260744","timestamp":"2024-11-04T20:37:15Z","content_type":"text/html","content_length":"13134","record_id":"<urn:uuid:bf7c62b0-6023-4303-afa5-04e4654fd0a0>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00395.warc.gz"}
|
[Solved] Make a java class that determines the cou | SolutionInn
Make a java class that determines the count of times it takes for a number to reach 1 using the following logic.
Accepts a number from the user. Validate that number is in the range 100 to 999
if the number is outside this range, display an appropriate message and terminate the program.
If the number is valid:
• If the number is even, reset the number to number/2
• If the number is odd, reset the number to 3 * number + 1
Repeat as long as the number remains greater than 1. Keep a count of the number of iterations until the value of number is 1
Display the sentence
It takes <count> iterations for the number <number> to reach 1.
Ask the user for another number.
Execute the above until the user inputs -1. When the input is -1, terminate the program by displaying "Goodbye!"
Input: 123
It takes 46 iterations for the number 123 to reach 1.
Input: 378
It takes 107 iterations for the number 378 to reach 1.
Input: -1
Invalid number...Goodbye!
File to be submitted: .java file
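The counting logic itself is easy to sketch. Here is a hedged illustration in Python (the assignment requires Java, so treat this only as a reference for the algorithm; the helper name `iterations_to_one` is our own):

```python
def iterations_to_one(number: int) -> int:
    """Count how many iterations the even/odd rule takes to reach 1."""
    count = 0
    while number > 1:
        if number % 2 == 0:
            number //= 2             # even: halve
        else:
            number = 3 * number + 1  # odd: triple and add one
        count += 1
    return count

print(iterations_to_one(123))  # → 46, matching the sample output above
```

A Java version would wrap the same loop in a method and add the range check (100 to 999) and the -1 sentinel around it.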
|
{"url":"https://www.solutioninn.com/study-help/questions/make-a-java-class-that-determines-the-count-of-times-962715","timestamp":"2024-11-14T15:30:18Z","content_type":"text/html","content_length":"109942","record_id":"<urn:uuid:67d71a0f-97c2-4dec-be3c-d98a27696dbc>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00440.warc.gz"}
|
Foundations of Data Analysis
A short course at the Department of Mathematics
26 October 2018
30 November 2018
21 December 2018
Venue: Polo Ferrari 2 - Via Sommarive 9 - Room B103
Dates: Friday 26th October 2018 at 14:00-18:00
Friday 30th November 2018 at 14:00-18:00
Friday 21st December 2018 at 14:00-18:00
Speaker: Massimo Fornasier (Technische Universität München)
After successful completion of the module students are able to understand and apply the basic notions, concepts, and methods of computational linear algebra, convex optimization, differential
geometry for data analysis. They master in particular the use of the singular value decomposition and random matrices for low dimensional data representations. They know fundamentals of sparse
recovery problems, including compressed sensing, low rank matrix recovery, and dictionary learning algorithms. They understand the representation of data as clusters around manifolds in high
dimension and they know how to use methods for constructing local charts for the data.
1. Representations of data as matrices: Many data vectors form a matrix - Review of basic linear algebra - Linear dependence and concept of rank - Approximate linear dependence with varying degree
of approximation: Singular value decomposition /Principal Component Analysis - Redundancy of data representations -> orthonormal bases, frames and dictionaries - Fourier basis as singular vectors
of spatial shift - Fast Fourier Transform.
2. Linear dimension reduction: Johnson-Lindenstrauss (JL) Lemma - Review of basic probability, random matrices - Random Matrices satisfying JL with high probability - Fast JL embeddings - Sparsity,
low rank as structured signal models - Compressed sensing - Matrix completion and low rank matrix recovery - Optimization review - Dictionary Learning.
3. Non-linear dimension reduction: Manifolds as data models - Review of differential geometry – ISOMAP - Diffusion maps - Importance of Nearest neighbor search, use of JL.
4. Outlook: Data Analysis and Machine Learning.
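As a quick taste of topic 1, the following hedged NumPy sketch builds data lying near a 2-dimensional subspace and shows that the rank-2 SVD truncation (the Principal Component Analysis projection) reconstructs it almost exactly; the sizes and noise level are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 200 points that lie (almost) on a 2-D subspace of R^10
A = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 10))
A += 0.01 * rng.normal(size=A.shape)      # small perturbation off the subspace

# Singular value decomposition and best rank-2 approximation
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A2 = (U[:, :2] * s[:2]) @ Vt[:2]

rel_err = np.linalg.norm(A - A2) / np.linalg.norm(A)
print(rel_err)  # tiny, because the data is approximately rank 2
```

By the Eckart–Young theorem this truncation is the best rank-2 approximation in the Frobenius norm, which is why only two singular directions are needed to represent the data.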
Language: English
Credits: for students of the Dept. of Mathematics: 3CFU; for students of other Departments see the information box on your right
Admission: Course open to max 30 LM students.
Deadline: 15th October 2018
|
{"url":"https://webmagazine.unitn.it/evento/dmath/46925/foundations-of-data-analysis","timestamp":"2024-11-03T10:43:10Z","content_type":"text/html","content_length":"33670","record_id":"<urn:uuid:6d4650da-329e-41c5-8f89-4d48083c2ceb>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00486.warc.gz"}
|
Understanding the limiting process. - AP Calculus AB
All AP Calculus AB Resources
Example Questions
Example Question #1 : Understanding The Limiting Process.
Possible Answers:
y' = –csc(5x^3)cot(5x^3)(15x^2)
y' = –sec(5x^3)tan(5x^3)(15x^2)
y' = sec(5x^3)tan(5x^3)(15x^2)
Correct answer:
y' = sec(5x^3)tan(5x^3)(15x^2)
The derivative of the function y = sec(x) is sec(x)tan(x). First take the derivative of the outside of the function: y = sec(5x^3) gives y' = sec(5x^3)tan(5x^3). Then take the derivative of the inside of the function: 5x^3 becomes 15x^2. So your final answer is: y' = sec(5x^3)tan(5x^3)(15x^2)
Example Question #1 : Understanding The Limiting Process.
Find the slope of the tangent line to the graph of f at x = 9, given that f(x) = –x^2 + 5√(x)
Correct answer:
–18 + (5/6)
First find the derivative of the function.
f(x) = –x^2 + 5√(x)
f'(x) = –2x + 5(1/2)x^(–1/2)
Simplify the problem:
f'(x) = –2x + 5/(2x^(1/2))
Plug in 9:
f'(9) = –2(9) + 5/(2(9)^(1/2))
= –18 + 5/6
Example Question #2 : Understanding The Limiting Process.
Correct answer:
(–2)/(x – 1)^2
Rewrite problem.
(x + 1)/(x – 1)
Use quotient rule to solve this derivative.
((x – 1)(1) – (x + 1)(1))/(x – 1)^2
= ((x – 1) – (x + 1))/(x – 1)^2
= –2/(x – 1)^2
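This quotient-rule computation can be double-checked symbolically; a small sketch, assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')
f = (x + 1) / (x - 1)

# Differentiate and simplify; should match the worked answer -2/(x-1)^2
derivative = sp.simplify(sp.diff(f, x))
print(derivative)
```

The simplified result is -2/(x - 1)**2, agreeing with the hand computation above.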
Example Question #2 : Understanding The Limiting Process.
Correct answer:
Use the chain rule and the formula
Example Question #5 : Understanding The Limiting Process.
Correct answer:
The answer is
Example Question #3 : Understanding The Limiting Process.
Correct answer:
Using the power rule, multiply the coefficient by the power and subtract the power by 1.
Example Question #4 : Understanding The Limiting Process.
Correct answer:
Use the product rule:
Example Question #5 : Understanding The Limiting Process.
Correct answer:
Use the product rule to find the derivative of the function.
Example Question #6 : Understanding The Limiting Process.
Correct answer:
The derivative of any function of e to any exponent is equal to the function multiplied by the derivative of the exponent.
Example Question #8 : Understanding The Limiting Process.
Find the second derivative of
Correct answer:
Factoring out an x gives you
|
{"url":"https://www.varsitytutors.com/ap_calculus_ab-help/understanding-the-limiting-process","timestamp":"2024-11-05T00:17:39Z","content_type":"application/xhtml+xml","content_length":"167529","record_id":"<urn:uuid:6a201a30-9d77-4a6f-a3d7-692799856469>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.84/warc/CC-MAIN-20241104225856-20241105015856-00273.warc.gz"}
|
CAR - Data Science Wiki
Coarsening At Random :
Coarsening at random is a technique used in the field of computer science, particularly in the domain of graph algorithms. It involves simplifying a complex graph by merging together or “coarsening”
pairs of vertices in the graph. This can be useful in a number of scenarios, such as reducing the size of the graph to make it more manageable, or improving the efficiency of certain algorithms that are applied to the graph.
One example of coarsening at random is the “contraction algorithm” for finding minimum cuts in a graph. Given a graph with vertices and edges, the contraction algorithm proceeds as follows:
Pick a pair of vertices in the graph at random and merge them together, resulting in a new, coarsened graph with one fewer vertex.
Repeat this process until only two vertices remain in the graph.
The minimum cut of the original graph is then defined as the set of edges that were removed during the coarsening process.
This algorithm can be used to solve a wide range of problems, such as finding the minimum-cost flow in a network, or determining the optimal configuration of a circuit.
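The contraction steps above can be sketched in Python. This is a hedged, minimal illustration (the function name is our own, and a union-find structure tracks which vertices have been merged):

```python
import random

def contract_min_cut(edges, n, seed=0):
    """One run of the random-contraction algorithm.

    edges: list of (u, v) pairs over vertices 0..n-1 (graph assumed connected).
    Returns the size of the cut found by this run (always >= the true min cut).
    """
    rng = random.Random(seed)
    parent = list(range(n))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    remaining = n
    while remaining > 2:
        u, v = rng.choice(edges)
        ru, rv = find(u), find(v)
        if ru != rv:            # skip self-loops of the coarsened graph
            parent[ru] = rv     # contract: merge the two super-vertices
            remaining -= 1
    # edges still crossing the two remaining super-vertices form the cut
    return sum(1 for u, v in edges if find(u) != find(v))

# On a 4-cycle every run returns the true minimum cut, which is 2
cycle = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(min(contract_min_cut(cycle, 4, seed=s) for s in range(10)))  # → 2
```

A single run only finds the minimum cut with some probability, so in practice the algorithm is repeated many times and the smallest cut seen is kept.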
Another example of coarsening at random is the “multilevel algorithm” for graph partitioning. Given a graph with vertices and edges, the multilevel algorithm proceeds as follows:
Coarsen the graph by merging pairs of vertices at random until the graph has a small number of vertices (e.g. 10-20% of the original number of vertices).
Use a graph partitioning algorithm (such as Kernighan-Lin or Fiduccia-Mattheyses) to partition the coarsened graph into a number of clusters.
Uncoarsen the graph by repeatedly splitting each cluster in the coarsened graph into two sub-clusters, until the original graph is obtained.
Use the partitioning of the coarsened graph as a starting point for refining the partitioning of the original graph.
The multilevel algorithm can be used to solve a wide range of problems, such as partitioning a circuit for efficient layout, or dividing a computer network into clusters for improved communication.
In general, coarsening at random can be an effective technique for simplifying complex graphs and improving the efficiency of graph algorithms. It allows for a fast, approximate solution to be
obtained, which can then be refined using more computationally intensive methods.
|
{"url":"https://datasciencewiki.net/car/","timestamp":"2024-11-13T14:45:28Z","content_type":"text/html","content_length":"40820","record_id":"<urn:uuid:c0ee29d6-df34-472a-b92f-9ef46b20210f>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00821.warc.gz"}
|
triple sequence
Articles containing keyword "triple sequence":
JCA-17-08 » Wijsman lacunary invariant statistical convergence for triple sequences via Orlicz function (04/2021)
JCA-19-10 » Ideal statistically limit points and ideal statistically cluster points of triple sequences of fuzzy numbers (04/2022)
JCA-21-04 » Generalized statistical relative uniform ϕ̃-convergence of triple sequences of functions (01/2023)
Articles containing keyword "triple sequences":
JCA-13-02 » On triple sequence of Bernstein operator of weighted rough I[λ]-convergence (07/2018)
JCA-14-10 » On generalized geometric difference of six dimensional rough ideal convergent of triple sequence defined by Musielak-Orlicz function (04/2019)
JCA-24-05 » Triple sequences and deferred statistical convergence in the context of gradual normed linear spaces (01/2024)
|
{"url":"https://search.ele-math.com/keywords/triple-sequence","timestamp":"2024-11-14T00:42:36Z","content_type":"application/xhtml+xml","content_length":"7789","record_id":"<urn:uuid:8c2cd1d8-c665-4b78-858c-68b385d1d7fe>","cc-path":"CC-MAIN-2024-46/segments/1730477028516.72/warc/CC-MAIN-20241113235151-20241114025151-00150.warc.gz"}
|
asterah is a spanish grinding ore system
The simulations are carried out in an industrialscale SAG mill, which includes a case with polyhedral ore particles and spherical grinding media (PHSP grinding system), and a case with spherical ore
particles and spherical grinding media (SP grinding system). The grinding media are steel balls with a constant diameter of 125 mm.
|
{"url":"https://pcexpertus.com.pl/asterah-is-a-spanish-grinding-ore-system.html","timestamp":"2024-11-14T04:24:51Z","content_type":"text/html","content_length":"43598","record_id":"<urn:uuid:87bd125c-bb19-45ac-975a-382c2a5cdaae>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00378.warc.gz"}
|
Section: New Results
Numerical analysis
Participants : Martin Campos Pinto, Nicolas Crouseilles, Michel Mehrenberger, Eric Sonnendrücker.
Analysis of numerical methods for the Vlasov-Poisson system
In [47], we derive the order conditions for fourth order time splitting schemes in the case of the $1D$ Vlasov-Poisson system. Computations to obtain such conditions are motivated by the specific Poisson structure of the Vlasov-Poisson system: this structure is similar to that of Runge-Kutta-Nyström systems. The obtained conditions are proved to be the same as the RKN conditions derived for ODEs up to the fourth order. Numerical tests are performed and show the benefit of using high order splitting schemes in that context.
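The cited work derives conditions for fourth order splittings of the Vlasov-Poisson system itself; as a generic, self-contained illustration of the underlying idea, the sketch below applies second order (Strang) splitting to a linear system $\dot y=(A+B)y$, with two non-commuting nilpotent matrices chosen purely so that the sub-flows and the exact flow have closed forms:

```python
import numpy as np
from math import cosh, sinh

# Two non-commuting nilpotent operators: exp(A*t) = I + A*t exactly.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [1.0, 0.0]])
I = np.eye(2)

def exact(t, y0):
    # (A+B)**2 = I, so exp((A+B)*t) = cosh(t)*I + sinh(t)*(A+B)
    return (cosh(t) * I + sinh(t) * (A + B)) @ y0

def strang(t, y0, n_steps):
    """Strang splitting e^{A dt/2} e^{B dt} e^{A dt/2}: second order accurate."""
    dt = t / n_steps
    step = (I + A * dt / 2) @ (I + B * dt) @ (I + A * dt / 2)
    y = y0.copy()
    for _ in range(n_steps):
        y = step @ y
    return y

y0 = np.array([1.0, 0.0])
err = lambda n: abs(strang(1.0, y0, n) - exact(1.0, y0)).max()
```

Halving the step divides the error by roughly four, the signature of second order accuracy; the fourth order schemes of [47] refine the same composition idea with additional sub-steps and coefficients.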
In [37], we prove enhanced error estimates for high order semi-Lagrangian discretizations of the Vlasov-Poisson equation. It provides new insights into optimal numerical strategies for the numerical solution of this problem. The new error estimate $O\left(\min\left(\frac{\Delta x}{\Delta t},1\right)\Delta x^{p}+\Delta t^{2}\right)$ is based on advanced error estimates for semi-Lagrangian schemes, also equal to shifted Strang schemes, for the discretization of the advection equation.
Analysis of a new particle method with deformable shapes
Particle methods are known to be simple and efficient in most practical cases, however they suffer from weak convergence properties: they only converge in a strong sense when the particles present an
extended overlapping (i.e., when the number of overlapping particles tends to infinity as the mesh size $h$ of their initialization grid tends to 0), and additional constraints such as vanishing
moments. In practice, extended particle overlapping can be expensive and it involves an additional parameter to be optimized, such as the overlapping exponent $q<1$ for which the particles' radius behaves like ${h}^{q}$. In PIC codes for instance, extended overlapping requires increasing the number of particles per cell together with the number of cells, which determine the radius of the
particles. In many practical cases such conditions are not met, which leads to strong oscillations in the solutions. To smooth out the oscillations some methods (like the Denavit redeposition scheme,
recently revisited as a Forward semi-Lagrangian scheme) use periodic remappings, but frequent remappings introduce unwanted numerical diffusion which seems to contradict the benefit of using
low-diffusion particle schemes. Moreover, the vanishing moment condition prevents high orders to be achieved with positive particles.
In [44] we present a new class of particle methods with deformable shapes for transport problems that converge in the supremum norm without requiring remappings, extended overlapping or vanishing moments for the particles. Indeed, unlike the classical error analysis based on a smoothing kernel argument, our estimates hold for any particle collection with Lipschitz smoothness and compact supports that have the same scale as their initialization grid. Our results are threefold. On the theoretical side we first show that for arbitrarily smooth characteristic flows, high order
convergence rates are obtained by deforming the particles with local polynomial mappings. On the practical side we provide an explicit implementation of the first order case: the resulting
linearly-transformed particle (LTP) scheme consists of transporting the particle centers along the numerical flow, together with finite difference approximations of the local Jacobian matrices of the
flow. For the fully discrete scheme we establish rigorous a priori error estimates and demonstrate the uniform boundedness of the particle overlapping. Finally, we describe an adaptive multilevel
version of the LTP scheme that includes a local correction filter for positivity-preserving approximations.
In [45] we apply the LTP method to the 1+1d Vlasov-Poisson problem with a simple deposition scheme and show that deforming the particles helps removing the noise traditionally observed with standard
PIC schemes.
Two-Scale Asymptotic-Preserving issues
In the submitted paper [48], we build a Two-Scale Macro-Micro decomposition of the Vlasov equation with a strong magnetic field. This consists in writing the solution of this equation as a sum of two oscillating functions with circumscribed oscillations. The first of these functions has a shape close to that of the Two-Scale limit of the solution, and the second one is a correction built to offset this imposed shape. The aim of such a decomposition is to serve as the starting point for the construction of Two-Scale Asymptotic-Preserving Schemes.
The aim of using Two-Scale Asymptotic-Preserving Schemes is first, to deal efficiently with long time scales with solutions having high frequency oscillations and second, to manage the transition
between different regimes, in a unified framework.
The aim of a new starting project is to test the Two-Scale Asymptotic-Preserving Schemes on a simplified model. The model, a Vlasov-Poisson equation with a small parameter in a two-dimensional phase space, is used for the long time simulation of a beam in a focusing channel. This work was already done in [71] in the case where the solution is approximated by the two-scale limit. The goals are, first, to improve this approximation by going further, to the first order one, and, second, to replace this approximation by an exact decomposition using the macro-micro framework. This last approach will make it possible to treat the case of a not necessarily small parameter.
In order to accomplish the first task we started to write a PIC code which is to be integrated in SeLaLib.
|
{"url":"https://radar.inria.fr/report/2011/calvi/uid32.html","timestamp":"2024-11-02T23:39:22Z","content_type":"text/html","content_length":"43094","record_id":"<urn:uuid:e45a5fa5-cfe1-422f-8921-6a07133a6677>","cc-path":"CC-MAIN-2024-46/segments/1730477027768.43/warc/CC-MAIN-20241102231001-20241103021001-00428.warc.gz"}
|
Correlation Analysis
A correlation analysis is a statistical procedure that evaluates the association between two sets of variables. The association between variables can be linear or nonlinear. In communication
research, however, correlation analyses are mostly used to evaluate linear relationships. Sets of variables may include one or many variables. Associations between two variables (two sets of one
variable) can be analyzed with a bivariate correlation analysis. Associations between one (dependent) variable and a set of two or more (independent) variables can be studied using multiple
correlation (regression) analysis. Relationships between sets of many (independent and dependent) variables can be investigated using canonical correlation analysis.
Variables in each set can be measured on a nominal, ordinal, or interval level. There is a specific correlation analysis for any combination of measurement level and number of variables in each set.
Associations between ordinal variables (i.e., variables that have been measured on an ordinal level) are usually analyzed with a nonparametric correlation analysis. All other combinations can be
considered parametric linear correlation analyses and are as such special cases of canonical correlation analysis. Among the quantitative research methodologies, correlation analyses are recognized
as one of the most important and influential data analysis procedures for communication research and for social science research in general. Explanation and prediction are generally considered the
quintessence of any scientific inquiry. One can use variables to explain and predict other variables only if there is an association between them. One of the most established and widely applied
correlation analyses is the bivariate linear correlation analysis, which will be used in the following demonstration.
Bivariate Linear Correlation Analysis – Product–Moment Correlation Coefficient
Suppose the researchers are interested in the relationship between the variables “playing a violent video game” (X) and “aggression” (Y). Both variables are measured on an interval level (X in hours
per week; Y in aggression scores from 0 to 10) and with standard measurement instruments. The following dataset was obtained from five research participants:
Formula 1
Since the researchers are interested in the linear relationship between the two interval variables they will apply a bivariate linear correlation analysis and calculate a bivariate correlation
coefficient. A widely used bivariate correlation coefficient for linear relationships is the Pearson correlation coefficient, which is also known as product – moment correlation or simply as
Pearson’s r:
r = cov(x, y) / (s[x] × s[y])
cov(x, y) means the covariance between the variables X and Y, while s[x] and s[y] stand for the standard deviations of X and Y. The means of X and Y are indicated by x̄ and ȳ. Hence, the product–moment correlation is simply a standardized covariance between two interval variables, or the covariance divided by the product of the variables' standard deviations. For calculations it is recommended to use the following formula:
Therefore, we calculate the following sums and products:
Formula 4
Inserting these sums and products into the formula results in
Formula 5
This result indicates a positive product–moment correlation of 0.35 between “playing time of a violent video game” and aggression scores. All statistical software packages, such as SPSS, SAS, S-PLUS,
R, and STATISTICA, offer a procedure for the calculation of product–moment correlations and many other correlation coefficients. The statistic software R is freely available at www.r-project.org.
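The computation can also be reproduced in a few lines of Python. The five data pairs below are made-up stand-ins, since the article's actual table is not reproduced here, so the resulting coefficient differs from the 0.35 of the example:

```python
from math import sqrt

def pearson_r(x, y):
    """Product-moment correlation: covariance over the product of standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    sx = sqrt(sum((a - mx) ** 2 for a in x) / (n - 1))
    sy = sqrt(sum((b - my) ** 2 for b in y) / (n - 1))
    return cov / (sx * sy)

# hypothetical playing times (hours/week) and aggression scores
hours = [2, 5, 1, 8, 4]
scores = [3, 4, 2, 6, 5]
r = pearson_r(hours, scores)   # a value between -1 and +1
```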
Interpretation Of A Product–Moment Correlation Coefficient
The product–moment correlation coefficient (as most other correlation coefficients) will be a value between -1 and +1. A value of 0 indicates that the two variables are independent, i.e., have
nothing in common. A value of both +1 and -1 would be interpreted as a perfect linear relationship between the two variables. A correlation of +1 stands for a positive or increasing linear
relationship (“the more of variable X the more of variable Y” or vice versa). A correlation of −1 represents a negative or decreasing linear relationship (“the more of variable X the less of variable
Y” or vice versa). The closer the correlation coefficient is to either −1 or + 1 the stronger the association between the two variables.
The squared correlation coefficient indicates the proportion of common variance between the two variables and is called the coefficient of determination. In the example above, the squared product–moment correlation is 0.35² = 0.1225, or 12.25 percent. This means that the two variables "playing time of a violent video game" (X) and "aggression" (Y) share 12.25 percent of their
variance. In other words, 12.25 percent of variable Y is redundant when knowing variable X or vice versa. This result, however, also means that 100 percent minus 12.25 percent = 87.75 percent of the
variables’ variance is not shared and needs to be explained by other variables. This quantity is commonly called the coefficient of alienation.
Many authors have suggested guidelines for the practical interpretation of a correlation coefficient’s size. The question is whether a given correlation coefficient indicates a small, medium, or
strong association between variables. Cohen (1988), for example, has suggested the following categorization of correlation coefficients for psychological research: 0.10 < r < 0.29 or −0.29 < r <
−0.10 indicates a small correlation (small effect), 0.30 < r < 0.49 or −0.49 < r < −0.30 a medium correlation (medium effect), and 0.50 < r < 1.00 or −1.00 < r < −0.50 a strong correlation (strong
effect). As Cohen stated himself, these guidelines are, to a certain degree, arbitrary and should not be taken too seriously. Ultimately, the interpretation of a correlation coefficient’s size
depends on the context. On the one hand, a correlation coefficient of 0.85 may indicate a weak relationship if one is studying a physical law with highly reliable measurement instruments. On the
other hand, a correlation coefficient of 0.35 may indicate a strong relationship if many other variables intervene in the association of two or more variables, or if one has to use measurement
instruments with imperfect reliability. A categorization of correlation coefficients that is comparable to Cohen’s classification and based on communication research findings is still missing.
Significance Of A Product–Moment Correlation Coefficient
The above example of a product–moment correlation resulted in a product–moment correlation coefficient of 0.35 in a sample of five research participants. Most of the time, however, researchers are
more interested in whether a result found in a sample can be generalized to a population. In other words: researchers are often interested in whether hypotheses that refer to relationships among
variables in a population hold when confronted with empirical data in a sample. For this purpose, one has to apply a statistical significance test. The classical hypotheses of a product–moment
correlation coefficient are:
Two-sided research hypothesis (H1; association existent): ρ ≠ 0
Two-sided null hypothesis (H0; no association existent): ρ = 0
One-sided research hypothesis (H1; positive/negative association): ρ > 0 or ρ < 0
One-sided null hypothesis (H0; no or negative/no or positive association): ρ ≤ 0 or ρ ≥ 0
In the example above we assume a one-sided research hypothesis, i.e., we assume a positive correlation between the variables “playing time of a violent video game” and “aggression.” In other words,
we expect that the more often people play a violent video game the higher their aggression levels, or vice versa. A null hypothesis test for r can be performed using a t or F sampling distribution.
We obtain the t or F statistic as:
t = r√(n − 2) / √(1 − r²),  F = t² = r²(n − 2) / (1 − r²)
As one can see, t = √F. If the null hypothesis is true, the F statistic is Fisher's F distributed with df1 = 1 and df2 = n − 2 degrees of freedom, or the t statistic is Student's t distributed with df = n − 2 degrees of freedom. According to the corresponding sampling distributions, the probability of finding a t value of 0.645 or an F value of 0.416 under the assumption of a true null hypothesis is p = 0.282. Since p values greater than α = 0.05 (5 percent) usually do not result in the rejection of the null hypothesis, the relationship in the sample is characterized as not statistically significant.
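A minimal numerical check of the test statistic, using the rounded r = 0.35 and n = 5 from the example (the article's t = 0.645 was presumably computed from the unrounded correlation, which explains the small discrepancy):

```python
from math import sqrt

def t_from_r(r, n):
    """t statistic for H0: rho = 0, with df = n - 2 degrees of freedom."""
    return r * sqrt(n - 2) / sqrt(1 - r ** 2)

t = t_from_r(0.35, 5)   # about 0.647 with the rounded r
F = t ** 2              # F statistic with df1 = 1, df2 = n - 2
```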
The Pitfalls Of Correlation Analysis
The concept of correlation analysis and correlation coefficients in particular can be easily misconceived. The following three aspects are frequently a source of misinterpretations:
Correlation Analysis and Linearity: Most correlation analyses indicate only linear relationships among variables and will miss nonlinear associations. For example, a perfectly U-shaped relationship
between two interval variables will result in a product–moment correlation coefficient of zero and, therefore, would suggest no association. Hence, it is important to test whether the relationship
under examination could be nonlinear in nature before conducting a linear correlation analysis. Among various statistical procedures, one can use a simple scatterplot for this purpose.
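The linearity caveat is easy to verify numerically: for a perfectly symmetric U-shaped relationship the covariance, and hence the product–moment correlation, is exactly zero. A tiny self-contained check with illustrative data:

```python
# A perfectly U-shaped relation: y = x**2 on symmetric x-values.
x = [-2, -1, 0, 1, 2]
y = [v ** 2 for v in x]
mx, my = sum(x) / len(x), sum(y) / len(y)
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
# cov is exactly 0, so r = 0 even though y is fully determined by x
```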
Correlation Analysis and Systematic Sample Biases: A correlation analysis with a test of statistical significance assumes that we have a perfect random sample at hand. Frequently, however, this is
not the case. For example, samples can be composed of extreme groups, or samples may not cover the entire variation of variables in a population. While extreme group selection usually leads to an
overestimation of the true relationship in a population, samples that are restricted in variation typically result in an underestimation of a correlation in a population. Another problem is that
correlation coefficients are extremely sensitive to outliers in a sample which, too, lead to an overestimation of a population’s true correlation. It is recommended to check a sample for systematic
sample biases before conducting a correlation analysis.
Correlation Analysis and Causality: The most common misconception of a correlation analysis, however, refers to the question of causality. The fact that two or more variables are correlated does not
mean that some variables can be considered as cause and others as effect. Correlation is not causation! The example above stated a positive correlation between “playing time of a violent video game”
and “aggression.” It is possible that playing a violent video game causes higher aggression scores (effect hypothesis), but it is also conceivable that already aggressive personalities purposely
select violent video games (selection hypothesis). A positive product–moment correlation coefficient does not provide any information about which of the two competing hypotheses is the better model.
Perhaps both variables are even affected by a third variable and show no direct relationship at all. In this case the association between “playing time of a violent video game” and “aggression” would
be a spurious relationship. On a basic level, three conditions define causality:
1 Time order must be appropriate. If X is the cause and Y the effect, then usually X antecedes Y in time.
2 Variation must be concomitant, as shown with a significant correlation coefficient.
3 The relationship among variables must not be explained by other variables, i.e., is not spurious.
Considering these three conditions, a significant correlation coefficient is a necessary, but not a sufficient, condition for causality.
1. Abdi, H. (2007). Coefficients of correlation, alienation and determination. In N. J. Salkind (ed.), Encyclopedia of measurement and statistics. Thousand Oaks, CA: Sage.
2. Agresti, A., & Finlay, B. (1997). Statistical methods for the social sciences, 3rd edn. Upper Saddle River, NJ: Prentice Hall.
3. Chen, P. Y., & Popovich, P. M. (2002). Correlation: Parametric and nonparametric measures. Sage university papers series on quantitative applications in the social sciences no. 139. Thousand
Oaks, CA: Sage.
4. Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Lawrence Erlbaum.
5. Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2002). Applied multiple regression/correlation analysis for the behavioral sciences, 3rd edn. Mahwah, NJ: Lawrence Erlbaum.
6. Miles, J., & Shevlin, M. (2004). Applying regression and correlation: A guide for students and researchers. Thousand Oaks, CA: Sage.
7. Thompson, B. (1984). Canonical correlation analysis: Uses and interpretation. Sage university papers series on quantitative applications in the social sciences no. 47. Thousand Oaks, CA: Sage.
|
{"url":"https://communication.iresearchnet.com/research-methods/correlation-analysis/","timestamp":"2024-11-08T02:37:40Z","content_type":"text/html","content_length":"69553","record_id":"<urn:uuid:8b9b8062-32ca-44b3-9a60-a675e5f0643f>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00836.warc.gz"}
|
Rational approximation
The main theme of this article is the question how well a given real number $x$ can be approximated by rational numbers. Of course, since the rationals are dense on the real line, we, surely, can
make the difference between $x$ and its rational approximation $\frac pq$ as small as we wish. The problem is that, as we try to make $\frac pq$ closer and closer to $x$, we may have to use larger
and larger $p$ and $q$. So, the reasonable question to ask here is how well can we approximate $x$ by rationals with not too large denominators.
Trivial theorem
Every real number $x$ can be approximated by a rational number $\frac{p}{q}$ with a given denominator $q\ge 1$ with an error not exceeding $\frac 1{2q}$.
Note that the closed interval $\left[qx-\frac12,qx+\frac12\right]$ has length $1$ and, therefore, contains at least one integer. Choosing $p$ to be that integer, we immediately get the result.
So, the interesting question is whether we can get a smaller error of approximation than $\frac 1q$. Surprisingly enough, it is possible, if not for all $q$, then, at least for some of them.
Dirichlet's theorem
Let $n\ge 1$ be any integer. Then there exists a rational number $\frac pq$ such that $1\le q\le n$ and $\left|x-\frac pq\right|<\frac 1{nq}$.
Proof of Dirichlet's theorem
Consider the fractional parts $\{0\cdot x\},\{1\cdot x\}, \{2\cdot x\},\dots, \{n\cdot x\}$. They all belong to the half-open interval $[0,1)$. Represent the interval $[0,1)$ as the union of $n$ subintervals: $[0,1)=\left[0,\frac 1n\right)\cup\left[\frac 1n,\frac 2n\right)\cup\dots\cup\left[\frac{n-1}{n},1\right)$. Since we have $n+1$ fractional parts and only $n$ subintervals, the pigeonhole principle implies that there are two integers $0\le k<\ell\le n$ such that $\{kx\}$ and $\{\ell x\}$ belong to the same subinterval. But then $(\ell-k)x$ differs by less than $\frac 1n$ from some integer $p$: $|(\ell-k)x-p|<\frac 1n$. Dividing by $q=\ell-k$, we get $\left|x-\frac pq\right|<\frac1{nq}$.
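The theorem is easy to check computationally; the Python sketch below brute-forces a fraction whose existence it guarantees (continued fractions would be the efficient tool in practice):

```python
def dirichlet_approx(x, n):
    """Return (p, q) with 1 <= q <= n and |x - p/q| < 1/(n*q)."""
    for q in range(1, n + 1):
        p = round(x * q)
        if abs(q * x - p) < 1 / n:      # same as |x - p/q| < 1/(n*q)
            return p, q
    raise AssertionError("impossible by Dirichlet's theorem")
```

For x = π and n = 100 this returns (22, 7), since |7π - 22| ≈ 0.00885 < 1/100.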
Corollary
If $x$ is irrational, then there are infinitely many irreducible fractions $\frac pq$ such that $\left|x-\frac pq\right|<\frac 1{q^2}$.
Proof of the corollary
For each $n\ge 1$, find a (possibly reducible) fraction $\frac {P_n}{Q_n}$ with $1\le Q_n\le n$ such that $\left|x-\frac {P_n}{Q_n}\right|<\frac 1{nQ_n}$. Let $\frac {p_n}{q_n}$ be the same fraction
as $\frac {P_n}{Q_n}$ but reduced to its lowest terms. It is clear that $\frac 1{nQ_n}\le \frac 1{Q_n^2}\le \frac 1{q_n^2}$, so it remains to show that among the fractions $\frac {p_n}{q_n}$ there
are infinitely many different ones. But the distance from the $n$-th fraction to $x$ does not exceed $\frac 1n$, which can be made arbitrarily small if we take large enough $n$. On the other hand, if
the fractions $\frac {p_n}{q_n}$ were finitely many, this distance couldn't be made less than the distance from the irrational number $x$ to some finite set of rational numbers, i.e., less than some
positive constant.
Dirichlet's theorem can be generalized in various ways. The first is to approximate several numbers simultaneously by fractions with a common denominator. The exact statement is as follows.
If $x_1,\dots,x_m\in \mathbb R$ and $n\ge 1$ is an integer, then there exists an integer $q$ with $1\le q\le n^m$ and integers $p_1,\dots,p_m$ such that $\left|x_j-\frac {p_j}q\right|<\frac 1{nq}$ for all $j=1,\dots,m$.
The proof is essentially the same, except that instead of considering the $n+1$ numbers $\{kx\}$, $k=0,\dots,n$, one has to consider the $n^m+1$ vectors $(\{kx_1\},\dots,\{kx_m\})$, $k=0,\dots,n^m$, in the unit cube $[0,1)^m$ divided into $n^m$ equal subcubes.
Another remark that can be useful in some problems is that, if $x$ is irrational, then you can find infinitely many solutions of the inequality $\left|x-\frac {p}q\right|<\frac C{q^2}$ with the
denominator $q$ contained in any given arithmetic progression $a\ell+b$ ($\ell\in\mathbb Z$) if the constant $C$ (depending on the progression) is large enough. To prove it, first, find infinitely
many irreducible fractions $\frac PQ$ satisfying $\left|x-\frac PQ\right|<\frac 1{Q^2}$. Then, for each such fraction, find two integers $u,v$ such that $0< u\le Q$ and $uP+vQ=1$. Now note that $u$
and $Q$ are relatively prime, so we can find some integer $\alpha, \beta$ such that $\alpha u+\beta Q=b$. Replacing $\alpha$ and $\beta$ by their remainders $\tilde\alpha$ and $\tilde\beta$ modulo
$a$, we get a positive integer $q=\tilde\alpha u+\tilde\beta Q$ satisfying $1\le q\le 2aQ$ and $|qx-(\tilde\beta P-\tilde\alpha v)|\le\tilde\alpha|ux+v|+\tilde\beta|Qx-P|\le \tilde\alpha u\left|x-\
frac PQ\right|+\frac{\tilde\alpha}Q+\frac {\tilde\beta}Q\le \frac {3a}Q\le \frac {6a^2}q$. Thus, setting $p=\tilde\beta P-\tilde\alpha v$, we get $\left|x-\frac pq\right|<\frac {6a^2}{q^2}$.
Applications to problem solving
One common way to apply Dirichlet's theorem in problem solving is to use it in the following form: given finitely many numbers $x_1,\dots,x_m$ and $\delta>0$, one can find a positive integer $q$ such
that each of the numbers $qx_1, qx_2,\dots, qx_m$ differs from some integer by less than $\delta$. A typical example of such usage can be found in the article devoted to the famous Partition of a
rectangle into squares problem.
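This form of the theorem is easy to apply by brute force; the sketch below searches for the smallest such $q$ directly (the simultaneous version of Dirichlet's theorem guarantees a hit with $q\le n^m$ as soon as $\frac 1n<\delta$):

```python
def common_denominator(xs, delta, q_max=10**6):
    """Smallest q >= 1 such that every q*x is within delta of an integer."""
    for q in range(1, q_max + 1):
        if all(abs(q * x - round(q * x)) < delta for x in xs):
            return q
    return None   # not reached for reasonable delta, by Dirichlet's theorem
```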
Liouville Approximation Theorem
We can complement Dirichlet's theorem with a bound in the opposite direction: if $\alpha$ is an algebraic number of degree $n\ge 2$, then there exists a constant $c(\alpha)>0$ such that $\Bigg|\alpha-\frac{p}{q}\Bigg| > \frac{c(\alpha)}{q^n}$ for every rational number $\frac pq$. In particular, for any $m>n$ there are only finitely many rationals $\frac pq$ satisfying $\Bigg|\alpha-\frac{p}{q}\Bigg| \le\frac{1}{q^m}$. This gives us the following corollary: $\sum_{n=0}^\infty 10^{-n!}$ is a transcendental number, known as Liouville's constant.
Hurwitz's theorem
For every irrational number $\xi$ there are infinitely many rationals m/n such that
$\left |\xi-\frac{m}{n}\right |<\frac{1}{\sqrt{5}\, n^2}.$
Roth's theorem
For an irrational algebraic number $\alpha$ and any $\epsilon>0$, the inequality $|\alpha-p/q|<1/q^{2+\epsilon}$ has only finitely many solutions in integers $p$ and $q$.
|
{"url":"https://artofproblemsolving.com/wiki/index.php?title=Rational_approximation&oldid=106026","timestamp":"2024-11-09T06:09:51Z","content_type":"text/html","content_length":"64036","record_id":"<urn:uuid:fb2b0499-7a73-45c3-b444-45ebce963cb3>","cc-path":"CC-MAIN-2024-46/segments/1730477028116.30/warc/CC-MAIN-20241109053958-20241109083958-00886.warc.gz"}
|
Quantum Contextuality
Quantum contextual sets have been recognized as resources for universal quantum computation, quantum steering and quantum communication. Therefore, in our paper in "Quantum" (Impact Factor 6.4), Mladen Pavičić, "Quantum Contextuality," Quantum 7, 953 (2023); DOI 10.22331/q-2023-03-17-953, we focus on engineering the sets that support those resources and on determining their structures and properties. Such engineering and subsequent implementation rely on discrimination between the statistics of measurement data of quantum states and those of their classical counterparts. The discriminators are hypergraphs, which determine how the states supporting a computation or communication are arranged.
It turns out that contextual quantum non-binary hypergraphs, in contrast to classical binary ones, are essential for designing quantum computation and communication, and that their structure and implementation rely on this non-binary vs. binary differentiation. We are able to generate arbitrarily many contextual sets from the simplest possible vector components and then make use of their structure by implementing the hypergraphs with the help of YES-NO measurements, so as to collect data from each gate/edge and then postselect them. At the same time, this procedure shows that we have to carry out measurements on the complete set of states before we postselect them. As an example, the Klyachko pentagon cannot lie in a plane, as shown in the figure; only its postselected states do.
Other discriminators considered are six hypergraph inequalities. They follow from two kinds of data statistics. One kind of statistics, often applied in the literature, turns out to be inappropriate, and consequently two kinds of inequalities turn out not to be noncontextuality inequalities. Results are obtained by making use of universal automated algorithms which generate hypergraphs with both odd and even numbers of hyperedges in any odd- or even-dimensional space: in this paper, from the smallest contextual set with just three hyperedges and three vertices to arbitrarily many contextual sets in up to 8-dimensional spaces. Higher dimensions are computationally demanding although feasible.
|
{"url":"http://cems.irb.hr/hr/ij/pqo/quantum-contextuality/","timestamp":"2024-11-05T12:04:05Z","content_type":"text/html","content_length":"37663","record_id":"<urn:uuid:8aa9ec9e-c57d-40c7-9964-07722e3f750f>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00393.warc.gz"}
|
A hybrid appliance identification method by using grey relational artificial neural network
Nowadays, everything is getting smarter such as mobile phones, cars, watches and home appliances. Our powerlines are also getting smarter. There are many smart grid and smart home applications.
Designing of recognition devices to identify appliances for these smart networks is a new task to do it. There are many different approaches on recognition and identification these power consumer
devices and appliance. This study aims to develop an effective method that does not require any additional hardware. This method has been developed by using powerline parameters such as current,
phase angle, voltage, active and reactive power. These data have been classified and normalized by using a validation method and grey relational analysis to train an artificial neural network. This
neural network was trained by using power parameters of many different common appliances like heater, coffee machine, television, radio, lamp, computer, fan, refrigerator etc. This identification
algorithm can be used within a low-cost embedded system for collecting appliance information over a powerline to provide info for smart homes and smart grids.
1. Introduction
Households and other buildings use almost 40 % of the total energy consumed worldwide. This means we should put more effort into studies on energy saving and planning. Monitoring of energy consumption is essential for these studies. There are many different techniques to monitor and predict the energy consumption of a building. According to a review by Zhao and Magoules [1], engineering methods, statistical methods, neural networks, support vector machines and grey models can be used for the prediction. However, no single one of these techniques is sufficient by itself for high accuracy. Being fast enough, applicable and easy to implement are further requirements.
There are many different appliances and devices consuming electrical energy in the same building. Therefore, it is necessary to measure and record hundreds of parameters for a smart energy system. A high-frequency voltage and current measurement data set [2] has recently been established by Medico et al. This data set contains 17 different appliances in 330 different models. They have also measured combined operations where appliances were active simultaneously. Another database, named ACS [3], has been established by measuring 15 different appliances and 225 brands/models in two different sessions. Ridi et al. have intentionally used a low sampling frequency to save energy. On the other hand, an advanced home energy management system [4] has shown that the future of smart home systems depends on load monitoring and power scheduling. Another study about automatic recognition of electrical loads [5] has also demonstrated the need for recognition techniques for
simultaneously working appliances. Studies such as the energy-aware smart home [6] and the future renewable electric energy delivery and management system [7] have shown that information-based electrical power systems will take over soon. Therefore, analyzing electrical parameters such as voltage, current, active and reactive power should be considered data acquisition rather than simple measurement. Real-time recognition and profiling of appliances through a single sensor [8], without complex devices and environments, is key to smart grids and homes. Low-cost prototypes of smart meters for households [9] have already started to be developed. For all these reasons, this study focuses on the identification of appliances using only line parameters such as voltage, current, power, frequency, etc. It is also important to detect which appliance is working when more than one appliance runs simultaneously.
Previous studies have used different classification techniques. The ACS-F2 database [3] has been used for this study because of its appliance variety and low sampling frequency. Ridi et al. applied machine learning algorithms, namely k-nearest neighbors (k-NN) and Gaussian Mixture Modelling (GMM), with overall accuracy varying between 70 % and 90 % depending on the test protocol. They also mentioned that some appliances were easy to recognize while others were not. Other researchers using the same database have achieved better results. An adaptive-rate time-domain approach [10] reached an average classification accuracy of 91.9 %. However, in this approach some of the appliances were skipped. Another study using Hidden Markov Models, another machine learning algorithm [11], achieved relatively better results. A study using a moving average for data preparation [12] reached 99 % with random forest classification and a multilayer perceptron. It is obvious that different data pre-processing techniques and machine learning algorithms have improved the overall accuracy.
In our study, all previous methods and classification techniques have been examined. The appliance data have been pre-processed using grey relational analysis, so that all data are normalized and weighted by a dynamic coefficient. Additionally, a data validation method has been developed to avoid invalid data. After that, a multilayer feed-forward back-propagation ANN has been trained on these data. In this way, a grey relational neural network has been established that identifies appliances with high accuracy using only power line parameters.
2. Methods and techniques
Pattern recognition processes have three main tasks: data pre-processing, data representation and decision making. A simple feed-forward neural network can be trained for pattern recognition tasks such as image processing and biometric identification [13]. ANNs can be described as non-linear machine learning methods. Therefore, they are more effective and successful than conventional methods for pattern recognition.
In this study, grey relational analysis has been used for data pre-processing, creating dynamic coefficients that quantify the correlation of the input samples. Then, a data validation technique has been applied to the measurement data, because some measurement results are erroneous due to the transient regimes of the appliances. Besides, the appliances are not running all the time. Therefore, the ANN should be trained on valid data only. In this way, a new, highly accurate hybrid identification method has been derived.
2.1. Grey relational analysis
Hybrid analysis methods have recently become popular due to their high accuracy and reliability. Normalization techniques such as grey relational analysis (GRA) in neural networks [14] significantly improve the quality of multi-objective optimization. GRA is also used in many different areas, such as product design [15], analysis of multivariate time series [16], and optimization of operating rules for power plants [17].
GRA calculates the correlation between input sequences dynamically. There are three different initial approaches: higher is better Eq. (1), smaller is better Eq. (2), or nominal value is better Eq. (3). The best approach can be determined by the researcher according to the data sequence [18]:
where ${X}_{i}\left(n\right)$ is the original input value, $\mathrm{max}{X}_{i}\left(n\right)$ is the maximum value of the sequence, $\mathrm{min}{X}_{i}\left(n\right)$ is the minimum value of the sequence, ${Y}_{i}\left(n\right)$ is the pre-processed value, $Nv$ is the desired nominal value, $n=$1, 2, 3,… is an integer index over the parameters and $i=$1, 2, 3,… is an integer index over the samples.
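The three normalization formulas referenced above as Eqs. (1)-(3) do not appear in the text as extracted. The standard grey relational normalization forms, consistent with the variable definitions just given, would be as follows (note this is a reconstruction from the standard GRA literature, not the authors' exact equations):

```latex
% Eq. (1), higher is better:
Y_i(n) = \frac{X_i(n) - \min X_i(n)}{\max X_i(n) - \min X_i(n)},
% Eq. (2), smaller is better:
Y_i(n) = \frac{\max X_i(n) - X_i(n)}{\max X_i(n) - \min X_i(n)},
% Eq. (3), nominal value Nv is better:
Y_i(n) = 1 - \frac{\left| X_i(n) - Nv \right|}
              {\max\bigl(\max X_i(n) - Nv,\; Nv - \min X_i(n)\bigr)}.
```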
In other words, GRA computes grey relational grades for a data sequence as pre-processing. These grades determine the relational degree between different sequences Eq. (4). Finally, the average of these grades over the $N$ parameters gives the grey relational rank Eq. (5):
${c}_{i}\left(n\right)=\frac{{\mathrm{\Delta }}_{\mathrm{min}}+\zeta {\mathrm{\Delta }}_{\mathrm{max}}}{\left|{Y}_{0}\left(n\right)-{Y}_{i}\left(n\right)\right|+\zeta {\mathrm{\Delta }}_{\mathrm{max}}},$
${r}_{i}=\frac{1}{N}{\sum }_{n=1}^{N}{c}_{i}\left(n\right),$
where $c$ is the grey relational grade, $\mathrm{\Delta }$ is the deviation sequence $\left|{Y}_{0}\left(n\right)-{Y}_{i}\left(n\right)\right|$ (with ${\mathrm{\Delta }}_{\mathrm{min}}$ and ${\mathrm{\Delta }}_{\mathrm{max}}$ its minimum and maximum), $\zeta$ is an optimizing coefficient between 0 and 1, and $r$ is the grey relational coefficient.
In this way, multiple input data can be converted into a single grey relational grade that shows the correlation of the data. This method can be used for various problems such as calculating the initial values of weights in an artificial neural network [19], building power system strategies in distribution systems [20] and hybrid forecasting modelling for wind power [21].
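As an illustration, the grade and rank computation of Eqs. (4)-(5) can be sketched in a few lines of NumPy. The all-ones reference sequence $Y_0$ and the distinguishing coefficient value are assumptions made for this sketch, not values taken from the paper:

```python
import numpy as np

def grey_relational_rank(Y, zeta=0.5):
    """Grey relational grades (Eq. (4)) and ranks (Eq. (5)).

    Y    : (samples, parameters) array of pre-processed values Y_i(n)
    zeta : distinguishing coefficient in (0, 1)
    """
    Y0 = np.ones(Y.shape[1])                 # assumed ideal reference sequence
    delta = np.abs(Y0 - Y)                   # deviation sequences |Y_0(n) - Y_i(n)|
    d_min, d_max = delta.min(), delta.max()  # global extrema of the deviations
    c = (d_min + zeta * d_max) / (delta + zeta * d_max)   # Eq. (4)
    return c.mean(axis=1)                    # Eq. (5): average over the N parameters
```

A sample identical to the reference gets rank 1, and samples further away get smaller ranks, which is what the 0.5 output threshold of the GRANN later exploits.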
2.2. Data validation
The ACS-F2 database contains power parameters measured for 15 different appliance categories, with 15 different brands/models in each category. These parameters are the line frequency, the angle between voltage and current, the real power, the reactive power, the RMS current and the RMS voltage of the appliance, as seen in Table 1.
Table 1Database parameters
Parameter Symbol Unit
Frequency $F$ Hz
Angle $\varphi$ Degree
Real power $P$ Watt
Reactive power $Q$ VAR
RMS current ${I}_{RMS}$ Ampere
RMS voltage ${V}_{RMS}$ Volt
The researchers measured the parameters for one hour at 10-second intervals, in two separate sessions, which makes a comprehensive database. However, the database contains uncertain values as well. Therefore, derived values such as the apparent power Eq. (6), power factor Eq. (7), active power Eq. (8) and reactive power Eq. (9) should be calculated from the measured parameters for validation:
$S={V}_{RMS}{I}_{RMS},$
$\mathrm{cos}\phi =\frac{P}{S},\mathrm{}\mathrm{sin}\phi =\frac{Q}{S},$
$P={V}_{RMS}{I}_{RMS}\mathrm{cos}\phi ,$
$Q={V}_{RMS}{I}_{RMS}\mathrm{sin}\phi ,$
where $S$ is the apparent power, $P$ is the active power, $Q$ is the reactive power, $\mathrm{cos}\phi$ is the power factor, ${I}_{RMS}$ is the root mean square current and ${V}_{RMS}$ is the root mean square voltage.
After the calculation, the difference between the measured and calculated values Eq. (10) and a coefficient for that difference Eq. (11) can be determined using the sigmoid function Eq. (12) and its derivative Eq. (13). These coefficients determine the accuracy of each sample:
${e}_{i}={\sigma }^{\prime}\left(\frac{1}{N}\sum _{n=1}^{N}{D}_{i}\left(n\right)\right),$
$\sigma \left(x\right)=\frac{1}{1+{e}^{-ax}},$
${\sigma }^{\prime}\left(x\right)=a\sigma \left(x\right)\left(1-\sigma \left(x\right)\right),$
where $D$ is the difference, $A$ is the measured value, $B$ is the calculated value, $i$ is the sample index, $n$ is the parameter index, and $e$ is the validation error coefficient. $\sigma$ is the sigmoid function and ${\sigma }^{\prime}$ is its first derivative, while $a$ is the gain of the function.
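A minimal sketch of the validation step (Eqs. (11)-(13)) follows. The exact form of the difference $D$ in Eq. (10) is not recoverable from the text, so the per-parameter differences are simply passed in as a list:

```python
import math

def sigmoid(x, a=1.0):
    """Eq. (12): logistic function with gain a."""
    return 1.0 / (1.0 + math.exp(-a * x))

def sigmoid_prime(x, a=1.0):
    """Eq. (13): first derivative, a * sigma(x) * (1 - sigma(x))."""
    s = sigmoid(x, a)
    return a * s * (1.0 - s)

def validation_error(differences, a=1.0):
    """Eq. (11): error coefficient from the mean of the differences D_i(n)."""
    mean_d = sum(differences) / len(differences)
    return sigmoid_prime(mean_d, a)
```

Because the sigmoid derivative peaks at zero, a sample whose measured and calculated values agree receives the maximum coefficient ($a/4$, i.e. 0.25 for $a=1$), while large discrepancies drive the coefficient toward zero.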
2.3. Grey relational artificial neural network
An ANN is a mathematical method that can learn critical information from multi-dimensional data sequences. ANNs can also process noisy and incorrect data with high error tolerance [22]. On the other hand, using a GRA classifier for identification [23] alongside other techniques improves the accuracy. GRA can determine closeness and uniqueness between different parameters [24]. Grey-embedded ANNs can be used for many different tasks, such as hybrid optimization approaches in machining [25] and hybrid forecasting modelling [26].
ANNs combine very well with other analysis methods. They can determine the importance of the data and decide how it affects the output. The principal parameters of an ANN are the input data, weight matrices, bias values and activation function. ANNs can also adjust their initial weights using various techniques such as back-propagation and genetic algorithms [27]. The general mathematical function of an ANN can be stated as Eq. (14) below. Here, one can change how the weights contribute to the sum or how much bias is added. Various kinds of activation functions and technical indicators can also be used [28]:
$y=f\left(\sum _{m=1}^{M}\left({w}_{m}{x}_{m}+b\right)\right),$
where $b$ is the bias value, $x$ is the input, $w$ is the weight and $f$ is the activation function, with the input index $m=$1, 2, 3,…, $M$.
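Written as it appears in Eq. (14) — note that the bias $b$ sits inside the sum, so it is effectively added once per input — a single neuron is just:

```python
import numpy as np

def neuron(x, w, b, f=np.tanh):
    """Eq. (14): y = f( sum_m (w_m * x_m + b) ).

    The bias is inside the sum, as printed in the paper, so it is
    accumulated M times; the more common convention adds it once.
    """
    return f(np.sum(np.asarray(w) * np.asarray(x) + b))
```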
Table 2Input parameters of GRANN
Inputs (${x}_{i}$)
GRA grade Validation error Raw values
${c}_{i}$(1) ${d}_{i}$(1) Frequency
${c}_{i}$(2) ${d}_{i}$(2) Phase angle
${c}_{i}$(3) ${d}_{i}$(3) Real power
${c}_{i}$(4) ${d}_{i}$(4) Reactive power
${c}_{i}$(5) ${d}_{i}$(5) RMS current
${c}_{i}$(6) ${d}_{i}$(6) RMS voltage
The Grey Relational Artificial Neural Network (GRANN) uses the grey relational grades and the validation error values as input. The validation error coefficients are also used as the hidden layer’s bias. This way, the ANN can avoid uncertain and unstable measurement values. On the other hand, the GRA coefficient works as the output’s bias, creating a threshold that decides whether a sample should be labeled or not. A four-layer feed-forward back-propagation ANN is constructed as seen in Fig. 1.
Fig. 1Architecture of GRANN
Here, $g\left(x\right)$, $h\left(x\right)$ and $f\left(x\right)$ are transfer functions. We have used different types of transfer functions, presented in the next section, to obtain the best accuracy. We have also used two coefficients as thresholds to improve the response of the ANN. The validation error coefficient (${e}_{i}$) sets the hidden neuron values to zero for invalid samples, and the grey relational coefficient (${r}_{i}$) sets the output values to zero for invalid samples. The relationships between layers and neurons are as follows:
${h1}_{i,j}=g\left(\sum _{i=1}^{I}\left({w}_{i,j}{x}_{i}\right)\right),$
${h2}_{j,k}=h\left(\sum _{j=1}^{J}\left({\mu }_{j}{w}_{j,k}{h1}_{i,j}+{b}_{h}\right)\right),\mathrm{}{\mu }_{j}=\left\{\begin{array}{cc}0,& {e}_{i}=1,\\ {e}_{i},& {e}_{i}<1,\end{array}\right.$
${y}_{i}=f\left(\sum _{k=1}^{K}\left({{\mu }_{k}w}_{k}{h2}_{j,k}+{b}_{o}\right)\right),\mathrm{}{\mu }_{k}=\left\{\begin{array}{cc}0,& {r}_{i}<0.5,\\ {r}_{i},& {r}_{i}\ge 0.5,\end{array}\right.$
where $i$ is the number of input neurons, $j$ is the number of first hidden layer’s neurons, $k$ is the number of the second hidden layer’s neurons, ${b}_{h}$ is the bias for hidden layers, ${b}_{o}$
is the bias for output layer.
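Putting Eqs. (15)-(17) together, one forward pass of the GRANN could be sketched as below. The transfer functions, layer sizes and scalar output are illustrative assumptions; only the two gating coefficients follow the definitions above:

```python
import numpy as np

def grann_forward(x, W1, W2, W3, b_h, b_o, e_i, r_i,
                  g=np.tanh, h=np.tanh, f=lambda z: z):
    """One forward pass of the GRANN (Eqs. (15)-(17)), a minimal sketch.

    e_i : validation error coefficient of the sample
    r_i : grey relational coefficient of the sample
    """
    h1 = g(W1 @ x)                         # Eq. (15): first hidden layer
    mu_j = 0.0 if e_i >= 1 else e_i        # gate: zero out invalid samples
    h2 = h(mu_j * (W2 @ h1) + b_h)         # Eq. (16): second hidden layer
    mu_k = 0.0 if r_i < 0.5 else r_i       # threshold on the GRA coefficient
    return f(mu_k * (W3 @ h2) + b_o)       # Eq. (17): output
```

With the identity output function, a sample whose grey relational coefficient falls below 0.5 collapses to the output bias, i.e. the zero class, which matches the labelling scheme in Section 3.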
Grey models and neural networks are typical examples of time series analysis methods for prediction and identification [29]. Recently, grey relational analysis has been adopted in many different research areas for use within ANNs coupled with other optimization techniques. Prediction of the surface roughness of composite materials [30], prediction of maintenance workforce size [31], and estimation of human body impedance parameters are some of these areas.
3. Results and discussion
The ACS-F2 database [32] has 15 different appliance categories. Therefore, we created 15 different classes, one for each category, and added one zero class for invalid measurements. These measurements include transient regimes, standby modes and uncertain values that affect the output negatively. These values have been labeled as zero using the validation error and grey relational coefficients. Although we tried different multidimensional optimization algorithms, the best result was obtained with the Levenberg-Marquardt algorithm, which works with the gradient vector and Jacobian matrix. The performance of different training algorithms and activation functions can be seen in Table 3.
Table 3Performance comparison
Training algorithm Activation function MLR (training) MLR (testing) MSE (training) MSE (testing)
Levenberg-Marquardt pure-linear 0.8546 0.8541 4.94 5.06
Levenberg-Marquardt tan-sigmoid 0.9948 0.9938 0.273 0.322
Quasi-Newton pure-linear 0.9432 0.9404 2.03 2.096
Conjugate gradient tan-sigmoid 0.8675 0.8692 4.64 4.55
One-step secant tan-sigmoid 0.8322 0.8305 5.72 5.74
In Table 3, MSE is the mean square error, MLR is the multiple linear regression, $\beta$ is the slope coefficient and $\epsilon$ is the model’s error:
$MSE=\frac{1}{n}\sum _{i=1}^{n}{\left({y}_{output}-{y}_{predict}\right)}^{2},$
$MLR={\beta }_{0}+{\beta }_{1}{x}_{1}+{\beta }_{2}{x}_{2}+{\beta }_{3}{x}_{3}+\dots +{\beta }_{n}{x}_{n}+\epsilon ,$
${\beta }_{i}=\frac{{\sum }_{i=1}^{n}\left({x}_{i}-\stackrel{-}{x}\right)\left({y}_{i}-\stackrel{-}{y}\right)}{{\sum }_{i=1}^{n}{\left({x}_{i}-\stackrel{-}{x}\right)}^{2}},\mathrm{}{\beta }_{0}=\left\{\begin{array}{cc}\stackrel{-}{y},& {x}_{i}=0,\\ 0,& \mathrm{otherwise}.\end{array}\right.$
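The MSE reported in Table 3 is just the averaged squared difference between the network outputs and the targets, which can be computed as:

```python
import numpy as np

def mse(y_output, y_predict):
    """Mean square error between target outputs and predictions."""
    y_output = np.asarray(y_output, dtype=float)
    y_predict = np.asarray(y_predict, dtype=float)
    return float(np.mean((y_output - y_predict) ** 2))
```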
Fig. 2MLR graphic of training
Fig. 3MLR graphic of testing
The MLR results show that most of the mispredictions are very close to the real data, because the total difference between classes equals one. These differences have been truncated using bias values. The other mispredictions were caused by the zero class, which represents the invalid input data. This class has more samples than all other classes combined, because the appliances were not running all the time. We have also excluded the transient values and labeled them as class zero using the validation technique. Fig. 4 presents the confusion matrix plot [33] of the test results. The average response of GRANN for each class is shown in Table 4.
Table 4Labels and results
Label Appliance GRANN
0 – 0
1 Coffee machines 0.98
2 Computer stations 1.96
3 Fans 2.97
4 Fridges and freezers 3.94
5 Hi-fi music systems 4.94
6 Kettles 5.91
7 Fluorescent lamps 6.81
8 Incandescent lamps 7.95
9 Laptops (on charge) 8.91
10 Microwave ovens 9.94
11 Mobile Phone (on charge) 10.93
12 Monitors 11.91
13 Printers 12.88
14 Shavers 13.73
15 Television 14.91
4. Conclusions
ANNs are very useful for solving multi-variable problems. Unlike conventional machine learning methods, they can adapt and change their parameters to obtain the best result. There are many different architectures, training algorithms and transfer functions for different approaches. In this study, GRA has been used for normalization. Besides, GRA provides a dynamic coefficient that determines the correlation of the input samples for each class. Furthermore, a validation technique has been developed to avoid invalid data. Hybridization of different methods is very useful for getting better accuracy in recognition and identification problems.
The hybrid GRANN can predict the appliance with more than 99 % accuracy using only power parameters such as current, voltage, active and reactive power. The total accuracy was around 84 % without validation and grey relational analysis. Pre-processing and preparation of the data are dramatically important for ANNs. The training algorithm and transfer function should be chosen according to the type and range of these data as well. This study has presented a hybrid appliance identification method that can be used within an embedded system for smart homes and smart grids. Further study will focus on the implementation of this method in real-life applications.
• Zhao Hai Xiang, Magoulès Frédéric A review on the prediction of building energy consumption, Renewable and Sustainable Energy Reviews, Vol. 16, 2012, p. 3586-3592.
• Medico R., De Baets L., Gao J. et al. A voltage and current measurement dataset for plug load appliance identification in households. Scientific Data, Vol. 7, 2020, p. 49.
• Ridi A., Gisler C., Hennebert J. ACS-F2 – a new database of appliance consumption signatures. 6th International Conference of Soft Computing and Pattern Recognition, 2014, p. 145-150.
• Lin Y., Tsai M. An advanced home energy management system facilitated by nonintrusive load monitoring with automated multi-objective power scheduling. IEEE Transactions on Smart Grid, Vol. 6,
Issue 4, 2015, p. 1839-1851.
• Hamid O., Barbarosou M., Papageorgas P., Prekas K., Salame C.-T. Automatic recognition of electric loads analyzing the characteristic parameters of the consumed electric power through a
non-intrusive monitoring methodology. Energy Procedia, Vol. 119, 2017, p. 742-751.
• Jahn M., Jentsch M., Prause C. R., Pramudianto F., Al Akkad A., Reiners R. The energy aware smart home. 5th International Conference on Future Information Technology, Busan, 2010.
• Huang A. Q., Crow M. L., Heydt G. T., Zheng J. P., Dale S. J. The future renewable electric energy delivery and management system: the energy internet. Proceedings of the IEEE, Vol. 99, Issue 1,
2011, p. 133-148.
• Ruzzelli A. G., Nicolas C., Schoofs A., O'Hare G. M. P. Real-time recognition and profiling of appliances through a single electricity sensor. 7th Annual IEEE Communications Society Conference on
Sensor, Mesh and Ad Hoc Communications and Networks, Boston, 2010.
• Sanchez-Sutil F., Cano-Ortega A., Hernandez J., Rus-Casas C. Development and calibration of an open source, low-cost power smart meter prototype for PV household-prosumers. Electronics, Vol. 8,
2019, p. 878.
• Qaisar S. M., Alsharif F. An adaptive rate time-domain approach for a proficient and automatic household appliances identification. International Conference on Electrical and Computing
Technologies and Applications, Ras Al Khaimah, United Arab Emirates, 2019.
• Ridi A., Gisler C., Hennebert J. Appliance and state recognition using hidden Markov models. International Conference on Data Science and Advanced Analytics, Shanghai, 2014, p. 270-276.
• Mpawenimana I., Pegatoquet A., Soe W. T., Belleudy C. Appliances identification for different electrical signatures using moving average as data preparation. 9th International Green and
Sustainable Computing Conference, Pittsburgh, USA, 2018.
• Abiodun O. I., et al. Comprehensive review of artificial neural network applications to pattern recognition. IEEE Access, Vol. 7, 2019, p. 158820-158846.
• Wan X., Wang Y., Zhao D. Grey relational and neural network approach for multi-objective optimization in small scale resistance spot welding of titanium alloy. Journal of Mechanical Science and
Technology, Vol. 30, 2016, p. 2675-2682.
• Lin Y., Yeh C. Grey relational analysis based artificial neural networks for product design: A comparative study. 12th International Conference on Informatics in Control, Automation and Robotics
(ICINCO), Colmar, 2015, p. 653-658.
• Sallehuddin R., Shamsuddin S. M. H., Hashim S. Z. M. Application of grey relational analysis for multivariate time series. 8th International Conference on Intelligent Systems Design and
Applications, Kaohsiung, 2008, p. 432-437.
• Fang G., Guo Y., Huang X., Rutten M., Yuan Y. Combining grey relational analysis and a Bayesian model averaging method to derive monthly optimal operating rules for a hydropower reservoir. Water,
Vol. 10, 2018, p. 1099.
• Hasani H., Tabatabaei S. A., Amiri G. Grey relational analysis to determine the optimum process parameters for open-end spinning yarns. Journal of Engineered Fibers and Fabrics, Vol. 7, 2012, p.
• Lin Y. C., Yeh C. H. Grey relational analysis based artificial neural networks for product design: a comparative study. Proceedings of 12th International Conference Informatics Control Automation
Robotic, Vol. 1, 2015, p. 653-538.
• Chen W. H. Quantitative decision-making model for distribution system restoration. IEEE Transaction Power System, Vol. 25, 2010, p. 313-21.
• Shi J., Ding Z., Lee Wj, Yang Y., Liu Y., Zhang M. Hybrid forecasting model for very-short term wind power forecasting based on grey relational analysis and wind speed distribution features. IEEE
Transaction on Smart Grid, Vol. 5, 2014, p. 521-526.
• Şahin M., Oğuz Y., Büyüktümtürk F. ANN-based estimation of time-dependent energy loss in lighting systems. Energy and Buildings, Vol. 116, 2016, p. 455-467.
• Chen Pei-Jarn, Du Yi-Chun Combining independent component and grey relational analysis for the real-time system of hand motion identification using bend sensors and multichannel surface EMG.
Mathematical Problems in Engineering, Vol. 2015, 2015, p. 329783.
• Kumar Dinesh, Chandna Pankaj, Pal Mahesh Efficient optimization of neural network using Taguchi-grey relational analysis with Signalto-noise ratio approach for 2.5D end milling process. American
Journal of Mechanical Engineering and Automation, Vol. 5, Issue 2, 2018, p. 30-42.
• Kharwar P. K., Verma R. K. Grey embedded in artificial neural network (ANN) based on hybrid optimization approach in machining of GFRP epoxy composites. FME Transactions, Vol. 47, 2019, p.
• Sallehuddin Roselina, Mariyam Siti, Shamsuddin H. J. Hybrid grey relational artificial neural network and auto regressive integrated moving average model for forecasting time-series data. Applied
Artificial Intelligence, Vol. 23, Issue 5, 2009, p. 443-486.
• Kalogirou Soteris A. Optimization of solar systems using artificial neural-networks and genetic algorithms. Applied Energy, Vol. 77, Issue 4, 2004, p. 383-405.
• Patel Jigar, Shah Sahil, Thakkar Priyank, Kotecha K. Predicting stock and stock price index movement using trend deterministic data preparation and machine learning techniques. Expert Systems
with Applications, Vol. 42, Issue 1, 2015, p. 259-268.
• Yokoyama Ryohei, Wakui Tetsuya, Satake Ryoichi Prediction of energy demands using neural network with model identification by global optimization. Energy Conversion and Management, Vol. 50, Issue
2, 2009, p. 319-327.
• Thankachan Titus, Prakash K. Soorya, Malini R., Ramu S., Sundararaj Prabhu, Rajandran Sivakumar, Rammasamy Devaraj, Jothi Sathiskumar Prediction of surface roughness and material removal rate in
wire electrical discharge machining on aluminum based alloys/composites using Taguchi coupled grey relational analysis and artificial neural networks. Applied Surface Science, Vol. 472, 2019, p.
• Ighravwe D. E., Oke S. A., Adebiyi K. A. Selection of an optimal neural network architecture for maintenance workforce size prediction using grey relational analysis. Engineering and Applied
Science Research, Vol. 45, Issue 1, 2018, p. 1-7.
• Database of appliance consumption signatures, Institute of Complex Systems, https://icosys.ch/acs-f2.
• Tshitoyan Vahe Plot Confusion Matrix. GitHub, 2020, https://www.github.com/vtshitoyan/plotConfMat.
About this article
appliance identification
grey relational analysis
data validation
artificial neural network
Copyright © 2020 Yılmaz Güven, et al.
This is an open access article distributed under the
Creative Commons Attribution License
, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
effective epimorphism in an (infinity,1)-category
Is there a general $\infty$-categorical notion of “plus construction”?
Of course. The interesting feature of the commutative ring case is that the pushout is computed as a smash/tensor product, so whether $A\to B$ is an epimorphism of rings can be detected without
appealing to the ring structures, and merely depends on $B$ as an $A$-module.
It goes the other way, of course: if $A$ is a commutative ring and $A\to B$ is a map of $A$-modules such that $B\approx B\wedge_A A\to B\wedge_A B$ is an equivalence, then $B$ is uniquely a
commutative $A$-algebra, and $A\to B$ an epimorphism.
Another amusing fact: you can define a “Quillen plus-construction” of a commutative ring spectrum $A$, in complete analogy with the construction for spaces. Instead of killing a perfect subgroup of
the fundamental group, the input data is a “perfect ideal” in the homotopy category of compact $A$-modules. All homotopy epimorphisms of commutative rings can be obtained this way.
People have discussed plus-constructions in other contexts (dg Lie algebras, for instance). These should probably give other examples of epimorphisms.
That’s right. That is why suspension shows up in the (counter)example: for a space $X$, $X \to \Delta^0$ is an epimorphism in the $(\infty, 1)$-category of spaces if and only if the (unreduced)
suspension $\Delta^0 \amalg_X \Delta^0$ is contractible. More generally, it seems to me that $X \to Y$ is an epimorphism in the $(\infty, 1)$-category of spaces if and only if its homotopy fibres are
spaces with contractible suspension.
A map $A\to B$ of commutative ring spectra is an epimorphism iff $B$ is smashing over $A$, i.e., if $B\wedge_A B\approx B$.
Isn’t it true that in any category admitting fibred coproducts, $f : x \to y$ is an epimorphism iff the codiagonal morphism $y \coprod_x y \to y$ is an isomorphism (and dually for monomorphisms)? Is the same true for $(\infty,1)$-categories? (The above would then just be a special case of this.)
It does seem rarely used, though there are some nifty examples:
• A map $A\to B$ of commutative ring spectra is an epimorphism iff $B$ is smashing over $A$, i.e., if $B\wedge_A B\approx B$.
• A map $X\to Y$ between connected spaces is an epimorphism iff $Y$ is formed via a Quillen-plus construction from a perfect normal subgroup of $\pi_1 X$.
I sometimes try to find useful criteria for “epimorphism” in other settings. It’s usually pretty hard.
I have moved Zhen Lin’s addition to a numbered example and added a hyperlink to epimorphism in an (infinity,1)-category in order to clarify what is meant. Also added more cross-links there.
This concept of epimorphism in an $\infty$-category is rarely used, isn’t it.
Can we have some concrete statement other than ’it’s not true’? Or rather, what definition of ’epimorphism’ are you using?
I added some remarks to that effect.
I guess your example in 1-groupoids is $S^0 \to \Delta^0$?
Yes, that’s actually already true in the (2,1)-category of groupoids. (Although I can’t remember whether I’ve ever heard someone use “epimorphism” to mean “monomorphism in the opposite category” for
2-categories or (∞,1)-categories.) Feel free to add.
I recently had to be told that effective epimorphisms in the $(\infty, 1)$-category of spaces need not be epimorphisms. Perhaps a red herring principle warning is in order.
I added a remark, taken from an answer of David Carchedi on MO, about effective epimorphisms in sheaf toposes.
Oh, sorry. I should be paying more attention, that’s embarrassing. Sorry for the confusion.
But anyway, thanks for the pointer!
I wish I wrote that reply, but I'm afraid that was Akhil (\ne Adeel!) ;)
Ah, thanks!
You give an excellent reply there. I have just added a comment now on where to find this in Lurie’s book with a pointer to the above entry.
The question seems to be on math.stackexchange still.
I have made at effective epimorphism in an (infinity,1)-category the characterization in an infinity-topos by “induces epi on connected components” more explicit.
This was in reaction to an MO question “What is the homotopy colimit of the Cech nerve as a bi-simplical set? “. However, when I was done compiling my reply, the question had been deleted, it seems.
I have split off effective epimorphism in an (infinity,1)-category from effective epimorphism and polished and expanded slightly.
I don’t know. By “plus construction”, I (approximately) mean a two step process where you (1) kill some stuff by introducing some “relations” $\{r_i\}_{i\in I}$, then (2) kill some more stuff by
introducing some “higher relations” $\{s_i\}_{i\in I}$, where “relations” and “higher relations” are indexed by the same $I$. Furthermore, each “higher relation” should correspond to some kind of
“redundancy” inherent in killing the “relations”.
For instance, given a space $X$ and a subgroup $P\subseteq \pi_1X$ generated by commutators $c_i=[x_i,y_i]$ of loops $x_i,y_i\in \Omega X$ representing elements of $P$, step (1) is: attach a 2-cell
$d_i$ along each $c_i$, obtaining a space $Y$, while step (2) is: attach a 3-cell $e_i$ along the 2-sphere in $Y$ whose southern hemisphere is $d_i$, and whose northern hemisphere is $[H_i,K_i]$,
built from choices of null-homotopies $H_i,K_i$ of the loops $x_i,y_i$ (which exist in $Y$ exactly because $P$ is generated by commutators).
The resulting map $X\to Z$ is an epimorphism.
Proof: The construction depends on the collection of choices $\alpha=\{(x_i,y_i,H_i,K_i)\}$ (assume fixed indexing set $I$), which themselves form a space $A$, and the plus-construction depends
“continuously” on $\alpha\in A$. If $x_i$ and $y_i$ are themselves null-homotopic, then you can connect $\alpha$ to $\alpha_0=\{(*,*,*,*)\}$ (all constant maps) by a path in $A$, and it’s clear that
the plus-construction built from $\alpha_0$ admits a deformation retraction, from which we conclude that $f$ is an equivalence when $[x_i],[y_i]$ are trivial in $\pi_1X$.
Next note that if $g\colon X\to X'$ is a map, then the pushout along $g$ of a plus construction $f\colon X\to Z$ built from an $\alpha$ is a map $g'\colon X'\to Z'$ which is itself a
plus-construction built from $g(\alpha)$. It is clear that the plus construction map $f$ kills the elements $[x_i],[y_i]\in \pi_1X$, so the pushout of $f$ along itself must be an equivalence.
I don’t know too many other examples of this type of thing.
Very cool! I bet this has a nice formalization using HITs. However I don’t quite follow this bit:
If $x_i$ and $y_i$ are themselves null-homotopic, then you can connect $\alpha$ to $\alpha_0=\{(*,*,*,*)\}$ (all constant maps) by a path in $A$.
I see that you can connect $\alpha$ to something of the form $\{(\ast,\ast,H_i',K_i')\}$, but the constant loop can be nullhomotopic in a nontrivial way, so how do you know that $H_i'$ and $K_i'$ are
also trivial?
Whoops. I don’t. The real argument is: if $x_i=*=y_i$, the map on the “northern hemisphere” factors through a map $[H_i,K_i]\colon S^2\to Y$, which is null homotopic because $\pi_2$ is abelian.
added reference to HTT 6.2.3.10
diff, v25, current
I wonder if comments 13-21 in this thread could be moved to a discussion thread for epimorphism in an (infinity,1)-category.
You can put a link.
Terminology and redirects: quotient morphism. (Used in Lurie’s Kerodon.)
diff, v29, current
Scientific notation is a way of representing very large or very small numbers, without needing to write lots of zeros. Consider the mass of the sun, which is approximately 1,988,000,000,000,000,000,000,000,000,000 kg. That's a very large number. How do scientists deal with numbers so large?
This applet shows how a decimal number is written in scientific notation.
• Click the "Generate a random number" button to explore different numbers.
• Check the box to see what each number looks like written in scientific notation.
1. Explore 10 random numbers and how the power of ten relates to the number in the tens position.
2. What types of numbers have positive or negative powers of ten?
3. Describe the relationship between the place value of the digits and the number in scientific notation.
4. Can you come up with a rule for taking a very large number in scientific notation? What about a very small number?
Going back to the example about the mass of the sun:
• The sun has a mass of approximately 1.988\times 10^{30} \text{ kg} , which is much easier to write than 1,988,000,000,000,000,000,000,000,000,000 kg.
• The mass of an atom of Uranium (one of the heaviest elements) is approximately 3.95\times 10^{-22} g. That is 0.000\,000\,000\,000\,000\,000\,000\,395 g.
In scientific notation, numbers are written in the form:
\displaystyle a \times 10^{n}
where a is a decimal number greater than or equal to 1 but less than 10, and n is a positive or negative integer.
• A negative exponent means the value is that many factors of ten smaller than a.
• A positive exponent means the value is that many factors of ten larger than a.
We can follow these steps in writing numbers in scientific notation.
1. Move the decimal point to the left or right so that it is right after the first non-zero digit (from 1 to 9).
Where's the decimal point in 2,680,000? Because it's a whole number, the decimal point is understood to be at the end of the number: 2,680,000.
The first non-zero digit is 2. If we move the decimal point 6 places to the left so that it sits just to the right of the 2, we get 2.68. We don't need the extra zeros, and 2.68 is between 1 and 10, as we wanted.
2. Multiply by 10 to the power of the number of places the decimal moved.
We moved 6 places to the left so we have 10^{6}.
Standard form Product form Scientific notation
2,680,000 2.68\times 1,000,000 2.68\times 10^{6}
Remember, we're not actually "moving" the decimal point. We're adjusting the place value. Then to balance that out, we have to multiply by the correct power of 10.
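The two steps above can be sketched in a few lines of Python; the helper name `to_scientific` is ours, purely for illustration:

```python
import math

def to_scientific(x):
    """Split x into (a, n) with x == a * 10**n and 1 <= a < 10."""
    n = math.floor(math.log10(abs(x)))  # how many places the decimal point moves
    a = x / 10**n                       # the leading number between 1 and 10
    return a, n

a, n = to_scientific(2_680_000)  # gives a = 2.68, n = 6, i.e. 2.68 x 10^6
```

The same function handles very small numbers: `to_scientific(3.95e-22)` returns a negative exponent, matching the uranium-atom example above.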
Automating the Design of 3 Element Monoband Quad Beams
by MD0MDI
Part 1 – A Wide-band Model
The exercise of automating the design of 2-element quads raised the question of whether a similar technique might be applied to 3-element quads. One answer is in this set of notes.
For all quads, from 1 to n elements, performance depends in large measure upon the diameter of the loop wire as measured in terms of wavelength. Indeed, performance varies with the common logarithm
of the wire diameter.
When we automated the design of 2-element quads, we chose as the primary parameter the spacing between elements such that it yielded the highest front-to-back ratio when the array was resonant. The
design equations for 3-element quads retained the same feature, using the same progression of spacings between the reflector and the driven element. This selected spacing not only yields the highest
front-to-back ratio, but as well it tends to yield the widest operating bandwidth. As we noted in the discussion of 2-element quads, quad array bandwidth is less a matter of the 2:1 SWR bandwidth and
more a matter of the >20 dB front-to-back ratio bandwidth.
The director was sized and spaced to yield a good gain with a resonant feedpoint impedance between 70 and 80 Ohms. In general, this procedure does not yield the very highest possible gain or the
shortest boom length possible. However, it does produce a very good gain (as judged in quad terms) with the widest possible operating bandwidth. In a general way, the required driver-to-director
spacing is nearly double that of the reflector-to-driver spacing. Figure 1 (above) illustrates the relationship.
It is certainly possible to emphasize one parameter over another and achieve a different design from the one used in this exercise. In Part 2, we shall examine a higher-gain model. However, the
compromise of gain and operating bandwidth in the version under study here yields a very workable 3-element quad design with boom lengths of about 0.4 wavelengths.
The procedures for developing the design algorithms are the same as for the 2-element quads. I optimized designs in the 10 meter band using wires between 0.0000316 wl and 0.01 wl. I then subjected
the resulting curves to regression analysis to produce a series of equations that can be placed into a modeling program with model-by-equation facilities or into a utility program for simple
calculation of dimensions and basic operating data. As I have noted in connection with simpler quad designs, regression equations do not have theoretic significance in and of themselves, but they do
yield outputs that model as resonant quad arrays for any wire size within the set limits and for any HF or VHF frequency. As with the 2-element equations, the gain figures tend to be higher than the
baseline at lower HF frequencies and lower than the baseline at VHF frequencies, since everything has been calibrated at 10 meters for copper wire elements.
The following GW Basic utility program requires only the entry of the wire size and the design frequency to set the calculations in motion. Since direct entry of AWG wire sizes is not included, the
following table makes a good refresher:
AWG Size Diameter (inches) Diameter (mm)
18 .0403 1.0236
16 .0508 1.2903
14 .0641 1.6281
12 .0808 2.0523
10 .1019 2.5883
8 .1285 3.2640
Besides the usual dimensional outputs, the program will also display the wire diameter as a function of a wavelength. The performance data includes the approximate gain at the design frequency, the
feedpoint impedance, the 2:1 SWR bandwidth as a percentage of the design frequency, the >20 dB front-to-back bandwidth as a percentage of the design frequency, and the rate of change of gain over a
span of 1% of the design frequency. Remember that the line with the “LOG” entry is, for GW Basic, a natural log and requires a correction factor to create a common log. If you translate the program
to another medium, you can drop the conversion factor if the medium recognizes common logs.
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
10 CLS:PRINT "Program to calculate the dimensions of a resonant square 3-element quad beam."
20 PRINT "All equations calibrated to NEC antenna modeling software for wire diameters"
30 PRINT " from 3.16E-5 to 1E-2 wavelengths within about 0.5% from 3.5 - 250 MHz."
40 PRINT "L. B. Cebik, W4RNL"
50 INPUT "Enter Desired Frequency in MHz:";F
60 PRINT "Select Units for Wire Diameter in 1. Inches, 2. Millimeters, 3. Wavelengths"
70 INPUT "Choose 1. or 2. or 3.";U
80 IF U>3 THEN 60
90 INPUT "Enter Wire Diameter in your Selected Units";WD
100 IF U=1 THEN WLI=11802.71/F:D=WD/WLI
110 IF U=2 THEN WLI=299792.5/F:D=WD/WLI
120 IF U=3 THEN D=WD
130 PRINT "Wire Diameter in Wavelengths:";D
140 L=.4342945*LOG(D*10^5):LL=L^2:LM=LL*.0128:LN=LM+1.0413:D1=.4342945*LOG(D)
150 IF D1<-4.5 THEN 160 ELSE 170
160 PRINT "Wire diameter less than 3E-5 wavelengths: results uncertain."
170 IF D1>-2 THEN 180 ELSE 190
180 PRINT "Wire diameter greater than 1E-2 wavelengths: results uncertain."
190 AD=.00064:BD=.01044148148#:CD=.06484444444#:DD=.1886626455#:ED=1.232080635#
200 DE=(AD*(D1^4))+(BD*(D1^3))+(CD*(D1^2))+(DD*D1)+ED
210 AR=.0009333333333#:BR=.01915555556#:CR=.13983333333#:DR=.4587492063#:ER=1.64042381#
220 RE=(AR*(D1^4))+(BR*(D1^3))+(CR*(D1^2))+(DR*D1)+ER
230 AI=-.0012#:BI=-.0209037037#:CI=-.13021111111#:DI=-.3498137566#:EI=.5941126984#
240 IR=(AI*(D1^4))+(BI*(D1^3))+(CI*(D1^2))+(DI*D1)+EI
250 AS=-.0033#:BS=-.03927777778#:CS=-.1724583333#:DS=-.3239603175#:ES=-.04951547619#
260 SP=(AS*(D1^4))+(BS*(D1^3))+(CS*(D1^2))+(DS*D1)+ES
270 AP=-.004866666667#:BP=-.06262962963#:CP=-.29347222222#:DP=-.6174457672#:EP=-.2289269841#
280 IP=(AP*(D1^4))+(BP*(D1^3))+(CP*(D1^2))+(DP*D1)+EP
290 AZ=-2.227066667#:BZ=-26.75247407#:CZ=-115.9142556#:DZ=-217.8183323#:EZ=-79.59203175#
300 ZR=(AZ*(D1^4))+(BZ*(D1^3))+(CZ*(D1^2))+(DZ*D1)+EZ
310 AG=-.07#:BG=-.7877777778#:CG=-3.350833333#:DG=-6.143888889#:EG=5.104166667#
320 GN=(AG*(D1^4))+(BG*(D1^3))+(CG*(D1^2))+(DG*D1)+EG
330 AW=-.05847333333#:BW=-.5028392593#:CW=-.4586494444#:DW=6.080227037#:EW=17.61091389#
340 SW=(AW*(D1^4))+(BW*(D1^3))+(CW*(D1^2))+(DW*D1)+EW
350 AF=.11695666667#:BF=1.717985556#:CF=9.6510925#:DF=25.23848992#:EF=27.78167988#
360 FB=(AF*(D1^4))+(BF*(D1^3))+(CF*(D1^2))+(DF*D1)+EF
370 AN=-.04666666667#:BN=-.5414814815#:CN=-2.302777778#:DN=-4.364074074#:EN=-3.092777778#
380 DG=(AN*(D1^4))+(BN*(D1^3))+(CN*(D1^2))+(DN*D1)+EN
390 WL=299.7925/F:PRINT "Wavelength in Meters =";WL;" ";
400 WF=983.5592/F:PRINT "Wavelength in Feet =";WF
410 PRINT "Quad Dimensions in Wavelengths, Feet, and Meters:"
420 PRINT "Driver Side =";(DE/4);" WL or";(DE/4)*WF;"Feet or";(DE/4)*WL;"Meters"
430 PRINT "Driver Circumference =";DE;" WL or";DE*WF;"Feet or";DE*WL;"Meters"
440 PRINT "Reflector Side =";(RE/4);" WL or";(RE/4)*WF;"Feet or";(RE/4)*WL;"Meters"
450 PRINT "Reflector Circumference =";RE;" WL or";RE*WF;"Feet or";RE*WL;"Meters"
460 PRINT "Reflector-Driver Space =";SP;" WL or";SP*WF;"Feet or";SP*WL;"Meters"
470 PRINT "Director Side =";(IR/4);" WL or";(IR/4)*WF;"Feet or";(IR/4)*WL;"Meters"
480 PRINT "Director Circumference =";IR;" WL or";IR*WF;"Feet or";IR*WL;"Meters"
490 PRINT "Director-Driver Space =";IP;" WL or";IP*WF;"Feet or";IP*WL;"Meters"
500 PRINT "Approx. Feedpoint Impedance =";ZR;"Ohms ";
510 PRINT "Free-Space Gain =";GN;"dBi"
520 PRINT "Approximate 2:1 VSWR Bandwidth =";SW;"% of Design Frequency"
530 PRINT "Approximate >20 dB F-B Ratio Bandwidth =";FB;"% of Design Frequency"
540 PRINT "Approximate Rate of Gain Change =";DG;"dB per 1% of Design Frequency"
550 INPUT "Another Value = 1, Stop = 2: ";P
560 IF P=1 THEN 10 ELSE 570
570 END
. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .
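For readers who prefer not to run GW Basic, the heart of the program is simply the evaluation of quartic regression polynomials in the common log of the wire diameter (in wavelengths). A minimal Python sketch of the driver-circumference calculation follows, with the coefficients transcribed from lines 190-200 of the listing; it is an illustration of the method, not a replacement for the full program:

```python
import math

def quartic(coeffs, x):
    """Evaluate a*x^4 + b*x^3 + c*x^2 + d*x + e."""
    a, b, c, d, e = coeffs
    return a*x**4 + b*x**3 + c*x**2 + d*x + e

# Driver-circumference coefficients from lines 190-200 of the BASIC listing
DRIVER = (0.00064, 0.01044148148, 0.06484444444, 0.1886626455, 1.232080635)

f = 14.175                  # design frequency in MHz (20-meter sample array)
d1 = math.log10(7.70e-5)    # common log of the wire diameter in wavelengths
de = quartic(DRIVER, d1)    # driver circumference in wavelengths
feet = de * 983.5592 / f    # about 70.06 feet, matching the sample output
```

The reflector, director, spacing, and performance figures all follow the same pattern with their own coefficient sets.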
Wire size and 3-Element Quad Performance
The effects of wire size on gain are as vivid for a 3-element quad as for a 2-element quad, as shown in Figure 2.
[Figure 2]
In this figure, wire size is listed in wavelengths, using values that translate into a linear progression of the logarithms of the wire sizes. There is well over a dB difference in the gain of arrays
using the thinnest wire size and arrays using the fattest wire size. Moreover, the increase in gain over the corresponding 2-element quad also increases with wire size. The thinnest wire size
3-element quad shows a 1.3 dB improvement in gain over a 2-element quad using the same wire, whereas the fattest wire 3-element quad shows a gain improvement of nearly 2 dB over its corresponding
2-element array.
[Figure 3]
Figure 3 shows the change of maximum front-to-back ratio with increasing wire size. Theoretically, the curve should be smooth and almost linear across the scale. Since the checkpoint models were hand
optimized, allowing the maximum front-to-back ratio to occur as little as 10-15 kHz from the design frequency yields the flat portion of the curve. However, in practice, this slightly less than
optimal design curve makes no practical difference, since constructing a quad so that its front-to-back ratio maximum is precisely at the design frequency is more hope than reality. Nevertheless, the
increase of both gain and front-to-back ratio with wire diameter demonstrates the importance that wire size has in effecting maximum mutual coupling between quad elements. Thin wire quads of the sort
we generally construct at HF with #14 or #12 wire simply are not capable of achieving all of the performance that a quad can provide.
[Figure 4]
The feedpoint impedances as a function of wire size appear in Figure 4. Here, the curve is very real and not a function of optimizing variance. With the thinnest wire, the gain peak and the
front-to-back ratio peak are very close together, yielding a feedpoint impedance below its peak value. As the wire size increases, the gain peak occurs well below the design frequency so that the
front-to-back maximum value dominates the production of the feedpoint impedance.
[Figure 5]
Figure 5 shows the circumference of each of the 3 elements in wavelengths. As with the 2-element quads, the reflector size increases more rapidly than the driver size. However, the required reflector
circumference is shorter in the 3-element quad than in the 2-element quad for any given wire size.
Interestingly, the director circumference does not follow the pattern for the other two elements. As the wire size increases, the required director size decreases. If we were to normalize the driver
circumference so that it graphs as a straight line across the page, the director line would move down at almost the same rate as the reflector line moves up.
[Figure 6]
The spacing graphic, in Figure 6, gives some precision to the earlier remark that the driver-to-director spacing is about twice the reflector-to-driver spacing. In fact, the spacing from the driver to the reflector increases with wire diameter, from about 0.14 to 0.17 wl across the span of wire sizes included in the exercise. In contrast, the required director-to-driver spacing decreases with increasing wire size, from about 0.32 wl for the smallest wire to about 0.25 wl for the fattest wire.
The full story of what happens as we change wire sizes becomes much more evident if we perform some frequency sweeps. So I designed quads using three wire sizes: 0.0131″ (near to #28 AWG), 0.131″
(near to #8 AWG), and 1.31″. The design frequency was 28.5 MHz, and the wire sizes correspond to 0.0000316, 0.000316, and 0.00316 wl diameters. The frequency sweep used 0.1 MHz intervals from 28 to
29 MHz.
[Figure 7]
In Figure 7, we have the gain curves across the first MHz of 10 meters for the 3 quads. The lowest curve for the thinnest wire shows the gain peak within 0.1 MHz of the design frequency, with a rapid
drop in gain at the low end of the band. For the middle-size wire, the gain peak is evident at 28.1 MHz, while for the fattest wire, the gain is at its peak but essentially flat over the first 0.2 MHz of the passband.
Equally evident to the gain advantage of the fattest wire is the very slow rate of gain decrease compared to the thinner wires. The thinner wires show a full half dB variance in gain across the
passband, while the range of gains is only 0.2 dB for the fattest wire.
[Figure 8]
The front-to-back curves in Figure 8 show several things. First, the slight displacement of the curve for the fattest wire downward in frequency by about 15 kHz corresponds to the flattened portion
of the front-to-back curve in Figure 3. Apart from that slight offset, the three curves are remarkably congruent with each other. The rates of decrease from the peak are similar for all three curves
and are parallel both above and below the design frequency. Finally, note the steeper rate of decrease below design frequency than above design frequency. These curves are fully consistent with those
for 2-element quads.
If we select some arbitrary dividing value, such as a 20 dB front-to-back ratio, it is clear that even the fattest wire version of the 3-element quad does not cover all of the 1 MHz passband of the
exercise. In this regard, the 3-element quads designed here have slightly narrower front-to-back ratio operating bandwidths than corresponding 2-element models.
[Figure 9]
A similar narrowing of the operating bandwidth applies to the 2:1 SWR dividing line commonly used to denote acceptable performance, as shown in Figure 9. Only the fattest wire model covers the entire passband. By judiciously lowering the resonant frequency of the middle-size wire model, it can be set to show under 2:1 SWR just about all the way across the passband. This fact results from the more rapid rise in SWR below the design frequency than above it. Since the resonant feedpoint impedances at the design frequency lie between 70 and 80 Ohms for all wire sizes, all SWR values in the curves are referenced to 75 Ohms.
Some Sample 3-Element Quad Arrays
To provide a sample of the program’s output, here are some dimensions and performance data for a few 3-element quads.
Wire Diameter: 0.0641″ or 7.70E-5 wl
Reflector Circumference: 73.09′
Driver Circumference: 70.06′
Director Circumference: 65.31′
Refl-Driver Spacing: 10.69′
Driver-Dir Spacing: 21.58′
Total Boom Length: 32.27′
Feedpoint Impedance: 79.5 Ohms
Free-Space Gain: 8.47 dBi
SWR Bandwidth: 3.10% or 0.439 MHz
>20 dB F-B Bandwidth: 1.18% or 0.167 MHz
Rate of Gain Change: 0.22 dB/1% of design frequency
1. 20 meters, #14 wire, design frequency: 14.175 MHz
Although the quad array modeled here has an acceptable SWR across all of 20 meters, the front-to-back ratio becomes a limiting factor. On a crowded band such as 20 meters, front-to-back ratio is very
often an important antenna design consideration. For most installations, therefore, the antenna would likely be designed for either the CW/digital end of the band or for the phone end of the band.
Wire Diameter: 0.0808″ or 1.95E-4 wl
Reflector Circumference: 36.64′
Driver Circumference: 34.95′
Director Circumference: 32.43′
Refl-Driver Spacing: 5.49′
Driver-Dir Spacing: 10.30′
Total Boom Length: 15.79′
Feedpoint Impedance: 77.2 Ohms
Free-Space Gain: 8.74 dBi
SWR Bandwidth: 3.34% or 0.952 MHz
>20 dB F-B Bandwidth: 1.41% or 0.402 MHz
Rate of Gain Change: 0.21 dB/1% of design frequency
2. 10 meters, #12 wire, design frequency: 28.5 MHz
Let’s compare this array with another for the same frequency.
Wire Diameter: 0.5″ or 1.21E-3 wl
Reflector Circumference: 37.42′
Driver Circumference: 35.22′
Director Circumference: 32.39′
Refl-Driver Spacing: 5.66′
Driver-Dir Spacing: 9.57′
Total Boom Length: 15.23′
Feedpoint Impedance: 72.3 Ohms
Free-Space Gain: 9.00 dBi
SWR Bandwidth: 4.42% or 1.20 MHz
>20 dB F-B Bandwidth: 2.11% or 0.601 MHz
Rate of Gain Change: 0.10 dB/1% of design frequency
3. 10 meters, 0.5″ wire, design frequency: 28.5 MHz
The 0.5″ wire quad shows all of the dimensional characteristics in comparison to the #12 AWG version that we have seen in the curves. As well, 0.5″ performance is slightly up, while the feedpoint
impedance is slightly down relative to the #12 wire model. Most significantly, the SWR and front-to-back operating bandwidths for the fat-wire model are 30% or more greater than those of the
thin-wire array. Of course, it is impractical to consider construction of a quad array for 10 meters that has half-inch diameter elements. However, we shall return to this problem before we close the
book on this exercise.
Wire Diameter: 0.25″ or 1.08E-3 wl
Reflector Circumference: 20.87′
Driver Circumference: 19.67′
Director Circumference: 18.10′
Refl-Driver Spacing: 3.16′
Driver-Dir Spacing: 5.37′
Total Boom Length: 8.53′
Feedpoint Impedance: 72.4 Ohms
Free-Space Gain: 8.99 dBi
SWR Bandwidth: 4.14% or 2.11 MHz
>20 dB F-B Bandwidth: 2.05% or 1.05 MHz
Rate of Gain Change: 0.11 dB/1% of design frequency
4. 6 meters, 0.25″ wire, design frequency: 51 MHz
The 6-meter version of the 3-element quad is similar in characteristics to the 0.5″ 10-meter array, since the wire diameters are similar relative to a wavelength. However, even a wire size of about
0.001 wl is insufficient to provide a full front-to-back operating bandwidth for the wide 6-meter band. Elements closer to 1″ in diameter would be necessary for this task.
Wire Diameter: 0.1″ or 1.24E-3 wl
Reflector Circumference: 7.31′
Driver Circumference: 6.88′
Director Circumference: 6.32′
Refl-Driver Spacing: 1.11′
Driver-Dir Spacing: 1.87′
Total Boom Length: 2.98′
Feedpoint Impedance: 72.2 Ohms
Free-Space Gain: 9.00 dBi
SWR Bandwidth: 4.24% or 6.19 MHz
>20 dB F-B Bandwidth: 2.19% or 3.20 MHz
Rate of Gain Change: 0.10 dB/1% of design frequency
5. 2 meters, 0.1″ wire, design frequency: 146 MHz
The same 4-MHz bandwidth, when moved from 6 to 2 meters, presents less of a problem for a 3-element quad composed of 0.001 wl wire. The >20 dB operating bandwidth now covers about 80% of the band.
The use of 0.25″ wire would easily permit the achievement of all benchmarks across the entire 2-meter band.
Hopefully, these examples will provide some guidance in developing a sense of the requisite wire size to achieve not only a desired gain level, but as well a desired operating bandwidth for 3-element
quad arrays of the present design.
Simulating Large-Diameter Elements
In a past 2-element quad exercise, we looked at the use of spaced #14 AWG wires to simulate fatter single wires. In that effort, we used 2 #14 AWG copper wires spaced 5″ apart and joined at the
corners. We explored two different configurations and found no significant difference between them. The resulting 2-element quad easily replicated the performance of a 0.5″ diameter quad, with a bit
to spare. The consequences of substituting 2 thinner wires for one fatter one were a slight enlargement of the reflector and a slight decrease in the driver circumference.
I repeated the exercise for the 3-element 0.5″ wire array noted among the examples. Since the number of variables increases with every new element, I restricted my efforts to planar loops,
illustrated in Figure 10. Note the structure of the planar loops, including the necessary corner wires. Optimizing the model required some further adjustments in director circumference and spacing,
since the 2-element array showed the dual-wire version to act like a wire slightly fatter than a half-inch in diameter.
[Figure 10]
Here is a comparison of the dimensions (in inches) between the two models. Note that the dimensions for the dual-wire model represent positions halfway between the two wires, so that the actual wire
positions are +/- 2.5″ relative to the coordinates that would emerge from the listed dimensions.
Dimension 0.5″ Single Wire 2x#14 AWG Wires
Reflector Circumference: 449.0″ 1.084 wl 449.3″ 1.085 wl
Driver Circumference: 422.7″ 1.021 wl 421.4″ 1.018 wl
Director Circumference: 388.6″ 0.939 wl 385.3″ 0.930 wl
Refl-Driver Spacing: 67.9″ 0.164 wl 67.9″ 0.164 wl
Driver-Dir Spacing: 114.8″ 0.277 wl 111.0″ 0.268 wl
Total Boom Length: 182.7″ 0.441 wl 178.9″ 0.432 wl
Although the differences are small, they are significant in arriving at the final operating characteristics of the array. While the dual-wire reflector is slightly larger than the single-wire
elements, the dual-wire driver and director are both slightly smaller. As well, the dual-wire director is closer to the driver, resulting in a shorter overall boom length for the array.
Performance for the 3-element dual-wire array parallels that of its 2-element cousin. The model shows slightly less gain at the design frequency, but whether this minuscule gain loss is real or an
artifact of the closely spaced wires in the model remains uncertain.
[Figure 11]
Figure 11 shows the gain curves for both versions of the array from 28 to 29 MHz. Immediately apparent is the fact that the dual-wire gain decreases more slowly than the single wire gain. Shallower
gain curves are generally characteristic of fatter wires with higher overall gain, a fact which contributes to the uncertainty over the slight gain deficit in the dual wire model at the design
frequency. However, the gain differences between versions of the antenna would make no operational difference at all.
[Figure 12]
A second piece of evidence that the 5″ spacing of the dual wire model acts similarly to a wire somewhat fatter than the 0.5″ model appears in Figure 12. The front-to-back curve of the dual-wire
version is slightly wider than that of the 0.5″ single-wire model. Again, the differences make no operational difference, but their existence is numerically significant in the process of equating
dual-wire arrangements with corresponding diameters of single wires.
Comparing the feedpoint impedances between the two versions of the array does not permit an easy chart. The dual-wire model uses a dual feed system of driver wires fed essentially in parallel. Hence,
the composite feedpoint impedance required hand calculation. However, the following table of values may be useful in exploring the feedpoint situation. Resistances and reactances are in Ohms.
Frequency 0.5″ Single Wire 2x#14 AWG Wires
(MHz) Resistance Reactance 75-Ohm SWR Resistance Reactance
28.0 53.10 -43.78 2.13 53.76 -38.55
28.1 56.78 -34.52 1.80 57.97 -30.44
28.2 60.57 -25.56 1.54 60.25 -22.55
28.3 64.44 -16.90 1.33 63.57 -14.86
28.4 68.33 – 8.51 1.16 66.89 – 7.34
28.5 72.19 – 0.38 1.04 70.18 – 1.01
28.6 75.99 7.54 1.11 73.43 7.20
28.7 79.70 15.28 1.23 76.62 14.32
28.8 83.31 22.89 1.36 79.73 21.36
28.9 86.79 30.41 1.49 82.76 28.36
29.0 90.16 37.88 1.63 85.73 35.37
Both antennas would easily cover the first MHz of 10 meters with a VSWR under 2:1, although the 0.5″ model might require a slight adjustment of the driver to bring its resonant point lower in the
band. (Such adjustments to the driver, if modest, have no significant effects on the other operating characteristics of the array.)
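The 75-Ohm SWR column in the table above can be reproduced from the resistance and reactance values with the standard reflection-coefficient formula; a quick Python check:

```python
def vswr(z, z0=75.0):
    """Voltage SWR of a load impedance z against reference impedance z0."""
    gamma = abs((z - z0) / (z + z0))  # reflection-coefficient magnitude
    return (1 + gamma) / (1 - gamma)

swr_28_0 = vswr(complex(53.10, -43.78))  # 28.0 MHz row: about 2.13
swr_28_5 = vswr(complex(72.19, -0.38))   # 28.5 MHz row: about 1.04
```

The same function, applied to the dual-wire column, confirms the slightly flatter SWR behavior described in the text.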
Besides looking at raw feedpoint impedance values, it is often useful to examine the swing of both resistance and reactance across the passband in question. The dual-wire array changes resistance
nearly 14% less than the single-wire model, while the dual-wire reactance changes nearly 10% less. Both numbers are clear indications that the dual-wire system with its 5″ spacing represents a single
wire that is larger than the 0.5″ diameter used for comparison.
The bottom line on the exercise is that a set of dual-wire loops for a quad array can effectively improve 3-element quad performance relative to the customary single #14 AWG quad structure. Even if
one discounts the gain advantage of the dual-wire array as operationally marginal, the improvement to both the SWR and front-to-back operating bandwidths is undeniably significant to all except those
operators who use only small portions of the wider amateur bands.
The process of converting one of the automated designs to a dual-wire version does require hand-optimization at present. However, the automated designs that emerge from the utility program shown in
this article provide some useful starting points for developing realistic 3-element monoband quad arrays that live up to their theoretical potential. This wide-band design focuses on one potential;
next month’s high-gain design focuses on another.
Originally posted on the AntennaX Online Magazine by L. B. Cebik, W4RNL
Last Updated : 21st May 2024
Golden Ratio – Explanation and Examples
Two quantities $a$ and $b$ with $a > b$ are said to be in golden ratio if $\dfrac{ a + b}{a} = \dfrac{a}{b}$
The ratio $\frac{a}{b}$ is also denoted by the Greek letter $\Phi$, and we can show that it is equal to $\frac{1 + \sqrt{5}}{2} \approx 1.618$. Note that the golden ratio is an irrational number, i.e., the digits after the decimal point continue forever without any repeating pattern, and we use $1.618$ as an approximation only. Some other names for the golden ratio are golden mean, golden section, and divine proportion.
What is the golden ratio:
The golden ratio can easily be understood using the example of a stick that we break into two unequal parts $a$ and $b$, where $a>b$, as shown in the figure below.
Now there are many ways in which we can break the stick into two parts; however, if we break it in a particular manner, i.e., the ratio of the long part ($a$) and the short part ($b$) is also equal
to the ratio of the total length ($a + b$) and the long part ($a$), then $a$ and $b$ are said to be in the golden ratio. The figure below shows an example of when the two parts of a stick are in the
golden ratio and when they are not.
Calculating the golden ratio:
We stated above that the golden ratio is exactly equal to $\frac{1 + \sqrt{5}}{2}$. Where does this number come from? We will describe two methods to find the value $\Phi$. First, we start with the
definition that $a$ and $b$ are in golden ratio if
$\frac{a}{b} = \frac{a + b}{a} = 1 + \frac{b}{a}$
Let $\Phi = \frac{a}{b}$ then $\frac{b}{a} = \frac{1}{\Phi}$, so the above equation becomes
$\Phi = 1 + \frac{1}{\Phi}$.
Method-1: The recursive method
We assume any value for the $\Phi$, lets say we assume $\Phi=1.2$. Now, we put this value in the above formula, i.e., $\Phi = 1 + \frac{1}{\Phi}$ and get a new value of $\Phi$ as follows:
$\Phi = 1 + \frac{1}{1.2} = 1.8333$.
Now, we put this new value again in the formula for the golden ratio to get another value, i.e.,
$\Phi = 1 + \frac{1}{1.8333} = 1.54545$.
If we keep on repeating this process, we get closer and closer to the actual value of $\Phi$, as shown in the table below:
Value 1 + 1/Value
1.2 1.8333
1.8333 1.5454
1.5454 1.647
1.647 1.607
1.607 1.622
1.622 1.616
1.616 1.618
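The table's fixed-point iteration is easy to run by machine; a short Python sketch:

```python
phi = 1.2                # any positive starting guess will do
for _ in range(40):
    phi = 1 + 1/phi      # the recursive rule from the table
# phi has now converged to (1 + sqrt(5))/2, about 1.6180339887
```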
Method-2: The quadratic formula
Using the fact that $\Phi = 1 + \frac{1}{\Phi}$ and multiplying by $\Phi$ on both sides, we get a quadratic equation.
$\Phi^2 = \Phi + 1$.
This can also be rearranged as
$\Phi^2 - \Phi - 1 = 0$.
By using the quadratic formula for the equation $\alpha x^2 + \beta x + \gamma = 0$, and noting that $x=\Phi$, $\alpha=1$, $\beta=-1$ and $\gamma=-1$, we get
$\Phi = \frac{-\beta \pm \sqrt{\beta^2 - 4\alpha\gamma}}{2\alpha} = \frac{1 \pm \sqrt{1 + 4}}{2} = \frac{1 \pm \sqrt{5}}{2}$.
The quadratic equation always has two solutions; in this case, one solution, $\frac{1 + \sqrt{5}}{2}$, is positive and the other, $\frac{1 - \sqrt{5}}{2}$, is negative. Since we
assume $\Phi$ to be a ratio of two positive quantities, so the value of $\Phi$ is equal to $\frac{1 + \sqrt{5}}{2}$, which is approximately equal to 1.618.
Golden ratio definition:
Using the above discussion, we can define the golden ratio simply as:
The golden ratio $\Phi$ is the solution to the equation $\Phi^2 = 1 + \Phi$.
Golden ratio examples:
There are many interesting mathematical and natural phenomenon where we can observe the golden ratio. We describe some of these below
The golden ratio and the Fibonacci numbers
The Fibonacci numbers are a famous concept in number theory. The first Fibonacci number is 0, and the second is 1. After that, each new Fibonacci number is created by adding the previous two numbers.
For example, we can write the third Fibonacci number by adding the first and the second Fibonacci number, i.e., 0 + 1 = 1. Likewise, we can write the fourth Fibonacci number by adding the second and
third Fibonacci numbers, i.e., 1+1 = 2, etc. The sequence of Fibonacci numbers is called a Fibonacci sequence and is shown below:
$0, \,\, 1, \,\, 1, \,\, 2,\,\, 3,\,\, 5,\,\, 8,\,\, 13,\,\, 21,\,\, 34, \cdots$
If we divide each Fibonacci number by its predecessor, the results approach closer and closer to the golden ratio, as shown in the table below:
Ratio Value
$\frac{1}{1}$ $1$
$\frac{2}{1}$ $2$
$\frac{3}{2}$ $1.5$
$\frac{5}{3}$ $1.66$
$\frac{8}{5}$ $1.6$
$\frac{13}{8}$ $1.625$
$\frac{21}{13}$ $1.615$
$\frac{34}{21}$ $1.619$
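The convergence shown in the table can be reproduced with a few lines of Python:

```python
# Build the Fibonacci sequence 0, 1, 1, 2, 3, 5, 8, ...
fib = [0, 1]
for _ in range(30):
    fib.append(fib[-1] + fib[-2])

# Ratios of consecutive Fibonacci numbers approach the golden ratio
ratio = fib[-1] / fib[-2]  # very close to 1.6180339887...
```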
Golden ratio in Geometry
Pentagon and pentagram
The golden ratio makes numerous appearances in a regular pentagon and its associated pentagram. We draw a regular pentagon in the figure below.
If we connect the vertices of the pentagon, we get a star-shaped geometrical figure inside, which is called a pentagram, shown below
Many lines obey the golden ratio in the above figure. For example,
$\frac{DE}{EF}$ is in golden ratio
$\frac{EF}{FG}$ is in golden ratio
$\frac{EG}{EF}$ is in golden ratio
$\frac{BE}{AE}$ is in golden ratio,
$\frac{CF}{GF}$ is in golden ratio,
to name a few.
The golden spiral
Let us take a rectangle with one side equal to 1 and the other side equal to $\Phi$. The ratio of the large side to the small side is equal to $\frac{\Phi}{1}$. We show the rectangle in the figure below.
Now let’s say we divide the rectangle into a square of all sides equal to 1 and a smaller rectangle with one side equal to 1 and the other equal to $\Phi-1$. Now the ratio of the large side to the
smaller one is $\frac{1}{\Phi-1}$. The new rectangle is drawn in blue in the figure below
From the definition of the golden ratio, we note that
$\Phi^2 -\Phi -1 = 0$, we can rewrite it as
$\Phi(\Phi -1) = 1$, or
$\frac{\Phi}{1} = \frac{1}{\Phi -1}$
Hence, the new rectangle in blue has the same ratio of the large side to the small side as the original one. These rectangles are called golden rectangles. If we keep on repeating this process, we
get smaller and smaller golden rectangles, as shown below.
If we connect the points that divide the rectangles into squares, we get a spiral called the golden spiral, as shown below.
The Kepler triangle
The famous astronomer Johannes Kepler was fascinated by both the Pythagoras theorem and the golden ratio, so he decided to combine both in the form of Kepler’s triangle. Note that the equation for
the golden ratio is
$\Phi^2 = \Phi + 1$.
It is similar in format to the Pythagoras formula for the right-angled triangle, i.e.,
$\textrm{Hypotenuse}^2 = \textrm{Base}^2 + \textrm{Perpendicular}^2$,
If we draw a triangle with hypotenuse equal to $\Phi$, base equal to $\sqrt{\Phi}$, and perpendicular equal to 1, it will be a right-angled triangle. Such a triangle is called the Kepler triangle, and we show it below:
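Since $\Phi^2 = \Phi + 1$, the sides $1$, $\sqrt{\Phi}$, and $\Phi$ satisfy the Pythagorean relation exactly; a quick Python check:

```python
phi = (1 + 5 ** 0.5) / 2
base, perpendicular, hypotenuse = phi ** 0.5, 1.0, phi

# Pythagoras: base^2 + perpendicular^2 should equal hypotenuse^2.
lhs = base ** 2 + perpendicular ** 2
rhs = hypotenuse ** 2
print(abs(lhs - rhs))  # essentially 0, up to floating-point error
```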
Golden ratio in Nature
There are many natural phenomena where the golden ratio appears rather unexpectedly. Most readily observable is the spiraling structure and Fibonacci sequence found in various trees and flowers. For
instance, in many cases, the leaves on the stem of a plant grow in a spiraling, helical pattern, and if we count the number of turns and number of leaves, we usually get a Fibonacci number. We can
see this pattern in elm, cherry, almond, etc. However, we must remember that many plants and flowers do not follow this pattern. Hence, any claim that the golden ratio is some fundamental building
block of nature is not exactly valid.
It is also claimed that the ideal or perfect human face follows the golden ratio. But, again, this is highly subjective, and there is no uniform consensus on what constitutes an ideal human face.
Also, all types of ratios can be found in any given human face.
In the human body, the ratio of the height of the naval to the total height is also close to the golden ratio. However, again we must remember that many ratios between 1 and 2 can be found in the
human body, and if we enumerate them all, some are bound to be close to the golden ratio while others would be quite off.
Finally, the spiraling structure of the arms of the galaxy and the nautilus shell is also quoted as examples of the golden ratio in nature. These structures are indeed similar to the golden spiral
mentioned above; however, they do not strictly follow the mathematics of the golden spiral.
How much of the golden ratio is actually present in nature and how much we force onto nature is subjective and controversial. We leave this matter to the personal preference of the reader.
Golden ratio in architecture and design
Many people believe that the golden ratio is aesthetically pleasing, and artistic designs should follow the golden ratio. It is also argued that the golden ratio has appeared many times over the
centuries in the design of famous buildings and art masterpieces.
For example, we can find the golden ratio many times in the famous Parthenon columns. Similarly, it is argued that the pyramids of Giza also contain the golden ratio as the basis of their design. Some other examples are the Taj Mahal and Notre Dame. However, it should be remembered that we cannot achieve the perfect golden ratio, as it is an irrational number. Since we are good at finding patterns, it may be the case that we are forcing the golden ratio onto these architectures, and the original designers did not intend it.
However, some modern architectures, such as the United Nations secretariat buildings, have actually been designed using a system based on golden ratios.
Similarly, it is thought that Leonardo da Vinci relied heavily on the golden ratio in works such as the Mona Lisa and the Vitruvian Man. Whether the golden ratio is indeed aesthetic and should be included in the design of architecture and art is a subjective matter, and we leave it to the artistic sense of the reader.
If you are indeed interested in using the golden ratio in your own works, a simple tip is to choose font sizes, such as the heading font and the body text, so that they follow the golden ratio. Or divide your canvas or screen for any painting, picture, or document so that the golden ratio is maintained.
Once you have used the golden ratio in your work, you will be in a better position to decide the aesthetic value of the golden ratio.
Golden ratio in History:
We have discussed the relation of the Fibonacci sequence and the golden ratio earlier. We can find the Fibonacci sequence in the works of Indian mathematicians as old as the second or third century
BC. It was later taken up by Arab mathematicians such as Abu Kamil. From the Arabs, it was transmitted to Leonardo Fibonacci, whose famous book Liber abaci introduced it to the western world.
We have already mentioned some ancient structures such as the pyramids of Giza and the Parthenon that are believed to have applied the golden ratio in their designs. We also find mentions of the
golden ratio in the works of Plato. Elements is an ancient and famous book on geometry by the Greek mathematician Euclid. We find some of the first mentions of the golden ratio as “extreme and mean
ratio” in Elements.
The golden ratio gained more popularity during the Renaissance. Luca Pacioli, in the year 1509, published a book on the golden ratio called De Divina Proportione (The Divine Proportion), with illustrations by Leonardo da Vinci. Renaissance artists used the golden ratio in their works owing to its aesthetic appeal.
The famous astronomer Johannes Kepler also discusses the golden ratio in his writings, and we have also described the Kepler triangle above.
The term “Golden ratio” is believed to be coined by Martin Ohm in 1815 in his book “The Pure Elementary Mathematics.”
The Greek letter Phi (i.e., $\Phi$), which we have also used in this article to denote the golden ratio, was first used for this purpose in 1914 by the American mathematician Mark Barr. Note that the Greek $\Phi$ is equivalent to the letter "F," the first letter of Fibonacci.
More recently, Le Corbusier, the lead architect of the UN Secretariat, created a design system based on the golden ratio for the UN Secretariat building. And the fiction writer Dan Brown popularized the myths and legends around the golden ratio in his bestseller "The Da Vinci Code."
Practice Questions
1. The 21st Fibonacci number is $6765$, and the 22nd is $10946$. What is the 23rd Fibonacci number?
2. The 21st Fibonacci number is $6765$, and the 22nd is $10946$. Which of the following is an estimate of the golden ratio using the 21st and 22nd Fibonacci numbers?
3. True or False: The Fibonacci sequence starts with 0 and 1. Suppose that we make our own sequence by starting with any two numbers and using the rule that the following number is the addition of
the two previous numbers. It is still possible for the ratio of consecutive numbers of your own defined sequence also to approach the golden ratio.
4. Which of the segments shown below are in golden ratio?
5. Which of the rectangles given below is the golden rectangle?
Planar Graph
• A graph is a collection of vertices connected to each other through a set of edges.
• The study of graphs is known as Graph Theory.
│Formal Definition │
│ │
│Formally, │
│ │
│A graph is defined as an ordered pair of a set of vertices and a set of edges. │
│ │
│G = (V, E) │
│ │
│Here, V is the set of vertices and E is the set of edges connecting the vertices.│
In this graph,
V = { A , B , C , D , E }
E = { AB , AC , BD , CD , DE }
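In code, a graph like this one is commonly stored as an adjacency list; here is a sketch in Python using a dict that maps each vertex to its neighbors:

```python
# Adjacency-list representation of G = (V, E) with
# V = {A, B, C, D, E} and E = {AB, AC, BD, CD, DE}.
graph = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

vertices = set(graph)
# Each undirected edge appears twice in the adjacency list,
# so collect edges as frozensets to deduplicate them.
edges = {frozenset((u, v)) for u, nbrs in graph.items() for v in nbrs}
print(len(vertices), len(edges))  # 5 5
```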
Types of Graphs-
Various important types of graphs in graph theory are-
1. Null Graph
2. Trivial Graph
3. Non-directed Graph
4. Directed Graph
5. Connected Graph
6. Disconnected Graph
7. Regular Graph
8. Complete Graph
9. Cycle Graph
10. Cyclic Graph
11. Acyclic Graph
12. Finite Graph
13. Infinite Graph
14. Bipartite Graph
15. Planar Graph
16. Simple Graph
17. Multi Graph
18. Pseudo Graph
19. Euler Graph
20. Hamiltonian Graph
1. Null Graph-
• A graph whose edge set is empty is called as a null graph.
• In other words, a null graph does not contain any edges in it.
• This graph consists only of the vertices and there are no edges in it.
• Since the edge set is empty, therefore it is a null graph.
2. Trivial Graph-
• A graph having only one vertex in it is called as a trivial graph.
• It is the smallest possible graph.
• This graph consists of only one vertex and there are no edges in it.
• Since only one vertex is present, therefore it is a trivial graph.
3. Non-Directed Graph-
• A graph in which all the edges are undirected is called as a non-directed graph.
• In other words, edges of an undirected graph do not contain any direction.
• This graph consists of four vertices and four undirected edges.
• Since all the edges are undirected, therefore it is a non-directed graph.
4. Directed Graph-
• A graph in which all the edges are directed is called as a directed graph.
• In other words, all the edges of a directed graph contain some direction.
• Directed graphs are also called as digraphs.
• This graph consists of four vertices and four directed edges.
• Since all the edges are directed, therefore it is a directed graph.
5. Connected Graph-
• A graph in which we can reach any vertex from any other vertex is called a connected graph.
• In a connected graph, at least one path exists between every pair of vertices.
• In this graph, we can reach any vertex from any other vertex.
• There exists at least one path between every pair of vertices.
• Therefore, it is a connected graph.
6. Disconnected Graph-
• A graph in which there does not exist any path between at least one pair of vertices is called as a disconnected graph.
• This graph consists of two independent components which are disconnected.
• It is not possible to visit from the vertices of one component to the vertices of other component.
• Therefore, it is a disconnected graph.
7. Regular Graph-
• A graph in which degree of all the vertices is same is called as a regular graph.
• If all the vertices in a graph are of degree ‘k’, then it is called as a “k-regular graph“.
In these graphs,
• All the vertices have degree-2.
• Therefore, they are 2-Regular graphs.
8. Complete Graph-
• A graph in which exactly one edge is present between every pair of vertices is called as a complete graph.
• A complete graph of 'n' vertices contains exactly nC2 = n(n-1)/2 edges.
• A complete graph of 'n' vertices is represented as Kn.
In these graphs,
• Each vertex is connected with all the remaining vertices through exactly one edge.
• Therefore, they are complete graphs.
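The edge count of a complete graph follows directly from choosing 2 vertices out of n; a one-function Python sketch:

```python
def complete_graph_edges(n):
    # K_n has nC2 = n * (n - 1) / 2 edges: one edge per pair of vertices.
    return n * (n - 1) // 2

print(complete_graph_edges(4))  # K4 has 6 edges
print(complete_graph_edges(5))  # K5 has 10 edges
```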
9. Cycle Graph-
• A simple graph of ‘n’ vertices (n>=3) and n edges forming a cycle of length ‘n’ is called as a cycle graph.
• In a cycle graph, all the vertices are of degree 2.
In these graphs,
• Each vertex is having degree 2.
• Therefore, they are cycle graphs.
10. Cyclic Graph-
• A graph containing at least one cycle in it is called as a cyclic graph.
• This graph contains two cycles in it.
• Therefore, it is a cyclic graph.
11. Acyclic Graph-
• A graph not containing any cycle in it is called as an acyclic graph.
• This graph does not contain any cycle in it.
• Therefore, it is an acyclic graph.
12. Finite Graph-
• A graph consisting of finite number of vertices and edges is called as a finite graph.
• This graph consists of finite number of vertices and edges.
• Therefore, it is a finite graph.
13. Infinite Graph-
• A graph consisting of infinite number of vertices and edges is called as an infinite graph.
• This graph consists of infinite number of vertices and edges.
• Therefore, it is an infinite graph.
14. Bipartite Graph-
A bipartite graph is a graph where-
• Vertices can be divided into two sets X and Y.
• The vertices of set X only join with the vertices of set Y.
• None of the vertices belonging to the same set join each other.
Read More- Bipartite Graphs
15. Planar Graph-
• A planar graph is a graph that we can draw in a plane such that no two edges of it cross each other.
• This graph can be drawn in a plane without crossing any edges.
• Therefore, it is a planar graph.
Read More- Planar Graphs
16. Simple Graph-
• A graph having no self loops and no parallel edges in it is called as a simple graph.
• This graph consists of three vertices and three edges.
• There are neither self loops nor parallel edges.
• Therefore, it is a simple graph.
17. Multi Graph-
• A graph having no self loops but having parallel edge(s) in it is called as a multi graph.
• This graph consists of three vertices and four edges out of which one edge is a parallel edge.
• There are no self loops but a parallel edge is present.
• Therefore, it is a multi graph.
18. Pseudo Graph-
• A graph having no parallel edges but having self loop(s) in it is called as a pseudo graph.
• This graph consists of three vertices and four edges out of which one edge is a self loop.
• There are no parallel edges but a self loop is present.
• Therefore, it is a pseudo graph.
19. Euler Graph-
• An Euler graph is a connected graph in which all the vertices have even degree.
• This graph is a connected graph.
• The degree of all the vertices is even.
• Therefore, it is an Euler graph.
Read More- Euler Graphs
20. Hamiltonian Graph-
• If there exists a closed walk in a connected graph that visits every vertex of the graph exactly once (except the starting vertex) without repeating any edges, then such a graph is called a Hamiltonian graph.
• This graph contains a closed walk ABCDEFG that visits all the vertices (except starting vertex) exactly once.
• All the vertices are visited without repeating the edges.
• Therefore, it is a Hamiltonian Graph.
Read More- Hamiltonian Graphs
Important Points-
• Edge set of a graph can be empty but vertex set of a graph can not be empty.
• Every polygon is a 2-Regular Graph.
• Every complete graph of ‘n’ vertices is a (n-1)-regular graph.
• Every regular graph need not be a complete graph.
The following table is useful to remember different types of graphs-
│ │Self-Loop(s)│Parallel Edge(s) │
│Graph │Yes │Yes │
│Simple Graph│No │No │
│Multi Graph │No │Yes │
│Pseudo Graph│Yes │No │
Applications of Graph Theory-
Graph theory has its applications in diverse fields of engineering-
1. Electrical Engineering-
• The concepts of graph theory are used extensively in designing circuit connections.
• The types or organization of connections are named as topologies.
• Some examples for topologies are star, bridge, series and parallel topologies.
2. Computer Science-
Graph theory is used for the study of algorithms such as-
3. Computer Network-
The relationships among interconnected computers in the network follows the principles of graph theory.
4. Science-
Following structures are represented by graphs-
• Molecular structure of a substance
• Chemical structure of a substance
• DNA structure of an organism etc
5. Linguistics-
The parsing tree of a language and grammar of a language uses graphs.
6. Other Applications-
• Routes between the cities are represented using graphs.
• Hierarchical ordered information such as family tree are represented using special types of graphs called trees.
Next Article- Planar Graph
Get more notes and other study material of Graph Theory.
Watch video lectures by visiting our YouTube channel LearnVidFun.
Planar Graph in Graph Theory | Planar Graph Example
Types of Graphs-
Before you go through this article, make sure that you have gone through the previous article on various Types of Graphs in Graph Theory.
We have discussed-
• A graph is a collection of vertices connected to each other through a set of edges.
• The study of graphs is known as Graph Theory.
In this article, we will discuss about Planar Graphs.
Planar Graph-
A planar graph may be defined as-
│In graph theory, │
│ │
│Planar graph is a graph that can be drawn in a plane such that none of its edges cross each other.│
Planar Graph Example-
The following graph is an example of a planar graph-
• In this graph, no two edges cross each other.
• Therefore, it is a planar graph.
Regions of Plane-
The planar representation of the graph splits the plane into connected areas called as Regions of the plane.
Each region has some degree associated with it given as-
• Degree of Interior region = Number of edges enclosing that region
• Degree of Exterior region = Number of edges exposed to that region
Consider the following planar graph-
Here, this planar graph splits the plane into 4 regions- R1, R2, R3 and R4 where-
• Degree (R1) = 3
• Degree (R2) = 3
• Degree (R3) = 3
• Degree (R4) = 5
Planar Graph Chromatic Number-
• The chromatic number of any planar graph is always less than or equal to 4.
• Thus, any planar graph requires at most 4 colors for coloring its vertices.
Planar Graph Properties-
In any planar graph, Sum of degrees of all the vertices = 2 x Total number of edges in the graph
In any planar graph, Sum of degrees of all the regions = 2 x Total number of edges in the graph
│Special Cases │
│ │
│ │
│ │
│Case-01: │
│ │
│ │
│ │
│In any planar graph, if degree of each region is K, then- │
│ │
│ │
│ │
│┌─────────────────┐ │
││K x |R| = 2 x |E|│ │
│└─────────────────┘ │
│ │
│ │
│ │
│Case-02: │
│ │
│ │
│ │
│In any planar graph, if degree of each region is at least K (>=K), then-│
│ │
│ │
│ │
│┌──────────────────┐ │
││K x |R| <= 2 x |E|│ │
│└──────────────────┘ │
│ │
│ │
│ │
│Case-03: │
│ │
│ │
│ │
│In any planar graph, if degree of each region is at most K (<=K), then- │
│ │
│ │
│ │
│┌──────────────────┐ │
││K x |R| >= 2 x |E|│ │
│└──────────────────┘ │
│ │
│ │
If G is a connected planar simple graph with ‘e’ edges, ‘v’ vertices and ‘r’ number of regions in the planar representation of G, then-
│r = e – v + 2│
This is known as Euler’s Formula.
It remains same in all the planar representations of the graph.
If G is a planar graph with k components, then-
│r = e – v + (k + 1)│
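Both forms of Euler's formula can be wrapped in one small Python helper:

```python
def planar_regions(e, v, k=1):
    """Regions in a planar representation of a graph with e edges,
    v vertices, and k connected components: r = e - v + (k + 1).
    For a connected graph (k = 1) this reduces to Euler's r = e - v + 2."""
    return e - v + (k + 1)

print(planar_regions(60, 25))      # 37 (connected: 25 vertices, 60 edges)
print(planar_regions(9, 10, k=3))  # 3  (3 components: 10 vertices, 9 edges)
```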
Also Read- Bipartite Graph
Let G be a connected planar simple graph with 25 vertices and 60 edges. Find the number of regions in G.
• Number of vertices (v) = 25
• Number of edges (e) = 60
By Euler’s formula, we know r = e – v + 2.
Substituting the values, we get-
Number of regions (r)
= 60 – 25 + 2
= 37
Thus, Total number of regions in G = 37.
Let G be a planar graph with 10 vertices, 3 components and 9 edges. Find the number of regions in G.
• Number of vertices (v) = 10
• Number of edges (e) = 9
• Number of components (k) = 3
By Euler’s formula, we know r = e – v + (k+1).
Substituting the values, we get-
Number of regions (r)
= 9 – 10 + (3+1)
= -1 + 4
= 3
Thus, Total number of regions in G = 3.
Let G be a connected planar simple graph with 20 vertices and degree of each vertex is 3. Find the number of regions in G.
• Number of vertices (v) = 20
• Degree of each vertex (d) = 3
Calculating Total Number Of Edges (e)-
By sum of degrees of vertices theorem, we have-
Sum of degrees of all the vertices = 2 x Total number of edges
Number of vertices x Degree of each vertex = 2 x Total number of edges
20 x 3 = 2 x e
∴ e = 30
Thus, Total number of edges in G = 30.
Calculating Total Number Of Regions (r)-
By Euler’s formula, we know r = e – v + 2.
Substituting the values, we get-
Number of regions (r)
= 30 – 20 + 2
= 12
Thus, Total number of regions in G = 12.
Let G be a connected planar simple graph with 35 regions, degree of each region is 6. Find the number of vertices in G.
• Number of regions (r) = 35
• Degree of each region (d) = 6
Calculating Total Number Of Edges (e)-
By sum of degrees of regions theorem, we have-
Sum of degrees of all the regions = 2 x Total number of edges
Number of regions x Degree of each region = 2 x Total number of edges
35 x 6 = 2 x e
∴ e = 105
Thus, Total number of edges in G = 105.
Calculating Total Number Of Vertices (v)-
By Euler’s formula, we know r = e – v + 2.
Substituting the values, we get-
35 = 105 – v + 2
∴ v = 72
Thus, Total number of vertices in G = 72.
Let G be a connected planar graph with 12 vertices, 30 edges and degree of each region is k. Find the value of k.
• Number of vertices (v) = 12
• Number of edges (e) = 30
• Degree of each region (d) = k
Calculating Total Number Of Regions (r)-
By Euler’s formula, we know r = e – v + 2.
Substituting the values, we get-
Number of regions (r)
= 30 – 12 + 2
= 20
Thus, Total number of regions in G = 20.
Calculating Value Of k-
By sum of degrees of regions theorem, we have-
Sum of degrees of all the regions = 2 x Total number of edges
Number of regions x Degree of each region = 2 x Total number of edges
20 x k = 2 x 30
∴ k = 3
Thus, Degree of each region in G = 3.
What is the maximum number of regions possible in a simple planar graph with 10 edges?
In a simple planar graph, degree of each region is >= 3.
So, we have 3 x |R| <= 2 x |E|.
Substituting the value |E| = 10, we get-
3 x |R| <= 2 x 10
|R| <= 6.67
|R| <= 6
Thus, Maximum number of regions in G = 6.
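The bound used above can be sketched in Python: in a simple planar graph every region has degree at least 3, so 3 x |R| <= 2 x |E|.

```python
def max_regions(num_edges):
    # 3 * |R| <= 2 * |E|  =>  |R| <= floor(2 * |E| / 3)
    return (2 * num_edges) // 3

print(max_regions(10))  # 6, matching the worked answer above
```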
What is the minimum number of edges necessary in a simple planar graph with 15 regions?
In a simple planar graph, degree of each region is >= 3.
So, we have 3 x |R| <= 2 x |E|.
Substituting the value |R| = 15, we get-
3 x 15 <= 2 x |E|
|E| >= 22.5
|E| >= 23
Thus, Minimum number of edges required in G = 23.
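The same inequality, solved for |E| instead of |R|, gives the minimum edge count:

```python
import math

def min_edges(num_regions):
    # 3 * |R| <= 2 * |E|  =>  |E| >= ceil(3 * |R| / 2)
    return math.ceil(3 * num_regions / 2)

print(min_edges(15))  # 23, matching the worked answer above
```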
Next Article- Euler Graph
The Stacks project
Remark 99.15.5. Let $B$ be an algebraic space over $\mathop{\mathrm{Spec}}(\mathbf{Z})$. Let $B\text{-}\mathcal{C}\! \mathit{urves}$ be the category consisting of pairs $(X \to S, h : S \to B)$ where
$X \to S$ is an object of $\mathcal{C}\! \mathit{urves}$ and $h : S \to B$ is a morphism. A morphism $(X' \to S', h') \to (X \to S, h)$ in $B\text{-}\mathcal{C}\! \mathit{urves}$ is a morphism $(f,
g)$ in $\mathcal{C}\! \mathit{urves}$ such that $h \circ g = h'$. In this situation the diagram
\[ \xymatrix{ B\text{-}\mathcal{C}\! \mathit{urves}\ar[r] \ar[d] & \mathcal{C}\! \mathit{urves}\ar[d] \\ (\mathit{Sch}/B)_{fppf} \ar[r] & \mathit{Sch}_{fppf} } \]
is a $2$-fibre product square. This trivial remark will occasionally be useful to deduce results from the absolute case $\mathcal{C}\! \mathit{urves}$ to the case of families of curves over a given
base algebraic space.
1996 Mazda B4000 Tire Size
Your Mazda was manufactured with different tire sizes. To determine the best tire size for your specific 1996 Mazda B4000, we first need to determine your rim size. Please review the information below.
How to Determine Rim Size
Check your existing tires. Your Mazda B4000's rim size is the number to the right of the R. In the example pictured here, the tire size fits 16-inch rims.
Rim Size Selection
Now that you know your rim size, make a selection below to filter your results.
14-Inch Rims
1996 Mazda B4000
The original tire size for your 1996 Mazda B4000 is listed below. Tap on the box to view a color-coded explanation of your Mazda B4000's tire size.
Trim Options:
LE 4x2
P225/70R14 98
Simplified Size: 225-70-14
Simplified size is useful for shopping and buying tires.
The original tire size for your 1996 Mazda B4000 is P225/70R14 98S. A color-coded explanation of the 1996 Mazda B4000's tire size is shown below.
This letter denotes the intended use of the tire.
P P Passenger Vehicle
LT Light Truck
C Commercial Vehicle
225 This number indicates that your tire has a width of 225 millimeters.
14 The tire size was designed to fit rims or wheels that are 14 inches in diameter.
98 This tire has a load index of 98, which means it's capable of carrying a load of 1650 pounds (750 kg) or less. A higher number means the tire can carry more weight; a lower number means the tire can carry less.
70 This number means that your tire has an aspect ratio of 70%. In other words, your tire's sidewall height (from the edge of the rim to the tire's tread) is 70% of the width. In this case, the
sidewall height works out to be 157 millimeters.
This letter denotes how your tire was constructed. Radial is the standard construction method for about 99% of all tires sold today.
R R Radial
B Bias Belt
D Diagonal
This tire has a speed rating of S, which means 111 mph (180 km/h) is the maximum speed that can be sustained for 10 minutes. A higher speed becomes dangerous.
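As an illustration, a tire code in this common P-metric format can be unpacked programmatically. The parser below is a simplified sketch (the regular expression only handles codes shaped like "P225/70R14 98S", not every real-world variant):

```python
import re

def parse_tire_code(code):
    # Simplified: service type, width (mm), aspect ratio (%),
    # construction, rim (in), optional load index and speed rating.
    m = re.match(r"(P|LT|C)?(\d{3})/(\d{2})([RBD])(\d{2})\s*(\d+)?([A-Z])?$", code)
    if not m:
        raise ValueError(f"unrecognized tire code: {code}")
    use, width, aspect, construction, rim, load, speed = m.groups()
    return {
        "width_mm": int(width),
        "aspect_ratio_pct": int(aspect),
        "rim_in": int(rim),
        # Sidewall height = width x aspect ratio.
        "sidewall_mm": int(width) * int(aspect) / 100,
    }

info = parse_tire_code("P225/70R14 98S")
print(info["sidewall_mm"])  # 157.5 (the article rounds this to 157)
```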
15-Inch Rims
1996 Mazda B4000
There are multiple tire sizes for your 1996 Mazda B4000 that depend upon the trim level. Look for your trim level below to get a color-coded explanation of your tire size. Then pick the best tire
size for your 1996 Mazda B4000.
Trim Options:
SE 4x4
P215/75R15 100
Simplified Size: 215-75-15
Simplified size is useful for shopping and buying tires.
The original tire size for your 1996 Mazda B4000 is P215/75R15 100S. A color-coded explanation of the 1996 Mazda B4000's tire size is shown below.
This letter denotes the intended use of the tire.
P P Passenger Vehicle
LT Light Truck
C Commercial Vehicle
215 This number indicates that your tire has a width of 215 millimeters.
15 The tire size was designed to fit rims or wheels that are 15 inches in diameter.
100 This tire has a load index of 100, which means it's capable of carrying a load of 1760 pounds (800 kg) or less. A higher number means the tire can carry more weight; a lower number means the tire can carry less.
75 This number means that your tire has an aspect ratio of 75%. In other words, your tire's sidewall height (from the edge of the rim to the tire's tread) is 75% of the width. In this case, the
sidewall height works out to be 161 millimeters.
This letter denotes how your tire was constructed. Radial is the standard construction method for about 99% of all tires sold today.
R R Radial
B Bias Belt
D Diagonal
This tire has a speed rating of S, which means 111 mph (180 km/h) is the maximum speed that can be sustained for 10 minutes. A higher speed becomes dangerous.
Trim Options:
LE 4x4
P235/75R15 105
Simplified Size: 235-75-15
Simplified size is useful for shopping and buying tires.
The original tire size for your 1996 Mazda B4000 is P235/75R15 105S. A color-coded explanation of the 1996 Mazda B4000's tire size is shown below.
This letter denotes the intended use of the tire.
P P Passenger Vehicle
LT Light Truck
C Commercial Vehicle
235 This number indicates that your tire has a width of 235 millimeters.
15 The tire size was designed to fit rims or wheels that are 15 inches in diameter.
105 This tire has a load index of 105, which means it's capable of carrying a load of 2035 pounds (925 kg) or less. A higher number means the tire can carry more weight; a lower number means the tire can carry less.
75 This number means that your tire has an aspect ratio of 75%. In other words, your tire's sidewall height (from the edge of the rim to the tire's tread) is 75% of the width. In this case, the
sidewall height works out to be 176 millimeters.
This letter denotes how your tire was constructed. Radial is the standard construction method for about 99% of all tires sold today.
R R Radial
B Bias Belt
D Diagonal
This tire has a speed rating of S, which means 111 mph (180 km/h) is the maximum speed that can be sustained for 10 minutes. A higher speed becomes dangerous.
An Output Sensitive Algorithm for Computing Viewsheds and Total Viewsheds on 2D Terrains
Level of Access
Restricted Access Thesis
Department or Program
Computer Science
Modeling and computing visibility on terrains has useful applications in many fields. The most common visibility-related concepts on terrains are the viewshed and total viewshed. The viewshed of a
point v in a terrain is the area in the terrain that is visible from v. The total viewshed is a surface which, at point v, has a value equal to the size of the viewshed of v. In many applications, it
is desirable to model a terrain and its corresponding viewshed and total viewshed with a high level of accuracy. This is possible due to widely available high-resolution terrain data collected using
LiDAR technology. In order to compute viewsheds and total viewsheds on large high-resolution datasets, efficient algorithms are necessary. The only known method for computing the total viewshed is to
compute the viewshed for each point in the terrain, which is too slow in practice as it can take on the order of days or more for large terrains. In this thesis, we present a new output sensitive
algorithm for computing viewsheds and total viewsheds on grid terrains, which is based on a multi-resolution approach. First, we compute the viewshed on a smaller low-resolution grid terrain
consisting of blocks of points, which is guaranteed to be a superset of the ground-truth viewshed. Second, we refine the low-resolution viewshed to get the exact ground-truth viewshed. On a grid
terrain of n points, our algorithm runs in O( n/k log n/k + v log n), where k is the block size and v is the size of the low-resolution viewshed. In practice, our algorithm runs over 20 times faster
than the previously fastest algorithm on large high-resolution grid terrains.
Available only to users on the Bowdoin campus.
|
{"url":"https://digitalcommons.bowdoin.edu/honorsprojects/93/","timestamp":"2024-11-06T02:56:55Z","content_type":"text/html","content_length":"37767","record_id":"<urn:uuid:432c76f3-8e7f-4304-b808-afbc71f8bce5>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00716.warc.gz"}
|
Simple Matrix Practice
To explore Python matrices, it's essential to understand that a matrix is simply a list of lists, with each row being a separate list. Given this structure, we can easily access matrix elements with
the indices of the row and the column. Our practice problems will be based on similar logic, where we traverse and manipulate matrix data.
One practical exercise we will cover: given a matrix in which every row and every column is sorted in ascending order, search for a particular target value. This exercise sharpens your problem-solving skills and deepens your
understanding of matrix traversal.
Since the matrix is sorted both row-wise and column-wise, we can leverage this property for an efficient search. Start from the top-right corner of the matrix:
• If the current element equals the target, you've found the value.
• If the current element is greater than the target, move left (one column back).
• If the current element is less than the target, move down (one row forward).
Continue these steps until you either find the target or exhaust the search space. This method ensures that each step narrows down the potential search area efficiently.
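The steps above translate directly into code:

```python
def search_sorted_matrix(matrix, target):
    """Staircase search from the top-right corner of a matrix whose rows
    and columns are both sorted ascending. Runs in O(rows + cols) time."""
    if not matrix or not matrix[0]:
        return False
    row, col = 0, len(matrix[0]) - 1
    while row < len(matrix) and col >= 0:
        value = matrix[row][col]
        if value == target:
            return True
        elif value > target:
            col -= 1   # everything below in this column is even larger
        else:
            row += 1   # everything left in this row is even smaller
    return False
```

Each iteration discards either one full row or one full column, which is why the search space shrinks so quickly.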
|
{"url":"https://learn.codesignal.com/preview/lessons/2474","timestamp":"2024-11-02T08:44:14Z","content_type":"text/html","content_length":"111482","record_id":"<urn:uuid:db327e1a-8fda-4f8b-acea-6b63a24ac224>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00351.warc.gz"}
|
Solow Model
Problem 3
In 2003, Pres. George W. Bush convinced Congress to reduce the maximum tax rate that shareholders pay on dividends from 38.6 percent to 15 percent. In lobbying for this measure, he argued that
cutting the tax would encourage people to invest more – i.e. increase the economy's saving rate.
Opponents of the policy argued that cutting the tax on dividends was a giveaway to Pres. Bush's rich friends and that it would not benefit workers.
Answer the following questions using the Solow Model without technological progress. Throughout the problem, assume that the U.S. economy was in steady state when Pres. Bush announced his dividend
tax plan. Until part e., assume that Pres. Bush's tax policy would increase the saving rate.
a. Under what condition would Pres. Bush's tax policy increase steady-state consumption per worker? Under what condition would it decrease steady-state consumption per worker?
ANSWER: Because we are assuming that Pres. Bush's tax policy would increase the saving rate, the policy would increase steady-state consumption per worker if the previous saving rate was below
the golden-rule level, and would decrease steady-state consumption per worker if the previous saving rate was at or above the golden-rule level.
b. How would the marginal product of labor differ between the initial steady state and the one to which the economy will converge to after reduction of the tax on dividends?
ANSWER: Because the economy would converge to a higher steady-state level of capital per worker, the marginal product of labor will increase as capital per worker increases.
c. How would Pres. Bush's tax policy affect wages, $w$? Hint: remember that: $w=p·MPL$
ANSWER: Because the Solow Model assumes that the labor supply is constant at a given point in time, the higher marginal product of labor will increase equilibrium wages.
d. Given your answers to the previous three questions, was Pres. Bush's tax policy a giveaway to the rich without any benefit for workers?
ANSWER: No. If the change in tax policy increases the saving rate, workers will benefit from higher wages.
e. Now assume that Pres. Bush's tax policy would not increase the saving rate. Under this assumption, was the tax policy a giveaway to the rich without any benefit for workers?
ANSWER: Yes. If the change in tax policy does not increase the saving rate, then it's a giveaway to the rich without any benefit for workers.
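A minimal numeric sketch of parts (a)-(c), assuming a Cobb-Douglas production function $y=k^\alpha$ (the problem does not specify a functional form) and normalizing the price level $p=1$; the parameter values are purely illustrative:

```python
# Steady state of the Solow model without technological progress:
# s * k**alpha = delta * k  determines capital per worker k.
alpha = 0.3    # capital share (assumed)
delta = 0.1    # depreciation rate (assumed)

def steady_state(s):
    k = (s / delta) ** (1 / (1 - alpha))   # steady-state capital per worker
    y = k ** alpha                         # output per worker
    wage = (1 - alpha) * y                 # w = p * MPL with p = 1
    consumption = (1 - s) * y              # steady-state consumption per worker
    return k, wage, consumption

k_lo, w_lo, c_lo = steady_state(0.20)      # saving rate before the tax cut
k_hi, w_hi, c_hi = steady_state(0.25)      # higher saving rate afterwards
```

With both saving rates below the golden-rule level (which is $s=\alpha$ for Cobb-Douglas), capital per worker, wages, and steady-state consumption per worker all rise, matching the answers above.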
|
{"url":"https://www.doviak.net/courses/macro/macrobook/macrobook_lec04c.shtml","timestamp":"2024-11-06T20:14:39Z","content_type":"text/html","content_length":"21061","record_id":"<urn:uuid:42c3371e-ac1d-46ee-85dd-2289bcc7dc1e>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00344.warc.gz"}
|
Mathematics and Music
Melvyn Bragg and guests discuss the mathematical structures that lie within the heart of music. The seventeenth century philosopher Gottfried Leibniz wrote: ‘Music is the pleasure the human mind
experiences from counting without being aware that it is counting’. Mathematical structures have always provided the bare bones around which musicians compose music and have been vital to the very
practical considerations of performance such as fingering and tempo. But there is a more complex area in the relationship between maths and music which is to do with the physics of sound: how pitch
is determined by force or weight; how the complex arrangement of notes in relation to each other produces a scale; and how frequency determines the harmonics of sound. How were mathematical
formulations used to create early music? Why do we in the West hear twelve notes in the octave when the Chinese hear fifty-three? What is the mathematical sequence that produces the so-called ‘golden
section’? And why was there a resurgence of the use of mathematics in composition in the twentieth century?
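One standard way to make the twelve-versus-fifty-three question concrete is to measure how closely an n-tone equal division of the octave approximates the just perfect fifth (frequency ratio 3:2); both 12 and 53 turn out to be unusually good choices of n (a back-of-envelope sketch, not taken from the programme itself):

```python
import math

# In n-tone equal temperament the best approximation to the fifth uses
# round(n * log2(3/2)) steps; the mismatch is measured in cents,
# where one cent is 1/1200 of an octave.
def fifth_error_cents(n):
    steps = round(n * math.log2(1.5))
    return abs(1200 * (steps / n - math.log2(1.5)))
```

Twelve-tone equal temperament misses the pure fifth by about 2 cents, while a 53-note octave narrows the error to well under a tenth of a cent, which is often cited as the appeal of a 53-note division.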
• Marcus du Sautoy, Professor of Mathematics at the University of Oxford
• Robin Wilson, Professor of Pure Mathematics at the Open University
• Ruth Tatlow, Lecturer in Music Theory at the University of Stockholm
Programme ID: p003c1b9
Episode page: bbc.co.uk/programmes/p003c1b9
Auto-category: 780.1 (Music and mathematics)
|
{"url":"https://www.braggoscope.com/2006/05/25/mathematics-and-music.html","timestamp":"2024-11-13T21:40:43Z","content_type":"text/html","content_length":"22543","record_id":"<urn:uuid:2992bea2-755d-4e38-88d3-fa7d36e39fbf>","cc-path":"CC-MAIN-2024-46/segments/1730477028402.57/warc/CC-MAIN-20241113203454-20241113233454-00073.warc.gz"}
|
SKLearn NMF Vs Custom NMF
I am trying to build a recommendation system using non-negative matrix factorization. Using scikit-learn's NMF as the model, I fit my data, resulting in a certain loss (i.e., reconstruction error).
Then I generate recommendations for new data using the inverse_transform method.
Now I do the same using another model I built in TensorFlow. The reconstruction error after training is close to that obtained using sklearn's approach earlier. However, neither are the latent
factors similar to one another nor the final recommendations.
One difference between the 2 approaches that I am aware of is: In sklearn, I am using the Coordinate Descent solver whereas in TensorFlow, I am using the AdamOptimizer which is based on Gradient
Descent. Everything else seems to be the same:
1. Loss function used is the Frobenius Norm
2. No regularization in both cases
3. Tested on the same data using same number of latent dimensions
Relevant code that I am using:
1. scikit-learn approach:
model = NMF(alpha=0.0, init='random', l1_ratio=0.0, max_iter=200,
n_components=2, random_state=0, shuffle=False, solver='cd', tol=0.0001)
model.fit(data)  # learn the factorization W, H
result = model.inverse_transform(model.transform(data))
2. TensorFlow approach:
x = tf.placeholder(tf.float32, shape=data.shape)
w = tf.get_variable('w', initializer=tf.abs(tf.random_normal((data.shape[0],
2))), constraint=lambda p: tf.maximum(0., p))
h = tf.get_variable('h', initializer=tf.abs(tf.random_normal((2,
data.shape[1]))), constraint=lambda p: tf.maximum(0., p))
loss = tf.sqrt(tf.reduce_sum(tf.squared_difference(x, tf.matmul(w, h))))
train_op = tf.train.AdamOptimizer(0.01).minimize(loss)  # run until converged
My question is that if the recommendations generated by these 2 approaches do not match, then how can I determine which are the right ones? Based on my use case, sklearn's NMF is giving me good
results, but not the TensorFlow implementation. How can I achieve the same using my custom implementation?
The choice of optimizer has a big impact on the quality of the training. Some very simple models (GloVe, for example) work well with some optimizers and not at all with others.
Then, to answer your questions:
How can I determine which are the right ones?
Evaluation is as important as the design of your model, and it is just as hard: you can run both models on several available datasets and use ranking metrics to score them. You could also use A/B
testing in a real application to estimate the relevance of your recommendations.
How can I achieve the same using my custom implementation ?
First, try to find a coordinate descent optimizer for TensorFlow and make sure every step you implement matches scikit-learn's exactly. If you still can't reproduce the results, try
different approaches (why not start with a plain gradient descent optimizer first?) and take advantage of the modularity that TensorFlow offers.
Finally, if the recommendations from your implementation are that bad, there is most likely an error in it. Try comparing against existing implementations.
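A third reference point, independent of both coordinate descent and Adam, is the classic Lee-Seung multiplicative update rule for the Frobenius objective, which is easy to write in plain NumPy (a sketch for comparison, not the code from the question):

```python
import numpy as np

def nmf_multiplicative(X, k, iters=500, seed=0, eps=1e-9):
    """Lee-Seung multiplicative updates minimizing ||X - W @ H||_F,
    keeping W and H elementwise non-negative by construction."""
    rng = np.random.default_rng(seed)
    W = rng.random((X.shape[0], k)) + eps
    H = rng.random((k, X.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)   # update H with W fixed
        W *= (X @ H.T) / (W @ H @ H.T + eps)   # update W with H fixed
    return W, H
```

Because NMF is non-convex, all three solvers can land in different local minima with similar reconstruction error, which is exactly the behaviour described in the question: comparable loss, different latent factors.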
|
{"url":"https://www.edureka.co/community/3246/sklearn-nmf-vs-custom-nmf?show=3251","timestamp":"2024-11-02T01:24:40Z","content_type":"text/html","content_length":"175909","record_id":"<urn:uuid:6500a663-bb69-403c-a111-3fd5ec26449a>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00241.warc.gz"}
|
What I Think About When I Think About Voting
Inevitably, I think back to my favorite result in mathematics: when Diaconis used the representation theory of the symmetric group to show us that psychologists just don’t get along…
Sarah Wolff
Denison University
It’s November. Here in Ohio, that means cozy sweaters, crisp mornings, pumpkin everything, Thanksgiving, and sometimes a first snow. November also means election season, and as someone who likes to
think about voting I find myself lost in thought more than usual around this time.
Inevitably, election coverage in the news will start me down a rabbit hole that quickly expands to much broader scenarios than choosing a presidential candidate. You see, the type of voting that I
like to think about is ranked-choice voting, aka any scenario where a group of people is asked to choose and rank from a list of items. This could be used for anything from electing a mayor to
filling out a survey of preferences. So yes, I’m quickly thinking about a group of diners ranking their favorite food items. Or a group of students choosing a new Denison mascot—Buzzy the turkey
vulture? Denideer the deer? Swasey the walking chapel?
Mascot images from hypothetical election used courtesy of Denison University Communications.
Or if I’m in a nostalgic mood I might think about how my college soccer coach made all 20 of his players rank each other from 1 to 20. And, yes, he then shared that data with us. At the time it felt…
cruel. But now that I know just how hard it really is to understand this type of data and also how hard it is for humans to handle ranking more than five choices, it feels, well, misguided.
When I think about voting, sometimes I wonder if I could get my hands on that 15+ year old data set. Sometimes I think about the incredible amount of data in this world that comes from asking people
to rank their choices, and the incredible amount of variation in analyzing that data (see e.g. Bargagliotti et al.). But inevitably, I think back to my favorite result in mathematics: when Diaconis
used the representation theory of the symmetric group to show us that psychologists just don’t get along. That is the result that I’d like to build to here.
Proceeding to Choose a Procedure
Let’s back up a bit and talk about the different considerations that go into setting up an election. First and foremost: which voting method will we use? In other words: how will the voters vote?
Next, which voting procedure will we use, i.e., how will a winner be determined from the votes? Once the votes are in, will we analyze that data? If so, how? Will the analysis support the outcome or
point to other considerations?
Of course there is also the question of whether the voters were grouped into districts, which opens up an entire world of mathematical and moral questions (see for example the work of the MGGG
Redistricting Lab).
So how will the voters vote? Well, there are many different voting methods. One is plurality voting where voters select their favorite candidate from a list (see for example the 2023 AMS Presidential
election). Another, often used for electing committee members, is approval voting where voters select any candidate they approve of from a list, sometimes with a set maximum number (see for example
the 2023 AMS Editorial Boards Committee election). As with all voting methods, plurality and approval voting have pros and cons. For example, approval voting could potentially create a committee
entirely comprised of members reflecting the values of 51% of the electorate without a single member reflecting the values of the remaining 49%.
One voting method rising in popularity is ranked-choice voting. While its implementation in the 2021 New York City mayoral elections has made ‘ranked choice voting’ seem synonymous with ‘instant
runoff voting’ (IRV), ranked choice voting is a voting method while IRV is a voting procedure. Ranked-choice voting just means the voters ranked some or all of the candidates in order of preference.
Given an election between $n$ candidates consider $S_n$, the symmetric group on $n$ letters. Each element of $S_n$ is a permutation—ranking—of the $n$ letters and corresponds to one of the choices a
voter can make. We represent a ranked-choice election by a function $p:S_n\rightarrow \mathbb{C}$ where $p(\sigma)$ gives the number of voters who voted for ranking $\sigma$. In the voting theory
literature $p$ is often called a profile. Note that in a typical election $p(\sigma)$ is an integer; however, working with normalized data quickly puts us outside the realm of the integers and
viewing our functions with range $\mathbb{C}$ allows us to do interesting representation theory.
Here is a possible profile. (University Communications would like me to note that this is a hypothetical scenario: no such vote for a Denison mascot has happened.)
\[\begin{array}{ll}
p(\text{Buzzy, Denideer, Swasey})=12 & p(\text{Buzzy, Swasey, Denideer})=7\\
p(\text{Denideer, Buzzy, Swasey})=22 & p(\text{Denideer, Swasey, Buzzy})=5\\
p(\text{Swasey, Buzzy, Denideer})=25 & p(\text{Swasey, Denideer, Buzzy})=3
\end{array}\]
Given a profile there are many different ways to select a winner. For example, we could apply a weighting vector $\mathbf{w}$ to each ranking that gives weight $w_j$ to the candidate ranked in the
$j$th position. A weighting vector of $\mathbf{w}=[1,0,\dots,0]$ recovers plurality voting while $\mathbf{w}=[1,1,\dots,1,0]$ would be anti-plurality—voters indicating their least-desired
candidate—and $\mathbf{w}=[n-1, n-2,\dots,1,0]$ is the well-known Borda count.
Outside of assigning weighting vectors there are plenty of other options. In 1785 the Marquis de Condorcet proposed Condorcet’s criterion: if there is a candidate that wins every head-to-head contest
then that candidate should win. The 2021 New York mayoral election used instant runoff voting, which first checks if there is a candidate who is chosen in first position by more than 50% of the
voters. If not, the candidate with the fewest number of votes is eliminated, the rankings are updated for the remaining candidates, and the process continues.
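The procedures above can be run directly on the hypothetical mascot profile; a small sketch (illustrative code, not from the column) makes the comparison concrete:

```python
from collections import Counter

# The hypothetical mascot profile from above: each key is a ranking
# (first choice first), each value the number of voters who chose it.
profile = {
    ("Buzzy", "Denideer", "Swasey"): 12, ("Buzzy", "Swasey", "Denideer"): 7,
    ("Denideer", "Buzzy", "Swasey"): 22, ("Denideer", "Swasey", "Buzzy"): 5,
    ("Swasey", "Buzzy", "Denideer"): 25, ("Swasey", "Denideer", "Buzzy"): 3,
}

def positional_winner(profile, weights):
    """Winner under a weighting vector w, e.g. [1,0,0] for plurality
    or [2,1,0] for the Borda count with three candidates."""
    score = Counter()
    for ranking, votes in profile.items():
        for pos, cand in enumerate(ranking):
            score[cand] += weights[pos] * votes
    return score.most_common(1)[0][0]

def irv_winner(profile):
    """Instant runoff: eliminate the candidate with the fewest first-place
    votes until someone holds a strict majority."""
    while True:
        firsts = Counter()
        for ranking, votes in profile.items():
            firsts[ranking[0]] += votes
        top, top_votes = firsts.most_common(1)[0]
        if 2 * top_votes > sum(firsts.values()):
            return top
        loser = min(firsts, key=firsts.get)
        updated = Counter()
        for ranking, votes in profile.items():
            updated[tuple(c for c in ranking if c != loser)] += votes
        profile = dict(updated)
```

On this profile, plurality elects Swasey, Borda elects Buzzy, and instant runoff elects Denideer, so the choice of procedure alone decides the election.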
Arrow Takes Aim
Different voting procedures have the potential to lead to different election outcomes. Indeed, for the profile $p$ above representing a contest between Denison mascots, plurality, Borda, and IRV each
produce a different winner. So which procedure should we use?
Interestingly, that question is much harder to answer than it would seem. In 1951, economist Kenneth Arrow investigated reasonable conditions that a voting procedure should satisfy. Applied to a
ranked-choice election these are:
• Unrestricted Domain: voters should be allowed to choose any ranking of the $n$ candidates.
• Pareto Principle: if all voters prefer candidate $A$ to candidate $B$, then candidate $A$ should place above candidate $B$.
• Independence of Irrelevant Alternatives (IIA): the presence of an irrelevant candidate, e.g. $C$, should not affect the head-to-head ranking of candidates $A$ and $B$.
Arrow then proved his impossibility theorem, often summarized as: “the only voting procedure with three or more candidates that satisfies the above conditions is a dictatorship.”
While it may seem like Arrow’s theorem declares that voting is broken, we can see that there is more to the story. The first two conditions are exceedingly reasonable but IIA is one that quickly
leads to debate. A research mentor once explained IIA this way: suppose you are at a restaurant and the server tells you that you can choose between lemon meringue pie and caramel cake. You choose
the cake. But then the server comes back, having remembered that they also offer ice cream sundaes. You say: “Oh! Well in that case I’ll have the pie!”
Even with this simple example, people often come up with reasons why the interaction could make sense. And in an election, the introduction of a new, seemingly ‘irrelevant’ candidate could absolutely
change some voters’ ranking of two other candidates.
Really, Arrow is telling us to be thoughtful and purposeful about elections: when choosing the voting system we will use, we need to think through which criteria are most important to us.
Method $\neq$ Procedure $\neq$ Analysis
Think of how much ranked-choice data is out in the world. Any time you have been asked to fill out a survey of preferences, you were adding to a particular ranked-choice data set. I don’t know about
you, but I’ve contributed a lot of data in my life.
Despite this sea of data, there is no standard method of analysis. Analyses vary widely but usually fall into one of three categories: descriptive methods, regression methods and clustering methods.
As a simple example of a descriptive method, we could provide the average ranking of each candidate or a table of how many times each candidate is ranked in each position. A rich discussion of these
three categories can be found in Bargagliotti et al., which provides examples of ranked-choice data in the fields of education and psychology, delves into how the data is traditionally analyzed and
used in decision-making in these fields, and proposes new techniques that could “more fully leverage information contained in ranked data” (page 17).
To me, however, the most fascinating analysis of ranked data falls outside of the three categories above: using a generalized Fourier transform to reveal patterns within the data. To be completely
honest, it can be hard to convince a non-mathematics audience to do a Fourier transform on the symmetric group to analyze their data set. The categories above are perhaps more practical—easier to
explain, easier to interpret, easier to talk about with a broad audience—but the structure that a Fourier transform can reveal is just too beautiful to leave out of the conversation. My favorite
example is Diaconis’s work delving into the results of the 1980 American Psychological Association (APA) presidential election.
The classical discrete Fourier transform (DFT) of a function $f:\{0,1,\dots,n-1\}\rightarrow \mathbb{C}$ is the map $f\mapsto\{\hat{f}(k)\mid k=0,\dots,n-1\}$ arising from expressing $f$ in the form
\[f=\sum_{k=0}^{n-1} \hat{f}(k)\omega_{k},\]
where $\omega_k:\{0,1,\dots,n-1\}\rightarrow \mathbb{C}$ is defined as $\omega_{k}(j)=e^{2\pi ikj/n}$. We call $\hat{f}(k)$ a Fourier coefficient.
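This change of basis is easy to verify numerically: by orthogonality of the characters of $C_n$, the coefficients are $\hat{f}(k)=\frac{1}{n}\sum_j f(j)\overline{\omega_k(j)}$ (a standard fact, not stated above), and the expansion reconstructs $f$ exactly:

```python
import numpy as np

# Build the character table omega[k, j] = exp(2*pi*i*k*j/n) of C_n,
# compute the Fourier coefficients by orthogonality, and reconstruct f.
def dft_coefficients(f):
    n = len(f)
    k, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    omega = np.exp(2j * np.pi * k * j / n)   # omega[k, j] = omega_k(j)
    f_hat = (omega.conj() @ f) / n           # (1/n) * sum_j f(j)*conj(omega_k(j))
    return f_hat, omega

def reconstruct(f_hat, omega):
    return f_hat @ omega                     # sum over k of f_hat(k)*omega_k
```

Up to normalization, these coefficients agree with `numpy.fft.fft`, which uses the conjugate characters.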
From an algebraic perspective we see that $f$ is a function on the cyclic group $C_n$ and therefore an element of the group algebra $\mathbb{C}C_n$ of complex-valued functions on $C_n$. Taking a
representation-theoretic perspective, the set of functions $\{\omega_1,\dots,\omega_n\}$ forms a complete set of inequivalent irreducible representations of $C_n$ and thus a DFT is a change-of-basis
map that uses an orthogonal basis coming from the irreducible representations of $C_n$.
Taking this viewpoint allows for immediate generalization to any finite group $G$: take a function $f\in\mathbb{C}G$ and consider the Fourier coefficients that come from rewriting $f$ using a
complete set of irreducible representations of $G$—an orthogonal basis for $\mathbb{C}G$. These are broad strokes: for more details see for example Section 3 of Rockmore's "Some applications of
generalized FFTs" or see Crisman and Orrison for an excellent survey of representation theory applications in voting theory.
Remember how we captured election data using a profile $p$? Realizing that $p\in\mathbb{C}S_n$, we can project it into orthogonal subspaces determined by the irreducible representations of $S_n$! One
reason it might make sense to use irreducible representations is because these subspaces are invariant under the action of $S_n$. This has the nice voting-theoretic interpretation that the analysis
is invariant under relabeling of candidates. We definitely wouldn’t want our analysis to change if we just swapped the labels of candidates! Also, the representation theory of the symmetric group is
both intrinsically beautiful and meaningful. Each irreducible representation corresponds to a partition of $n$ which can point us to relationships amongst the candidates.
Using similar ideas, Diaconis took the profile $p\in\mathbb{C}S_5$ corresponding to the APA’s 1980 presidential election among 5 candidates and projected $p$ into isotypic subspaces corresponding to
each irreducible representation of $S_5$. The data seemed to be concentrated in the subspace $V_3$ corresponding to one particular irreducible representation (labeled in the original article by its Young diagram, not reproduced here).
This could indicate that there was a strong ‘pairs’ effect—that voters placed a pair of candidates together, either above or below the remaining three.
This observation led Diaconis to consider second-order information—positions of pairs of candidates (See Table 5 in his article “A generalization of spectral analysis with application to ranked
data”). He found a large effect for ranking candidates 1 and 3 together—either both at the top or both at the bottom. It seemed that the voters either liked both or hated both. A similar effect was
found for candidates 4 and 5.
Back to the APA election: at least back in 1980, the academicians and clinicians in the APA were on uneasy terms. Diaconis’s analysis captured that dynamic! That particular year, two of the
candidates were academicians, two were clinicians, and one fell a bit more in the middle. "Voters seem to choose one type or the other, and then choose within, but the group effect predominates”
(page 956).
This is one small example of how a seemingly obscure generalization of a Fourier transform can extract meaningful information in elections and surveys. But remember: extracting meaningful information
is different from choosing a winner. While analysis may lead to considerations in choosing a voting procedure, analyzing voting data is not the same as choosing a voting procedure which is not the
same as choosing a voting method.
So how should we vote? How should we analyze the votes? And how should we use this analysis? Maybe we should vote on it.
Further Reading
• K. J. Arrow. Social Choice and Individual Values. Yale University Press, 2012.
• A. E. Bargagliotti et al. “Using ranked survey data in education research: Methods and applications”. In: Journal of School Psychology 85 (2021), pp. 17–36.
• N. M. Bradburn, S. Sudman, and B. Wansink. Asking questions: The definitive guide to questionnaire design – For market research, political polls, and social and health questionnaires. Rev. ed. San Francisco, CA: Jossey-Bass, 2004. (How hard is it to rank more than five choices?)
• N. Condorcet. Essai sur l’application de l’analyse à la probabilité des décisions rendues à la pluralité des voix. Cambridge Library Collection – Mathematics. Cambridge University Press.
• K.-D. Crisman and M. E. Orrison. “Representation theory of the symmetric group in voting theory and game theory”. In: Algebraic and Geometric Methods in Discrete Mathematics. Vol. 685. Contemp. Math. Amer. Math. Soc., Providence, RI, 2017, pp. 97–115.
• P. Diaconis. “A generalization of spectral analysis with application to ranked data”. In: Ann. Statist. 17.3 (1989), pp. 949–979.
• J. Malkevitch. Feature Column Archive. Includes several previous Feature Columns on the mathematics of voting and apportionment.
• D. Rockmore. “Some applications of generalized FFTs”. In: Groups and Computation, II (New Brunswick, NJ, 1995). Vol. 28. DIMACS Ser. Discrete Math. Theoret. Comput. Sci. Amer. Math. Soc., Providence, RI, 1997, pp. 329–369.
|
{"url":"https://mathvoices.ams.org/featurecolumn/2023/11/01/what-i-think-about-when-i-think-about-voting/","timestamp":"2024-11-10T17:17:47Z","content_type":"text/html","content_length":"89141","record_id":"<urn:uuid:b9e846a6-82e7-4215-93db-88cc1e8cfc64>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00517.warc.gz"}
|
Question Bank on Diploma Electronics Engineering
16 March 2021
Question Bank on Diploma Electronics Engineering
1. The temperature co-efficient of resistance of a semiconductor is
(a) +ve
(b) –ve
(c) Zero
(d) very high
2. In an unbiased P-N junction, thickness of depletion layer is of the order of
(a) 0.005μm
(b) 0.5 μm
(c) 5 μm
(d) 1nm
3. JFET can operate in
(a) depletion mode and enhancement mode
(b) depletion mode only
(c) enhancement mode only
(d) neither enhancement nor depletion mode
4. FETs have similar properties to
(a) PNP transistors
(b) NPN transistors
(c) thermionic valve
(d) UJTs
5. Which statement about MOSFET is false? MOSFETs can operate in
(a) depletion mode
(b) enhancement mode
(c) both depletion & enhancement mode
(d) depletion only mode
6. In a JFET, point of reference is
(a) Drain
(b) Source
(c) Gate
(d) none of these
7. Input gate current of a FET is
(a) a few microampere
(b) negligibly small
(c) a few milliamperes
(d) a few amperes
8. For enhancement only mode N-channel MOSFET polarity of Vgs is
(a) –ve
(b) +ve
(c) Zero
(d) variable
9. In a JFET operating above pinch-off voltage, the
(a) drain current increases steeply
(b) drain current remains constant
(c) drain current starts decreasing
(d) depletion region become smaller
10. The best electronic device for fast switching is
(a) BJT
(b) Triac
(c) JFET
(d) MOSFET
11. Which semiconductor device acts like a diode and two resistors?
(a) SCR
(b) Triac
(c) Diac
(d) UJT
12. Which semiconductor device behaves like two SCRs?
(a) UJT
(b) Triac
(c) JFET
(d) MOSFET
13. After firing an SCR, the gating pulse is removed. Then the current in the SCR will
(a) remain same
(b) fall to zero
(c) rise up
(d) rise a little then fall to zero
14. The oscillator circuit that uses a tapped coil in the tuned circuit is
(a) Multivibrator
(b) Hartley
(c) Colpitts
(d) Armstrong
15. An oscillator produces oscillations due to which type of feedback?
(a) +ve feedback
(b) –ve feedback
(c) Both +ve & –ve
(d) neither +ve nor –ve
16. Frequency stability in an oscillator can be achieved by
(a) adjusting the phase shift
(b) controlling its gain
(c) incorporating a tuned circuit
(d) employing automatic biasing
17. An oscillator that consists of two interdependent circuits such that output of each controls the input of the other is called a
(a) sine wave oscillator
(b) feedback oscillator
(c) relaxation oscillator
(d) –ve resistance oscillator
18. The Wien bridge oscillator is
(a) a free running oscillator
(b) a square wave generator
(c) a stable sine wave generator
(d) also called cosine oscillator
19. If the frequency of incoming rectangular wave in a staircase generator is 100Hz, the number of steps in the output staircase pattern is
(a) 100
(b) 200
(c) 300
(d) 500
20. Resistance of an accurate ammeter is
(a) High
(b) Low
(c) very low
(d) very high
21. The duty cycle of a pulse of width 2 microsecond and repetition frequency 4kHz is
(a) 0.5
(b) 0.06
(c) 0.008
(d) 0.8
22. Deflection sensitivity of a CRO is
(a) directly proportional to the distance between the deflecting plates and the screen
(b) inversely proportional to the distance between the deflecting plates and the screen
(c) independent of the distance between the deflecting plates and the screen
(d) none of these
23. Noise figure of a two stage amplifier depends on the gain of
(a) first stage
(b) second stage
(c) both the stages
(d) none of the stages
24. Wien bridge is usually used for measuring
(a) Resistance
(b) Capacitance
(c) frequency
(d) current
25. The Kelvin double bridge is used for measuring accurately
(a) low value resistors
(b) high value resistors
(c) any resistors
(d) inductors
26. Under identical values of cold and hot junction temperatures, which thermocouple gives the highest output?
(a) iron constantan
(b) nickel iron
(c) chromel constantan
(d) platinum rhodium
27. A delay line is used in high speed CRO to introduce time delay in
(a) vertical channel
(b) horizontal channel
(c) z axis of the CRT
(d) all of the above
28. For dc voltage an inductor behaves like a
(a) short circuit
(b) open circuit
(c) depends on polarity
(d) depends on voltage
29. A connected planar network has 4 nodes and 5 elements. The number of meshes in its dual network is
(a) 4
(b) 3
(c) 2
(d) 1
30. If there are b branches and n nodes in a network, then the number of independent KVL equations will be
(a) b
(b) b–n
(c) n–1
(d) b–n+1
31. In a linear network containing only independent current sources and resistors, if the values of all the current sources are doubled then the values of node voltages will be
(a) Doubled
(b) Halved
(c) same
(d) none of these
32. Which of the following theorem is a manifestation of the law of conservation of energy?
(a) Thevenin’s theorem
(b) Tellegen’s theorem
(c) Reciprocity theorem
(d) compensation theorem
33. When a source is delivering maximum power to a load, the efficiency of the circuit
(a) is always 50%
(b) is always 75%
(c) depends on the circuit parameters
(d) none of these
34. Twelve 1Ω resistances are used as edges to form a cube. The resistance between two diagonally opposite corners of the cube is
(a) 6/5Ω
(b) 1Ω
(c) 5/6Ω
(d) 6Ω
35. The ratio of active power to apparent power is known as
(a) power factor
(b) load factor
(c) form factor
(d) demand factor
36. The transient current in an RLC circuit is oscillating when
(a) R = 2√(L/C)
(b) R>2√(L/C)
(c) R< 2√(L/C)
(d) R = 0
37. A coil with a certain number of turns has a specified time constant. If the no. of turns is doubled, the time constant will be
(a) Halved
(b) Doubled
(c) become four fold
(d) unaffected
38. An R-L-C series circuit has R=1Ω, L=1H, and C=1F. Damping ratio of the circuit will be
(a) >1
(b) Unity
(c) 0.5
(d) zero
39. A series R-L circuit with R=100Ω, L=50H is connected to a dc source of 100V. The time taken for the current to rise to 70% of its steady value is
(a) 0.2s
(b) 0.6s
(c) 2.4s
(d) none of these
40. The steady state current in the R-C series circuit on the application of a step voltage of magnitude E will be
(a) Zero
(b) E/R
(c) (E/R) e^(-t/RC)
(d) (E/RC) e^(-t)
41. When an unit impulse voltage is applied to an inductor of 1H, the energy supplied by the source is
(a) ∞
(b) 1 Joule
(c) 0.5 Joule
(d) zero
42. An initially relaxed R-C series network with R=2MΩ and C=1μF is switched on to a 10V step input. The voltage across the capacitor after 2sec will be
(a) Zero
(b) 3.68V
(c) 6.32V
(d) 10V
43. A network has seven nodes and five independent loops. The number of branches in the network is
(a) 5
(b) 7
(c) 11
(d) 13
44. If all the elements in a particular network are linear then the superposition theorem holds, when the excitation is
(a) dc
(b) ac
(c) dc or ac
(d) impulse
45. Two coils having equal resistances but different inductances are connected in series. The time constant of the series combination is the
(a) sum of the time constants of the individual coils
(b) average of the time constants of the individual coils
(c) geometric mean of the time constants of the individual coils.
(d) Product of the time constants of the individual coils.
46. Which of the following pair is correctly matched?
(a) symmetrical two port network : AD – BC = 1
(b) reciprocal two port network : Z11 = Z22
(c) inverse hybrid parameters : A, B, C, D
(d) hybrid parameters : (V1 , I 2) = f(I 1, V2)
47. For a two port network to be reciprocal, it is necessary that
(a) Z11 = Z22 and Y12 = Y21
(b) Z11 = Z22 and AD – BC = 0
(c) h21 = – h12 and AD – BC = 0
(d) Y12 = Y21 and h21 = – h12
48. With usual notation a two port resistive network satisfied the condition A = D = (3/2)B = (4/3)C The Z11 of the network is
(a) (5/3)Ω
(b) (4/3)Ω
(c) (2/3)Ω
(d) (1/3)Ω
49. A system has a single pole. The constant multiplier ‘k’ is 1. For the given excitation sin(t), the response is √2 with 45º lagging. The system has a pole and a zero respectively at
(a) zero and 1
(b) ∞ and – 1
(c) – 1 and zero
(d) zero and – 1
50. The frequency at which two asymptotes meets is known as
(a) corner or break frequency
(b) threshold frequency
(c) cut off frequency
(d) critical frequency
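Several of the numeric answers above lend themselves to a quick cross-check. A sketch in Python for Q34 (cube of twelve 1 Ω resistors), Q38 (series RLC damping ratio) and Q42 (RC step response), assuming the ideal component values stated in the questions:

```python
import numpy as np

# Q34: effective resistance between opposite corners of a cube of 1-ohm
# resistors, via the graph Laplacian pseudoinverse.
n = 8
A = np.zeros((n, n))
for i in range(n):
    for b in range(3):          # cube vertices differ in one bit per edge
        A[i, i ^ (1 << b)] = 1.0
L = np.diag(A.sum(axis=1)) - A
Lp = np.linalg.pinv(L)
r_cube = Lp[0, 0] + Lp[7, 7] - 2 * Lp[0, 7]   # expected 5/6 ohm

# Q38: damping ratio of a series RLC circuit, zeta = (R/2) * sqrt(C/L)
R, Lh, C = 1.0, 1.0, 1.0
zeta = (R / 2) * np.sqrt(C / Lh)              # expected 0.5

# Q42: RC step response v(t) = E*(1 - exp(-t/(R*C))); here tau = 2 s
E, Rrc, Crc, t = 10.0, 2e6, 1e-6, 2.0
v = E * (1 - np.exp(-t / (Rrc * Crc)))        # expected ~6.32 V

print(r_cube, zeta, round(v, 2))
```

All three match the keyed options (c), (c) and (c) respectively.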
Math Composer: Software to Create and Print Math Worksheets, Tests,
or Any Math Document
What is Math Composer?
The Math Composer software is a powerful yet easy to use tool for creating all your math documents. It is a simple way for math teachers and instructors to create math worksheets, tests, quizzes, and
exams. This math software is perfectly suited for use in a wide range of subjects including arithmetic, geometry, algebra, trigonometry, calculus, physics and chemistry. Download a free trial of Math Composer.
Math Composer doesn't use pre-made graphics like "test bank" software. Everything is created within Math Composer and is fully editable. Math Composer is not just a drawing program either. All
figures and diagrams are built using "snap-together" elements that adhere to certain rules. This keeps the construction process simple while still allowing you the freedom to create whatever you want.
Edit Equations and Text
Math Composer contains a full-featured text and equation editor. Quickly write text and equations together without the need to switch back and forth between programs.
Create Grids and Graphs
Create any style of coordinate grid. Easily graph any equation just like on a graphing calculator.
Print Professional Quality Documents
Math Composer is designed for printing. All documents print exactly as they appear on the screen.
Create and Edit in One Program
One of the design goals of Math Composer was to create a viable software alternative to "cut and paste" test creation. This not only applies to paper and scissors but also to computer software. Most
math software on the market is specialized and not intended for publishing. To overcome this, many people use a word processor to arrange images and clipart copied from various programs. This is
time-consuming and often fraught with formatting problems, and in the end you have a document that is hard to edit and reuse. Math Composer provides all the tools of math document creation in one
easy to use environment.
Reuse and Share
Users of Math Composer can easily transfer and reuse items from other Math Composer documents and all items retain their ability to be edited. You can also export images and clipart to other software
such as word processors.
See Features for more information.
Try it now! Download a FREE 30-day trial version of Math Composer.
Localizing University Basis Data Using Regression Analysis - Data Fields
One of the first resources I found in my quest to quantify the local basis market was this dataset from the University of Illinois. It provides historical basis and cash price data for 3 futures
months in 7 regions of Illinois going back to the 1970s.
On its own, I haven’t found any region in this dataset to be that accurate for my location. Perhaps I’m on the edge of two or three regions, I thought. This got me thinking- what if I had the actual
weekly basis for my local market for just a year or two and combined that with the U of I regional levels reported for the same weeks? Could I use that period of concurrent data to calibrate a
regression model to reasonably approximate many more years of historical basis for free?
I ended up buying historical data for my area before I could fully test and answer this. While I won’t save myself any money on a basis data purchase, I can at least test the theory to see if it would
have worked and give others an idea for the feasibility of doing this in their own local markets.
Regression Analysis
If you’re unfamiliar with regression analysis, essentially you provide the algorithm a series of input variables (basis levels for the 7 regions of Illinois) and a single output to find the best fit
to (the basis level of your area) and it will compute a formula for how a+b+c+d+e+f+g (plus or minus a fixed offset) approximates y. Then you can run that formula against the historical data to
estimate what those basis levels would have been.
We can do this using Frontline Solver’s popular XLMiner Analysis Toolpak for Excel and Google Sheets. Here’s a helpful YouTube tutorial for using the XLMiner Toolpak in Google Sheets and
understanding regression theory, and here’s a similar video for Excel.
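The same calibration can also be sketched outside of spreadsheet tools. The snippet below is illustrative only: it uses synthetic stand-ins for the seven regional basis columns and the local basis series (the real inputs would come from the U of I data and your own elevator records), fits an ordinary least squares model with an intercept, and applies it back to the regional values:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins: 7 regional basis columns plus a local series that
# is (by construction) a noisy linear mix of the regions.
n_obs = 198
regions = rng.normal(-0.25, 0.10, size=(n_obs, 7))
true_coef = np.array([0.30, 0.10, 0.05, 0.20, 0.10, 0.15, 0.10])
local = 0.02 + regions @ true_coef + rng.normal(0, 0.02, n_obs)

# Ordinary least squares with an intercept column, like the XLMiner run
X = np.column_stack([np.ones(n_obs), regions])
beta, *_ = np.linalg.lstsq(X, local, rcond=None)
intercept, coefs = beta[0], beta[1:]

# The fitted model could then be applied to many more years of regional
# history to approximate the local basis for free.
estimate = intercept + regions @ coefs
mae = np.mean(np.abs(estimate - local))
print(round(mae, 3))
```

With well-behaved data the in-sample error is small; the article's experiments below show how much worse things get out of sample.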
Input Data Formatting
You’ll want to reformat the data with each region’s basis level for a given contract in separate columns (Columns E-K below), then a column for your locally observed values on/near those same dates
(Column N). To keep things simple starting out, I decided to make one model that includes December, March, and July rather than treat each contract month as distinct models. To account for
local-to-regional changes that could occur through a marketing year, I generated Column L to account for this, and since anything fed into this analysis needs to be numerical, I called it “Delivery
Months after Sept” where December is 3, March is 6, and July is 10. I also created a “Days Until Delivery” field (Column M) that could help it spot relationship changes that may occur as delivery approaches.
Always keep in mind: what matters in this model isn’t what happens to the actual basis, but the difference between the local basis and what is reported in the regional data.
⚠️ One quirk with the Google Sheet version- after setting your input and output cells and clicking the OK button, nothing will happen. There’s no indication that the analysis is running, and you
won’t even see a CPU increase on your computer as this runs in Google’s cloud. So be patient, but keep in mind there is a hidden 6 minute timeout for queries (this is not shown anywhere in the UI; I had
to look in the developer console to see the API call timing). If your queries are routinely taking several minutes, set a timer when you click OK and know that if your output doesn’t appear within 6
minutes you’ll need to reduce the input data or use Excel. In my case, I switched to Excel part way through the project because I was hitting this limit and the 6 minute feedback loop was too slow to
play around effectively. Even on large datasets (plus running in a Windows virtual machine) I found Excel’s regression output to be nearly instantaneous, so there’s something bizarrely slow about
however Google is executing this.
Results & Model Refinement
For my input I used corn basis data for 2018, 2019, and 2020 (where the year is the growing season) which provided 9 contract months and 198 observations of overlapping data between the U of I data
and my actual basis records. Some of this data was collected on my own and some was a part of my GeoGrain data purchase.
Here’s the regression output for the complete dataset:
The data we care most about is the intercept (the fixed offset) and the coefficients (the values we multiply the input columns by). Also note the Adjusted R Squared value in the box above- this is a
metric for measuring the accuracy of the model- and the “adjustment” applies a penalty for additional input variables since even random data can appear spuriously correlated by chance. Therefore, we
can play with adding and removing inputs in an attempt to maximize the Adjusted R Squared.
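For reference, the penalty in question is the standard formula adj R² = 1 − (1 − R²)(n − 1)/(n − k − 1), where n is the number of observations and k the number of predictors. A tiny helper (the numbers below are made up, not taken from the article's runs):

```python
def adjusted_r_squared(r2, n_obs, n_predictors):
    """Penalize R^2 for each additional input variable."""
    return 1 - (1 - r2) * (n_obs - 1) / (n_obs - n_predictors - 1)

# Same raw R^2, but more inputs -> lower adjusted value
few = adjusted_r_squared(0.62, 198, 9)
many = adjusted_r_squared(0.62, 198, 20)
print(round(few, 3), round(many, 3))
```

This is why adding a column only "pays" if it buys real explanatory power.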
0.597 is not great- that means this model can only explain about 60% of the movement seen in actual basis data by using the provided inputs. I played around with adding and removing different columns
from the input:
Data Observations Adjusted R Squared
All Regions 198 0.541
All Regions + Month of Marketing Year 198 0.596
All Regions + Days to Delivery 198 0.541
All Regions + Month of Mkt Year + Days to Delivery 198 0.597
Regions 2, 6, 7, + Month of Mkt Year + Days to Delivery 198 0.537
Regions 1, 3, 4, + Month of Mkt Year + Days to Delivery 198 0.488
All Data except Region 3 198 0.588
December Only 95 0.540
March Only 62 0.900
July Only 41 0.935
December Only- limited to 41 rows 41 0.816
Looking at this, you’d assume everything is crap except the month-specific runs for March and July. However, it seems like a good part of those gains is attributable to the sample size, as
shown in the December run limited to the last 41 rows, which makes it comparable to the July data.
Graphing Historical Accuracy
While the error metrics output by this analysis are quantitative and helpful for official calculations, my human brain finds it helpful to see a graphical comparison of the regression model vs the
real historical data.
I created a spreadsheet to query the U of I data alongside my actual basis history to graph a comparison for a given contract month and a given regression model to visualize the accuracy. I’m using
the “All Regions + Month of Marketing Year” model values from above as it had effectively the highest Adjusted R Square given the full set of observations.
In case it’s not obvious how you would use the regression output together with the U of I data to calculate an expected value, here’s a pseudo-formula:
= intercept + (region1UofIValue * region1RegressionCoefficient) + (region2UofIValue * region2RegressionCoefficient) + ... + (monthOfMktYear * monthOfMktYearRegressionCoefficient)
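As executable code, that pseudo-formula is just a dot product plus an offset. All numbers below are hypothetical placeholders, not the article's fitted values:

```python
def estimate_local_basis(intercept, coefficients, inputs):
    """Apply a fitted regression to one week: intercept + sum(coef * input)."""
    assert len(coefficients) == len(inputs)
    return intercept + sum(c * x for c, x in zip(coefficients, inputs))

# Hypothetical coefficients: 7 regional basis values plus
# "month of marketing year" (December = 3)
coefs = [0.30, 0.10, 0.05, 0.20, 0.10, 0.15, 0.08, 0.004]
week = [-0.28, -0.30, -0.25, -0.27, -0.31, -0.26, -0.29, 3]
print(round(estimate_local_basis(0.02, coefs, week), 4))
```

Running this row by row over the U of I history produces the estimated local basis series graphed below.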
Now here are graphs for 2017-2019 contracts using that model:
Here’s how I would describe this model so far:
“60% of the time it works every time”
Keep in mind, the whole point of a regression model is to train it on a subset of data, then run that model on data it has never seen before and expect reasonable results. Given that this model was
created using 2018-2020 data, 2017’s large errors seem concerning and could indicate futility in creating a timeless regression model with just a few years of data.
To take a stab at quantifying this performance, I’ll add a column to calculate the difference between the model and the actual value each week, then take the simple average of those differences
across the contract. This is a very crude measure and probably not statistically rigorous, but it could provide a clue that this model won’t work well on historical data.
Contract Month Average Difference
July 2021 <insufficient data>
March 2021 <insufficient data>
December 2020 0.05
July 2020 0.03
March 2020 0.04
December 2019 0.03
July 2019 0.05
March 2019 0.03
December 2018 0.04
– – – – – – – – – – – – – – – – –
July 2018 0.04
March 2018 0.05
December 2017 0.08
July 2017 0.15
March 2017 0.06
December 2016 0.08
July 2016 0.10
March 2016 0.06
December 2015 0.07
July 2015 0.07
March 2015 0.04
December 2014 0.07
The dotted line represents the end of the data used to train the model. If we take a simple average of the six contracts above the line, it is 4.3 cents. The six contracts below the line average 7.6 cents.
Is this a definitive or statistically formal conclusion? No. But does it bring you comfort if you wanted to trust the historical estimation without having the actual data to spot check against? No.
To provide a visual example, here’s December of 2016 which is mathematically only off by an average of $0.08.
How much more wrong could this seasonal shape be?
Conclusion: creating a single model with 2-3 years of data does not consistently approximate history.
Month-Specific Model
Remember the month-specific models with Adjusted R Squares in the 80s and 90s? Let’s see if the month-specific models do better on like historical months.
Contract Month Average Difference (Old Model) Average Difference (Month-Model)
July 2021 <insufficient data>
March 2021 <insufficient data>
December 2020 0.05 0.03
July 2020 0.03 0.02
March 2020 0.04 0.01
December 2019 0.03 0.03
July 2019 0.05 0.02
March 2019 0.03 0.02
December 2018 0.04 0.03
– – – – – – – – – – – – – – – – – – – – – – – –
July 2018 0.04 0.04
March 2018 0.05 0.03
December 2017 0.08 0.05
July 2017 0.15 0.11
March 2017 0.06 0.06
December 2016 0.08 0.05
July 2016 0.10 0.08
March 2016 0.06 0.08
December 2015 0.07 0.04
July 2015 0.07 0.04
March 2015 0.04 0.03
December 2014 0.07 0.05
Well, I guess it is better. ¯\_(ツ)_/¯ Using the month-specific model shaved off about 2 cents of error overall. The egregious December 2016 chart is also moving in the right direction although still
not catching the mid-season hump.
Even with this improvement, it’s still hard to recommend this practice as a free means of obtaining reasonably accurate local basis history. What does it mean that July 2017’s model is off by an
average of $0.11? In a normal distribution, the mean is equal to the median, and thus half of all occurrences are above/below that line. If we assume the error to be normally distributed, that
would mean half of the weeks were off by greater than 11 cents. You could say this level of error is rare in the data, but it’s still true that this has been shown to occur, so if you did this blind
you have no great way of knowing if what you were looking at was even 10 cents within the actual history. In my basis market at least, 10 cents can be 30%-50% of a seasonal high/low cycle and is not trivial.
Conclusion: don’t bet the farm on this method either. If you need historical basis data, you probably need it to be reliable, so just spend a few hundred dollars and buy it.
Last Ditch Exploration- Regress ALL the Available Data
Remember how Mythbusters would determine that something wouldn’t work, but for fun, replicate the result? That’s not technically what I’m doing here, but it feels like that mood.
Before I can call this done, let’s regression analyze the full history of purchased basis data on a month-by-month basis to see if the R Squared or historical performance ever improves by much. I’ll
continue to use the basis data for corn at M&M Service for consistency with my other runs. It also has the most data (solid futures from 2010-2020) relative to other elevators or terminals I have
history for.
Here are the top results of those runs:
Month Number of Observations Adjusted R Squared Standard Error
December 329 0.313 0.049
March 247 0.342 0.055
July 175 0.455 0.116
All Months Together 751 0.305 0.078
Final Disclaimer: Don’t Trust my Findings
It’s been a solid 6-7 years since my last statistics class and I know I’m throwing a lot of numbers around completely out of context. Writing this article has given me a greater appreciation for the
disciplined work in academic statistics of being consistent in methodology and knowing when a figure is statistically significant or not. I’m out here knowing enough to be dangerous and moonlighting
as an amateur statistician.
It’s also possible that other input variables could improve the model beyond the ones I added. What if you added a variable for “years after 2000” or “years after 2010” that would account for the
ways regional data is reflected differently over time? What about day of the calendar year? How do soybeans or wheat compare to corn? The possibilities are endless, but my desire to bark up this tree
for another week is not, especially when I already have the data I personally need going forward.
I’d be delighted for feedback about any egregious statistical error I committed here or if anyone gets different results doing this analysis in their local market.
Quick feedback on this article
Prediction of the Toroidal form of Black Hole | Scienceteen
• Post author:Rahul Aggrawal
• Post published:May 29, 2018
• Post category:Articles / Astronomy / Astrophysics / Cosmology / GTR / Physics
I was right regarding my prediction of the toroidal form of a black hole. Toroidal topology is possible in string theory, and even in classical GR a similar ring-type event horizon exists for rotating BHs.
Most importantly, a toroidal black hole is helpful in explaining the structure of quasars and other AGN (active galactic nuclei). Here is a short note on my work. Very soon I will try to do some calculations.
A black hole is a fascinating topic nowadays. As we know, Einstein's theory of gravity gives the dramatic prediction that a sufficiently dense star collapses to form a black hole. In scientific
language, it is a spacetime singularity surrounded by a horizon. An observer can enter the horizon but cannot return from beyond it. A black hole has mass, and it can carry charge and angular
momentum, which means it can rotate. Apart from these, it has no other classical characteristics.
I am interested here in the shape of a black hole. Is there a well-defined shape for a black hole, or can its horizon take any shape? First, we should remember that a BH is an astrophysical object,
and every astrophysical object should satisfy certain conditions to be stable. Then we can ask what the range of possibilities is regarding the shape of a BH. So what are the conditions for the
stability of an astrophysical object? These conditions are defined as energy conditions in general relativity.
These conditions are:
1. Null Energy Condition.
2. Weak Energy Condition.
3. Dominant Energy Condition.
4. Strong Energy Condition.
First, I should introduce one quantity, known as the energy-momentum tensor, which describes the distribution of mass, momentum, and stress due to matter.
Technically, each condition defines a certain type of density and pressure condition for perfect-fluid matter. These conditions are important in general relativity for proving the no-hair theorem for
BHs and the entropy of BHs. Let us assume a BH has a shape other than a spheroid or ellipsoid. Then what is the possibility of such a BH existing? Let us consider a toroidal BH, meaning the event
horizon of the BH has a toroidal shape. This case is very well discussed for Kerr's BH.
When a heavy, massive, rotating spherical body collapses under its own gravity, its mass distribution no longer remains spherical and it develops an equatorial bulge. Because a point cannot support
rotation in classical physics, the minimal shape of the singularity is a ring with zero thickness and non-zero radius. It is sometimes known as Kerr's singularity.
An observer falling into this type of black hole may be able to avoid the central singularity. This type of BH sometimes behaves like a wormhole. But a BH cannot have a toroidal shape, simply due to
the no-hair theorem, because a toroidal shape would generate noticeable 'hairs' that outside observers could feel. When I talk about the shape of a BH here, I mean the topology of the BH. In
classical GR there is no chance of a toroidal BH, but in a higher-dimensional theory like string theory we can discuss toroidal BHs. This type of black hole is important for explaining the structure
of quasars and other active galactic nuclei.
Rahul Aggrawal
I am a teacher and a theoretical physicist. Physics gives me pleasure and teaching physics gives me stable happiness. For More info visit www.rahulaggrawalphysics.blogspot.com
Generate random integers that sum to a specific number within a specific range
Dear members
I am attempting to generate random integers that sum to a specific number within a specific range. In fact, Roger Stafford provided a similar function, randfixedsumint, but it lacks the ability to
specify the min and max value that each integer can take.
In other words, I am looking for a function that can randomly generate 3 integers that sum to 1000 within a range of 100 <= x <= 700.
function R = randfixedsumint(m,n,S);
% This generates an m by n array R. Each row will sum to S, and
% all elements are all non-negative integers. The probabilities
% of each possible set of row elements are all equal.
if ceil(m)~=m|ceil(n)~=n|ceil(S)~=S|m<1|n<1|S<0
error('Improper arguments')
P(:,in) = cumsum(P(:,in+1));
R(im,in) = sum(P(s+1,in)*rand<=P(1:s,in));
Quite similar to what randfixedsum (Random Vectors with Fixed Sum) could achieve, but I am having trouble limiting the generated values to integers
Conditional Random number generation
2 Comments
Is the randi function not allowed?
Is this a way out?
while sum(data)~=1000
Accepted Answer
Quite similar to what randfixedsum (Random Vectors with Fixed Sum) could achieve, but I am having trouble limiting the generated values to integers
I think a good approximate solution would be to use randfixedsum to get an initial continuous-space randomization x0. Then, you can use minL1intlin (Download) to project x0 to the nearest (in
L1-norm) integer solution x. I don't know if this will be perfectly uniformly distributed, but I think it will be close. Example:
n=7; %number of variables
x=round( minL1intlin( speye(n), x0, 1:n, [],[],e,s,lb*e,ub*e) );
More Answers (6)
For such a small problem size, you can pre-generate a table of allowable combinations:
comboTable = [x(idx),y(idx),z(idx)];
Then, you can just select rows randomly from the table:
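The same pre-tabulation idea can be sketched in Python for the original question's numbers (3 integers in [100, 700] summing to 1000): enumerate every admissible triple once, then draw rows uniformly. Sampling from the full table is uniform by construction, which the later answers in this thread use as the gold standard:

```python
import random

lo, hi, total = 100, 700, 1000

# Build the table of every admissible (x, y, z) once; z is forced by x and y
table = [(x, y, total - x - y)
         for x in range(lo, hi + 1)
         for y in range(lo, hi + 1)
         if lo <= total - x - y <= hi]
print(len(table))

# Each draw is then uniform over all solutions
x, y, z = random.choice(table)
print(x + y + z)
```

This stays cheap here because z is determined by x and y, so the enumeration is only over two variables.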
DGM on 17 Jun 2021
I've been wasting time on trying to do something similar lately. I can't believe I missed this question. For my own purposes, I couldn't figure out how to get randfixedsum() to do the various things
I wanted (within memory constraints), so I did it my own naive way. If you only need a handful of values, I doubt that my answer is of any use to you, but maybe someone else who finds this post will
find it helpful.
Attached is the current version of said file. It will output an array of integers with specified size and sum, with lower/upper boundaries depending on mode. Values can be generated to fit uniform,
skew, exponential, or gaussian distributions. For example, all of these arrays have the same size and sum (i.e. they have the same mean of 200)
% generate uniform random integers with a sum of 2000 and maximum of 50
a = randisum([NaN 50],2000,[5 10],'uniform')
[sum(a(:)) min(a(:)) max(a(:)) round(mean(a(:)))]
Note that this file is not thoroughly tested yet. Don't be surprised if there are bugs. The lazy sum adjustment method potentially imparts some distortion to the distributions, so don't use it if you
can't bear that, or come up with something better. I make no claims that these are particularly efficient, robust, or statistically meaningful ways to solve the problem. My own tolerance for these
flaws is potentially higher than yours.
I may update the file as I work on it. EDIT: updated to improve sum adjustment and 'exponential' mode speed.
3 Comments
When I wrote that, my needs were very undemanding, and I was unsatisfied with examples from other threads that were either slower or were limited by memory use. I took the opportunity to play a
little outside my comfort zone.
I think it is difficult to find a truly uniform integer solution for this problem. (Even the continuous case is quite a test of code.) Clearly the very best solution is to generate the set of all possible
solutions, and then sample from them uniformly. Anything less than that needs to trade off adherence to a uniform sampling with speed. Your solution seems a reasonable tradeoff.
Revisiting this old thread: here is the randfixedsum version but for integers.
It's a quick implementation and it is slow, but mathematically it should generate a conditionally uniform distribution.
R = ricb(m, k, starget, lo, hi)
function R = ricb(m, k, s, lo, hi)
% R = ricb(m, k, s, lo, hi)
% pseudo random integer integer array with sum and bound constraints.
% m: scalar number of solutions
% k: scalar length of the solutions
% Return a pseudo random integer array R (m x k)
assert(N >= 0, 'lo too large')
assert(r >= 0, 'lo too large or hi too small')
R(i,:) = ricb_helper(k, N, r);
function v = ricb_helper(k, N, r)
% function v = ricb(k, N, r)
% k: length of the vector
% head: optional parameter to by concatenate in the first column
% v: (1 x k) array such as sum(v,2)==N
c = arrayfun(@(j) icbcount(k-1, N-j, r), J, 'unif', 1);
ir = discretize(rand(), cc);
v = [j, ricb_helper(k-1, N-j, r)];
function c = icbcount(k, N, r)
% count the number of solutions v1+...+vk = N
% v1,...vk are integers such that 0<=vi<=r
% (integer composition with common upper bounds)
C1 = arrayfun(@(q)nchoosek(k, q), q);
C2 = arrayfun(@(q)nchoosek(q.*(r + 1)-1-N, k-1), q);
2 Comments
@Bruno Luong - well done. I gave each of the solutions posted here a decent test in my answer. And yours seems to be as close to uniform as I would expect.
Here is a speed up version.
EDIT: a smaller change has been made makes it even faster than my fisrt attempt
R = ricb(m, k, starget, lo, hi);
fprintf('Generate time = %1.3f second\n', t)
function R = ricb(m, k, s, lo, hi)
% R = ricb(m, k, s, lo, hi)
% pseudo random integer array with sum and bound constraints.
% m: scalar number of random vectors
% k: scalar length of the solutions
% Return a pseudo random integer array R of size (m x k)
% R = ricb(m, k, starget, lo, hi)
assert(N >= 0, 'LO too large')
assert(r >= 0, 'LO too large or HI too small')
assert(k*r >= N, 'HI too small')
for i=1:m % possible parfor but not necessary faster
% generate one vector at the time
R(i,:) = ricb_helper(k, N, r, i, UR);
function v = ricb_helper(k, N, r, snum, UR)
% function v = ricb(k, N, r, UR)
% k: length of the vector
% r: upper bounds (lo bound is assumed to be 0)
% head: optional parameter to by concatenate in the first column
% snum: solution number, in 1:m where m is size(UR,2)
% UR: array of size (k x m), contains uniform distribution in (0,1)
% interval. We use the pre-random array to avoid calling over and
% over rand() function and save calling overhead
% v: (1 x k) array such as sum(v,2)==N
q1 = ceil((N-jmax+k-1)/(r+1));
% Compute C1 = arrayfun(@(q)nchoosek(k, q), q); with alternate sign
C1(nq+1-i) = x; % store in reverse order
% Compute C2 = nchoosek(q.*(r + 1)-1-N-J, k-2), q)
%y = nchoosek(eta, kappa); % q = k-1, j = 0;
y = 1; % no need to set y = nchoosek(eta, kappa) to have correct absolute count
% since it scales the count c which we don't need.
% Note C2 is still integer with this change
y = (y * eta) / (eta - kappa);
% Number of solutions (up to a factor of 1/nchoosek(q1.*(r + 1)-1+di, k-2))
% Pick the first element according to the respective
% number of solutions count
j = discretize(UR(k, snum), cc)-1;
v = [j, ricb_helper(k-1, N-j, r, snum, UR)];
I had to put in my 2 cents here. I saw @Matt J and @Bruno Luong both claim uniformity, or at least close to approximate uniformity. And that made me wonder, what does a uniform sampling mean in this
context? And how would I test for uniformity? Finally, since this is effectively a comment on several answers in this post, I've made my post an answer in itself, rather than a comment on any one of
these eminently interesting solutions.
First, what does uniform mean? Since there are only a finite number of possible solutions, uniform MUST mean that every solution is as likely to arise as any other. So if we sample a large number of
times, we should get replicates. But all of them should be at least close to uniformity.
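That definition is easy to check directly for the stress case used below (5 integers from {0, 1, 2} summing to 5): enumerate the solution set, sample from it uniformly, and look at the spread of counts. A Python sketch of the same experiment:

```python
import itertools
import random

# The stress case: 5-tuples over {0, 1, 2} with sum 5
solutions = [t for t in itertools.product(range(3), repeat=5) if sum(t) == 5]
print(len(solutions))            # 51 distinct solutions

# Truly uniform sampling: pick an index uniformly at random
random.seed(1)
N = 10_000
counts = {s: 0 for s in solutions}
for _ in range(N):
    counts[random.choice(solutions)] += 1

# Every event should land near N/51 (about 196), including (1, 1, 1, 1, 1)
print(min(counts.values()), max(counts.values()))
```

Any generator whose counts deviate wildly from this pattern is not sampling the solution set uniformly, which is exactly the test applied to the answers below.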
Looking at Matt's solution. It seems entirely plausible. It does work. But, is it uniform? A good test seemed to be to find a sample of 5 integers, each of which comes from the set [0,1,2], where the
sum is 5. This is a good stress test. There are not too many possible solutions (51 such possible solutions it would appear.) I downloaded Matt's very nice code minL1intlin. (Well worth having in
your toolbox anyway.)
Here is my test function:
function X = randfixedsumint(nvars,nsets,setsum,lb,ub)
opts =optimoptions('intlinprog');
X(:,n) = round( minL1intlin( speye(nvars), x0, 1:nvars, [],[],e,setsum,lb*e,ub*e,[],opts) );
Now for the test.
X = randfixedsumint(5,10000,5,0,2)';
Not terribly fast. But then what can you expect? It is doing a call to intlinprog for each set.
Before I go any further, what does uniform mean?
[Xu,I,J] = unique(X,'rows');
So we saw 51 unique events, in a sample of size 10000. We would expect that roughly each possible event will arise almost 200 times in that sample. 196.08 times, but 200 is a good round number to
shoot for.
How well did this idea work?
Drat. The [1 1 1 1 1] event is roughly 10 times as likely as many of the other events. So not really that uniform. Again, this was a stress test. It is designed to be a highly difficult case.
How about randisum? This is the entry from @DGM. I think the corresponding call to randisum would be:
Now for a test of uniformity.
X(i,:) = randisum([NaN,2],5,[1,5],'uniform');
[Xu2,I,J] = unique(X,'rows');
That is a little better, but still not as good as I might have hoped to see. In a sample of size 10000, we would expect to see a distribution that lies a little more tightly around the mean.
int51 = randi(51,[10000,1]);
Do you see the problem in what we see from randisum? It is not too bad looking, BUT the [1 1 1 1 1] event happened 441 times, more than twice as often as I would expect. Even so, not bad, and far
better than Matt's solution.
Finally, taking a look at the solution from Bruno: though I had to fix a few lines and repair the end statements, it did work in the end. This should be the comparable call:
[Ru,I,J] = unique(R,'rows');
As you can see, these counts are much closer to what I would expect from a true uniform sampling. Is ricb truly fully uniform? That is a good question. I'd need to look very carefully at the code,
and then test the hell out of it. But I think @Bruno Luong came pretty close to nailing it. Note that the [1 1 1 1 1] case does not look any different from the rest.
11 Comments
@Paul Besides eyeballing it, the classical test is a chi-square test:
[Xu2,~,J] = unique(X,'rows');
c_BLU = accumarray(J,1,[51,1]);
bar(c_BLU)
xlabel('solution [1-51]')
meancount = size(X,1)/51;             % expected count under uniformity
chi2 = sum((c_BLU-meancount).^2./meancount)
if chi2 <= chi2inv(0.95,51-1)         % 50 degrees of freedom, 5% level
    fprintf('Uniformity test passed\n');
else
    fprintf('Uniformity test failed\n');
end
Unfortunately, the uniform distribution with bounds for integers is much more challenging. One way is to approximate it by rounding the results of randfixedsum:
lo = 100;
hi = 700;
s = 3000;
n = 7;
if n*ceil(lo) > s || n*floor(hi) < s   % infeasible if even the extreme values cannot reach s
    error('Not possible')
end
while true
    x = randfixedsum(n,1,s,lo,hi);
    c = round([0; cumsum(x)]); c(end) = s;
    xi = diff(c);
    if all(xi >= lo & xi <= hi)
        break                          % accept this sample
    end
end
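The trick here is that rounding the cumulative sums, rather than the individual values, guarantees the rounded parts still add up to the target; a tiny illustration:

```matlab
x = [1.4 2.3 1.3];           % real values summing to 5
c = round([0, cumsum(x)]);   % [0 1 4 5]
xi = diff(c)                 % [1 3 1], integers that still sum to 5
```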
Paul on 20 Sep 2024
How about a simple loop that finds all combinations of the random variables that satisfy the constraint, and then samples from that set uniformly, as suggested above.
X = sumintuni2(1e5,0:2,5,5);
function X = sumintuni2(N,uval,usum,ny)
% NOTE: a few elided lines (counters, end statements) are reconstructed here
nelements = numel(uval);
nvectors = ny;
ucombinations = zeros(1000,ny);
ucount = 0;
term1 = nelements.^(nvectors:-1:1);
term2 = nelements.^((nvectors-1):-1:0);
for ii = 0:(nelements^nvectors - 1)
    index = floor(mod(ii,term1)./term2) + 1;  % digits of ii in base nelements
    if sum(uval(index)) == usum
        ucount = ucount + 1;
        ucombinations(ucount,:) = uval(index);
        if ucount == height(ucombinations)    % grow storage in blocks of 1000
            ucombinations = [ucombinations; zeros(1000,ny)];
        end
    end
end
ucombinations(ucount+1:end,:) = [];           % trim unused rows
X = ucombinations(randi(height(ucombinations),N,1),:);
end
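The index line enumerates every length-ny tuple over uval by reading ii as a number in base numel(uval); a small worked example with 3 elements and 5 positions:

```matlab
nelements = 3; nvectors = 5;
term1 = nelements.^(nvectors:-1:1);
term2 = nelements.^((nvectors-1):-1:0);
ii = 7;                                    % 7 in base 3 is 00021
index = floor(mod(ii,term1)./term2) + 1    % [1 1 1 3 2], 1-based digits
```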
1 Comment
@Paul, try to run this example with lower bound = 100, upper bound = 700, target sum = 3000, and ny = 7. I'm afraid enumerating all combinations is not a good strategy. This method would not scale up well for long uval and large ny.
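The scale Bruno points to is easy to quantify; for those bounds the enumeration loop would have to visit every base-601 tuple:

```matlab
nelements = numel(100:700);   % 601 candidate values per position
nvectors = 7;
nelements^nvectors            % about 2.8e19 tuples, far too many to enumerate
```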