Descriptive Statistics Using Python
Descriptive statistics are used to understand your data by calculating various statistical values for given numeric variables. For any given dataset, the approach is to compute these summary values first; they help identify which statistical tests can be applied to the data. Let's understand this in more detail.
Under descriptive statistics we can calculate the following values:
1. Central tendency — mean, median, mode
2. Dispersion — variance, standard deviation, range, interquartile range(IQR)
3. Skewness — symmetry of the data about the mean value
4. Kurtosis — peakedness of the data around the mean value
Note: mathematical formulas are not given for all of these values; library functions are available to compute them for any dataset. Let's understand these values and their business uses.
import pandas as pd
import numpy as np
1. Calculating Central Tendency
#mean — the average of the given numeric values
#median — the middle-most of the given values
#mode — the most frequently occurring value of the given numeric variable
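The three measures above can be computed directly with pandas; the ages below are a hypothetical sample, since the original post does not include a dataset.

```python
import pandas as pd

# Hypothetical customer ages (illustrative only)
ages = pd.Series([23, 25, 25, 27, 30, 32, 35, 35, 35, 40])

mean_age = ages.mean()      # average value
median_age = ages.median()  # middle-most value
mode_age = ages.mode()[0]   # most frequent value (mode() can return several)

print(mean_age, median_age, mode_age)  # 30.7 31.0 35
```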
Why do we need to calculate the mean, median and mode?
These values help identify your potential customers or target audience. If a new customer or client's age is close to the average age of your existing customers, you can put extra effort into winning their business.
Example — watch any TV commercial: the product and the actors in the ad usually belong to the same age group. Take a look at the TV ads below.
2. Dispersion
Dispersion describes the variation present in a given variable — how close to or far from the mean value the observations lie.
Variance — the average squared deviation from the mean value
Standard Deviation — the square root of the variance
Range — the difference between the maximum and minimum values
InterQuartile Range (IQR) — the difference between Q3 and Q1, where Q3 is the 3rd quartile value and Q1 is the 1st quartile value
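All four dispersion measures can be computed with pandas; the order sizes below are a hypothetical sample for illustration.

```python
import pandas as pd

# Hypothetical order sizes (illustrative only)
orders = pd.Series([10, 12, 12, 14, 15, 18, 20, 25, 30, 44])

variance = orders.var()                    # sample variance (ddof=1 by default in pandas)
std_dev = orders.std()                     # square root of the variance
value_range = orders.max() - orders.min()  # max - min
q1, q3 = orders.quantile(0.25), orders.quantile(0.75)
iqr = q3 - q1                              # interquartile range

print(variance, std_dev, value_range, iqr)
```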
Why do we need to calculate the dispersion of a given variable?
The variance of a variable helps you find the range of customer requirements — the highest and lowest values your customers need. This helps you understand the variation in customer requirements and maintain your inventory accordingly.
3. Skewness
Skewness measures the symmetry of the data about the mean value. Symmetry means an equal distribution of observations above and below the mean.
skewness = 0: the data is symmetric about the mean.
skewness = Negative: the data is not symmetric, and the left-side tail of the density plot is longer than the right-side tail.
skewness = Positive: the data is not symmetric, and the right-side tail of the density plot is longer than the left-side tail.
We can find the skewness of a given variable with standard library functions.
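In practice the formula is rarely typed by hand; pandas exposes the bias-adjusted sample skewness directly. The data below is a hypothetical right-skewed sample.

```python
import pandas as pd

# Hypothetical sample with a long right tail (illustrative only)
values = pd.Series([1, 2, 2, 3, 3, 3, 4, 4, 5, 20])

print(values.skew())  # positive -> right tail longer than the left
```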
4. Kurtosis
Kurtosis describes the peakedness (or flatness) of a density plot relative to the normal distribution. If you research the definition of kurtosis further, you will come across Dr. Westfall and Dr. Wheeler and their definitions. Dr. Wheeler defines kurtosis as: "The kurtosis parameter is a measure of the combined weight of the tails relative to the rest of the distribution." In other words, kurtosis measures the tail heaviness of a given distribution.
kurtosis = 0: the peakedness of the graph equals that of the normal distribution.
kurtosis = Negative: the graph is less peaked than the normal distribution (a flatter plot).
kurtosis = Positive: the graph is more peaked than the normal distribution (a more peaked plot).
We can find the kurtosis of a given variable with standard library functions.
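Likewise, pandas computes kurtosis directly. Note that pandas reports excess kurtosis, so 0 corresponds to a normal distribution, matching the convention above. The sample below is hypothetical.

```python
import pandas as pd

# Hypothetical heavy-tailed sample (illustrative only)
heavy = pd.Series([0, 0, 0, 0, 0, 0, 0, 0, -10, 10])

# .kurtosis() returns excess kurtosis: 0 for normal, > 0 for heavier tails
print(heavy.kurtosis())
```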
Let's look at a graphical representation of a variable and interpret the skewness and peakedness of its distribution.
import seaborn as sns
Density plot of variable ‘A’
In the graph above we can clearly see that more of the data lies under the left-side tail, so the distribution is left-skewed (it has negative skewness). The histogram sits above the density line, which indicates a flat distribution, i.e. the kurtosis of this distribution is negative; if the density line were above the histogram, the kurtosis would be taken as positive.
How do we make decisions from these graphs?
Negative skewness — more observations lie below the mean. This suggests more customers prefer products priced below the mean, so you should keep more stock of below-mean-price products.
Positive kurtosis — the data is more peaked than normal. This suggests the product belongs to a premium category with a limited customer range, so you can sell it without offering discounts. With negative kurtosis, you may have to offer discounts on your product or service in order to sell it.
Please share your valuable feedback on this.
{"url":"https://dataanalyticsedge.com/2019/11/25/descriptive-statistics-using-python/","timestamp":"2024-11-07T18:43:54Z","content_type":"text/html","content_length":"100394","record_id":"<urn:uuid:dfe890ba-174f-4c27-8187-e07eafac170c>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00388.warc.gz"}
A nurse earns $800.00 per week before any tax deductions. The following taxes are deducted each week: $83.00 federal income tax, $38.00 state income tax, and $79.00 Social Security tax. How much will
the nurse make in 4 weeks after taxes are deducted?
Correct Answer : C
We need to find the nurse's net income over 4 weeks from the weekly net income.
Weekly net income=gross income-total tax
Total tax=federal income tax+state income tax+Social Security tax
Total tax=$(83.00+38.00+79.00)
Total tax=$200.00
Weekly net income=$(800.00-200.00)=$600.00
In one week the nurse's net income is $600.00, so in 4 weeks the nurse will have a net income of:
$600.00 × 4 = $2,400.00
The nurse will earn $2,400.00 in 4 weeks after taxes are deducted.
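The arithmetic can be checked with a few lines of Python (a sanity check, not part of the original solution):

```python
gross_weekly = 800.00
taxes = 83.00 + 38.00 + 79.00      # federal + state + Social Security = 200.00
net_weekly = gross_weekly - taxes  # 600.00
net_four_weeks = net_weekly * 4    # 2400.00
print(net_four_weeks)
```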
Related Questions
Correct Answer is D
We need to form a mathematical expression from the given word problem.
Let the number be x.
Twice a number = 2x
Five less than twice a number = 2x − 5
So the mathematical expression from the word problem is 2x − 5.
Correct Answer is B
We need to find how many mL are in 2.5 teaspoons, using dimensional analysis.
Converting between teaspoons and mL uses the conversion 1 teaspoon = 4.93 mL.
Since we want to end up with mL: 2.5 teaspoons × 4.93 mL/teaspoon = 12.325 mL.
Thus, 2.5 teaspoons can hold 12.325 mL.
Correct Answer is B
In the given problem, we use the calculator to find 1.3 × 0.47 = 0.611.
Correct Answer is A
We use 1 L = 1000 mL to convert between the two units.
Since we want to end up with milliliters: 0.5 L × 1000 mL/L = 500 mL.
Thus, 0.5 L is equivalent to 500 mL.
Correct Answer is A
To change L to mL we use the conversion 1 L = 1000 mL: 3 L × 1000 mL/L = 3000 mL.
Thus, a bucket can hold 3000 mL, which is equal to 3 L.
Correct Answer is A
Based on the above data, the horizontal axis will represent the tree type and the vertical axis will represent the number of trees.
Based on these, a bar graph is appropriate to represent the number of trees.
Correct Answer is D
Correlation between two variables falls into one of three categories:
Positive correlation: an increase in one variable is accompanied by an increase in the other.
Negative correlation: an increase in one variable is accompanied by a decrease in the other.
No correlation: a change in one variable produces no consistent response in the other.
From the given choices
Option a is no correlation
Option b is a negative correlation
Option c is a negative correlation
Option d is a positive correlation
Correct Answer is B
Here we use the US customary system to convert between yards and feet, with the conversion 1 yard = 3 feet. Then 6 yards × 3 ft/yard = 18 ft.
Thus, 6 yards is equal to 18 ft.
Correct Answer is D
To form an equation from the word problem, first break the given statement into smaller statements.
First, we are given the width of the rectangle as x. We are told the length is three times the width; mathematically, this means Length = 3x.
Again, the length is 4 less than 3 times the width of the rectangle. Thus, the length of the rectangle in terms of the width becomes:
Length = 3x − 4
This is the required equation.
Correct Answer is C
To convert a fraction to a percent, we multiply the fraction by 100. Therefore, the percent equivalent of 5/8 is 5/8 × 100 = 62.5%.
Thus, 5/8 is equal to 62.5%.
{"url":"https://www.naxlex.com/questions/a-nurse-earns-80000-per-week-before-any-tax-deductions-the-following-taxes-ar","timestamp":"2024-11-12T23:14:24Z","content_type":"text/html","content_length":"96080","record_id":"<urn:uuid:cc8ba742-0380-4f2a-8232-7ac5f881d6bb>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00504.warc.gz"}
The key technology of content traffic management
One: Business Background
The volume maintenance strategy is a very important delivery strategy for video content. New hot video content needs extra exposure resources to maximize its playback volume, but the overall resources of each scene (home page, channel page, etc.) and the daily exposure resources of each drawer are limited, so allocating exposure resources across contents is a competitive problem. In addition, scenes are independent of each other, and each scene optimizes efficiency and experience according to its own goals, so traffic coordination between scenes cannot be achieved by optimizing a single scene.
Assigning exposure to content involves modeling issues about exposure and clicks, and predicting the future clicks of content. Content exposure, clicks, and playback constitute a complex nonlinear
chaotic system, which depends not only on the quality of the content itself, but also on the content update time, update strategy, and user click habits. The traditional statistical forecasting model
cannot explain the various disturbance factors of the external environment and the chaotic characteristics of the system, that is, it cannot describe the essence of the system from the mechanism. In
response to this problem, we first analyzed the historical exposure and click logs of new hot content and established an exposure-sensitivity model for new hot content using an ordinary differential equation, namely the pv-click-ctr model (P2C model for short). Based on the P2C model, combined with the exposure resource constraints of each scene and drawer, we give a multi-objective optimization framework and algorithm under those constraints.
Two: Content Exposure Sensitivity Model
Normally, click PV increases as exposure PV increases — high exposure leads to high clicks. However, the number of content consumers is limited, and repeatedly exposing a single content to the same consumer does not bring more clicks. This click "saturation" phenomenon can be observed in the content's historical exposure and click logs. Inspired by this phenomenon, and based on the historical characteristics of content exposure PV and click PV, we established an ordinary differential equation (ODE) model that describes how content clicks change with exposure, namely the pv-click-ctr (P2C) model. The overall structure is shown in Figure 3.
Due to the limitations of its own factors and the external environment, a content has a maximum, or saturation, value K for its clicks. Given an exposure (pv) x, there is a unique click value y and saturation s. For a click value y, the saturation s is defined as the ratio of the gap between the current clicks and the saturation value to the saturation value, that is:

s = (K − y) / K    (1)

For any content, as pv increases the click saturation decreases, and the ratio of the click increment brought by a unit of pv to the current clicks (the relative click increment) shows a downward trend. That is, the relative click increment is positively correlated with the saturation, which can be expressed by the following formula:

(1/y) · (dy/dx) = r · (K − y) / K    (2)

where r is the positive correlation coefficient. From formula (2), the ordinary differential equation model in which clicks grow with pv is obtained:

dy/dx = r · y · (K − y) / K    (3)

Separating the variables and integrating both sides of equation (3), we get:

y(x) = K / (1 + ((K − y0) / y0) · e^(−r·(x − x0)))    (4)

where x0 and y0 are the initial pv and clicks, respectively.
For the parameters K and r in formula (4), the least-squares method can be used for fitting. First, the historical pv and click data need to be filtered and preprocessed:
(a) Sample point filtering. Select the largest incremental subsequence from the daily historical pv and click data sequences.
(b) Parameter preprocessing. Since the click saturation value K is usually large in magnitude while the correlation coefficient r is usually small, the two parameters are rescaled separately to avoid the "big numbers eat small numbers" phenomenon.
(c) Sample point preprocessing. To prevent the least-squares fit from falling into a local optimum, the historical samples (click value y, pv value x) are also transformed.
After parameter fitting, the pv-click function of a single content is obtained, and pv-click-ctr prediction can be performed, either with a finite-difference numerical method or by substituting data points into formula (4).
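As a rough illustration of the fitting step, the sketch below fits a logistic-form pv-click curve with SciPy's least-squares `curve_fit`. The functional form, the parameter names (K for the click saturation value, r for the growth rate), and all numbers are assumptions for illustration; the production pipeline additionally filters and rescales the data as described above.

```python
import numpy as np
from scipy.optimize import curve_fit

def p2c(x, K, r, b):
    """Clicks as a function of pv: logistic curve with saturation K and growth rate r."""
    return K / (1.0 + np.exp(-(r * x + b)))

# Hypothetical (pv, click) history for one piece of content
pv = np.linspace(0, 10, 50)
rng = np.random.default_rng(1)
clicks = p2c(pv, 5000.0, 1.2, -4.0) + rng.normal(0, 20, pv.size)

# Least-squares fit; a reasonable initial guess keeps the fit away from bad local optima
(K_hat, r_hat, b_hat), _ = curve_fit(p2c, pv, clicks, p0=[clicks.max(), 1.0, -3.0])
print(K_hat, r_hat)
```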
Three: Guaranteed Model & Algorithm
Based on the P2C model established in the previous section, the task of this section is to give approximately optimal exposure for each content when the exposure resources of each scene and drawer
are limited. The overall program flow is as follows:
First, for each content in the content pool, the two parameters of the pv-click-ctr ordinary differential equation (ODE) model are fitted by least squares: the click saturation value K and the intrinsic growth rate r of clicks with pv. This gives the pv-click function of each content.
Second, based on the given optimization objectives and constraints, a multi-objective nonlinear optimization model for pv allocation can be established. Before abstracting the business problem into a
mathematical model, it is necessary to explain the symbols in the model, as follows.
The model has two optimization objectives: maximizing multi-scene vv, and minimizing the variance of ctr across the content pool. Note that minimum ctr variance is a formal description of exposure fairness, used to balance "overexposure" and "underexposure". The constraints represent the exposure PV constraints of scenes, drawers, pits, and contents, respectively. Since we use numerical methods to evaluate the objective function, the optimization model cannot be solved by traditional gradient-based algorithms. Evolutionary algorithms provide a way out, and the genetic algorithm (GA) is chosen here. Note that the fitness function in the GA is computed with the P2C model.
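The allocation step can be sketched with a toy genetic algorithm: maximize total predicted clicks over per-content pv allocations under a single total-exposure budget. The per-content response curves, the GA settings, and all numbers are hypothetical; the real system has per-scene, per-drawer and per-pit constraints and a second (ctr-variance) objective.

```python
import numpy as np

rng = np.random.default_rng(0)
K = np.array([5000.0, 3000.0, 1000.0])  # click saturation per content (hypothetical)
r = np.array([0.8, 1.5, 2.0])           # growth rate per content (hypothetical)
BUDGET = 10.0                           # total pv to distribute

def predicted_clicks(x):
    """P2C-style logistic response of each content to its pv allocation."""
    return K / (1.0 + np.exp(-(r * x - 4.0)))

def fitness(pop):
    """Total predicted clicks; allocations over budget get a penalty fitness."""
    total = predicted_clicks(pop).sum(axis=1)
    return np.where(pop.sum(axis=1) <= BUDGET, total, -1.0)

# Initial population: feasible random allocations (each gene at most BUDGET/3)
pop = rng.uniform(0, BUDGET / 3, size=(200, 3))
for _ in range(300):
    f = fitness(pop)
    parents = pop[np.argsort(f)[-100:]]  # selection: keep the fitter half
    kids = parents[rng.integers(0, 100, 100)] + rng.normal(0, 0.2, (100, 3))  # mutation
    pop = np.vstack([parents, np.clip(kids, 0, BUDGET)])

best = pop[np.argmax(fitness(pop))]
print(best, predicted_clicks(best).sum())
```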
Four: Experimental results
We selected a number of new and popular contents and report the prediction performance of the P2C model and the offline performance of the volume-guarantee model. The evaluation metrics are Root Mean Square Error (RMSE) and Absolute Percent Error (APE). The P2C model and the smoothed-ctr method [1] were both used to predict the click volume of new hot content. The table shows that the P2C model predicts clicks effectively and outperforms the smoothed-ctr method in terms of RMSE.
In the online experiment, we set up a bucket test. The benchmark bucket uses a manual volume-maintenance strategy; the experimental bucket uses the strategy proposed in this paper. During the experiment, we tracked and compared the daily delivery results of the two buckets (CTR variance, the overall CTR of the strategy on the scene, etc.). The 30-day and 7-week volume-maintenance results show that, compared with the manual strategy, the proposed strategy improves both CTR variance and the overall CTR of the scene. The effect on CTR variance is especially pronounced, with an average relative improvement of +50%.
V: Summary & Outlook
The content maintenance strategy aims to solve the contradiction between limited traffic resources and excessive demand, and provide an optimized exposure suggestion for each content, so that the
exposure resources of each scene can generate greater value. This paper proposes a resource-constrained model and algorithm framework for the multi-scenario VV maintenance requirements of new hot
content. This framework consists of two stages: prediction and optimization. We conducted offline tests and bucketing experiments in some scenarios, and the experimental results reflect the
feasibility and effectiveness of the strategy in this paper. In the future, there are many aspects that need to be continuously explored and improved, such as PUV volume maintenance, volume
maintenance cold start issues, etc.
{"url":"https://www.alibabacloud.com/en/developer/a/ai/key-technology-of-content-traffic-management?_p_lc=1","timestamp":"2024-11-07T17:17:23Z","content_type":"text/html","content_length":"137614","record_id":"<urn:uuid:49c0fc2d-1121-4eb9-918e-f17b1afcc90c>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00214.warc.gz"}
45 45 90 Triangle Calculator (right Triangle Calculator) | Examples And Formulas
Mathematical Calculators
45 45 90 Triangle Calculator (right Triangle Calculator)
Calculate hypotenuse, measurements and ratio easily with our 45 45 90 triangle calculator.
What is a 45 45 90 triangle?
The 45 45 90 triangle is a right-angled isosceles triangle with two equal sides (legs). The third side, which is not equal to the others, is called the hypotenuse.
What is a right triangle?
In geometry, a right triangle is a triangle with one right angle. Right triangles are the most common shapes in the world and are found in everyday life, from the shapes of houses to the design of
toys. Right triangles are also the basic shapes used to describe systems of coordinates.
How does right triangle calculator work?
Right triangle calculator is a simple app that helps you solve right triangles quickly and easily. It includes a diagram of a right triangle, as well as instructions on how to solve it. So if you’re
ever faced with a problem involving a right triangle, you can use the right triangle calculator to get your answer quickly and easily.
45-45-90 Triangle is a special kind of triangle
The sides of 45-45-90-degree triangles have a unique ratio. For instance, the two legs are the same length, and the hypotenuse equals that length times the square root of 2.
45 45 90 triangle is special kind of triangle
What are the ratios of a 45 45 90 triangle?
45 45 90 triangle is the simplest of right-angle triangles, and the ratios of the length of sides are 1:1:sqrt(2).
How to solve a 45 45 90 triangle?
A 45 45 90 triangle is the simplest right triangle to solve.
You simply apply the Pythagorean theorem as follows:
a = first side length
b = second side length (equal to the first)
c = hypotenuse = a × √2
How to solve 45 45 90 triangle
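The rule above can be written as a few lines of Python (the function name is ours, for illustration):

```python
import math

def solve_45_45_90(leg):
    """Given one leg of a 45-45-90 triangle, return both legs and the hypotenuse."""
    a = leg
    b = leg                  # the second leg equals the first
    c = leg * math.sqrt(2)   # hypotenuse = leg * sqrt(2)
    return a, b, c

a, b, c = solve_45_45_90(5.0)
print(a, b, c)  # 5.0 5.0 7.0710678...
assert math.isclose(c, math.hypot(a, b))  # agrees with the Pythagorean theorem
```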
Does the Pythagorean theorem work for 45 45 90 triangles?
The Pythagorean theorem states the relation of the hypotenuse to lengths of the sides on a right-angle triangle. Since 45 45 90 triangle is a right angle triangle, the Pythagorean theorem can be
applied to solving measurements.
For 45 45 90 triangles, usage of the Pythagorean theorem is particularly easy, given that the sides have equal length.
Can a right-angled triangle have equal sides?
A right triangle can't have all three sides equal, because one of its angles has to be 90 degrees. However, it can have its two non-hypotenuse sides equal in length.
Right triangle facts
What is the Pythagorean theorem?
The Pythagorean theorem states that the sum of the squares of the two legs of a right triangle is equal to the square of the hypotenuse. It is commonly attributed to the Greek mathematician Pythagoras, although it is not known whether he actually proved it.
According to the historian Iamblichus, Pythagoras was first introduced to mathematics by Thales of Miletus and by Thales's pupil Anaximander. He traveled to Egypt around 535 BCE, was captured during the Persian invasion, and may have visited India. It is also known that he founded a school in Italy.
Pythagorean theorem
Article author
John Cruz
John is a PhD student with a passion to mathematics and education. In his freetime John likes to go hiking and bicycling.
45 45 90 Triangle Calculator (right Triangle Calculator) English
Published: Sat Nov 06 2021
In category Mathematical calculators
{"url":"https://purecalculators.com/45-45-90-calculator","timestamp":"2024-11-06T01:41:07Z","content_type":"text/html","content_length":"195001","record_id":"<urn:uuid:ac2e8248-b2e4-4990-9a21-88b881d244e9>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00645.warc.gz"}
Rinku Mukherjee
• Control of Laminar Boundary-Layer Separation on a Rectangular Wing using Decambering Approach
Roy, Aritras
This paper investigates an improvement of the aerodynamic performance of a rectangular wing by re-designing its camberline to control the laminar separation of its boundary layer. This is
experimentally implemented using an Aluminium secondary skin on the wing surface, which aligns itself to the separated boundary layer, such that the flow remains attached to it, which otherwise
would have separated on the baseline configuration. The shape of the skin, which is now regarded as the effective flow surface, is essentially a decambered version of the baseline shape of the
wing and is predicted numerically using an in-house code based on two linear functions that account for the local deviation of camber by accounting for the difference in coefficients of lift and
pitching moments. Aerodynamic characteristics of the effective decambered configurations using numerical analysis, CFD, and wind tunnel experiments are reported. Results indicate that significant
improvement in aerodynamic performance can be achieved for laminar separation control through this active flow surface.
• Numerical morphing of a rectangular wing to prevent flow separation
Roy, Aritras
The surface of a rectangular wing is morphed numerically at high angles of attack such that it still operates at the reduced coefficient of lift at which the baseline wing operates but while the
flow is separated on the baseline wing, it remains attached on the morphed wing. The aerodynamic characteristics of the baseline wing are obtained experimentally and that of the morphed wing is
obtained numerically. The morphed surface at high angles of attack is obtained using a novel ‘decambering’ technique, which accounts for the deviation of the coefficients of lift and pitching
moment from that predicted by the potential flow. Two wings with different airfoil sections, N ACA0012 and N ACA4415 are tested and compared at high α. Numerical morphing of wing surface for
design coefficient of lift (CL ) (in terms of percentage increment) is presented at angles of attack 50 and 150 . This concept of design CL of a morphing wing is one of the possible solutions to
fly at different flight conditions with corresponding targets and maneuvering requirements. A significant addition to the present numerical approach is to provide some comparison of the flow
separation behavior with CFD at the root section of both the wing sections. The effects of morphed surfaces on drag penalty, coefficient of lift and post-stall angles of attack are studied and
compared in terms of aerodynamic performance.
• Time series behaviour of laminar separation bubble at low reynolds number
Roy, Aritras
This paper identifies the laminar separation bubble at the root or span-wise midsection of a rectangular wing using direct surface pressure measurements and analyses their behavior. The locations
of separation, transition, and reattachment are identified from surface pressure measurements and oil flow visualizations. Surface oil flow visualization results also clarified the wing-tip and
separation bubble interactions near the leading edge of the wing. The transition structure and turbulence characteristics in the separated shear layer locations are studied using Laser Doppler
Velocimetry. Time series analysis is carried out to distinguish the flow patterns of transition and later transition locations along with chordwise locations of the root section of the wing.
• Three dimensional rectangular wing morphed to prevent stall and operate at design local two dimensional lift coefficient
Roy, Aritras
The surface of a rectangular wing is morphed at high angles of attack such that it continues to operate at the reduced coefficient of lift (Cl) at which the baseline wing operates, but unlike the
baseline wing, where the flow is separated, the flow remains attached on the morphed wing. A morphed surface is also generated to operate at a local design 2D (two-dimensional) Cl, which is
obtained by incrementing the baseline Cl by a percentage at pre and post-stall angles of attack. The morphed surface is generated numerically using a novel ‘decambering’ technique, which accounts
for the deviation of the coefficients of lift and pitching moment from that predicted by potential flow, analytically, using CFD and implemented experimentally by attaching an external Aluminium
skin to the leading edge of the wing. Two different wing sections, NACA0012 and NACA4415, are tested on a rectangular planform. The effect of morphing on the aerodynamic performance is discussed,
and aerodynamic characteristics are reported. Results indicate that significant improvement in aerodynamic performance is achieved at high angles of attack, especially at post-stall through this
active morphed flow surface.
• Experimental validation of numerical decambering approach for flow past a rectangular wing
Roy, Aritras
Vinoth Kumar, R.
Experimental investigation on two rectangular wings with NACA0012 and NACA4415 profiles is performed at different Reynolds numbers to understand their aerodynamic behaviours at a high α regime.
In-house developed numerical code VLM3D is validated using this experimental result in predicting the aerodynamic characteristics of a rectangular wing with cambered and symmetrical wing profile.
The sectional coefficient of lift ((Formula presented.)) obtained from the numerical approach is used to study the variation in spanwise lift distribution. The lift and moment characteristics
obtained from wind tunnel experiments are plotted, and change in the maximum coefficient of lift ((Formula presented.)) and stall angle (α stall) are studied for both of the wing sections. A
significant addition to the novelty of the present experiments is to provide some comparison of the numerical induced drag coefficient, (Formula presented.) with experimentally fitted model
coefficients using least square technique. A novel method is used to examine the aerodynamic hysteresis at high angles of attack. The area included in the lift- Re curve loop is a measure of
aerodynamic efficiency, and its variation with angle of attack and wing plan forms is studied.
• Delay or control of flow separation for enhanced aerodynamic performance using an effective morphed surface
Roy, Aritras
This paper investigates an improvement of the aerodynamic performance of a wing at high, including post-stall angles of attack by re-designing its camber line to control the separation of its
boundary layer. This is experimentally implemented using an Aluminum secondary skin on the wing surface, which aligns itself to the separated boundary layer at high angles to attack, such that
the flow remains attached to it, which otherwise would have separated on the baseline configuration. The shape of the skin, which is now regarded as the active flow surface, is essentially a
morphed version of the baseline shape of the wing and is predicted numerically using an in-house code based on a ‘decambering’ technique that accounts for the local deviation of camber by
accounting for the difference in the coefficients of lift and pitching moment predicted by viscous and potential flows. This technique is tested on a rectangular planform using different wing
sections, NACA0012, NACA4415, and NRELS809. The effective morphed flow surface is also used for the baseline wing to operate at a design local 2D Cl, which is obtained by incrementing the
baseline Cl by a user defined percentage at design pre and post-stall angles of attack. Aerodynamic characteristics of the effective morphed configurations using numerical analysis, CFD, and wind
tunnel experiments are reported.
• Effect of Airfoil Section on Unsteady Aerodynamics of a Rectangular Wing at High Angles of Attack
Roy, Aritras
This paper reports an investigation into the effects of the transient flow behavior on a rectangular wing at high angles of attack. In-house developed numerical code Unsteady Vortex Lattice
Method (UVLM) coupled with decambering technique is used to understand the aerodynamic behaviors of a rectangular wing with different airfoil sections. The spectral density of the transient
coefficient of lift data is calculated for both wing sections, and a low-frequency oscillation is identified near the static stall. The transient sectional coefficient of lift (Clsec) is utilized
to study the variation in span-wise lift distribution at different time steps numerically. Transitional behavior of decambered surfaces for different wing sections is also discussed to provide
insight into the unsteady separated boundary layer characteristics.
{"url":"https://irepose.iitm.ac.in/entities/person/67467/publications?f.author=Roy,%20Aritras,equals&spc.page=1","timestamp":"2024-11-11T03:56:57Z","content_type":"text/html","content_length":"1049239","record_id":"<urn:uuid:a18463b1-26ee-4607-a147-a3efc983dc41>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00610.warc.gz"}
Proof of Narnian NFT NFT Prices, Stats, and Sales Chart
NFT Collection Proof of Narnian NFT Price, Stats, and Review
Proof of Narnian NFT Collection Sales Stats for 24h / 7d / 30d
Sales stats for 24h / 7d / 30d:
- Sales Volume: 10.95 ETH (+22%) / 1.22 ETH (−55%) / 10.95 ETH (+56%)
- Sales: 1 (+0%) / 7 / 61
- Average Price: 0 ETH ($0) / 0.17 ETH ($284.54) / 0.18 ETH ($293.06)
What is Proof of Narnian NFT?
Proof of Narnian NFT is an ERC721 non-fungible token collection built on the Ethereum network, launched on August 9, 2022. The 3333 items of the Proof of Narnian NFT collection can now be viewed at Proof of Narnian NFT.
Proof of Narnian NFT All Time Sales Report
The Proof of Narnian NFT collection currently holds a market capitalization of 580.75 ETH. Since its creation, there have been 2810 collection sales at an average price of 0.39 ETH (approximately $639.91 at the time of writing), resulting in a total volume of 1,101.135 ETH. The floor price for Proof of Narnian NFT is 0.192999 ETH, and the 30-day trading volume stands at 10.95 ETH.
Proof of Narnian NFT NFT Examples
Other NFTs collections from the Collections
Share it with your friends!
Generate Code and Deploy Controller to Real-Time Targets
Model Predictive Control Toolbox™ software provides code generation functionality for controllers designed in MATLAB^® or Simulink^®.
Code Generation in MATLAB
After designing an MPC controller in MATLAB, you can generate C code using MATLAB Coder™ and deploy it for real-time control.
To generate code for computing optimal MPC control moves for an implicit or explicit linear MPC controller:
1. Generate data structures from an MPC controller or explicit MPC controller using getCodeGenerationData.
2. To verify that your controller produces the expected closed-loop results, simulate it using mpcmoveCodeGeneration in place of mpcmove.
3. Generate code for mpcmoveCodeGeneration using codegen (MATLAB Coder). This step requires MATLAB Coder software.
For an example, see Generate Code to Compute Optimal MPC Moves in MATLAB.
You can also generate code for nonlinear MPC controllers that use the default fmincon solver with the SQP algorithm. To generate code for computing optimal control moves for a nonlinear MPC controller:
1. Generate data structures from a nonlinear MPC controller using getCodeGenerationData.
2. To verify that your controller produces the expected closed-loop results, simulate it using nlmpcmoveCodeGeneration in place of nlmpcmove.
3. Generate code for nlmpcmoveCodeGeneration using codegen (MATLAB Coder). This step requires MATLAB Coder software.
Code Generation in Simulink
After designing a controller in Simulink using any of the MPC blocks, you can generate code and deploy it for real-time control. You can deploy controllers to all targets supported by the following products:
• Simulink Coder
• Embedded Coder^®
• Simulink PLC Coder™
• Simulink Real-Time™
You can generate code for any of the Model Predictive Control Toolbox Simulink blocks.
For more information on generating code, see Simulation and Code Generation Using Simulink Coder and Simulation and Structured Text Generation Using Simulink PLC Coder.
The MPC Controller, Explicit MPC Controller, Adaptive MPC Controller, and Nonlinear MPC Controller blocks are implemented using the MATLAB Function (Simulink) block. To see the structure, right-click
the block, and select Mask > Look Under Mask. Then, open the MPC subsystem underneath.
If your nonlinear MPC controller uses optional parameters, you must also generate code for the Bus Creator block connected to the params input port of the Nonlinear MPC Controller block. To do so,
place the Nonlinear MPC Controller and Bus Creator blocks within a subsystem, and generate code for that subsystem.
Generate CUDA Code for Linear MPC Controllers
You can generate CUDA^® code for your MPC controller using GPU Coder™. For more information on supported GPUs, see GPU Computing Requirements (Parallel Computing Toolbox). For more information on
installing and setting up the prerequisite product, see Installing Prerequisite Products (GPU Coder) and Setting Up the Prerequisite Products (GPU Coder).
To generate and use GPU code in MATLAB:
1. Design a linear controller using an mpc object.
2. Generate the structures for the core, states, and online data from your linear MPC controller using the getCodeGenerationData function.
3. Optionally simulate your closed loop iteratively using the mpcmoveCodeGeneration function and the data structures created in the previous step.
4. Create a coder configuration options object using the coder.gpuConfig function, and configure the code generation options.
5. Generate code for the mpcmoveCodeGeneration function using the codegen function and the coder configuration options object. Doing so generates a new function which uses code running on the GPU.
6. Simulate your controller using the new generated function and the data structures.
For an example on using GPU code in MATLAB, see Use the GPU to Compute MPC Moves in MATLAB.
You can generate and use GPU code from the MPC Controller, Adaptive MPC Controller, or Explicit MPC Controller blocks.
To generate GPU code from a Simulink model containing any of these blocks, open the Configuration Parameters dialog box by clicking Model Settings. Then, in the Code Generation section, select
Generate GPU code.
For details on how to configure your model for GPU code generation, see Code Generation from Simulink Models with GPU Coder (GPU Coder).
Sampling Rate in Real-Time Environment
The sampling rate that a controller can achieve in a real-time environment is system-dependent. For example, for a typical small MIMO control application running on Simulink Real-Time, the sample
time can be as long as 1–10 ms for linear MPC and 100–1000 ms for nonlinear MPC. To determine the sample time, first test a less-aggressive controller whose sampling rate produces acceptable
performance on the target. Next, decrease the sample time and monitor the execution time of the controller. You can further decrease the sample time as long as the optimization safely completes
within each sampling period under normal plant operating conditions. To reduce the sample time, you can also consider using:
• Explicit MPC. While explicit MPC controllers have a faster execution time, they also have a larger memory footprint, since they store precomputed control laws. For more information, see Explicit
MPC Design.
• A suboptimal QP solution after a specified number of maximum solver iterations. For more information, see Suboptimal QP Solution.
A lower controller sample time does not necessarily provide better performance. In fact, you want to choose a sample time that is small enough to give you good performance but no smaller. For the
same prediction time, smaller sample times result in more prediction steps, which in turn produces a larger memory footprint and a more complex optimization problem.
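This tradeoff is easy to quantify: for a fixed prediction time T_pred and sample time Ts, the number of prediction steps is p = T_pred/Ts. A minimal sketch, with illustrative values that do not come from any particular controller:

```python
# For a fixed prediction time, halving the sample time doubles the number
# of prediction steps p = T_pred / Ts (values are illustrative).
prediction_time = 2.0  # seconds

steps = {ts: round(prediction_time / ts) for ts in (0.1, 0.05, 0.025)}
# steps == {0.1: 20, 0.05: 40, 0.025: 80}
```

Each halving of the sample time doubles the horizon length, and with it the size of the QP solved at every control interval.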
QP Problem Construction for Generated C Code
At each control interval, an implicit or adaptive MPC controller constructs a new QP problem, which is defined as:
$\underset{x}{\mathrm{min}}\left(\frac{1}{2}{x}^{⊺}Hx+{f}^{⊺}x\right)$
subject to the linear inequality constraints
$Ax\le b$
where:
• x is the solution vector.
• H is the Hessian matrix.
• A is a matrix of linear constraint coefficients.
• f and b are vectors.
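As a sketch of how such a QP behaves, the following Python snippet (all values invented for illustration) builds a tiny two-variable problem of the form min (1/2)x'Hx + f'x subject to Ax <= b, computes the unconstrained minimizer, and checks whether it already satisfies the constraints; a production solver would instead run an active-set or interior-point iteration.

```python
import numpy as np

# Hypothetical 2-variable QP: minimize 0.5*x'Hx + f'x subject to A x <= b.
H = np.array([[2.0, 0.0],
              [0.0, 2.0]])   # Hessian (positive definite)
f = np.array([-2.0, -4.0])   # linear term
A = np.array([[1.0, 1.0]])   # one inequality: x1 + x2 <= 5
b = np.array([5.0])

# Unconstrained minimizer: gradient Hx + f = 0  =>  x = -H^{-1} f.
x_unc = -np.linalg.solve(H, f)

# If the unconstrained minimizer satisfies A x <= b, it also solves the
# constrained QP; otherwise a real solver would move to the constraint
# boundary via an active-set or interior-point method.
feasible = bool(np.all(A @ x_unc <= b))
```

Here the unconstrained minimizer is (1, 2), which is feasible, so no constraint handling is needed.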
In generated C code, the following matrices are used to provide H, A, f, and b. Depending on the type and configuration of the MPC controller, these matrices are either constant or regenerated at
each control interval.
Matrix (size): purpose
• Hinv (N[M]-by-N[M]): inverse of the Hessian matrix, H
• Linv (N[M]-by-N[M]): inverse of the lower-triangular Cholesky decomposition of H
• Ac (N[C]-by-N[M]): linear constraint coefficients, A
• Kx (N[xqp]-by-(N[M]–1)): used to generate f
• Kr (p*N[y]-by-(N[M]–1)): used to generate f
• Ku1 (N[mv]-by-(N[M]–1)): used to generate f
• Kv ((N[md]+1)*(p+1)-by-(N[M]–1)): used to generate f
• Kut (p*N[mv]-by-(N[M]–1)): used to generate f
• Mlim (N[C]-by-1): used to generate b
• Mx (N[C]-by-N[xqp]): used to generate b
• Mu1 (N[C]-by-N[mv]): used to generate b
• Mv (N[C]-by-(N[md]+1)*(p+1)): used to generate b
• p is the prediction horizon.
• N[mv] is the number of manipulated variables.
• N[md] is the number of measured disturbances.
• N[y] is the number of output variables.
• N[M] is the number of optimization variables (m*N[mv]+1, where m is the control horizon).
• N[xqp] is the number of states used for the QP problem; that is, the total number of the plant states and disturbance model states.
• N[C] is the total number of constraints.
At each control interval, the generated C code computes f and b as:
$f = Kx^{⊺}\,x_q + Kr^{⊺}\,r_p + Ku1^{⊺}\,m_l + Kv^{⊺}\,v_p + Kut^{⊺}\,u_t$
$b = -\left(Mlim + Mx\,x_q + Mu1\,m_l + Mv\,v_p\right)$
• x[q] is the vector of plant and disturbance model states estimated by the Kalman filter.
• m[l] is the manipulated variable move from the previous control interval.
• u[t] is the manipulated variable target.
• v[p] is the sequence of measured disturbance signals across the prediction horizon.
• r[p] is the sequence of reference signals across the prediction horizon.
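The two update formulas can be exercised numerically. The NumPy sketch below uses randomly generated stand-in matrices with illustrative sizes that do not correspond to any particular controller configuration; it only demonstrates the shape of the computation the generated C code performs.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only: n decision variables, n_c constraints,
# n_x states, n_mv manipulated variables, and stacked signal lengths.
n, n_c, n_x, n_mv, n_v, n_r = 4, 6, 3, 1, 12, 5

# Gain matrices (random stand-ins for the constants in the generated code).
Kx = rng.standard_normal((n_x, n))
Kr = rng.standard_normal((n_r, n))
Ku1 = rng.standard_normal((n_mv, n))
Kv = rng.standard_normal((n_v, n))
Kut = rng.standard_normal((n_r, n))
Mlim = rng.standard_normal(n_c)
Mx = rng.standard_normal((n_c, n_x))
Mu1 = rng.standard_normal((n_c, n_mv))
Mv = rng.standard_normal((n_c, n_v))

# Signals at the current control interval.
x_q = rng.standard_normal(n_x)    # estimated plant + disturbance states
m_l = rng.standard_normal(n_mv)   # previous manipulated-variable move
u_t = rng.standard_normal(n_r)    # manipulated-variable target sequence
v_p = rng.standard_normal(n_v)    # measured disturbance sequence
r_p = rng.standard_normal(n_r)    # reference sequence

# f = Kx'*x_q + Kr'*r_p + Ku1'*m_l + Kv'*v_p + Kut'*u_t
f = Kx.T @ x_q + Kr.T @ r_p + Ku1.T @ m_l + Kv.T @ v_p + Kut.T @ u_t
# b = -(Mlim + Mx*x_q + Mu1*m_l + Mv*v_p)
b = -(Mlim + Mx @ x_q + Mu1 @ m_l + Mv @ v_p)
```

Only f and b change between control intervals; the constant matrices are computed once when the code is generated.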
When generating code in MATLAB, the getCodeGenerationData command generates these matrices and returns them in configData.
See Also
Related Examples
More About
APRODUCT procedure • Genstat v21
Forms a new experimental design from the product of two designs (R.W. Payne).
PRINT = string token Controls printing of the design (design); default desi
ANALYSE = string token Whether to analyse the design by ANOVA (yes, no); default no
METHOD = string token How to combine the designs (cross, nest); default nest
BF1 = formula Block formula for design 1
TF1 = formula Treatment formula for design 1
BF2 = formula Block formula for design 2
TF2 = formula Treatment formula for design 2
No parameters
APRODUCT forms an experimental design by taking the product of two other designs. The METHOD option controls whether the product is formed by nesting the second design within the first, or by
crossing the two designs together. For example, suppose that the first design has a single factor Units in the block structure and a single treatment factor A, while the second design is a Latin
square with block structure Rows*Columns and treatment factor B. If we nest the second design within the first, we would obtain a design with block structure Units/(Rows*Columns) in which each unit of
the first design has been subdivided into a row by column array of subplots to contain a Latin square of the sort defined by the second design. Nesting is thus useful when you want to subdivide the
units of a design and apply further treatments (in this case those defined by the factor B) to the resulting subplots. Similarly, if we cross the two designs, the new design will have a block
structure of Units*(Rows*Columns), or Units*Rows*Columns, in which we have duplicated the second design for every level of Units. Crossing is useful if you need to introduce a new blocking structure
into an existing design. For example, the Units factor might represent different time periods or different locations in which the Latin square design was to be used, and the factor A the different
systematic conditions that might apply on each occasion.
With both nesting and crossing, the new design will contain a unit for every combination of the block factors in the two original designs, and so every combination of the treatment factors in the
first design will occur with every combination of the treatment factors in the second design. The treatment structure is thus defined for the new design by crossing the treatment structures of the
two original designs, to estimate all the original treatment terms and their interactions. So, in the example above, the treatment structure is defined to be A*B.
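The combinatorial structure described above can be sketched in Python; the factor levels and the assignment of A to units below are invented for illustration. Because the product design has one plot for every combination of block factors from the two designs, every level of A meets every level of B.

```python
from itertools import product

# Design 1: block factor Units (6 units), treatment A (3 levels).
# Design 2: blocks Rows x Columns (4 x 4 Latin square), treatment B (4 levels).
units = range(1, 7)
rows, cols = range(1, 5), range(1, 5)

# The product design has one plot per (unit, row, column) combination...
plots = list(product(units, rows, cols))

# ...so every level of A occurs with every level of B: the treatment
# structure of the combined design is A*B.
b_latin = {(r, c): (r + c - 2) % 4 + 1 for r, c in product(rows, cols)}
a_of_unit = {u: (u - 1) % 3 + 1 for u in units}  # illustrative assignment
pairs = {(a_of_unit[u], b_latin[r, c]) for u, r, c in plots}
```

With 6 units and a 4 x 4 square this gives 96 plots, and all 3 x 4 = 12 combinations of A and B appear, which is what lets ANOVA estimate the A.B interaction.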
APRODUCT redefines the values of the factors as required for the new design, and executes BLOCKSTRUCTURE and TREATMENTSTRUCTURE directives with the new block and treatment formulae. The new formulae
can then be accessed, outside the procedure, using the GET directive or procedure ASTATUS. The PRINT option can be set to design to print the new design, and the ANALYSE option can be set to yes to
produce a skeleton analysis of variance from ANOVA. Options BF1, TF1, BF2, and TF2 define the block structure and treatment structure of the first and then the second design.
Options: PRINT, ANALYSE, METHOD, BF1, TF1, BF2, TF2.
Parameters: none.
APRODUCT uses the standard Genstat manipulation directives such as FCLASSIFICATION, CALCULATE and DUPLICATE. Procedure PDESIGN is used to print the design.
None of the factors must be restricted, and any existing restrictions will be cancelled.
See also
Procedure: AMERGE.
Commands for: Design of experiments, Calculations and manipulation.
CAPTION 'APRODUCT example',\
!t('Design 1 is a design with no blocking (the single',\
'block factor Rep merely identifies the different units),',\
'Design 2 is a Latin square with 4 rows and 4 columns.');\
FACTOR [VALUES=1...6; LEVELS=6] Rep
& [VALUES=2(1...3); LEVELS=3] A
FACTOR [NVALUES=16; LEVELS=4] Row,Column,B;\
VALUES=!(4(1...4)),!((1...4)4),!(1,2,3,4, 2,3,4,1, 3,4,1,2, 4,1,2,3)
APRODUCT [PRINT=design; ANALYSE=yes; METHOD=nest;\
BF1=Rep; TF1=A; BF2=Row*Column; TF2=B]
Notes: The HP 300s+ is a non-RPN scientific calculator meant for (high school) students. It has TFD (Textbook Format Display), which means that results are displayed as accurately as possible on its four-line, 15-character display (dot matrix, 31 x 96 pixels). The calculator can be set to either "MthIO", which implements said Textbook Format Display, or "LineIO", which displays all results on a single line.
This calculator is not produced by HP itself but under licence by Moravia Consulting. They produce other HP calculator models as well. Their page on this calculator can be found here (link validated).
Parentheses do not need to be balanced. When pressing the = key, all open parentheses are presumed closed. Also, multiplication is implied when, for instance, entering "2sin(45".
When an error occurs, or when one wants to edit a previous calculation, simply press one of the arrow keys ◄ or ► to edit it.
The CALC key makes this calculator semi-programmable. It works as follows. Enter one or more of calculations using variables separated by colons (up to 99 steps). When the CALC key is pressed the
calculator will query for the values of the variables and after pressing = will show the results of the entered calculations.
Using the CONV function, numerous unit conversions are possible.
The RanInt function can be used to play dice, "RanInt# (1,6" will randomly display a number within that range. Press the = key to throw again!
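The same dice roll can be sketched in Python, assuming that, like the calculator's RanInt#(1,6), the draw is a uniform integer inclusive of both endpoints:

```python
import random

random.seed(0)  # fixed seed so the example is reproducible

# Equivalent of the calculator's RanInt#(1,6): a uniform integer die roll.
rolls = [random.randint(1, 6) for _ in range(10)]
```

Every element of `rolls` lies between 1 and 6, matching what pressing = repeatedly on the calculator would display.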
All in all, this seems to be Hewlett-Packard's answer to the CASIO fx-82 and the TI-30 series of calculators, with many of the same functions and keys.
Very useful calculator nonetheless.
RD Sharma Solutions for Class 6 Chapter 23 Data Handling - III (Bar Graphs)
Exercise 23.1 page: 23.7
1. The following table shows the daily production of T.V. sets in an industry for 7 days of a week:
Represent the above information by a pictograph.
Consider that a TV icon represents 50 TVs.
So the number of icons produced by the industry on various days of a week are given below:
Below given is the pictograph which represents the above data:
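As a sketch of the scaling step, with one icon representing 50 TVs the icon count for each day is the production divided by 50. The production figures below are hypothetical, since the original table is not reproduced in this text:

```python
# One TV icon stands for 50 sets; daily production figures below are
# hypothetical (the original table was not preserved in this text).
production = {"Mon": 300, "Tue": 400, "Wed": 150, "Thu": 250,
              "Fri": 100, "Sat": 350, "Sun": 200}

# Icon count per day = production // 50.
icons = {day: n // 50 for day, n in production.items()}
```

For example, a day with 300 sets produced would be drawn with 6 icons.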
2. The following table shows the number of Maruti cars sold by five dealers in a particular month:
Represent the above information by a pictograph.
Consider that one car icon represents 5 Maruti cars.
So the number of icons sold by the 5 dealers in a particular month are as given below:
Below given is the pictograph which represents the above data:
3. The population of Delhi State in different census years is as given below:
Represent the above information with the help of a bar graph.
In order to represent the data on a bar graph, we construct a horizontal and a vertical axis. The horizontal axis represents the census years and the vertical axis represents the population in lakhs.
Here 5 values are given, so mark 5 points on the horizontal axis at equal distances and erect rectangles of the same width with heights proportional to the given data.
Similarly, on the vertical axis, the difference between two successive points is 10 units, which represents a population of 10 lakhs.
4. Read the bar graph show in Fig. 23.8 and answer the following questions:
(i) What is the information given by the bar graph?
(ii) How many tickets of Assam State Lottery were sold by the agent?
(iii) Of which state, were the maximum number of tickets sold?
(iv) State whether true or false.
The maximum number of tickets sold is three times the minimum number of tickets sold.
(v) Of which state were the minimum number of tickets sold?
(i) The bar graph represents the number of tickets of different state lotteries sold by an agent on a day.
(ii) 40 tickets of Assam State Lottery were sold by the agent.
(iii) The maximum number of tickets were sold in the state Haryana.
(iv) False.
We know that
Maximum vertical length = 100 units (Haryana)
Minimum vertical length = 20 units (Rajasthan)
So the maximum number of lottery sold for one state is 100 and the minimum is 20 tickets.
(v) The minimum number of tickets were sold of Rajasthan.
5. Study the bar graph representing the number of persons in various age groups in a town shown in Fig. 23.9. Observe the bar graph and answer the following questions:
(i) What is the percentage of the youngest age-group persons over those in the oldest age group?
(ii) What is the total population of the town?
(iii) What is the number of persons in the age-group 60 – 65?
(iv) How many persons are more in the age-group 10-15 than in the age group 30-35?
(v) What is the age-group of exactly 1200 persons living in the town?
(vi) What is the total number of persons living in the town in the age-group 50-55?
(vii) What is the total number of persons living in the town in the age-groups 10-15 and 60-65?
(viii) Whether the population in general increases, decreases or remains constant with the increase in the age-group.
(i) We know that the youngest age is 10-15 years.
No. of persons in the youngest age group = 1400
70-75 years is the oldest age group.
No. of persons in the oldest age group = 300
So the difference = 1400 – 300 = 1100
Hence, the youngest group has 1100 more people than the oldest group.
Percentage of the youngest group over oldest group = (1100/300) × 100 = 1100/3 = 366 2/3 %
(ii) We know that the total population of the town = total number of people from all age groups
By substituting the values
Total population of the town = 1400 + 1200 + 1100 + 1000 + 900 + 800 + 300 = 6700
(iii) From the bar graph we know that the age group 60-65 years consists of 800 persons.
(iv) No. of persons in the age group 10-15 = 1400
No. of persons in the age group 30-35 = 1100
So the number of more persons in the age group 10-15 when compared to that of 30-35 = 1400 – 1100 = 300
(v) We know that 1200 people are living in the age group 20-25 years.
(vi) No. of people of the age group 50-55 is 900.
(vii) We know that 1400 persons exist of the age group 10-15 years and 800 exist in the age group 60-65 years.
So the total number of persons in the age group 10-15 years and 60-65 years = 1400 + 800 = 2200
(viii) We know that increase in the age group has decrease in population.
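The arithmetic in this answer can be checked in a few lines of Python; the unnamed 1000-person group is labelled 40-45 here as an assumption, since the bar graph itself is not reproduced:

```python
# Persons per age group, as read from the bar graph (the 1000-person
# group is assumed to be 40-45; it is unnamed in the text).
population = {"10-15": 1400, "20-25": 1200, "30-35": 1100, "40-45": 1000,
              "50-55": 900, "60-65": 800, "70-75": 300}

total = sum(population.values())                            # 6700
diff_young_old = population["10-15"] - population["70-75"]  # 1100
pct = diff_young_old / population["70-75"] * 100            # 366 2/3 %
```

The totals match the worked answer: 6700 persons in all, and the youngest group exceeds the oldest by 1100, i.e. 366 2/3 % of the oldest group.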
6. Read the bar graph shown in Fig. 23.10 and answer the following questions:
(i) What is the information given by the bar graph?
(ii) What was the number of commercial banks in 1977?
(iii) What is the ratio of the number of commercial banks in 1969 to that in 1980?
(iv) State whether true or false:
The number of commercial banks in 1983 is less than double the number of commercial banks in 1969.
(i) The bar graph represents the number of commercial banks in India during some years.
(ii) 130 was the number of commercial banks in 1977.
(iii) No. of commercial banks in 1969 = 90
No. of commercial banks in 1980 = 150
Hence, the ratio of the number of commercial banks in 1969 to that in 1980 = 90/150 = 3/5 = 3: 5.
(iv) False.
We know that
No. of commercial banks in 1983 = 230
No. of commercial banks in 1969 = 90
So we get 2 × 90 = 180
Here, 230 is greater than 180 which means the number of commercial banks in 1983 is not less than double the number of commercial banks in 1969.
7. Given below (Fig. 23.11) is the bar graph indicating the marks obtained out of 50 in mathematics paper by 100 students. Read the bar graph and answer the following questions:
(i) It is decided to distribute work books on mathematics to the students obtaining less than 20 marks, giving one workbook to each of such students. If a work book costs Rs. 5, what sum is required
to buy the work books?
(ii) Every student belonging to the highest mark group is entitled to get a prize of Rs. 10. How much amount of money is required for distributing the prize money?
(iii) Every student belonging to the lowest mark-group has to solve 5 problems per day. How many problems, in all, will be solved by the students of this group per day?
(iv) State whether true or false.
(a) 17% students have obtained marks ranging from 40 to 49.
(b) 59 students have obtained marks ranging from 10 to 29.
(v) What is the number of students getting less than 20 marks?
(vi) What is the number of students getting more than 29 marks?
(vii) What is the number of students getting marks between 9 and 40?
(viii) What is the number of students belonging to the highest mark group?
(ix) What is the number of students obtaining more than 19 marks?
Below given is the chart of 100 students by using the information from the bar graph.
(i) No. of students who score less than 20 marks = 27 + 12 = 39
Amount required to buy the workbooks = 5 × 39 = Rs 195
(ii) We know that the highest marks group is 40 – 49
No. of students in this marks group = 17
Money required to distribute the prize = 10 × 17 = Rs 170
(iii) We know that the lowest marks group is 0 – 9
No. of students in this marks group = 27
No. of problems that will be solved by students each day = 5 × 27 = 135
(iv) (a) True
(b) False
(v) No. of students scoring less than 20 marks = No. of students in 0-9 marks group + No. of students in 10-19 marks group
By substituting the values
No. of students scoring less than 20 marks = 27 + 12 = 39
(vi) No. of students scoring more than 29 marks = No. of students in 30-39 marks group + No. of students in 40-49 marks group
By substituting the values
No. of students scoring more than 29 marks = 24 + 17 = 41
(vii) No. of students who score between 9 and 40 = No. of students in 10-19 marks group + No. of students in 20-29 marks group + No. of students in 30-39 marks group
By substituting the values
No. of students who score between 9 and 40 = 12 + 20 + 24 = 56
(viii) We know that 40-49 is the highest marks group
So the number of students in this group is 17.
(ix) No. of students who score more than 19 marks = No. of students in 20-29 marks group + No. of students in 30-39 marks group + No. of students in 40-49 marks group
By substituting the values
No. of students who score more than 19 marks = 20 + 24 + 17 = 61
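The counts used throughout this answer can be verified with a short Python sketch, with group sizes recovered from the worked answers above:

```python
# Students per marks group, recovered from the worked answers.
marks = {"0-9": 27, "10-19": 12, "20-29": 20, "30-39": 24, "40-49": 17}

below_20 = marks["0-9"] + marks["10-19"]                             # 39
above_29 = marks["30-39"] + marks["40-49"]                           # 41
between_9_and_40 = marks["10-19"] + marks["20-29"] + marks["30-39"]  # 56
above_19 = marks["20-29"] + marks["30-39"] + marks["40-49"]          # 61

workbook_cost = 5 * below_20       # Rs 195
prize_money = 10 * marks["40-49"]  # Rs 170
```

Note that the five groups sum to 100, the stated number of students.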
8. Read the following bar graph (Fig. 23.12) and answer the following questions:
(i) What is the information given by the bar graph?
(ii) State each of the following whether true or false.
(a) The ratio of the number of government companies in 1957 to that in 1982 is 1 : 9.
(b) The number of government companies have decreased over the year 1957 to 1983.
(i) The bar graph represents the number of government companies in India during some years.
(ii) (a) False.
We know that
No. of government companies in 1957 = 50
No. of government companies in 1982 = 375
So the ratio = 50/375 = 2/15 = 2: 15
(b) False. There is no data given for 1983. Hence the statement is not true.
9. Read the following bar graph and answer the following questions:
(i) What information is given by the bar graph?
(ii) Which state is the largest producer of rice?
(iii) Which state is the largest producer of wheat?
(iv) Which state has total production of rice and wheat as its maximum?
(v) Which state has the total production of wheat and rice minimum?
Consider a chart by using data from the bar graph:
(i) The bar graph gives information regarding rice and wheat production in various states.
(ii) The largest producer of rice is W.B.
(iii) The largest producer of wheat is U.P.
(iv) U.P has the total production of rice and wheat as its maximum.
(v) Maharashtra has the total production of wheat and rice minimum.
10. The following bar graph (Fig. 23.14) represents the heights (in cm) of 50 students of Class XI of a particular school. Study the graph and answer the following questions:
(i) What percentage of the total number of student have their heights more than 149 cm?
(ii) How many students in the class are in the range of maximum height of the class?
(iii) The school wants to provide a particular type of tonic to each student below the height of 150 cm to improve his height. If the cost of the tonic for each student comes out to be Rs 55, how
much amount of money is requires?
(iv) How many students are in the range of shortest height of the class?
(v) State whether true or false:
(a) There are 9 students in the class whose heights are in the range of 155−159 cm.
(b) Maximum height (in cm) of a student in the class is 17.
(c) There are 29 students in the class whose heights are in the range of 145−154 cm.
(d) Minimum height (in cm) of a student is the class is in the range of 140−144 cm.
(e) The number of students in the class whose heights are less than 150 cm is 12.
(f) There are 14 students each of whom has height more than 154 cm.
Consider a chart by using data from the bar graph:
(i) No. of students having height more than 149 cm = 17 + 9 + 5 = 31
Total students = 50
So the percentage of students having height more than 149 cm = 31/50 = 31 × 2 = 62%
(ii) Maximum height range of the class is 160-164
No. of students in this range = 5
(iii) No. of students having height less than 150 cm = 7 + 12 = 19
Amount required to be spent for the tonic = 19 × 55 = Rs 1045
(iv) We know that minimum height range of the class is 140-144
No. of students in this range = 7
(v) (a) True. No. of students in the height range 155 – 159 is 9.
(b) False. The maximum height of a student in the class lies in the range 160 – 164 cm, not at 17 cm.
(c) True.
No. of students having heights in the range 145 – 154 cm = No. of students having heights in the range 145-149 + No. of students having heights in the range 150-154
We get
No. of students having heights in the range 145 – 154 cm = 12 + 17 = 29
(d) True. 140-144 cm is the minimum height range of the students.
(e) False.
No. of students having heights less than 150 cm = 7 + 12 = 19
(f) True.
No. of students having heights more than 154 cm = 9 + 5 = 14
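A short Python check of the height computations, with counts as read from the bar graph:

```python
# Students per height range (cm), as read from the bar graph.
heights = {"140-144": 7, "145-149": 12, "150-154": 17,
           "155-159": 9, "160-164": 5}

total = sum(heights.values())                                            # 50
over_149 = heights["150-154"] + heights["155-159"] + heights["160-164"]  # 31
pct_over_149 = over_149 / total * 100                                    # 62%
tonic_cost = 55 * (heights["140-144"] + heights["145-149"])              # Rs 1045
```

The five ranges sum to the stated class size of 50, and the percentages and costs agree with the worked answers.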
11. Read the following bar graph (Fig.23.15) and answer the following questions:
(i) What information is given by the bar graph?
(ii) What was the production of cement in the year 1980-81?
(iii) What is the minimum and maximum productions of cement and corresponding years?
(i) The bar graph represents industrial production of cement in different years in India.
(ii) The production of cement in the year 1980-81 is 186 lakh tonnes.
(iii) We know that minimum height of bar in 1950-51 is 30 units.
Hence, minimum production is 30 lakh tonnes in 1950-51.
We know that maximum height of bar in 1982-83 is 232 units.
Hence, maximum production is 232 lakh tonnes in 1982-83.
12. The bar graph shown in Fig. 23.16 represents the circulation of newspapers in 10 languages. Study the bar graph and answer the following questions:
(i) What is the total number of newspapers published in Hindi, English, Urdu, Punjabi and Bengali?
(ii) What percent is the number of newspapers published in Hindi of the total number of newspapers?
(iii) Find the excess of the number of newspapers, published in English over those published in Urdu.
(iv) Name two pairs of languages which publish the same number of newspapers.
(v) State the language in which the smallest number of newspapers are published.
(vi) State the language in which the maximum number of newspapers are published.
(vii) State the language in which the number of newspapers published is between 2500 and 3500.
(viii) State whether true or false:
(a) The number of newspapers published in Malayalam and Marathi together is less than those published in English.
(b) The number of newspapers published in Telugu is more than those published in Tamil.
Consider a chart by using data from the bar graph:
(i) We know that the total number of newspapers published = 3700 + 3400 + 700 + 200 + 1100 = 9100
(ii) No. of newspapers published in Hindi = 3700
Total newspapers published = 700 + 400 + 1000 + 200 + 1400 + 1400 + 3700 + 1100 + 3400 + 1100 = 14400
% of Hindi newspaper published = 3700/14400 × 100 = 25.7%
(iii) No. of English newspapers published = 3400
No. of Urdu newspapers published = 700
So the excess number of newspapers which are published in English over Urdu = 3400 – 700 = 2700
(iv) Gujarati and Bengali, Marathi and Malayalam are the two pairs in which same number of newspapers are published.
(v) The smallest number of newspapers are published in Punjabi language.
(vi) The maximum number of newspapers are published in Hindi language.
(vii) The number of newspapers published is between 2500 and 3500 in English language.
(viii) (a) True.
No. of newspapers in Malayalam and Marathi = 1400 + 1400 = 2800
No. of English newspapers is 3400 which is more than total number of Malayalam and Marathi newspapers.
(b) False.
No. of newspapers published in Telugu = 400
No. of newspapers published in Tamil = 1000
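A quick Python check of the newspaper computations, with per-language counts recovered from the worked answers above:

```python
# Newspapers per language, recovered from the worked answers.
papers = {"Hindi": 3700, "English": 3400, "Urdu": 700, "Punjabi": 200,
          "Bengali": 1100, "Gujarati": 1100, "Marathi": 1400,
          "Malayalam": 1400, "Tamil": 1000, "Telugu": 400}

total = sum(papers.values())                          # 14400
five = (papers["Hindi"] + papers["English"] + papers["Urdu"]
        + papers["Punjabi"] + papers["Bengali"])      # 9100
hindi_pct = round(papers["Hindi"] / total * 100, 1)   # 25.7
english_over_urdu = papers["English"] - papers["Urdu"]  # 2700
```

The totals reproduce parts (i), (ii) and (iii) of the answer exactly.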
Exercise 23.2 page: 23.18
1. Explain the reading and interpretation of bar graphs.
A graph with bars whose lengths are proportional to the values they represent is called a bar graph. The bars can be plotted either horizontally or vertically. It is basically a visual display which is used to compare the frequency of occurrence of various characteristics of data.
Bar graphs help us to
(i) Compare the groups of data
(ii) Generalize the data
2. Read the following bar graph and answer the following questions:
(i) What information is given by the bar graph?
(ii) In which year the export is minimum?
(iii) In which year the import is maximum?
(iv) In which year the difference of the values of export and import is maximum?
Consider a chart by using data from the bar graph:
(i) The bar graph gives us information regarding imports and exports from 1982-83 to 1986-87.
(ii) The export is minimum in the year 1982-83.
(iii) The import is maximum in the year 1986-87.
(iv) The difference of the values of import and export is maximum in the year 1986-87.
3. The following bar graph shows the results of an annual examination in a secondary school.
Read the bar graph (Fig.23.22) and choose the correct option in each of the following:
(i) The pair of classes in which the results of boys and girls are inversely proportional are:
(a) VI, VIII
(b) VI, IX
(c) VIII, IX
(d) VIII, X
(ii) The class having the lowest failure rate of girls is:
(a) VII
(b) X
(c) IX
(d) VIII
(iii) The class having the lowest pass rate of student is:
(a) VI
(b) VII
(c) VIII
(d) IX
Consider a chart by using data from the bar graph:
(i) The option (b) is the correct answer.
The pair of classes in which the results of boys and girls are inversely proportional are VI, IX.
We know that
In class VI
% of boys = 80
% of girls = 70
In class IX
% of boys = 70
% of girls = 80
(ii) The option (a) is the correct answer.
The class having the lowest failure rate of girls is VII.
We know that the passing percentage of girls in Class VII is 100%.
So 0% of girls have failed in this class.
(iii) The option (b) is the correct answer.
The class having the lowest pass rate of student is VII.
We know that the combined height of the bars for % of boys and % of girls in class VII is 140 units, which is the least compared to the other classes.
4. The following bar graph shows the number of persons killed in industrial accidents in a country for some years (Fig.23.23).
Read the bar graph and choose the correct alternative in each of the following:
(i) The year which shows the maximum percentage increase in the number of persons killed in coal mines over the preceding year is:
(a) 1996
(b) 1997
(c) 1999
(d) 2000
(ii) The year which shows the maximum decrease in the number of persons killed in industrial accidents over the preceding year is:
(a) 1996
(b) 1997
(c) 1998
(d) 1999
(iii) The year in which the maximum number of persons were killed in industrial accidents other than those killed in coal mines is:
(a) 1995
(b) 1997
(c) 1998
(d) 1999
Consider a chart by using data from the bar graph:
(i) The option (d) is the correct answer.
The year which shows the maximum percentage increase in the number of persons killed in coal mines over the preceding year is 2000.
In the year 1997 the deaths increased to 300 from 200, and in the year 2000 they increased to 200 from 100.
% increase in death in 1997 = 50%
% increase in death in 2000 = 100%
(ii) The option (a) is the correct answer.
The year which shows the maximum decrease in the number of persons killed in industrial accidents over the preceding year is 1996.
The years 1996 and 1999 show a decrease in the number of persons killed in industrial accidents.
% decrease in the death in 1996 = 43.75%
% decrease in the death in 1999 = 30.77%
(iii) The option (a) is the correct answer.
The year in which the maximum number of persons were killed in industrial accidents other than those killed in coal mines is 1995.
1600 persons were killed in the year 1995 due to industrial accidents which is higher when compared to other years.
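The percentage figures quoted in parts (i) and (ii) follow from the standard percentage-change formula. A minimal Python sketch, using the death counts stated in the solution (200 → 300 for 1997 and 100 → 200 for 2000); the decreases of 43.75% and 30.77% in part (ii) come from bar-graph values not reproduced here:

```python
def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100

print(pct_change(200, 300))  # 50.0  -> % increase in deaths in 1997
print(pct_change(100, 200))  # 100.0 -> % increase in deaths in 2000
```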
5. The production of saleable steel in some of the steel plants of our country during 1999 is given below:
Construct a bar graph to represent the above data on a graph paper by using the scale 1 big division = 20 thousand tonnes.
Construct two mutually perpendicular lines OX and OY.
Let us mark plants along the horizontal line OX and mark production along the vertical line OY.
Take equal width for each bar on the axis OX.
Now let us take a suitable scale to find the heights of the bar.
Take 1 big division = 20 thousand tonnes
So the heights of the bars are as given below:
Bhilai = 160/20 = 8 units
Durgapur = 80/20 = 4 units
Rourkela = 200/20 = 10 units
Bokaro = 150/20 = 7.5 units
Using the above calculation, the graph is as given below:
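The bar heights above are just each production figure divided by the chosen scale; the same computation in Python (production figures taken from the question):

```python
SCALE = 20  # 1 big division = 20 thousand tonnes
production = {"Bhilai": 160, "Durgapur": 80, "Rourkela": 200, "Bokaro": 150}

# Height of each bar in big divisions
heights = {plant: tonnes / SCALE for plant, tonnes in production.items()}
print(heights)  # {'Bhilai': 8.0, 'Durgapur': 4.0, 'Rourkela': 10.0, 'Bokaro': 7.5}
```

The same pattern applies to the remaining bar-graph questions in this exercise; only the data and the scale change.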
6. The following data gives the number (in thousands) of applicants registered with an Employment Exchange during. 1995-2000:
Construct a bar graph to represent the above data.
Construct two mutually perpendicular lines OX and OY.
Let us mark years along the horizontal line OX and mark number of applicants registered along the vertical line OY.
Take equal width for each bar on the axis OX.
Now let us take a suitable scale to find the heights of the bar.
Take 1 division = 4 thousand applicants
So the heights of the bars are as given below:
1995 = 18/4 = 4.5 units
1996 = 20/4 = 5 units
1997 = 24/4 = 6 units
1998 = 28/4 = 7 units
2000 = 34/4 = 8.5 units
Using the above calculation, the graph is as given below:
7. The following table gives the route length (in thousand kilometres) of the Indian Railways in some of the years:
Represent the above data with the help of a bar graph.
Construct two mutually perpendicular lines OX and OY.
Let us mark years along the horizontal line OX and mark route length along the vertical line OY.
Take equal width for each bar on the axis OX.
Now let us take a suitable scale to find the heights of the bar.
Take 1 big division = 10 thousand kilometres
So the heights of the bars are as given below:
1960-61 = 56/10 = 5.6 units
1970-71 = 60/10 = 6 units
1980-81 = 61/10 = 6.1 units
1990-91 = 74/10 = 7.4 units
2000-2001 = 98/10 = 9.8 units
Using the above calculation, the graph is as given below:
8. The following data gives the amount of loans (in crores of rupees) disbursed by a bank during some years:
(i) Represent the above data with the help of a bar graph.
(ii) With the help of the bar graph, indicate the year in which amount of loan is not increased over that of the preceding year.
(i) Construct two mutually perpendicular lines OX and OY.
Let us mark years along the horizontal line OX and mark loans in crores along the vertical line OY.
Take equal width for each bar on the axis OX.
Now let us take a suitable scale to find the heights of the bar.
Take 1 big division = 10 crores of loan
So the heights of the bars are as given below:
1992 = 28/10 = 2.8 units
1993 = 33/10 = 3.3 units
1994 = 55/10 = 5.5 units
1995 = 55/10 = 5.5 units
1996 = 80/10 = 8.0 units
Using the above calculation, the graph is as given below:
(ii) 1995 is the year where the loan amount has not increased than its previous year.
9. The following table shows the interest paid by a company (in lakhs):
Draw the bar graph to represent the above information.
Construct two mutually perpendicular lines OX and OY.
Let us mark years along the horizontal line OX and mark amount of interest paid by the company along the vertical line OY.
Take equal width for each bar on the axis OX.
Now let us take a suitable scale to find the heights of the bar.
Take 1 big division = 5 lakhs of rupees paid as interest by the company
So the heights of the bars are as given below:
1995-96 = 20/5 = 4 units
1996-97 = 25/5 = 5 units
1997-98 = 15/5 = 3 units
1998-99 = 18/5 = 3.6 units
1999-2000 = 30/5 = 6 units
Using the above calculation, the graph is as given below:
10. The following data shows the average age of men in various countries in a certain year:
Represent the above information by a bar graph.
Construct two mutually perpendicular lines OX and OY.
Let us mark countries along the horizontal line OX and mark average age for men along the vertical line OY.
Take equal width for each bar on the axis OX.
Now let us take a suitable scale to find the heights of the bar.
Take 1 big division = 10 years
So the heights of the bars are as given below:
India = 55/10 = 5.5 units
Nepal = 52/10 = 5.2 units
China = 60/10 = 6.0 units
Pakistan = 50/10 = 5.0 units
UK = 70/10 = 7 units
USA = 75/10 = 7.5 units
Using the above calculation, the graph is as given below:
11. The following data gives the production of foodgrains (in thousand tonnes) for some years:
Represent the above data with the help of a bar graph.
Construct two mutually perpendicular lines OX and OY.
Let us mark years along the horizontal line OX and mark production of food grains in tonnes along the vertical line OY.
Take equal width for each bar on the axis OX.
Now let us take a suitable scale to find the heights of the bar.
Take 1 big division = 20 thousand tonnes
So the heights of the bars are as given below:
1995 = 120/20 = 6 units
1996 = 150/20 = 7.5 units
1997 = 140/20 = 7 units
1998 = 180/20 = 9 units
1999 = 170/20 = 8.5 units
2000 = 190/20 = 9.5 units
Using the above calculation, the graph is as given below:
12. The following data gives the amount of manure (in thousand tonnes) manufactured by a company during some years:
(i) Represent the above data with the help of a bar graph.
(ii) Indicate with the help of the bar graph the year in which the amount of manure manufactured by the company was maximum.
(iii) Choose the correct alternative:
The consecutive years during which there was maximum decrease in manure production are:
(a) 1994 and 1995
(b) 1992 and 1993
(c) 1996 and 1997
(d) 1995 and 1996
(i) Construct two mutually perpendicular lines OX and OY.
Let us mark years along the horizontal line OX and mark amount of manure in tonnes along the vertical line OY.
Take equal width for each bar on the axis OX.
Now let us take a suitable scale to find the heights of the bar.
Take 1 big division = 5 thousand tonnes of manure
So the heights of the bars are as given below:
1992 = 15/5 = 3 units
1993 = 35/5 = 7 units
1994 = 45/5 = 9 units
1995 = 30/5 = 6 units
1996 = 40/5 = 8.0 units
1997 = 20/5 = 4 units
Using the above calculation, the graph is as given below:
(ii) Maximum amount of manure was manufactured by the company in the year 1994.
(iii) The option (c) is the correct answer.
The consecutive years during which there was maximum decrease in manure production are 1996 and 1997.
Production in the year 1996 and 1997 was decreased by 20 thousand tonnes.
FD Calculator | Fixed Deposit Interest Rates & Returns Calculator Online | Upstox
Calculate your compound interest.
What Is a Fixed Deposit Calculator?
In India, fixed deposits are one of the most popular investment options. They provide stable or guaranteed returns. Also, you can pick a duration that works for you. You can open a fixed deposit for a term ranging from seven days to 10 years.
Several banks and financial institutions provide fixed deposits to their customers. A fixed deposit calculator or FD calculator is an online financial calculator that you may use to estimate your
accumulated amount and interest at maturity.
Based on the principal amount, tenure of investment, type of deposit, and interest rates, the FD calculator online assists you in calculating the maturity amount.
In simple terms, using an FD maturity calculator, you get to know about the final amount you’ll be receiving from putting your money in fixed deposits.
How does the FD calculator work?
The working of a fixed deposit calculator is convenient and easy. It works based on the information you feed into it. The inputs we are talking about are
• Principal amount
• Interest rate
• Type of deposits, i.e. cumulative or payout
• Tenure of investment.
First, you need to add the principal amount, which is the initial amount that you will deposit to open a fixed deposit.
Next comes the expected interest rate. Typically, the longer the term of your fixed deposit, the higher the interest rate. You need to note that online FD calculators can vary from one another.
For instance, if you use a bank’s FD calculator, you don’t have to add an interest rate. The calculator will take the current bank’s interest rate on FD on the time frame and calculate the maturity
amount. As a result, you won’t be able to compare the future value of the principal amount under different interest rates. With Upstox’s Online FD Calculator, you can compare the different scenarios.
You can also select the type of fixed deposit that you want to open. If you don’t need the interest to be paid out to you, you can choose the cumulative option. In this option, you will receive the
interest cumulated throughout the tenure and the principal at maturity.
However, if you need the interest to be paid out to you quarterly or monthly, you can select the quarterly or the monthly payout option from the drop-down list.
Time is another important factor you need to add to the fixed deposit calculator. You will have the option to add your FD tenure in months or years.
After all, the required inputs are filled, the digital FD calculator will generate the final output, i.e. the maturity amount and the interest earned.
Individuals are also keen on opening a post office FD as it offers slightly higher interest than bank FDs. Also, many individuals consider post office FD to be safer than bank FDs. You can use the
post office FD calculator to know your maturity amount and interest earned on post office FD.
Alternatively, use the FD return calculator to help you decide how much, which FD and how long to invest to reach your goals.
What Is the Formula of the FD Calculator?
The interest on a fixed deposit can be calculated in two ways: simple interest and compound interest. The FD formula that will be taken will depend on the type of deposit.
Simple Interest
In the case of simple interest, the principal amount remains constant throughout the tenure, and there is no concept of interest on interest.
The simple interest formula to calculate interest rates and maturity amount is as below:
Interest: SI= P x R x T / 100
Where SI= Simple interest, P=Principal amount, R= Rate of interest, T= Time period(years).
Maturity amount: M = P + (P x R x T / 100).
Typically, simple interest formula is used for FDs that mature within a year. Moreover, fixed deposits that offer monthly or quarterly payout use a simple interest formula to calculate the interest
as the interest is paid out at regular intervals.
Compound interest:
In the case of compound interest FD type, the formulas are as follows.
M = P (1 + r/n)^(n × t), where
M = maturity amount,
P = principal amount,
r = rate of interest,
t = tenure (in years),
n = number of compoundings in a year.
So, the compound interest earned over that period is Maturity amount (M) – Principal (P)
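Both formulas translate directly into code. A minimal sketch (the function names are my own, not part of any calculator's API; amounts in rupees, the rate supplied as a percentage, time in years):

```python
def simple_interest_maturity(p, r, t):
    """M = P + (P x R x T / 100), simple interest."""
    return p + p * r * t / 100

def compound_interest_maturity(p, r, t, n=1):
    """M = P (1 + r/n)^(n x t), with the rate r supplied as a percentage."""
    return p * (1 + r / 100 / n) ** (n * t)

print(simple_interest_maturity(20000, 7, 5))              # 27000.0
print(round(compound_interest_maturity(20000, 7, 5), 2))  # 28051.03
```

These match the worked examples in this section: Rs 27,000 for the simple interest FD and Rs 28,051.03 for the compounded one.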
How Is FD Interest Calculated?
FDs offer stable and steady interest. Depositors can easily calculate the interest amount on the principal through the FD interest calculator online, as the FD interest rates stay fixed throughout the tenure.
Let us see how your FD interest would be calculated.
Simple interest FD account
In a simple interest fixed deposit account, interest is calculated on the principal amount. We have seen earlier that it is calculated by multiplying the principal amount, interest rate and period.
It can be denoted as:
I= P x R x T / 100
Where P= Principal amount, R= interest rate, T= Time period(years).
Let us take an example.
You plan to book an FD of Rs 20,000 for five years in a simple interest FD account. The bank is offering a 7% p.a interest rate. So let us find out interest rates at the end of each year.
Year | Interest earned at the end of the year
1 | 20,000 × 7 × 1 / 100 = 1,400
2 | 20,000 × 7 × 1 / 100 = 1,400
3 | 20,000 × 7 × 1 / 100 = 1,400
4 | 20,000 × 7 × 1 / 100 = 1,400
5 | 20,000 × 7 × 1 / 100 = 1,400
At the end of 5 years, the total interest comes out to be 7,000, i.e. 1,400 × 5. Hence, you will get a total of Rs 27,000 after five years.
Compound Interest FD Account
In this account, interest is calculated on the principal amount and interest earned in the previous year. It is considered better than simple interest FDs as it focuses on the principle of
compounding and generates more returns.
Let us take the same amount at the same interest rate and for the same number of years to understand this better. Check out how the interest earned on the previous year is added to the principal,
which makes more interest. The simple interest formula is used but the principal amount increases every year.
Year | Principal amount | Interest rate | Interest earned
1 | 20,000 | 7% | 1,400
2 | 21,400 | 7% | 1,498
3 | 22,898 | 7% | 1,603
4 | 24,501 | 7% | 1,715
5 | 26,216 | 7% | 1,835
Maturity amount: 28,051
You see, the interest amount rises after each passing year. Interest in the previous year adds to the principal amount, and the final rate is calculated on the principal amount +interest.
Hence, the amount is rising each year due to the compounding effect.
We can use the compound interest formula to verify the maturity amount:
M = P (1 + r/n)^(n × t), where
M = maturity amount,
P = principal amount = 20,000
r = interest rate = 7%
t = tenure = 5 years
n = number of compoundings in a year = 1
Maturity amount = 20,000(1+7%/1)^(1*5) = Rs. 28,051.03
Interest earned = Rs.8,051.03
You are getting a higher return for the same amount invested, i.e. Rs 8,051.
If you still don’t get it or have no time to perform these calculations, fixed deposit interest calculators are for your rescue.
What Are the Benefits of a Fixed Deposit Calculator Online?
An online fixed deposit calculator is a reliable tool as it is automatic and gives near to accurate results.
• No expert knowledge needed
As long as you know how to use an FD calculator, you are good to go, as it is easy to understand and use. It will perform all the calculations even if you do not have advanced knowledge in this field.
• Customised results
Since everyone has different amounts to invest, varying tenures to keep money invested, and unique goals, the FD maturity calculator will produce customised results according to your needs.
• Saves time and effort
Since all the complex calculations are done instantly after adding the required information, it saves you ample energy and time.
• Helps in financial planning
Planning for finances is one of the undeniable benefits of online calculators, as you get to know the maturity amount and interest earned at the end of the investment period.
How Upstox Is the Best Fixed Deposit calculator Online?
Upstox is the best FD return calculator as it is convenient and highly reliable. It is free to use and helps you plan your finances better.
Further, it will show you the interest that you can earn by booking a FD with different financial institutions and can be used as many times as you need without any cost.
Frequently Asked Questions
How to use the FD calculator?
Using an FD calculator is simple and easy. You can use it through the following steps:
Step 1: Depending on your age, you can select between the normal citizen or senior citizen option.
Step 2: Pick your preferred type of FD. Example- Cumulative Fixed Deposit or Non-Cumulative Fixed Deposit.
Step 3: Fill in the amount you wish to invest (principal amount).
Step 4: Decide your desired period of investment.
Finally, hit the “calculate” button, and you will see the maturity amount after the investment tenure gets over. Alternatively, you can determine the principal amount to invest and investment tenure
by entering your desired returns.
Is FD interest calculated monthly or yearly?
FD interest is the annual interest provided by banks and other financial institutions. However, if you opt for the payout option, the annual interest rate is used to calculate the monthly and quarterly payouts.
Most banks offer monthly interest on fixed deposits. You can get monthly returns on FDs if you choose the Non-Cumulative FD type and opt for “monthly payouts” as the FD option.
What is the maturity amount?
The maturity amount is the final amount you get when your FD tenure gets over. It is a total of the principal amount and interest earned.
Maturity amount = Principal amount + Total interest earned.
What are the factors affecting FD interest rates?
Citizen type and tenure affect interest rates. Most banks give a higher FD interest to senior citizens. Also, they offer higher interest rates for FDs with a longer term.
Do we have separate fixed deposit calculators for different banks?
Yes. Each bank has its own FD return calculator. However, the interest rate will be pre-filled, and you may not be able to carry out comparisons. You can use Upstox's fixed deposit calculator online to compare the interest earned at different rates.
Can we use the fixed deposit calculator on mobile?
Yes, you can use an FD return calculator on mobile phones effectively.
What is a cumulative fixed deposit?
In cumulative fixed deposits, interest is paid on interest earned. In this type of FD, the returns are comparatively higher than non-cumulative FDs because of wealth accumulation due to compounding.
The amount compounded over the years is paid fully at the maturity of investment tenure.
What is a non-cumulative fixed deposit?
In non-cumulative fixed deposits, the returns(interests) are paid periodically, i.e. monthly, quarterly, half-yearly or yearly. You can choose the return period as per your preference.
How Is FD Return Calculated?
The inflation rate in the country and the RBI’s repo rate are the two most important factors that determine the bank’s fixed deposit interest rate. Depending on these various options, the banks and
other financial institutions will calculate FD interest rates and revise the interest rates from time to time. However, the interest rate remains fixed throughout the tenure.
This calculator is meant to be used for indicative purposes only. It is designed to assist you in determining the appropriate amount of prospective investments. This calculator alone is not
sufficient and shouldn’t be used for the development or implementation of any investment strategy. Upstox does not take the responsibility/liability nor does it undertake the authenticity of the
figures calculated therein. Upstox makes no warranty about the accuracy of the calculators/reckoners. The examples do not claim to represent the performance of any security or investments. In view of
the individual nature of tax consequences, each investor is advised to consult his/her own professional tax advisor before making any investment decisions on the basis of the results provided through
the use of this calculator.
Rational And Irrational Numbers Worksheet Grade 9
Rational And Irrational Numbers Worksheet Grade 9 serve as fundamental tools in the realm of mathematics, providing a structured yet flexible framework for students to explore and master mathematical concepts. These worksheets offer an organized approach to understanding numbers, building a strong foundation upon which mathematical proficiency grows. From the most basic counting exercises to the intricacies of advanced computations, Rational And Irrational Numbers Worksheet Grade 9 cater to students of varied ages and ability levels.
Revealing the Essence of Rational And Irrational Numbers Worksheet Grade 9
Rational numbers include integers and fractions, whereas irrational numbers are numbers that cannot be expressed as fractions. In mathematics, a rational number is any number that can be represented in the fractional form p/q, where q is not equal to zero.
Get Started: Rational and Irrational Numbers Worksheets. A rational number is expressed in the form p/q, where p and q are integers and q is not equal to 0. Every integer is a rational number. A real number that is not rational is called irrational. Irrational numbers include π, φ, square roots, etc.
At their core, Rational And Irrational Numbers Worksheet Grade 9 are vehicles for theoretical understanding. They encapsulate a myriad of mathematical principles, guiding students through the maze of numbers with a collection of engaging and purposeful exercises. These worksheets transcend the limits of traditional rote learning, encouraging active engagement and promoting an intuitive understanding of numerical relationships.
Nurturing Number Sense and Reasoning
The heart of Rational And Irrational Numbers Worksheet Grade 9 lies in cultivating number sense: a deep understanding of numbers' meanings and interconnections. They encourage exploration, inviting learners to investigate arithmetic operations, discern patterns, and unlock the secrets of sequences. Through thought-provoking challenges and practical problems, these worksheets become gateways to developing reasoning skills, nurturing the analytical minds of budding mathematicians.
From Theory to Real-World Application
Rational And Irrational Numbers Worksheet Grade 9 act as conduits bridging academic abstractions with the tangible realities of day-to-day life. By weaving practical scenarios into mathematical exercises, students witness the significance of numbers in their surroundings. From budgeting and measurement conversions to interpreting statistical data, these worksheets equip pupils to apply their mathematical expertise beyond the boundaries of the classroom.
Varied Tools and Techniques
Versatility is inherent in Rational And Irrational Numbers Worksheet Grade 9, which employ a collection of pedagogical tools to accommodate varied learning styles. Visual aids such as number lines, manipulatives, and digital resources serve as companions in visualizing abstract concepts. This diverse approach ensures inclusivity, accommodating learners with different preferences, strengths, and cognitive styles.
Inclusivity and Cultural Relevance
In an increasingly diverse world, Rational And Irrational Numbers Worksheet Grade 9 embrace inclusivity. They transcend cultural boundaries, integrating examples and problems that resonate with learners from diverse backgrounds. By incorporating culturally relevant contexts, these worksheets foster an environment where every learner feels represented and valued, strengthening their connection with mathematical principles.
Crafting a Path to Mathematical Mastery
Rational And Irrational Numbers Worksheet Grade 9 chart a course towards mathematical fluency. They instill perseverance, critical thinking, and problem-solving skills, essential qualities not only in mathematics but in many facets of life. These worksheets encourage students to navigate the intricate terrain of numbers, nurturing a profound appreciation for the elegance and reasoning inherent in mathematics.
Embracing the Future of Education
In an era marked by technological advancement, Rational And Irrational Numbers Worksheet Grade 9 adapt seamlessly to digital platforms. Interactive interfaces and digital resources augment traditional learning, offering immersive experiences that transcend spatial and temporal boundaries. This integration of traditional methods with technological innovation heralds a promising era in education, cultivating a more dynamic and engaging learning environment.
Final thought: Embracing the Magic of Numbers
Rational And Irrational Numbers Worksheet Grade 9 illustrate the magic inherent in mathematics: an enchanting journey of exploration, discovery, and mastery. They go beyond traditional pedagogy, acting as catalysts for igniting the fires of curiosity and inquiry. Through Rational And Irrational Numbers Worksheet Grade 9, students embark on an odyssey, unlocking the enigmatic world of numbers one problem, and one solution, at a time.
Ordinal Numbers English Worksheet Pdf - OrdinalNumbers.com
Ordinal Numbers ESL Worksheet PDF – By using ordinal numbers, it is possible to count infinite sets. They can also be used as a generalization of ordinal quantities. One of the fundamental ideas of mathematics is the ordinal number: a number that indicates the position of an object in a list. Ordinally, … Read more
Use Properties of Real Numbers
Learning Outcomes
• Simplify expressions with real numbers that require all operations
Order of Operations
You may or may not recall the order of operations for applying several mathematical operations to one expression. Just as it is a social convention for us to drive on the right-hand side of the road, the order of operations is a set of conventions that fixes the order in which several mathematical operations within one expression are performed. The box below summarizes these conventions.
The Order of Operations
• Perform all operations within grouping symbols first. Grouping symbols include parentheses ( ), brackets [ ], braces { }, and fraction bars.
• Evaluate exponents or square roots.
• Multiply or divide, from left to right.
• Add or subtract, from left to right.
This order of operations is true for all real numbers.
Simplify [latex]7–5+3\cdot8[/latex]
In the following example, you will be shown how to simplify an expression that contains both multiplication and subtraction using the order of operations.
When you are applying the order of operations to expressions that contain fractions, decimals, and negative numbers, you will need to recall how to do these computations as well.
Simplify [latex]3\cdot\dfrac{1}{3}\normalsize -8\div\dfrac{1}{4}[/latex]
If the expression has exponents or square roots, they are to be performed after parentheses and other grouping symbols have been simplified and before any multiplication, division, subtraction, and
addition that are outside the parentheses or other grouping symbols.
When you are evaluating expressions, you will sometimes see exponents used to represent repeated multiplication. Recall that an expression such as [latex]7^{2}[/latex] is exponential notation for
[latex]7\cdot7[/latex]. (Exponential notation has two parts: the base and the exponent or the power. In [latex]7^{2}[/latex], [latex]7[/latex] is the base and [latex]2[/latex] is the exponent; the
exponent determines how many times the base is multiplied by itself.)
Exponents are a way to represent repeated multiplication; the order of operations places it before any other multiplication, division, subtraction, and addition is performed.
Simplify [latex]3^{2}\cdot2^{3}[/latex].
In the video that follows, an expression with exponents on its terms is simplified using the order of operations.
Grouping Symbols
Grouping symbols such as parentheses ( ), brackets [ ], braces[latex] \displaystyle \left\{ {} \right\}[/latex], fraction bars, and roots can be used to further control the order of the four
arithmetic operations. The rules of the order of operations require computation within grouping symbols to be completed first, even if you are adding or subtracting within the grouping symbols and
you have multiplication outside the grouping symbols. After computing within the grouping symbols, divide or multiply from left to right and then subtract or add from left to right. When there are
grouping symbols within grouping symbols, calculate from the inside to the outside. That is, begin simplifying within the innermost grouping symbols first.
Remember that parentheses can also be used to show multiplication. In the example that follows, both uses of parentheses (as a way to represent a group and as a way to express multiplication) are shown.
Simplify [latex]\left(3+4\right)^{2}+\left(8\right)\left(4\right)[/latex]
Simplify [latex]4\cdot{\frac{3[5+{(2 + 3)}^2]}{2}}[/latex]
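The two grouping examples above can be traced inside-out in Python (an illustrative sketch, with each intermediate step named):

```python
# (3 + 4)^2 + (8)(4): simplify inside the parentheses first.
first = (3 + 4) ** 2 + 8 * 4      # 49 + 32 = 81

# 4 * 3[5 + (2 + 3)^2] / 2: work from the innermost grouping outward.
inner = (2 + 3) ** 2              # innermost parentheses: 5^2 = 25
bracket = 3 * (5 + inner)         # brackets: 3 * 30 = 90
second = 4 * bracket / 2          # 4 * 90 = 360, then 360 / 2 = 180.0
```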
In the following video, you are shown how to use the order of operations to simplify an expression with grouping symbols, exponents, multiplication, and addition.
Square roots are another grouping symbol. Operations inside of a square root need to be performed first. In the next example, we will simplify an expression that has a square root.
Simplify [latex]\dfrac{\sqrt{7+2}+2^2}{(8)(4)-11}[/latex]
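The radical example above can be checked in Python; using `math.sqrt` makes the grouping role of the root explicit (a sketch, not the lesson's own solution):

```python
import math

numerator = math.sqrt(7 + 2) + 2 ** 2   # simplify under the root first: sqrt(9) + 4 = 7.0
denominator = 8 * 4 - 11                # multiply, then subtract: 32 - 11 = 21
quotient = numerator / denominator      # 7 / 21 = 1/3
```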
Think About It
These problems are very similar to the examples given above. How are they different and what tools do you need to simplify them?
a) Simplify [latex]\left(1.5+3.5\right)-2\left(0.5\cdot6\right)^{2}[/latex]. This problem has parentheses, exponents, multiplication, subtraction, and addition in it, as well as decimals instead of whole numbers.
Use the box below to write down a few thoughts about how you would simplify this expression with decimals and grouping symbols.
b) Simplify [latex] {{\left(\dfrac{1}{2}\normalsize\right)}^{2}}+{{\left(\dfrac{1}{4}\normalsize\right)}^{3}}\cdot \,32[/latex]
Use the box below to write down a few thoughts about how you would simplify this expression with fractions and grouping symbols.
Combining Like Terms
One way we can simplify expressions is to combine like terms. Like terms are terms where the variables match exactly (exponents included). Examples of like terms would be [latex]5xy[/latex] and
[latex]-3xy[/latex] or [latex]8a^2b[/latex] and [latex]a^2b[/latex] or [latex]-3[/latex] and [latex]8[/latex]. If we have like terms we are allowed to add (or subtract) the numbers in front of the
variables, then keep the variables the same. As we combine like terms we need to interpret subtraction signs as part of the following term. This means if we see a subtraction sign, we treat the
following term like a negative term. The sign always stays with the term.
This is shown in the following examples:
Combine like terms: [latex]5x-2y-8x+7y[/latex]
In the following video you will be shown how to combine like terms using the idea of the distributive property. Note that this is a different method than is shown in the written examples on this
page, but it obtains the same result.
Combine like terms: [latex]x^2-3x+9-5x^2+3x-1[/latex]
In the video that follows, you will be shown another example of combining like terms. Pay attention to why you are not able to combine all three terms in the example.
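One way to mechanize combining like terms is to key each term by its exact variable part (exponents included) and sum the coefficients, with the sign traveling with each coefficient as described above. The function name and term encoding below are our own illustrative choices:

```python
from collections import defaultdict

def combine_like_terms(terms):
    """Sum coefficients of terms whose variable parts match exactly
    (exponents included); the sign travels with each coefficient."""
    totals = defaultdict(int)
    for coeff, variables in terms:
        totals[variables] += coeff
    return {v: c for v, c in totals.items() if c != 0}  # drop cancelled terms

# 5x - 2y - 8x + 7y  ->  -3x + 5y
first = combine_like_terms([(5, "x"), (-2, "y"), (-8, "x"), (7, "y")])

# x^2 - 3x + 9 - 5x^2 + 3x - 1  ->  -4x^2 + 8  (the x terms cancel)
second = combine_like_terms(
    [(1, "x^2"), (-3, "x"), (9, ""), (-5, "x^2"), (3, "x"), (-1, "")]
)
```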
Distributive Property
The distributive property states that the product of a factor times a sum is the sum of the factor times each term in the sum.
[latex]a\cdot \left(b+c\right)=a\cdot b+a\cdot c[/latex]
This property combines both addition and multiplication (and is the only property to do so). Let us consider an example.
Use the distributive property to show that [latex]4\cdot[12+(-7)]=20[/latex]
To be more precise when describing this property, we say that multiplication distributes over addition.
The reverse is not true as we can see in this example.
[latex]\begin{array}{ccc}6+\left(3\cdot 5\right)& \stackrel{?}{=}& \left(6+3\right)\cdot \left(6+5\right)\\ 6+\left(15\right)& \stackrel{?}{=}& \left(9\right)\cdot \left(11\right)\\ 21& \ne & 99\end{array}[/latex]
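Both the distributive property and the counterexample above can be confirmed numerically (a quick sketch):

```python
# Distribute the factor, then add: a(b + c) = ab + ac.
a, b, c = 4, 12, -7
left = a * (b + c)        # 4 * 5 = 20
right = a * b + a * c     # 48 + (-28) = 20

# The reverse fails: addition does not distribute over multiplication.
lhs = 6 + 3 * 5           # 6 + 15 = 21
rhs = (6 + 3) * (6 + 5)   # 9 * 11 = 99
```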
A special case of the distributive property occurs when a sum of terms is subtracted.
For example, consider the difference [latex]12-\left(5+3\right)[/latex]. We can rewrite the difference of the two terms [latex]12[/latex] and [latex]\left(5+3\right)[/latex] by turning the
subtraction expression into addition of the opposite. So instead of subtracting [latex]\left(5+3\right)[/latex], we add the opposite.
[latex]12+\left(-1\right)\cdot \left(5+3\right)[/latex]
Now, distribute [latex]-1[/latex] and simplify the result: [latex]12+\left(-1\right)\cdot 5+\left(-1\right)\cdot 3=12-5-3=4[/latex].
Rewrite the last example by changing the sign of each term and adding the results.
This seems like a lot of trouble for a simple sum, but it illustrates a powerful result that will be useful once we introduce algebraic terms.
Identity Properties
The identity property of addition states that there is a unique number, called the additive identity ([latex]0[/latex]), that, when added to a number, results in the original number.
[latex]a+0=a[/latex]
The identity property of multiplication states that there is a unique number, called the multiplicative identity ([latex]1[/latex]), that, when multiplied by a number, results in the original number.
[latex]a\cdot 1=a[/latex]
Show that the identity property of addition and multiplication are true for [latex]-6 \text{ and }23[/latex].
Inverse Properties
The inverse property of addition states that, for every real number [latex]a[/latex], there is a unique number, called the additive inverse (or opposite), denoted [latex]-a[/latex], that, when added to the original number, results in the additive identity, [latex]0[/latex].
For example, if [latex]a=-8[/latex], the additive inverse is [latex]8[/latex], since [latex]\left(-8\right)+8=0[/latex].
The inverse property of multiplication holds for all real numbers except [latex]0[/latex] because the reciprocal of [latex]0[/latex] is not defined. The property states that, for every real number [latex]a[/latex], there is a unique number, called the multiplicative inverse (or reciprocal), denoted [latex]\dfrac{1}{a}[/latex], that, when multiplied by the original number, results in the multiplicative identity, [latex]1[/latex]:
[latex]a\cdot\dfrac{1}{a}\normalsize =1[/latex]
1) Define the additive inverse of [latex]a=-8[/latex], and use it to illustrate the inverse property of addition.
2) Write the reciprocal of [latex]a=-\dfrac{2}{3}[/latex], and use it to illustrate the inverse property of multiplication.
A General Note: Properties of Real Numbers
The following properties hold for real numbers a, b, and c.
Addition Multiplication
Commutative [latex]a+b=b+a[/latex] [latex]a\cdot b=b\cdot a[/latex]
Associative [latex]a+\left(b+c\right)=\left(a+b\right)+c[/latex] [latex]a\left(bc\right)=\left(ab\right)c[/latex]
Distributive [latex]a\cdot \left(b+c\right)=a\cdot b+a\cdot c[/latex]
Identity [latex]a+0=a[/latex] [latex]a\cdot 1=a[/latex]
Inverse [latex]a+\left(-a\right)=0[/latex] [latex]a\cdot \left(\dfrac{1}{a}\normalsize\right)=1[/latex]
There exists a unique real number called the additive identity, [latex]0[/latex], such that [latex]a+0=a[/latex] for any real number [latex]a[/latex], and a unique real number called the multiplicative identity, [latex]1[/latex], such that [latex]a\cdot 1=a[/latex]. Every real number [latex]a[/latex] has an additive inverse, or opposite, denoted [latex]-a[/latex], and every nonzero real number [latex]a[/latex] has a multiplicative inverse, or reciprocal, denoted [latex]\dfrac{1}{a}[/latex].
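All of the properties in the table can be spot-checked with exact rational arithmetic, which sidesteps floating-point rounding. This is an illustrative check, not part of the original lesson:

```python
from fractions import Fraction

a, b, c = Fraction(-6), Fraction(23), Fraction(2, 3)

commutative = (a + b == b + a) and (a * b == b * a)
associative = (a + (b + c) == (a + b) + c) and (a * (b * c) == (a * b) * c)
distributive = a * (b + c) == a * b + a * c
identity = (a + 0 == a) and (a * 1 == a)
inverse = (a + (-a) == 0) and (c * (1 / c) == 1)   # c != 0, so 1/c is defined
all_hold = commutative and associative and distributive and identity and inverse
```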
Use the properties of real numbers to rewrite and simplify each expression. State which properties apply.
1. [latex]3\left(6+4\right)[/latex]
2. [latex]\left(5+8\right)+\left(-8\right)[/latex]
3. [latex]6-\left(15+9\right)[/latex]
4. [latex]\dfrac{4}{7}\normalsize\cdot \left(\dfrac{2}{3}\normalsize\cdot\dfrac{7}{4}\normalsize\right)[/latex]
5. [latex]100\cdot \left[0.75+\left(-2.38\right)\right][/latex]
Plot Four: Positive and Negative Numbers
Suggested Grades
Students will practice recognizing the nature of positive and negative numbers, as well as practice plotting graph coordinates.
• a game board for each pair of students: a 12X12 grid divided into quadrants by vertical and horizontal axes. Mark the intersection of the axes with a zero, and number the other axis/grid-line
intersections from 1-6 (positive numbers above and to the right of zero, negative numbers below and to the left of the zero).
• 2 sets of dice per pair in two different colours. One set will be positive and the other will be negative.
• markers
• The first player rolls all the dice, then chooses any two of the four numbers in any order to plot, and marks the area with a dot from a marker.
• The second player then repeats the process.
• The goal of the game is to get four points in a row in any direction.
• If the player is unable to work out a combination of numbers that isn’t already taken up, the player loses his turn.
• If both of the players reach a stalemate before anyone has been able to receive four in a row, the player with the most points on the board wins.
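The mechanics above can be sketched in code. The helper names below are our own hypothetical illustration of the dice roll (two positive dice, two negative) and the four-in-a-row check:

```python
import random

def roll_dice(rng):
    """Two positive dice and two negative dice, per the materials list."""
    return [rng.randint(1, 6), rng.randint(1, 6),
            -rng.randint(1, 6), -rng.randint(1, 6)]

def four_in_a_row(points):
    """True if any four plotted points line up horizontally, vertically,
    or diagonally."""
    for x, y in points:
        for dx, dy in ((1, 0), (0, 1), (1, 1), (1, -1)):
            if all((x + i * dx, y + i * dy) in points for i in range(4)):
                return True
    return False

rng = random.Random(0)
roll = roll_dice(rng)                                    # four coordinates to choose two from
win = four_in_a_row({(0, 0), (1, 1), (2, 2), (3, 3)})    # a diagonal of four
no_win = four_in_a_row({(0, 0), (1, 0), (2, 0), (4, 0)}) # gap at (3, 0)
```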
BioInformatics & BestKeeper Software & Gene Quantification Homepage
Data Analysis and BioInformatics in real-time qPCR (5)
Latest papers:
MAKERGAUL - an innovative MAK2-based model and software for real-time PCR quantification
Bultmann CA and Weiskirchen R
Clin Biochem. 2014 47(1-2): 117-122
OBJECTIVES: Gene expression analysis by quantitative PCR is a standard laboratory technique for RNA quantification with high accuracy. In particular real-time PCR techniques using SYBR Green and
melting curve analysis allowing verification of specific product amplification have become a well accepted laboratory technique for rapid and high throughput gene expression quantification. However,
the software applied for quantification is somewhat circuitous and requires above-average manual operation.
DESIGN AND METHODS: We here developed a novel, simple to handle open source software package (i.e., MAKERGAUL) for quantification of gene expression data obtained by real time PCR technology.
RESULTS: The developed software was evaluated with an already well characterized real time PCR data set and the performance parameters (i.e., absolute bias, linearity, reproducibility, and
resolution) of the algorithm that are the basis of our calculation procedure compared and ranked with those of other implemented and well-established algorithms. It shows good quantification
performance with reduced requirements in computing power.
CONCLUSIONS: We conclude that MAKERGAUL is a convenient and easy to handle software allowing accurate and fast expression data analysis.
• The open source software MAKERGAUL allows easy gene expression quantification.
• The program is available in different program and server-side scripting languages.
• Quantification does not require standard curves or normalization.
• MAKERGAUL has good precision, linearity, bias, resolution, and variability.
• MAKERGAUL shows good quantification performance and needs little computing power.
Comparing real-time quantitative polymerase chain reaction analysis methods for precision, linearity, and accuracy of estimating amplification efficiency
Tellinghuisen J, Spiess AN
Anal Biochem. 2014 Mar 15;449:76-82
New methods are used to compare seven qPCR analysis methods for their performance in estimating the quantification cycle (Cq) and amplification efficiency (E) for a large test data set (94 samples
for each of 4 dilutions) from a recent study. Precision and linearity are assessed using chi-square (χ(2)), which is the minimized quantity in least-squares (LS) fitting, equivalent to the variance
in unweighted LS, and commonly used to define statistical efficiency. All methods yield Cqs that vary strongly in precision with the starting concentration N0, requiring weighted LS for proper
calibration fitting of Cq vs log(N0). Then χ(2) for cubic calibration fits compares the inherent precision of the Cqs, while increases in χ(2) for quadratic and linear fits show the significance of
nonlinearity. Nonlinearity is further manifested in unphysical estimates of E from the same Cq data, results which also challenge a tenet of all qPCR analysis methods - that E is constant throughout
the baseline region. Constant-threshold (Ct) methods underperform the other methods when the data vary considerably in scale, as these data do.
A new method for quantitative real-time polymerase chain reaction data analysis
Rao X, Lai D, Huang X.
J Comput Biol. 2013 20(9): 703-711
Quantitative real-time polymerase chain reaction (qPCR) is a sensitive gene quantification method that has been extensively used in biological and biomedical fields. The currently used methods for
PCR data analysis, including the threshold cycle method and linear and nonlinear model-fitting methods, all require subtracting background fluorescence. However, the removal of background
fluorescence can hardly be accurate and therefore can distort results. We propose a new method, the taking-difference linear regression method, to overcome this limitation. Briefly, for each two
consecutive PCR cycles, we subtract the fluorescence in the former cycle from that in the latter cycle, transforming the n cycle raw data into n-1 cycle data. Then, linear regression is applied to
the natural logarithm of the transformed data. Finally, PCR amplification efficiencies and the initial DNA molecular numbers are calculated for each reaction. This taking-difference method avoids the
error in subtracting an unknown background, and thus it is more accurate and reliable. This method is easy to perform, and this strategy can be extended to all current methods for PCR data analysis.
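The taking-difference idea translates directly into code. The sketch below is our own reading of the method as summarized in the abstract (not the authors' implementation), assuming a constant background plus an exponential amplification phase:

```python
import numpy as np

def taking_difference_fit(fluorescence):
    """Taking-difference linear regression: differencing consecutive
    cycles cancels a constant background, leaving a pure exponential
    in the differences, which is fit on the log scale."""
    f = np.asarray(fluorescence, dtype=float)
    diffs = np.diff(f)                      # F[n+1] - F[n] = N0 * E**n * (E - 1)
    cycles = np.arange(diffs.size)
    slope, intercept = np.polyfit(cycles, np.log(diffs), 1)
    eff = np.exp(slope)                     # per-cycle amplification efficiency E
    n0 = np.exp(intercept) / (eff - 1.0)    # back out the initial amount N0
    return eff, n0

# Synthetic exponential-phase data: N0 = 10 copies, E = 1.9, background = 100.
signal = 100.0 + 10.0 * 1.9 ** np.arange(15)
eff, n0 = taking_difference_fit(signal)
```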
The choice of reference gene affects statistical efficiency in quantitative PCR data analysis
Guo Y, Pennell ML, Pearl DK, Knobloch TJ, Fernandez S, Weghorst CM.
Biotechniques. 2013 55(4): 207-209
Quantitative polymerase chain reaction (qPCR), a highly sensitive method of measuring gene expression, is widely used in biomedical research. To produce reliable results, it is essential to use
stably expressed reference genes (RGs) for data normalization so that sample-to-sample variation can be controlled. In this study, we examine the effect of different RGs on statistical efficiency by
analyzing a qPCR data set that contains 12 target genes and 3 RGs. Our results show that choosing the most stably expressed RG for data normalization does not guarantee reduced variance or improved
statistical efficiency. We also provide a formula for determining when data normalization will improve statistical efficiency and hence increase the power of statistical tests in data analysis.
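Reference-gene normalization of the kind discussed here is usually done on the Cq scale. A minimal ddCq sketch (the classic Livak-style calculation, not this paper's method; parameter names are our own) looks like:

```python
def relative_expression(cq_target_sample, cq_ref_sample,
                        cq_target_control, cq_ref_control, efficiency=2.0):
    """Classic ddCq relative quantification: normalize the target gene's
    Cq by the reference gene's Cq in both sample and control, take the
    difference of the differences, and exponentiate. Assumes equal
    amplification efficiency for both genes (2.0 = perfect doubling)."""
    ddcq = (cq_target_sample - cq_ref_sample) - (cq_target_control - cq_ref_control)
    return efficiency ** (-ddcq)

# After normalization the target comes up 2 cycles earlier in the sample,
# i.e. ddCq = -2, which at perfect efficiency is a 4-fold up-regulation.
fold = relative_expression(20.0, 18.0, 24.0, 20.0)
```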
Eprobe mediated real-time PCR monitoring and melting curve analysis
Hanami T, Delobel D, Kanamori H, Tanaka Y, Kimura Y, Nakasone A, Soma T, Hayashizaki Y, Usui K, Harbers M.
PLoS One. 2013 Aug 7;8(8):e70942
Real-time monitoring of PCR is one of the most important methods for DNA and RNA detection widely used in research and medical diagnostics. Here we describe a new approach for combined real-time PCR
monitoring and melting curve analysis using a 3' end-blocked Exciton-Controlled Hybridization-sensitive fluorescent Oligonucleotide (ECHO) called Eprobe. Eprobes contain two dye moieties attached to
the same nucleotide and their fluorescent signal is strongly suppressed as single-stranded oligonucleotides by an excitonic interaction between the dyes. Upon hybridization to a complementary DNA
strand, the dyes are separated and intercalate into the double-strand leading to strong fluorescence signals. Intercalation of dyes can further stabilize the DNA/DNA hybrid and increase the melting
temperature compared to standard DNA oligonucleotides. Eprobes allow for specific real-time monitoring of amplification reactions by hybridizing to the amplicon in a sequence-dependent manner.
Similarly, Eprobes allow for analysis of reaction products by melting curve analysis. The function of different Eprobes was studied using the L858R mutation in the human epidermal growth factor
receptor (EGFR) gene, and multiplex detection was demonstrated for the human EGFR and KRAS genes using Eprobes with two different dyes. Combining amplification and melting curve analysis in a
single-tube reaction provides powerful means for new mutation detection assays. Functioning as "sequence-specific dyes", Eprobes hold great promises for future applications not only in PCR but also
as hybridization probes in other applications.
BootstRatio: A web-based statistical analysis of fold-change in qPCR and RT-qPCR data using resampling methods
Clèries R1, Galvez J, Espino M, Ribes J, Nunes V, de Heredia ML.
Comput Biol Med. 2012 42(4): 438-445
Real-time quantitative polymerase chain reaction (qPCR) is widely used in biomedical sciences quantifying its results through the relative expression (RE) of a target gene versus a reference one.
Obtaining significance levels for RE assuming an underlying probability distribution of the data may be difficult to assess. We have developed the web-based application BootstRatio, which tackles the
statistical significance of the RE and the probability that RE>1 through resampling methods without any assumption on the underlying probability distribution for the data analyzed. BootstRatio
performs these statistical analyses of gene expression ratios in two settings: (1) when data have already been normalized against a control sample and (2) when the control samples are provided.
Since the estimation of the probability that RE>1 is an important feature for this type of analysis, as it is used to assign statistical significance and it can be also computed under the Bayesian
framework, a simulation study has been carried out comparing the performance of BootstRatio versus a Bayesian approach in the estimation of that probability. In addition, two analyses, one for each
setting, carried out with data from real experiments are presented, showing the performance of BootstRatio. Our simulation study suggests that the BootstRatio approach performs better than the Bayesian one except in certain situations of very small sample size (N≤12). The web application BootstRatio is accessible through http://regstattools.net/br and was developed for these computationally intensive statistical analyses.
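A distribution-free estimate of P(RE > 1) of the kind BootstRatio computes can be sketched with plain resampling. This is an illustrative sketch in the spirit of the tool, not the web application itself:

```python
import random

def bootstrap_prob_gt_one(ratios, n_boot=2000, seed=0):
    """Resample the observed expression ratios with replacement and
    count how often the resampled mean exceeds 1: a distribution-free
    estimate of P(RE > 1)."""
    rng = random.Random(seed)
    n = len(ratios)
    hits = sum(
        1 for _ in range(n_boot)
        if sum(rng.choice(ratios) for _ in range(n)) / n > 1.0
    )
    return hits / n_boot

p_up = bootstrap_prob_gt_one([1.8, 2.1, 1.6, 2.4, 1.9])    # every resampled mean > 1
p_down = bootstrap_prob_gt_one([0.5, 0.6, 0.4, 0.55, 0.5]) # every resampled mean < 1
```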
RT-qPCR work-flow for single-cell data analysis
Anders Ståhlberg, Vendula Rusnakova, Amin Forootan, Miroslava Anderova, Mikael Kubista
Methods 2013, Vol 59, Issue 1, pages 80-88
Individual cells represent the basic unit in tissues and organisms and are in many aspects unique in their properties. The introduction of new and sensitive techniques to study single-cells opens up
new avenues to understand fundamental biological processes. Well established statistical tools and recommendations exist for gene expression data based on traditional cell population measurements.
However, these workflows are not suitable, and some steps are even inappropriate, to apply on single-cell data. Here, we present a simple and practical workflow for preprocessing of single-cell data
generated by reverse transcription quantitative real-time PCR. The approach is demonstrated on a data set based on profiling of 41 genes in 303 single-cells. For some pre-processing steps we present
options and also recommendations. In particular, we demonstrate and discuss different strategies for handling missing data and scaling data for downstream multivariate analysis. The aim of this
workflow is to provide a guide to the rapidly growing community studying single cells by means of reverse transcription quantitative real-time PCR profiling.
Evaluation of qPCR curve analysis methods for reliable biomarker discovery -- bias, resolution, precision, and implications
Ruijter JM1, Pfaffl MW, Zhao S, Spiess AN, Boggy G, Blom J, Rutledge RG, Sisti D, Lievens A, De Preter K, Derveaux S, Hellemans J, Vandesompele J.
Methods. 2013 59(1): 32-46
RNA transcripts such as mRNA or microRNA are frequently used as biomarkers to determine disease state or response to therapy. Reverse transcription (RT) in combination with quantitative PCR (qPCR)
has become the method of choice to quantify small amounts of such RNA molecules. In parallel with the democratization of RT-qPCR and its increasing use in biomedical research or biomarker discovery,
we witnessed a growth in the number of gene expression data analysis methods. Most of these methods are based on the principle that the position of the amplification curve with respect to the
cycle-axis is a measure for the initial target quantity: the later the curve, the lower the target quantity. However, most methods differ in the mathematical algorithms used to determine this
position, as well as in the way the efficiency of the PCR reaction (the fold increase of product per cycle) is determined and applied in the calculations. Moreover, there is dispute about whether the
PCR efficiency is constant or continuously decreasing. Together this has led to the development of different methods to analyze amplification curves. In published comparisons of these methods,
available algorithms were typically applied in a restricted or outdated way, which does not do them justice. Therefore, we aimed at development of a framework for robust and unbiased assessment of
curve analysis performance whereby various publicly available curve analysis methods were thoroughly compared using a previously published large clinical data set (Vermeulen et al., 2009) [11]. The
original developers of these methods applied their algorithms and are co-author on this study. We assessed the curve analysis methods' impact on transcriptional biomarker identification in terms of
expression level, statistical significance, and patient-classification accuracy. The concentration series per gene, together with data sets from unpublished technical performance experiments, were
analyzed in order to assess the algorithms' precision, bias, and resolution. While large differences exist between methods when considering the technical performance experiments, most methods perform
relatively well on the biomarker data. The data and the analysis results per method are made available to serve as benchmark for further development and evaluation of qPCR curve analysis methods.
Download data => http://qPCRDataMethods.hfrc.nl
download the entire issue
Transcriptional Biomarkers
Methods Vol 59, Issue 1
Pages 1 - 163 & S1-S28
January 2013
edited by Michael W. Pfaffl
Table of contents
Full papers and reviews
Sponsored Application Notes
Mathematics - Daytona Beach
The Department of Mathematics offers a Bachelor of Science in Computational Mathematics degree with two tracks (Data Science and Engineering Applications) as well as two minors, Applied Mathematics
and Computational Mathematics.
We also offer a Master of Science in Data Science, currently one of the most in-demand career fields.
Virtually every Embry-Riddle student — whether training to be a pilot, engineer, scientist, or manager — will pass through the Department of Mathematics while earning a degree. The flexibility of the
Computational Mathematics degree allows well-prepared students to pursue dual majors, increasing their career options and enhancing their marketability to potential employers. Some students will gain
foundational math skills, while others pursue innovative programs in pure and applied mathematics. The degree in Computational Mathematics allows students to blend mathematical theory and
computational techniques to address problems that arise in a variety of scientific disciplines.
Submissions from 2023
Liouville soliton surfaces obtained using Darboux transformations, S. C. Mancas and K. R. Acharya
Submissions from 2022
Hydrodynamics, Andrei Ludu
Kinematics of Fluids, Andrei Ludu
The Replacement Rule for Nonlinear Shallow Water Waves, A. Ludu and Z. Zong
One-Parameter Darboux-Deformed Fibonacci Numbers, Stefani C. Mancas and H. C. Rosu
Submissions from 2021
A Mathematical Model for Transport and Growth of Microbes in Unsaturated Porous Soil, Harihar Khanal, Andrei Ludu, Ramesh Chandra Timsina, and Kedar Nath Uprety
Nonlinear Schrödinger equation solitons on quantum droplets, A. Ludu and A.S. Carstea
A Numerical Solution of Water Flow in Unsaturated Soil with Evapotraspiration, Andrei Ludu, Harihar Khanal, Ramesh Chandra Timsina, and Kedar Nath Uprety
Reduced Multiplicative Complexity Discrete Cosine Transform (DCT) Circuitry, Sirani Kanchana Mututhanthrige Perera
Reduced Multiplicative Complexity Discrete Cosine Transform (DCT) Circuitry, Sirani Kanchana Mututhanthrige Perera
Submissions from 2020
Dynamics of Discontinuities in Elastic Solids, Arkadi Berezovski and Mihhail Berezovski
Discontinuity-Driven Mesh Alignment for Evolving Discontinuities in Elastic Solids, Mihhail Berezovski and Arkadi Berezovski
2N-Dimensional Canonical Systems and Applications, Andrei Ludu and Keshav Baj Acharya
Experimental study of breathers and rogue waves generated by random waves over non-uniform bathymetry, A. Ludu, A. Wang, Z. Zong, L. Zou, and Y. Pei
A Statistical Learning Regression Model Utilized To Determine Predictive Factors of Social Distancing During COVID-19 Pandemic, Timothy A. Smith, Albert J. Boquet, and Matthew V. Chin
Submissions from 2019
Titchmarsh–Weyl Theory for Vector-Valued Discrete Schrödinger Operators, Keshav R. Acharya
Action of Complex Symplectic Matrices on the Siegel Upper Half Space, Keshav R. Acharya and Matt McBride
An Explicit Finite Volume Numerical Scheme for 2D Elastic Wave Propagation, Mihhail Berezovski and Arkadi Berezovski
A Design of a Material Assembly in Space-Time Generating and Storing Energy, Mihhail Berezovski, Stan Elektrov, and Konstantin Lurie
Ice Spiral Patterns on the Ocean Surface, Andrei Ludu and Zhi Zong
Submissions from 2018
Stability of Solitary and Cnoidal Traveling Wave Solutions for a Fifth Order Korteweg-de Vries Equation, Ronald Adams and S.C. Mancas
Full Field Computing for Elastic Pulse Dispersion in Inhomogeneous Bars, A. Berezovski, R. Kolman, M. Berezovski, D. Gabriel, and V. Adamek
Numerical Simulation of Energy Localization in Dynamic Materials, Arkadi Berezovski and Mihhail Berezovski
Vortex Structures inside Spherical Mesoscopic Superconductor Plus Magnetic Dipole, A. Ludu
Nonlocal Symmetries for Time-Dependent Order Differential Equations, Andrei Ludu
An Asymptotic Analysis for Generation of Unsteady Surface Waves on Deep Water by Turbulence, Shahrdad Sajjadi
Submissions from 2017
Numerical Simulation of Acoustic Emission During Crack Growth in 3-point Bending Test, Mihhail Berezovski and Arkadi Berezovski
Differential Equations of Dynamical Order, Andrei Ludu and Harihar Khanal
Traveling Wave Solutions to Kawahara and Related Equations, S.C. Mancas
Elliptic Solutions and Solitary Waves of a Higher Order KdV-BBM Long Wave Equation, S.C. Mancas and Ronald Adams
Traveling Wave Solutions for Wave Equations with Exponential Nonlinearities, S. C. Mancas, H. C. Rosu, and M. Perez-Maldonado
Generalized Thomas-Fermi Equations as the Lampariello Class of Emden-Fowler Equations, Haret C. Rosu and S.C. Mancas
Almost-BPS Solutions in Multi-Center Taub-NUT, C. Rugina and A. Ludu
Variational Principle for Velocity-Pressure Formulation of Navier-Stokes Equations, Shahrdad Sajjadi
A Regression Model to Predict Stock Market Mega Movements and/or Volatility Using Both Macroeconomic Indicators & Fed Bank Variables, Timothy A. Smith and Alcuin Rajan
Submissions from 2016
A Note on Vector Valued Discrete Schrödinger Operators, Keshav R. Acharya
Remling's Theorem on Canonical Systems, Keshav R. Acharya
Thermoelastic Waves in Microstructured Solids, Arkadi Berezovski and Mihhail Berezovski
Growth of Groups of Wind Generated Waves, Frederique Drullion and Shahrdad Sajjadi
Energy Transfer from Wind to Wave Groups: Theory and Experiment, Andrei Ludu and Shahrdad G. Sajjadi
Evolution of Spherical Cavitation Bubbles: Parametric and Closed-Form Solutions, S.C. Mancas and Haret C. Rosu
Existence of Periodic Orbits in Nonlinear Oscillators of Emden-Fowler Form, S.C. Mancas and Haret C. Rosu
Integrable Abel Equations and Vein's Abel Equation, S.C. Mancas and Haret C. Rosu
Micro Cavitation Bubbles on the Movement of an Experimental Submarine: Theory and Experiments, S.C. Mancas, Shahrdad G. Sajjadi, Asalie Anderson, and Derek Hoffman
Signal Flow Graph Approach to Efficient DST I-IV Algorithms, Sirani M. Perera
Nongauge Bright Soliton of the Nonlinear Schrodinger (NLS) Equation and a Family of Generalized NLS Equations, M. A. Reyes, D. Gutierrez-Ruiz, S. C. Mancas, and H. C. Rosu
Ermakov Equation and Camassa-Holm Waves, Haret C. Rosu and S.C. Mancas
Growth of Stokes Waves Induced by Wind on a Viscous Liquid of Infinite Depth, Shahrdad Sajjadi
Exact Analytical Solution of Viscous Korteweg-deVries Equation for Water Waves, Shahrdad G. Sajjadi and Timothy A. Smith
Growth of Unsteady Wave Groups by Shear Flows, Shahrdad Sajjadi, Julian Hunt, and Frederique Drullion
Wave Motion Induced By Turbulent Shear Flows Over Growing Stokes Waves, Shahrdad Sajjadi, Serena Robertson, Rebecca Harvey, and Mary Brown
Submissions from 2015
Two-Dimensional Structures in the Quintic Ginzburg-Landau Equation, Florent Bérard, Charles-Julien Vandamme, and S.C. Mancas
Pattern Formation of Elastic Waves and Energy Localization Due to Elastic Gratings, A. Berezovski, J. Engelbrecht, and Mihhail Berezovski
Student and Faculty Perceptions of Attendance Policies at a Polytechnic University, Loraine Lowder, Adeel Khalid, Daniel R. Ferreira, Jeanne Law Bohannon, Beth Stutzmann, Mir M. Atiqullah, Rajnish Singh, Tien Yee, Keshav R. Acharya, Craig A. Chin, M. A. Karim, Robert Segiharu Keyser, and Donna Colebeck
Pulses and Snakes in Ginzburg-Landau Equation, S.C. Mancas and Roy S. Choudhury
Integrable Equations with Ermakov-Pinney Nonlinearities and Chiellini Damping, S.C. Mancas and Haret C. Rosu
Barotropic FRW Cosmologies with Chiellini Damping, Haret C. Rosu, S.C. Mancas, and Pisin Chen
Barotropic FRW Cosmologies with Chiellini Damping in Comoving Time, Haret C. Rosu, S.C. Mancas, and Pisin Chen
One-Parameter Supersymmetric Hamiltonians in Momentum Space, H. C. Rosu, S. C. Mancas, and P. Chen
Formation of Three-Dimensional Surface Waves on Deep-Water Using Elliptic Solutions of Nonlinear Schrödinger Equation, Shahrdad G. Sajjadi, S.C. Mancas, and Frederique Drullion
An Economic Regression Model to Predict Market Movements, Timothy A. Smith and Andrew Hawkins
Submissions from 2014
An Alternate Proof of the De Branges Theorem on Canonical Systems, Keshav R. Acharya
Self-Adjoint Extension and Spectral Theory of a Linear Relation in a Hilbert Space, Keshav R. Acharya
Titchmarsh-Weyl Theory for Canonical Systems, Keshav R. Acharya
Computational Models for Nanosecond Laser Ablation, Harihar Khanal, David Autrique, and Vasilios Alexiades
Ermakov-Lewis Invariants and Reid Systems, S.C. Mancas and Haret C. Rosu
A Fast Algorithm for the Inversion of Quasiseparable Vandermonde-like Matrices, Sirani M. Perera, Grigory Bonik, and Vadim Olshevsky
One-Parameter Families of Supersymmetric Isospectral Potentials From Riccati Solutions in Function Composition Form, Haret C. Rosu, S.C. Mancas, and Pisin Chen
Shifted One-Parameter Supersymmetric Family of Quartic Asymmetric Double-Well Potentials, Haret C. Rosu, S.C. Mancas, and Pisin Chen
A Regression Model to Investigate the Performance of Black-Scholes using Macroeconomic Predictors, Timothy A. Smith, Ersoy Subasi, and Aliraza M. Rattansi
Not All Traces On the Circle Come From Functions of Least Gradient in the Disk, Gregory S. Spradlin and Alexandru Tamasan
Variable Viscosity Condition in the Modeling of a Slider Bearing, Kedar Nath Uprety and S.C. Mancas
Submissions from 2013
Hydrodynamic Modeling of ns-Laser Ablation, David Autrique, Vasilios Alexiades, and Harihar Khanal
Dispersive Waves in Microstructured Solids, A. Berezovski, J. Engelbrecht, A. Salupere, K. Tamm, T. Peets, and Mihhail Berezovski
Influence of Microstructure on Thermoelastic Wave Propagation, Arkadi Berezovski and Mihhail Berezovski
Time-Stepping for Laser Ablation, Harihar Khanal, David Autrique, and Vasilios Alexiades
Integrable Dissipative Nonlinear Second Order Differential Equations Via Factorizations and Abel Equations, S.C. Mancas and Haret C. Rosu
Weierstrass Traveling Wave Solutions for Dissipative Benjamin, Bona, and Mahoney (BBM) Equation, S.C. Mancas, Greg Spradlin, and Harihar Khanal
A Study of Energy Transfer of Wind and Ocean Waves, Shahrdad Sajjadi and Mason Bray
Asymptotic Multi-Layer Analysis of Wind Over Unsteady Monochromatic Surface Waves, Shahrdad Sajjadi, Julian Hunt, and Frederique Drullion
Submissions from 2012
Wave Propagation and Dispersion in Microstructured Solids, Arkadi Berezovski, Juri Engelbrecht, and Mihhail Berezovski
On the Stability of a Microstructure Model, Mihhail Berezovski and Arkadi Berezovski
ACE - A Model Centered REU Program Standing on the Three Legs of CSE: Analysis, Computation and Experiment, Hong P. Liu and Andrei Ludu
Vortex Patterns Beyond Hypergeometric, Andrei Ludu
Topology and Geometry of Mixing of Fluids, Andrei Ludu, S.C. Mancas, Ionutz Ionescu, Audrey Gbagaudi, and Matthew Schumacher
2D Novel Structures Along an Optical Fiber, Charles-Julien Vandamme and S.C. Mancas
Submissions from 2011
Dispersive Wave Equations for Solids with Microstructure, A. Berezovski, Juri Engelbrecht, and Mihhail Berezovski
Two-Scale Microstructure Dynamics, Arkadi Berezovski, Mihhail Berezovski, and Juri Engelbrecht
Waves in Microstructured Solids: A Unified Viewpoint of Modelling, Arkadi Berezovski, Juri Engelbrecht, and Mihhail Berezovski
On the Stability of a Microstructure Model, Mihhail Berezovski and Arkadi Berezovski
Interactions and Focusing of Nonlinear Water Waves, Harihar Khanal, S.C. Mancas, and Shahrdad Sajjadi
Differential Geometry of Moving Surfaces and Its Relation to Solitons, Andrei Ludu
Solitary Waves, Periodic and Elliptic Solutions to the Benjamin, Bona & Mahony (BBM) Equation Modified by Viscosity, S.C. Mancas, Harihar Khanal, and Shahrdad G. Sajjadi
Turbulence and Wave Dynamics Across Gas–Liquid Interfaces, Shahrdad Sajjadi, Julian Hunt, Stephen Belcher, Derek Stretch, and John Clegg
Submissions from 2010
Spatiotemporal Two-Dimensional Solitons in the Complex Ginzburg-Landau Equation, Florent Berard and S.C. Mancas
Waves in Materials with Microstructure: Numerical Simulation, Mihhail Berezovski, Arkadi Berezovski, and Juri Engelbrecht
Deformation Waves in Microstructured Materials: Theory and Numerics, Juri Engelbrecht, Arkadi Berezovski, and Mihhail Berezovski
Book Review: Visual Motion of Curves and Surfaces, Andrei Ludu
Analytic Treatment of Vortex States in Cylindrical Superconductors in Applied Axial Magnetic Field, Andrei Ludu, J. Van Deun, M. V. Milosevic, A. Cuyt, and F. M. Peeters
Camera Sound Trigger
Introduction: Camera Sound Trigger
In this project, I am using an Adafruit Perma-Proto Pi HAT with a Raspberry Pi 2 and a sound sensor to build a mechanism for triggering a camera by sound.
Step 1: Parts Needed
• Raspberry Pi 2
• 1 x LM393 Sound Sensor
• 1 x NPN Transistor (2N2222A)
• 1 x 1K Ohm Resistor
• 2 x 3.5mm stereo male plug solder connector
• 2 x 3.5mm panel mount jack sockets
• Wired remote trigger for the specific make and model of your camera
Step 2: Wire the Sound Sensor to 3.5mm Stereo Male Plug
The sound sensor I used is actually advertised for use with an Arduino microcontroller running at 5V. Nevertheless, it works fine with a Raspberry Pi operating at 3.3V. Whatever you do, do not connect it to a 5V source when using it with a Raspberry Pi. You will damage your Pi.
Take some jumper wires with female ends and push fit them to the sound sensor. The sound sensor has three pins, 5V (Power), OUT (Output) and GND (Ground). I have used three different colours of wire,
red for 5V, green for OUT and black for GND.
The other ends of the wires need to be soldered to the 3.5mm male plug. Cut the connectors off the ends and expose bare wire. The male plug has three tabs under its casing. Before soldering, make
sure you feed the three wires through the plug casing. Solder one wire to each tab. It doesn't matter which wire you solder to which tab at this point. You just need to make sure the wires match up correctly when soldering the plug socket later.
To finish off, I laser cut a case for the sensor out of 3mm plywood.
Step 3: Wiring Diagram
The pinholes used on the Adafruit Perma-Proto Pi Hat are 3V, GND, #20 and #21.
I use pinhole #20 for input from the sound sensor and pinhole #21 to trigger the camera.
Wire the sound sensor to the Pi Hat as follows :-
• 5V pin to any 3V pinhole (Red line)
• OUT pin to pinhole #20 (Green line)
• GND pin to any GND pinhole (Black line)
Wire Pi Hat to the camera as follows :-
• Pinhole #21 to the 1K ohm resistor
• 1K ohm resistor to the Base pin of the NPN transistor (Green line)
• Collector pin of the NPN transistor to one of the wires of camera trigger lead (Red line)
• Emitter pin of the NPN transistor to the other wire of the camera trigger lead (Black line)
The transistor is the switch that triggers the camera shutter. When the transistor is activated via the Base pin (green line from pinhole #21), it connects the Collector (red line) to the Emitter
(black line). This shorts the camera trigger wires thus firing the camera shutter.
The camera remote lead I used has 3 wires in it, white, red and yellow. Shorting red and yellow makes the camera focus but does not fire the shutter. Shorting red and white makes the camera focus and
fires the shutter. So the only wire I needed to use were red and white. You will need to work out which wires to use for your particular camera.
Step 4: Adafruit Perma-Proto Pi Hat
Solder the wires, transistor and resistor to the Pi Hat following the wiring diagram in the previous step. I am using a 3.5mm stereo plug and socket combination to connect both the sound sensor and
the camera. I plan to add different sensors to this project in the future and be able to connect different makes and models of camera. Each sensor and camera lead would have a 3.5mm plug attached and
so can be easily swapped in and out.
When soldering the socket for the sensor plug, you need to make sure the wires match up. To do this, I used the continuity function on my multi-meter to work out which tab on the socket joined to
which wire from the sensor.
Do the same for the camera trigger lead. Cut the actual button trigger hand piece off as you do not need this. Expose the wires and solder to the plug. Solder wires to the socket remembering to use
your multi-meter to work out the correct tab to use.
To finish, I made another case to house the Raspberry Pi, the Pi Hat and the two 3.5mm sockets.
Step 5: Raspbian and Python 3
I am running the standard Raspbian build on the Pi 2, which comes with Python 3 by default. The program has to run in super user mode, so to fire up Idle 3 (the Python editor), open a terminal and enter sudo idle3.
Enter the following Python program :-
import time
import RPi.GPIO as GPIO             ## Import GPIO library

GPIO.setmode(GPIO.BOARD)            ## Use physical board pin numbering
GPIO.setup(38, GPIO.IN)             ## Set board pin 38 to IN (Pi HAT pin #20)
GPIO.setup(40, GPIO.OUT)            ## Set board pin 40 to OUT (Pi HAT pin #21)
GPIO.output(40, False)              ## Output default to off
outputPinOn = False                 ## Tracks whether the trigger output is on

while True:
    if GPIO.input(38) == False:     ## If sound detected (sensor output goes LOW)
        if not outputPinOn:
            GPIO.output(40, True)   ## Switch the transistor, firing the shutter
            outputPinOn = True
            time.sleep(0.5)         ## Hold the trigger briefly
    else:
        if outputPinOn:
            GPIO.output(40, False)  ## Release the trigger
            outputPinOn = False
Note: The sound sensor I am using is HIGH when there is no sound and outputs LOW when a sound is detected. This is a little counter-intuitive, I think. The implication is that in our code we need to test whether the input pin is FALSE to check if a sound has been detected.
Step 6: Testing Setup
Here I have the sensor plugged into the Input socket (though you can't see it in the photo!) and the camera plugged into the Output socket. I pre-focus the camera and then switch it to Manual Focus
mode. I launch Idle 3 and run the Python code. The equipment is now set for sound activated photography.
Test 1 - Pool Ball
The first test is a pool ball bouncing off the table. The sound of the ball hitting the table triggers the camera shutter.
Test 2 - Coin and water
The second test was to capture splashing water. The sound of splashing water is too quiet to trigger the camera so I used a shallow plate of water. It is the sound of the coin hitting the plate that
triggers the camera shutter.
Step 7: Result 1 - Pool Ball
There is an ever so slight delay between the noise of the ball hitting the table and the camera shutter firing. The photos are therefore capturing the pool ball as it bounces off the table. I have
since found out the delay is actually at the camera itself. My ageing camera is slow to react to the trigger signal. You may find better results with a newer camera.
Step 8: Result 2 - Coin and Water
how to calculate energy storage w - Suppliers/Manufacturers
Learn how compressed air storage works in this illustrated animation from OurFuture.Energy. Discover more fantastic energy-related and curriculum-aligned resources...
How to calculate the energy stored in a capacitor
Physics Ninja looks at 3 ways of calculating the energy stored in a parallel plate capacitor.
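The three equivalent formulas for the energy stored in a capacitor (E = ½CV², E = ½QV, E = Q²/2C) can be sanity-checked in a few lines of Python. The component values below are arbitrary examples, not taken from the video.

```python
# Energy stored in a capacitor, three equivalent forms.
# Example values (arbitrary): C = 100 uF charged to V = 12 V.
C = 100e-6               # capacitance in farads
V = 12.0                 # voltage in volts
Q = C * V                # stored charge in coulombs

E_cv = 0.5 * C * V ** 2  # E = (1/2) C V^2, about 7.2 mJ here
E_qv = 0.5 * Q * V       # E = (1/2) Q V
E_qc = Q ** 2 / (2 * C)  # E = Q^2 / (2C)
```

All three expressions agree up to floating-point rounding, since they differ only by the substitution Q = CV.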
Calculating Energy of a wave if given wavelength
This lesson explains how to calculate the energy of a light wave when the wavelength of the wave is known.
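The photon-energy relation E = hc/λ described above can be computed directly. The 500 nm wavelength below is just an illustrative choice.

```python
h = 6.62607015e-34  # Planck constant, J*s
c = 2.99792458e8    # speed of light, m/s

def photon_energy(wavelength_m):
    """Energy of a single photon of the given wavelength, E = h * c / lambda."""
    return h * c / wavelength_m

E_green = photon_energy(500e-9)  # 500 nm (green light), roughly 4e-19 J
```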
Ferroelectric materials as energy storage devices
About how to calculate energy storage w - Suppliers/Manufacturers
As the photovoltaic (PV) industry continues to evolve, advancements in how to calculate energy storage w - Suppliers/Manufacturers have become critical to optimizing the utilization of renewable
energy sources. From innovative battery technologies to intelligent energy management systems, these solutions are transforming the way we store and distribute solar-generated electricity.
When you're looking for the latest and most efficient how to calculate energy storage w - Suppliers/Manufacturers for your PV project, our website offers a comprehensive selection of cutting-edge
products designed to meet your specific requirements. Whether you're a renewable energy developer, utility company, or commercial enterprise looking to reduce your carbon footprint, we have the
solutions to help you harness the full potential of solar energy.
By interacting with our online customer service, you'll gain a deep understanding of the various how to calculate energy storage w - Suppliers/Manufacturers featured in our extensive catalog, such as
high-efficiency storage batteries and intelligent energy management systems, and how they work together to provide a stable and reliable power supply for your PV projects.
The twelve fold way
I really struggle to remember the combinatorial rules. Do I need to use choose or do I need to use permute? How do I calculate each of them? Is this a case of stars and bars, or is this something else entirely?
One of the things I want to add to my chatbot program is a way to ask which of the many systems should I be using? I did once try to build a javascript tool although I don’t think it ever really made
itself into a useful state.
I recently found a paper called “The twelvefold way” which explains all the different ways you put balls in boxes (or urns as is used in the paper).
It really helped me to understand, at least a little better, how to calculate the different permutations of the problem. With this newfound understanding, I once again had a go at producing a program
to calculate each, with a simple form-based input.
One of the things that I wanted to extend this to doing was informing me of what the calculation method was. I really like the way google now lends a hand when you ask it unit conversion problems.
The number of n-tuples of k things
The problem of how many ways there are to arrange b labeled balls in u labelled urns is equivalent to the problem of the number of b-tuples of u things.
There are u places for the first ball, u for the second, and so on. There are therefore $u^b$ possibilities.
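This count is easy to verify by brute force, since each placement of labelled balls in labelled urns is exactly a b-tuple over the urns. The sizes b = 3 and u = 4 below are arbitrary examples:

```python
from itertools import product

b, u = 3, 4  # example sizes: 3 labelled balls, 4 labelled urns

# A placement assigns each ball an urn, i.e. a b-tuple over {0, ..., u-1}.
placements = list(product(range(u), repeat=b))
count = len(placements)  # should equal u ** b
```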
At least one labelled ball in each of the labelled urns
We first assume that the urns are unlabeled, this gives us the formula from the Stirling numbers of the second kind.
Since the urns can be permuted in u! ways among themselves, the number of possibilities for labelled urns is u! times as many as when the urns are unlabeled. We can therefore simply multiply the
Stirling number by the factorial of the number of urns.
$u!S(b,u) = u!\begin{Bmatrix}b\\u \end{Bmatrix}$
The nth falling factorial of k
The number of ways to place b labelled balls in u labelled urns, with at most one ball per urn, is calculated as follows. There are u choices for where to place the first ball, u-1 for the second, and so forth, giving the falling factorial $u(u-1)\cdots(u-b+1)=\frac{u!}{(u-b)!}$.
Unlabeled balls, labelled urns, at least one ball per urn
To calculate the number of ways to put b unlabeled balls into u labelled urns, put one ball in each urn. The problem is now how many ways with any number of the remaining b-u balls in each urn. This
is choosing without replacement of u urns for b-u balls.
$\begin{pmatrix}\begin{pmatrix}u\\b-u\end{pmatrix}\end{pmatrix} =\begin{pmatrix}b-1\\u-1\end{pmatrix}=\frac{(b-1)!}{(u-1)!(b-u)!}$
Choosing with replacement
The number of ways to distribute any number of b unlabeled balls into u urns is the same as asking how many ways can you chose k items from a set of size n with replacement. This is because for each
ball one can choose which urn to put it in. since an urn can be selected several times this is choosing with replacement. (It is important to think of choosing urns, not balls). Using a double
bracket notation for choosing with replacement, and a single bracket for choosing without replacement.
$\begin{pmatrix}n\\k\end{pmatrix}\ The\ number\ of\ ways\ to\ chose\ k\ things\ from\ a\ set\ of\ n\ \bold{without}\ replacement\\\\ \begin{pmatrix}\begin{pmatrix}n\\k\end{pmatrix}\end{pmatrix}\ The\
number\ of\ ways\ to\ chose\ k\ things\ from\ a\ set\ of\ n\ \bold{with}\ replacement\\\\$
It turns out that:
$\begin{pmatrix}\begin{pmatrix}n\\k\end{pmatrix}\end{pmatrix} =\begin{pmatrix}n+k-1\\k\end{pmatrix}$
This is really well explained by the Numberphile video on stars and bars and bagels.
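The identity can also be checked by direct enumeration, since `itertools.combinations_with_replacement` generates exactly the choices with replacement. The sizes n = 4 and k = 3 here are arbitrary:

```python
from itertools import combinations_with_replacement
from math import comb

n, k = 4, 3  # example: choose k = 3 items from a set of n = 4, with replacement

# Enumerate every multiset of size k drawn from n labelled items.
multisets = list(combinations_with_replacement(range(n), k))
count = len(multisets)  # should equal C(n + k - 1, k), the stars-and-bars count
```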
If there can be no more than one ball in each labelled urn, the problem becomes which urns the balls go into. There are $\binom{u}{b}$ ways to choose b urns out of a total of u urns. This is choosing without replacement.
You can derive this by thinking about how you might place the balls. If we have 3 balls and 4 urns, we have 4 places to put the first ball, 3 places to put the second and 2 places to put the third. In general, we will have u places to put the first ball, u-1 places for the second and so on:
$u(u-1)\cdots(u-b+1)=\frac{u!}{(u-b)!}$
The order in which we pick the balls doesn't matter, though. There are b! orders in which we can pick the balls, so this counts b! times more arrangements than we need. We can simply divide these out to get the binomial coefficient:
$\binom{u}{b}=\frac{u!}{b!(u-b)!}$
Labelled balls, unlabeled urns, any number of balls per urn
The number of ways to place b labelled balls into u unlabeled urns, such that there are any number of balls per urn, is related to the number of ways to place labelled balls in unlabeled urns such
that there is at least one. Each example where there are n empty urns can be thought of as the problem of placing balls into urns such that there is at least one ball per urn in u-n urns.
To calculate this, we simply sum, over every possible number of non-empty urns, the number of ways to place the balls with at least one ball per urn. This leads to the following formula:
$\sum_{j=0}^{u} S(b,j)$
Where the function S is the Stirling number of the second kind, as explained below.
Stirling numbers of the second kind
To work out the number of ways to place labelled balls into unlabeled urns such that there is at least one ball per urn is calculated using the Stirling numbers of the second kind. James Stirling was
a mathematician who did a lot of the founding work on combinatorics and coined the term the twelvefold way.
The Stirling number of the second kind S(n,k) is the number of ways to partition n objects into k non-empty sets. For example, there are 3 ways to partition 3 things into 2 sets: {1,2}{3}, {1,3}{2} and {2,3}{1}.
The formula for the Stirling numbers is given recursively as follows:
$S(n,k) = \begin{Bmatrix} n\\ k \end{Bmatrix}=k\begin{Bmatrix}n-1\\k \end{Bmatrix}+ \begin{Bmatrix}n-1\\k-1 \end{Bmatrix}\\\\ With\ the\ base\ cases:\\\\ \begin{Bmatrix}n\\n \end{Bmatrix}=1\\\\ \forall (k>n) \bullet \begin{Bmatrix}n\\k \end{Bmatrix} =0\\\\ \forall (n\geq 1)\bullet \begin{Bmatrix}n\\1 \end{Bmatrix} = 1$
The base cases read:
• If n and k are the same S(n,k) will be 1
• For all cases where k is larger than n S(nk) will be zero
• For all cases where n is greater than or equal to 1 and k is 1. S(n,k) will be 1.
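The recursion and base cases above translate directly into a short memoized Python function (my sketch, not code from the original post):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling number of the second kind: ways to partition n labelled
    items into k non-empty unlabelled blocks, via the recursion above."""
    if n == k:
        return 1          # includes the base case S(0, 0) = 1
    if k > n or k <= 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)
```

For example, `stirling2(3, 2)` returns 3, matching the three partitions of 3 things into 2 sets listed earlier.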
Unlabeled and unrestricted
For the case where the number of balls per urn is unrestricted and both the balls and urns are unlabeled, this becomes similar to the partitions problem. The difference here is that we can leave some of the urns empty. In the case where we leave n urns empty, the problem is the same as having at least one ball in u-n urns.
To calculate how many ways there are, we therefore sum over leaving different numbers of urns empty:
$\sum_{k=1}^{u} p_k(b)$
Where p is the partitions function as defined below.
The question of how many ways b unlabeled balls can go into u unlabeled urns, such that there is at least one ball per urn, turns out to be the same question as how many ways can you write n as the
sum of k positive integers. Numberphile did a great video on this using lego.
The way to think about this is recursively, and that is how the formula is defined:
$p_k(n)=p_{k-1}(n-1) + p_k(n-k)\\ With\ the\ base\ cases:\\ p_0(0)=1\\ p_k(n)=0\ for\ (k\leq 0\vee n\leq0)\wedge \overline{(k=0\wedge n=0 )}$
The second base case reads that the function will be 0 for all k and n if either k or n are less than or equal to zero. This has the exception of if the other base case is met, where both k and n are
zero, in that case, the first base case takes precedence and the value of the function is 1.
At most one ball per unlabeled urn
For any situation where the urns are unlabeled and there is the condition that there must be no more than one ball per urn, then depending on the number of balls and urns, it can either be done, or
it can't.
If there are more balls than urns, $b>u$ there is no way to place all the balls such that there is no more than one ball per urn.
In the case where the number of balls equals the number of urns, $b=u$, each ball simply goes in an urn. Since the urns are not labelled, it is irrelevant whether the balls are labelled; there is exactly one arrangement.
In the final case, where the number of balls is fewer than the number of urns, $b<u$, since the urns are unlabeled there is only one way to place the balls: put one per urn until you run out of balls, and leave the other urns empty.
Jointly Distributed Random Variables: 11 Important Facts
Jointly distributed random variables
Jointly distributed random variables are two or more random variables whose probabilities are specified jointly. In other words, in experiments where the different outcomes have a common probability, the random variables are said to be jointly distributed, or to have a joint distribution. This situation occurs frequently when dealing with problems of chance.
Joint distribution function | Joint Cumulative probability distribution function | joint probability mass function | joint probability density function
For the random variables X and Y the distribution function or joint cumulative distribution function is
$F(a,b)= P\left \{ X\leq a, Y\leq b \right \} \ \ , \ \ -\infty< a , b< \infty$
where the nature of the joint probability depends on whether the random variables X and Y are discrete or continuous, and the individual distribution functions for X and Y can be obtained using this joint cumulative distribution function as
$F_{X}(a)=P \left \{ { X\leq a } \right \} \\ = P \left \{ X\leq a, Y< \infty \right \} \\ =P\left ( \lim_{b \to \infty} X\leq a, Y< b \right ) \\ =\lim_{b \to \infty} P \left \{ X\leq a, Y\leq
b \right \} \\ = \lim_{b \to \infty} F(a,b) \\ \equiv F(a, \infty)$
similarly for Y as
$F_{Y} (b)=P\left \{ Y\leq b \right \} \\ =\lim_{a \to \infty} F(a,b) \\ \equiv F(\infty, b)$
these individual distribution functions of X and Y are known as marginal distribution functions when a joint distribution is under consideration. These distributions are very helpful for getting probabilities like
$P\left \{ a_{1}< X\leq a_{2},\ b_{1}< Y\leq b_{2} \right \}=F(a_{2},b_{2})-F(a_{1},b_{2})-F(a_{2},b_{1})+F(a_{1},b_{1})$
and in addition the joint probability mass function for the random variables X and Y is defined as
$p(x,y)=P { X=x, Y=y }$
the individual probability mass or density functions for X and Y can be obtained with the help of such joint probability mass or density function like in terms of discrete random variables as
$p_{X}(x)=P \left \{ X=x \right \} =\sum_{y:p(x,y)> 0}p(x,y) \\ p_{Y}(y)=P \left \{ Y=y \right \} =\sum_{x:p(x,y)> 0}p(x,y)$
and in terms of continuous random variable the joint probability density function will be
$P { (X,Y)\in C }=\int_{(x,y)\in C}^{}\int f(x,y)dxdy$
where C is any two-dimensional region, and the joint distribution function for continuous random variables will be
$F(a,b)=\int_{-\infty}^{b}\int_{-\infty}^{a}f(x,y)dxdy$
the probability density function from this distribution function can be obtained by differentiating
$f(a,b)=\frac{\partial^2 }{\partial a \partial b} F(a,b)$
and the marginal probability from the joint probability density function
$P { X\in A }=P { X\in A,Y\in (-\infty,\infty) } \ =\int_{A}^{}\int_{-\infty}^{\infty}f(x,y)dydx \ =\int_{A}^{}f_{X}(x)dx$
with respect to the random variables X and Y respectively
Examples on Joint distribution
1. Find the joint probabilities for the random variables X and Y representing the numbers of mathematics and statistics books when 3 books are taken at random from a set containing 3 mathematics, 4 statistics and 5 physics books.
$p(0,0)=\binom{5}{3}/\binom{12}{3}=\frac{10}{220} \\ p(0,1)=\binom{4}{1} \binom{5}{2}/\binom{12}{3}=\frac{40}{220} \\ p(0,2)=\binom{4}{2} \binom{5}{1}/\binom{12}{3}=\frac{30}{220} \\ p(0,3)=\binom{4}
{3}/\binom{12}{3}=\frac{4}{220} \\ p(1,0)=\binom{3}{1} \binom{5}{2}/\binom{12}{3}=\frac{30}{220} \\ p(1,1)=\binom{3}{1} \binom{4}{1} \binom{5}{1}/\binom{12}{3}=\frac{60}{220} \\ p(1,2)=\binom{3}{1} \
binom{4}{2}/\binom{12}{3}=\frac{18}{220} \\ p(2,0)=\binom{3}{2} \binom{5}{1}/\binom{12}{3}=\frac{15}{220} \\ p(2,1)=\binom{3}{2} \binom{4}{1}/\binom{12}{3}=\frac{12}{220} \\ p(3,0)=\binom{3}{3}/\binom{12}{3}=\frac{1}{220}$
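These hypergeometric values can be verified with `math.comb` (a quick verification sketch of mine):

```python
from math import comb

total = comb(12, 3)  # ways to pick 3 books from the 12

def pmf(x, y):
    """Joint pmf: x mathematics books (of 3), y statistics books (of 4),
    and the remaining 3 - x - y physics books (of 5)."""
    z = 3 - x - y
    if x < 0 or y < 0 or z < 0:
        return 0.0
    return comb(3, x) * comb(4, y) * comb(5, z) / total
```

Summing the pmf over all feasible pairs gives 1, as it must.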
• Find the joint probability mass function of the numbers of boys and girls for a family chosen at random from a sample in which 15% of families have no children, 20% have 1 child, 35% have 2 children and 30% have 3 children, assuming each child is equally likely to be a boy or a girl.
The joint probability we will find by using the definition as
Jointly distributed random variables : Example
and this we can illustrate in the tabular form as follows
Jointly distributed random variables : Example of joint distribution
• Calculate the probabilities
$(a) \ P \left \{ X> 1, Y< 1 \right \} , \ \ (b) \ P \left \{ X< Y \right \}, \ and \ \ (c) \ P \left \{ X< a \right \}$
if for the random variables X and Y the joint probability density function is given by
$f(x,y) = \begin{cases} 2e^{-x}e^{-2y} & 0< x< \infty , \ \ 0< y< \infty \\ 0 & \text{otherwise} \end{cases}$
with the help of definition of joint probability for continuous random variable
and the given joint density function the first probability for the given range will be
$P { X> 1,Y< 1 }=\int_{0}^{1}\int_{1}^{\infty}2e^{-x} e^{-2y} dxdy$
$=\int_{0}^{1}2e^{-2y} \left ( -e^{-x}\Big|_{1}^{\infty} \right )dy =e^{-1}\int_{0}^{1}2e^{-2y}dy =e^{-1}\left ( 1-e^{-2} \right )$
in the similar way the probability
$P \left \{ X< Y \right \}=\int_{(x,y):x< y}\int 2e^{-x}e^{-2y}dxdy$
$=\int_{0}^{\infty}2e^{-2y}dy - \int_{0}^{\infty}2e^{-3y}dy =1-\frac{2}{3}=\frac{1}{3}$
and finally
$P\left \{ X< a \right \}=\int_{0}^{a}\int_{0}^{\infty}2e^{-2y}e^{-x}dydx =\int_{0}^{a}e^{-x}dx =1-e^{-a}$
• Find the joint density function for the quotient X/Y of random variables X and Y if their joint probability density function is
$f(x,y) = \begin{cases} e^{-(x+y)} \ \ 0< x< \infty , \ \ 0< y< \infty \\ \ 0 &\text{otherwise} \end{cases}$
To find the probability density function for the function X/Y we first find the joint distribution function then we will differentiate the obtained result,
so by the definition of joint distribution function and given probability density function we have
$F_{X}/_{Y}(a)=P { \frac{X}{Y}\leq a }$
$=\int_{\frac{X}{Y}\leq a}^{}\int e^{-(x+y)}dxdy$
$=\int_{0}^{\infty}\int_{0}^{ay} e^{-(x+y)}dxdy =\int_{0}^{\infty}\left ( 1-e^{-ay} \right )e^{-y}dy =\left [ -e^{-y}+\frac{e^{-(a+1)y}}{a+1} \right ]_{0}^{\infty} =1-\frac{1}{a+1}$
thus by differentiating this distribution function with respect to a we will get the density function as
$f_{X/Y}(a)=\frac{1}{(a+1)^{2}}$
where a is within zero to infinity.
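A quick Monte Carlo check of this result: for independent Exp(1) variables, the derived cdf gives F_{X/Y}(1) = 1 - 1/(1+1) = 1/2. The sample size and seed below are arbitrary choices of mine.

```python
import random

random.seed(12345)
n = 200_000

# X, Y independent Exp(1); X/Y <= 1 is the same event as X <= Y.
hits = sum(random.expovariate(1.0) <= random.expovariate(1.0) for _ in range(n))
estimate = hits / n  # should be close to F_{X/Y}(1) = 1/2
```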
Independent random variables and joint distribution
In the joint distribution the probability for two random variable X and Y is said to be independent if
$P{ X \in A, Y \in B } =P { X \in A } P { Y \in B }$
where A and B are any real sets. As with events, random variables are independent exactly when the events defined in terms of them are independent.
Thus for any values of a and b
$P { X\leq a, Y\leq b } =P {X\leq a }P {Y\leq b }$
and the joint distribution or cumulative distribution function for the independent random variables X and Y will be
$F(a,b)=F_{X}(a)F_{Y}(b) \ \ for \ \ all \ \ a,b$
if we consider the discrete random variables X and Y then
$p(x,y)=p_{X}(x)p_{Y}(y) \ \ for \ \ all \ \ x,y$
$P { X\in A, Y\in B } =\sum_{y\in B}^{}\sum_{x \in A}^{}p(x,y)$
$=\sum_{y\in B}^{}\sum_{x \in A}^{}p_{X}(x)p_{Y}(y)$
$=\sum_{y\in B}p_{Y}(y) \sum_{x\in A}p_{X}(x)$
$= P { Y \in B } P { X \in A }$
similarly for the continuous random variable also
$f(x,y)=f_{X}(x)f_{Y}(y) \ \ for \ \ all \ \ x,y$
Example of independent joint distribution
1. If for a specific day in a hospital the patients entered are poisson distributed with parameter λ and probability of male patient as p and probability of female patient as (1-p) then show that
the number of male patients and female patients entered in the hospital are independent poisson random variables with parameters λp and λ(1-p) ?
consider the number of male and female patients by random variable X and Y then
$P \left \{ X=i, Y=j \right \}= P \left \{ X=i, Y=j|X+Y=i+j \right \}P \left \{ X+Y=i+j \right \}+P \left \{ X=i,Y=j|X+Y \neq i+j \right \}P \left \{ X+Y \neq i+j \right \}$
$P{ X=i, Y=j }= P{ X=i, Y=j|X +Y=i+j }P { X+Y=i+j }$
as X+Y are the total number of patients entered in the hospital which is poisson distributed so
$P { X+Y=i+j }=e^{-\lambda }\frac{\lambda ^{i+j}}{(i+j)!}$
since each patient is male with probability p and female with probability (1-p), the number of male patients among a fixed total of i+j patients follows a binomial probability, so
$P { X=i, Y=j|X + Y=i+j}=\binom{i+j}{i}p^{i}(1-p)^{j}$
using these two values we will get the above joint probability as
$P{ X=i, Y=j}=\binom{i+j}{i}p^{i}(1-p)^{j}e^{-\lambda} \frac{\lambda ^{i+j}}{(i+j)!}$
$=e^{-\lambda} \frac{(\lambda p)^{i}}{i! j!}\left [ \lambda (1-p) \right ]^{j}$
$=e^{-\lambda p} \frac{(\lambda p)^i}{i!} e^{-\lambda (1-p)} \frac{\left [ \lambda (1-p) \right ]^{j}}{j!}$
thus the marginal probabilities for the numbers of male and female patients are
$P{ X=i } =e^{-\lambda p} \frac{(\lambda p)^i}{i!} \sum_{j} e^{-\lambda (1-p)} \frac{\left [ \lambda (1-p) \right ]^{j}}{j!} = e^{-\lambda p} \frac{(\lambda p)^i}{i!}$
$P { Y=j } =e^{-\lambda (1-p)} \frac{\left [ \lambda (1-p) \right ]^{j}}{j!}$
which shows both of them are poisson random variables with the parameters λp and λ(1-p).
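This splitting property is easy to check numerically. The sketch below is my own illustration (the values λ = 6 and p = 0.4 are arbitrary, not from the article): it thins simulated Poisson totals into male/female counts and compares the empirical means and covariance with the claimed Poisson(λp) and Poisson(λ(1-p)) behaviour.

```python
import numpy as np

# Thin a Poisson arrival count into two categories and check that
# X behaves like Poisson(lam*p), Y like Poisson(lam*(1-p)),
# and that X and Y are (approximately) uncorrelated.
rng = np.random.default_rng(0)
lam, p, trials = 6.0, 0.4, 200_000

n = rng.poisson(lam, size=trials)   # total patients per day
x = rng.binomial(n, p)              # male patients (each male w.p. p)
y = n - x                           # female patients

print(x.mean())            # should be close to lam*p = 2.4
print(y.mean())            # should be close to lam*(1-p) = 3.6
print(x.var())             # Poisson => variance close to its mean
print(np.cov(x, y)[0, 1])  # independence => covariance close to 0
```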
2. Find the probability that a person waits more than ten minutes for a client at a meeting, given that the person and the client each arrive at a time uniformly distributed between 12 and 1 pm, independently of each other.
Let the random variables X and Y denote the arrival times (in minutes after 12) of the person and the client; the desired probability is $P \{ X+10< Y \} + P \{ Y+10< X \}$, which by symmetry equals
$=2 \int_{X+10 < Y} \int f_{X}(x) f_{Y}(y)dxdy$
$=2 \int_{10}^{60} \int_{0}^{y-10} \left (\frac{1}{60}\right )^{2} dxdy$
$=\frac{2}{(60)^{2}}\int_{10}^{60} (y-10)dy =\frac{25}{36}$
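Evaluating the integral gives 25/36 ≈ 0.694. A quick Monte Carlo check (my own sketch): the first arrival waits more than ten minutes exactly when |X - Y| > 10, with both arrival times in minutes after 12.

```python
import numpy as np

# Estimate P{|X - Y| > 10} for X, Y ~ Uniform(0, 60) independent,
# and compare against the exact value 25/36.
rng = np.random.default_rng(1)
x = rng.uniform(0, 60, 500_000)
y = rng.uniform(0, 60, 500_000)
est = np.mean(np.abs(x - y) > 10)
print(est, 25 / 36)   # estimate should be close to 0.6944
```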
3. Calculate the probability
$P { X\geq YZ }$
where X, Y and Z are independent uniform random variables over the interval (0,1).
Here the probability will be
$P { X\geq YZ } = \int \int_{x\geq yz}\int f_{X,Y,Z} (x,y,z) dxdydz$
for the uniform distribution the density function
$f_{X,Y,Z}(x,y,z)=f_{X} (x) f_{Y}(y) f_{Z}(z) =1, \ 0\leq x\leq 1, \ \ 0\leq y\leq 1, \ \ 0\leq z\leq 1$
over this range, so the probability is
$=\int_{0}^{1}\int_{0}^{1}\int_{yz}^{1} dxdydz$
$=\int_{0}^{1}\int_{0}^{1} (1-yz) dydz$
$=\int_{0}^{1}\left ( 1-\frac{z}{2} \right ) dz =\frac{3}{4}$
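The remaining integral evaluates to 3/4, which a short simulation confirms (an illustrative sketch, not from the original text):

```python
import numpy as np

# Estimate P{X >= Y*Z} for independent X, Y, Z ~ Uniform(0, 1);
# the triple integral above gives exactly 3/4.
rng = np.random.default_rng(2)
x, y, z = rng.uniform(size=(3, 500_000))
est = np.mean(x >= y * z)
print(est)   # close to 0.75
```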
For the sum of independent continuous random variables X and Y with probability density functions $f_{X}$ and $f_{Y}$, the cumulative distribution function of X+Y is
$F_{X+Y}(a)= P \{ X+Y\leq a \}$
$= \int_{x+y\leq a}\int f_{X} (x)f_{Y}(y)dxdy$
$= \int_{-\infty}^{\infty}\int_{-\infty}^{a-y} f_{X}(x)f_{Y}(y)dxdy$
$= \int_{-\infty}^{\infty}\int_{-\infty}^{a-y} f_{X}(x) dx f_{Y}(y)dy$
$= \int_{-\infty}^{\infty} F_{X} (a-y) f_{Y}(y)dy$
by differentiating this cumulative distribution function, the probability density function of the sum of these independent variables is
$f_{X+Y} (a)=\frac{\mathrm{d} }{\mathrm{d} a}\int_{-\infty}^{\infty} F_{X} (a-y)f_{Y} (y)dy$
$f_{X+Y} (a)=\int_{-\infty}^{\infty} \frac{\mathrm{d} }{\mathrm{d} a} F_{X} (a-y)f_{Y} (y)dy$
$=\int_{-\infty}^{\infty} f_{X} (a-y)f_{Y} (y)dy$
Using these two results, we now examine the sums of some common independent continuous random variables.
sum of independent uniform random variables
for the random variables X and Y uniformly distributed over the interval (0,1), the probability density function of each of these independent variables is
$f_{X}(a)=f_{Y}(a) = \begin{cases} 1 & \ 0< a< 1 \\ \ \ 0 & \text{ otherwise } \end{cases}$
so for the sum X+Y we have
$f_{X+Y}(a) = \int_{0}^{1}f_{X}(a-y)dy$
for any value of a between zero and one
$f_{X+Y}(a)= \int_{0}^{a}dy =a$
and if we restrict a to between one and two it will be
$f_{X+Y}(a)= \int_{a-1}^{a}dy =2-a$
this gives the triangular shape density function
$f_{X+Y}(a) = \begin{cases} \ a & 0\leq a \leq 1 \\ \ 2-a & \ 1< a< 2 \\ \ 0 & \text{ otherwise } \end{cases}$
if we generalize to n independent uniform random variables $X_{1},...,X_{n}$, then their distribution function
$F_{n}(x)=P\left ( X_{1} + \cdots + X_{n} \leq x \right )$
by mathematical induction will be
$F_{n}(x)=\frac{x^{n}}{n!} , 0\leq x\leq 1$
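The induction formula $F_{n}(x)=x^{n}/n!$ can be checked against simulation; in the sketch below, n = 3 and x = 0.9 are arbitrary choices of mine.

```python
import math
import numpy as np

# Compare the empirical CDF of a sum of three Uniform(0, 1) variables
# at x0 = 0.9 with the formula x0**n / n! (valid for 0 <= x0 <= 1).
rng = np.random.default_rng(3)
n, x0 = 3, 0.9
s = rng.uniform(size=(500_000, n)).sum(axis=1)
emp = np.mean(s <= x0)
exact = x0**n / math.factorial(n)   # 0.9**3 / 6 = 0.1215
print(emp, exact)
```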
sum of independent Gamma random variables
If X and Y are two independent gamma random variables with parameters (s, λ) and (t, λ), where the gamma density with parameter (t, λ) is
$f(y)= \frac{\lambda e^{-\lambda y}(\lambda y)^{t-1}}{\Gamma (t)} \ \ , 0< y< \infty$
then by the convolution formula the density for the sum of these independent gamma random variables is
$f_{X+Y}(a)=\frac{1}{\Gamma (s)\Gamma (t)}\int_{0}^{a}\lambda e^{-\lambda (a-y)}\left [ \lambda (a-y) \right ]^{s-1}\lambda e^{-\lambda y} (\lambda y)^{t-1}dy$
$=K e^{-\lambda a} \int_{0}^{a}\left [ (a-y) \right ]^{s-1}(y)^{t-1}dy$
$=K e^{-\lambda a} a^{s+t-1} \int_{0}^{1} (1-x)^{s-1}x^{t-1} dx \ \ by \ \ letting \ \ x=\frac{y}{a}$
$=C e^{-\lambda a} a^{s+t-1}$
where C is a constant; since the density must integrate to one, $C=\frac{\lambda ^{s+t}}{\Gamma (s+t)}$, which gives
$f_{X+Y}(a)=\frac{\lambda e^{-\lambda a} (\lambda a)^{s+t-1}}{\Gamma (s+t)}$
this shows the density function for the sum of gamma random variables which are independent
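As a sanity check (a sketch with illustrative parameter values, not part of the original derivation), one can verify that the simulated sum of two independent gamma variables matches the first two moments of a gamma with shape s+t. Note that NumPy parameterizes its gamma sampler by scale = 1/λ.

```python
import numpy as np

# Gamma(s, lam) + Gamma(t, lam) should behave like Gamma(s+t, lam):
# mean (s+t)/lam and variance (s+t)/lam**2.
rng = np.random.default_rng(4)
s, t, lam, trials = 2.0, 3.5, 1.5, 400_000
total = rng.gamma(s, 1 / lam, trials) + rng.gamma(t, 1 / lam, trials)
print(total.mean(), (s + t) / lam)     # both close to 3.6667
print(total.var(), (s + t) / lam**2)   # both close to 2.4444
```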
sum of independent exponential random variables
In a similar way to the gamma case, we can obtain the density and distribution function for the sum of independent exponential random variables by assigning the specific parameter values of the gamma random variables (an exponential with rate λ is a gamma with parameters (1, λ)).
Sum of independent normal random variables
If we have n independent normal random variables $X_i$, i=1,2,...,n, with respective means $\mu_i$ and variances $\sigma_i^{2}$, then their sum is also a normal random variable with mean $\sum \mu_i$ and variance $\sum \sigma_i^{2}$.
We first show the result for two independent normal random variables: X with parameters 0 and $\sigma^{2}$, and Y with parameters 0 and 1. To find the probability density function of the sum X+Y, write
$c=\frac{1}{2\sigma ^{2}} +\frac{1}{2} =\frac{1+\sigma ^{2}}{2\sigma ^{2}}$
Then, using the definition of the normal density in the convolution integrand,
$f_{X}(a-y)f_{Y}(y)=\frac{1}{\sqrt{2\pi }\sigma } \exp\left \{ -\frac{(a-y)^{2}}{2\sigma ^{2}} \right \}\frac{1}{\sqrt{2\pi }}\exp\left \{ -\frac{y^{2}}{2} \right \}$
$=\frac{1}{2\pi \sigma } \exp\left \{ -\frac{a^{2}}{2\sigma ^{2}} \right \} \exp\left \{ -c \left ( y^{2} -2y\frac{a}{1+\sigma ^{2}} \right ) \right \}$
thus the density function will be
$f_{X+Y}(a)=\frac{1}{2\pi \sigma }\exp\left \{ -\frac{a^{2}}{2\sigma ^{2}} \right \} \exp\left \{ \frac{a^{2}}{2\sigma ^{2}(1+\sigma ^{2})} \right \} \times \int_{-\infty}^{\infty} \exp\left \{ -c \left ( y-\frac{a}{1+\sigma ^{2}} \right )^{2} \right \} dy$
$=\frac{1}{2\pi \sigma } \exp\left \{ - \frac{a^{2}}{2(1+\sigma ^{2})} \right \} \int_{-\infty}^{\infty} \exp\left \{ -cx^{2} \right \} dx$
$=C \exp\left \{ -\frac{a^{2}}{2(1+\sigma ^{2})} \right \}$
which is nothing but the density function of a normal distribution with mean 0 and variance $(1+\sigma^{2})$. Following the same argument we can write
$X_{1} + X_{2}=\sigma_{2}\left ( \frac{X_{1}-\mu_{1}}{\sigma_{2}}+\frac{X_{2}-\mu_{2}}{\sigma_{2}} \right ) +\mu_{1} +\mu_{2}$
with the usual means and variances. Expanding this expression and applying the result above shows that the sum is normally distributed with mean equal to the sum of the respective means and variance equal to the sum of the respective variances; in the same way the nth sum will be a normally distributed random variable with mean $\sum \mu_i$ and variance $\sum \sigma_i^{2}$.
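A minimal numerical illustration of the mean and variance addition for two independent normals (the parameter values below are mine, chosen arbitrarily):

```python
import numpy as np

# The sum of independent N(mu1, sig1**2) and N(mu2, sig2**2)
# should be N(mu1 + mu2, sig1**2 + sig2**2).
rng = np.random.default_rng(5)
mu1, sig1, mu2, sig2 = 1.0, 2.0, -0.5, 0.5
total = rng.normal(mu1, sig1, 400_000) + rng.normal(mu2, sig2, 400_000)
print(total.mean())   # close to mu1 + mu2 = 0.5
print(total.var())    # close to sig1**2 + sig2**2 = 4.25
```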
Sums of independent Poisson random variables
If we have two independent Poisson random variables X and Y with parameters $\lambda_{1}$ and $\lambda_{2}$, then their sum X+Y is also a Poisson random variable, i.e., Poisson distributed
since X and Y are Poisson distributed and we can write their sum as the union of disjoint events so
$P{ X+Y =n } =\sum_{k=0}^{n}P { X=k, Y=n-k }$
$=\sum_{k=0}^{n}P { X=k } P { Y=n-k }$
using the independence of the random variables, and substituting the Poisson probabilities
$=\sum_{k=0}^{n}e^{-\lambda_{1}} \frac{\lambda_{1}^{k}}{k!}e^{-\lambda_{2}}\frac{\lambda_{2}^{n-k}}{(n-k)!}$
$=e^{-(\lambda_{1}+\lambda_{2})} \sum_{k=0}^{n} \frac{\lambda_{1}^{k}\lambda_{2}^{n-k}}{k!(n-k)!}$
$=\frac{e^{-(\lambda_{1}+\lambda_{2})}}{n!}\sum_{k=0}^{n} \frac{n!}{k!(n-k)!} \lambda_{1}^{k}\lambda_{2}^{n-k}$
$=\frac{e^{-(\lambda_{1}+\lambda_{2})}}{n!} (\lambda_{1}+\lambda_{2})^{n}$
so we get the sum X+Y is also Poisson distributed with mean $\lambda_{1}+\lambda_{2}$
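The convolution identity above can also be verified exactly for particular values (lam1 = 1.3, lam2 = 2.1 and n = 4 below are arbitrary choices of mine):

```python
import math

# Check numerically that the convolution of two Poisson pmfs equals
# the Poisson pmf with the summed parameter, for one value of n.
lam1, lam2, n = 1.3, 2.1, 4

def pois(lam, k):
    # Poisson pmf: e**(-lam) * lam**k / k!
    return math.exp(-lam) * lam**k / math.factorial(k)

conv = sum(pois(lam1, k) * pois(lam2, n - k) for k in range(n + 1))
direct = pois(lam1 + lam2, n)
print(conv, direct)   # the two values agree
```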
Sums of independent binomial random variables
If we have two independent binomial random variables X and Y with parameters (n, p) and (m, p), then their sum X+Y is also a binomial random variable, with parameters (n+m, p).
let use the probability of the sum with definition of binomial as
$P { X+Y= k } =\sum_{i=0}^{n}P { X=i, Y=k-i }$
$=\sum_{i=0}^{n}P { X=i } P { Y=k-i }$
$=\sum_{i=0}^{n}\binom{n}{i}p^{i}q^{n-i}\binom{m}{k-i}p^{k-i}q^{m-k+i}$
where $q=1-p$ and $\binom{r}{j}=0$ when $j<0$,
which gives
$P { X+Y=k }=p^{k}q^{n+m-k}\sum_{i=0}^{n}\binom{n}{i}\binom{m}{k-i}$
so the sum X+Y is also binomially distributed with parameter (n+m, p).
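The binomial-coefficient sum that appears in the last step is Vandermonde's identity, $\sum_{i}\binom{n}{i}\binom{m}{k-i}=\binom{n+m}{k}$, which can be checked exactly for small n and m (the values below are illustrative):

```python
import math

# Verify Vandermonde's identity for n = 5, m = 7 and every valid k.
# math.comb(m, j) returns 0 when j > m, which handles out-of-range terms.
n, m = 5, 7
for k in range(n + m + 1):
    s = sum(math.comb(n, i) * math.comb(m, k - i)
            for i in range(0, min(n, k) + 1))
    assert s == math.comb(n + m, k)
print("Vandermonde identity verified for n=5, m=7")
```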
The concept of jointly distributed random variables, which describes distributions of more than one variable at a time, has been discussed, along with the basic concept of independent random variables defined through the joint distribution, and sums of independent variables for several distributions with their parameters. If you require further reading, go through the books mentioned below. For more posts on mathematics, please click here.
A First Course in Probability by Sheldon Ross
Schaum's Outlines of Probability and Statistics
An Introduction to Probability and Statistics by Rohatgi and Saleh
I am Dr. Mohammed Mazhar Ul Haque, an Assistant Professor of Mathematics. I have completed my Ph.D. in Mathematics and have 12 years of teaching experience, with broad knowledge of pure Mathematics, particularly Algebra, and a strong ability to design and solve problems.
I love to contribute to Lambdageeks to make Mathematics simple, interesting, and self-explanatory for beginners as well as experts.
Heat transfer by conduction across a furnace wall - EnggCyclopedia
During heat transfer by conduction across a furnace wall, heat flows through a solid material, such as the walls of a furnace, from a region of higher temperature to a region of lower temperature.
This transfer of heat occurs due to the collisions of particles within the solid material.
A furnace is a device in which heat is generated and transferred to materials with the object of bringing about physical and chemical changes. Calculating the heat transfer rate across a furnace wall
is very important for designing and improving the efficiency of furnaces.
Heat transfer - conduction
In a furnace, heat transfer through the furnace wall occurs primarily by conduction since the solid wall material is the primary barrier between the high-temperature furnace interior and the
surrounding cooler environment. Heat transfer by conduction is the transfer of heat through a material without any net motion of the material itself. In other words, it occurs through a solid, liquid
or gas when heat is transferred from a region of high temperature to a region of lower temperature by molecular collisions, without any movement of the material as a whole.
Conduction occurs when there is a temperature difference across a material, resulting in a transfer of energy from the hotter side to the colder side. The rate of heat transfer by conduction depends
on the thermal conductivity of the material, the temperature gradient, and the cross-sectional area of the material.
Thermal conductivity is a measure of a material's ability to conduct heat. Materials with high thermal conductivity, such as metals, conduct heat more easily than materials with low thermal
conductivity, such as ceramics.
The temperature gradient is the change in temperature per unit length of the material. The greater the temperature gradient, the faster the rate of heat transfer by conduction.
The cross-sectional area of the material also affects the rate of heat transfer by conduction. The larger the cross-sectional area, the greater the rate of heat transfer. The thickness of the furnace
wall also affects the rate of heat transfer by conduction. Thicker walls offer greater resistance to heat transfer, slowing the rate of transfer.
How to calculate rate of heat transfer across a furnace wall?
Problem Statement
Determine the rate of heat transfer per unit area, by means of conduction, for a furnace wall made of fire clay. The furnace wall thickness is 12" or one foot. The thermal conductivity of the furnace wall clay is 0.3 W/m·K. The furnace wall temperature can be taken to be the same as the furnace operating temperature, which is 650 °C, and the temperature of the outer wall of the furnace is 150 °C.
Solution to this sample problem is quite straightforward as demonstrated below.
As per EnggCyclopedia's heat conduction article, for heat transfer by conduction across a flat wall, the heat transfer rate is expressed by the following equation:
Q = k × A × (T1 - T2) / L
where Q is the heat transfer rate, k the thermal conductivity, A the wall area, and L the wall thickness. For the given sample problem,
Temperature at inner wall of furnace = T1 = 650 °C
Temperature at outer wall of furnace = T2 = 150 °C
Thickness of furnace wall = L = 12" = 12 × 0.0254 m = 0.3048 m
Thermal conductivity of furnace wall material = k = 0.3 W/m·K
Hence, heat transfer rate per unit area of the wall is calculated as,
Q/A = k × (T1 - T2)/L
Q/A = 0.3 × (650 - 150)/0.3048 W/m² = 492.13 W/m²
This figure multiplied by the area of the furnace wall, will determine the total heat transfer rate in Joules/sec i.e. Watt.
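The same calculation takes only a few lines of Python (a sketch; the variable names are mine):

```python
# Conduction across a flat furnace wall: Q/A = k * (T_in - T_out) / L
k = 0.3                      # thermal conductivity of fire clay, W/m·K
L = 12 * 0.0254              # wall thickness: 12 in = 0.3048 m
t_in, t_out = 650.0, 150.0   # inner/outer wall temperatures, °C

q_per_area = k * (t_in - t_out) / L   # heat flux, W/m²
print(round(q_per_area, 2))           # 492.13

def total_heat_rate(area_m2):
    """Total heat transfer rate in W for a given wall area in m²."""
    return q_per_area * area_m2
```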
NCERT Solutions for Class 10 Maths Chapter 3 Exercise 3.3: Free PDF 2024-25
NCERT Solutions for Class 10 Maths provide answers for the Mathematics textbook. You can download the Class 10 Maths NCERT Solutions for Chapter 3 Exercise 3.3 from our website for free in PDF format. The NCERT Maths Class 10 PDF, updated as per the latest CBSE syllabus, is available on the Vedantu app and website.
Vedantu's NCERT Solutions not only include all probable types of questions relevant from a board-exam perspective, but also plenty of well-solved examples and practice exercises, giving you ample scope for practice. To obtain a better understanding of the exercise problems, download NCERT Solutions for Class 10 Maths Chapter 3. With the Vedantu learning app, you can take part in free conceptual videos and live masterclasses, and access all of the free PDFs for solutions and study materials. Students can also download Class 10 Science NCERT Solutions for free in PDF format, only at Vedantu.
FAQs on NCERT Solutions for Class 10 Maths Chapter 3: Pair of Linear Equations in Two Variables - Exercise 3.3
1. What are we going to study in Chapter 3?
NCERT Solutions for Class 10 Maths Chapter 3 discuss the idea of a pair of linear equations in two variables, the graphical method of solving such a pair, and the algebraic methods of solving it: substitution, elimination, cross-multiplication, and equations reducible to a pair of linear equations in two variables. The chapter concludes with a summary that helps you revise the chapter and its concepts quickly.
2. How to reduce the fear of Maths?
Some students face difficulties when studying Maths. Vedantu, with the aid of its expert teachers, has made the subject easier to understand. The best way to learn is to move forward smartly: download the NCERT Solutions PDF now. Vedantu's NCERT Solutions are one of the most important parts of Class 10 Maths study materials. These solutions have been developed with the utmost care by qualified and professional teachers to make your exams easier.
3. How many questions are there in Exercise 3.3 of Class 10 Mathematics Chapter 3?
Chapter 3 of Class 10 Mathematics is a Pair of Linear Equations in Two Variables. This is one of the most important chapters that are to be focused on while preparing for the exams. Exercise 3.3 is
based on “Algebraic Methods Of Solving A Pair Of Linear Equations”. The exercise includes two questions. The first question has four parts, while the second one has five parts to be solved using the
substitution and elimination method.
4. How many examples are based on Exercise 3.3 of Class 10 Mathematics?
The examples provided in the NCERT book for Class 10 Maths have equal importance as the questions given in various exercises have. The questions based on the examples can also appear in the exams.
There are four examples in Chapter 3 that are based on Exercise 3.3. Students should practice these examples as well because they are important and they help students develop a better understanding
of all the concepts.
5. Is it tough to score higher marks in Class 10 Maths?
Scoring well in any subject can be made easy if students are determined to do so. Understanding all the concepts that are taught in Class 10 Mathematics is important to score higher marks in your
exams. However, understanding the concepts might seem difficult to some students. This problem can be solved by practising everything that has been taught in school. Regular practice of all exercises
in each chapter will make scoring well in Maths very easy.
6. Where can I find NCERT Solutions for Class 10 Maths Exercise 3.3?
The solution of Exercise 3.3 can be found on Vedantu for the students who need help in solving questions provided in the Class 10 Maths NCERT book. The NCERT Solutions Class 10 Maths Chapter 3
Exercise 3.3 has step-by-step solutions for all the exercise questions. Students can also refer to the official website of Vedantu to access study material related to the chapter.
7. What are the most important formulas that I need to remember in Class 10 Maths Chapter 3?
Chapter 3, Pair of Linear Equations in Two Variables, in the Class 10 Maths NCERT includes some formulas that are the basis for solving the exercise questions. For example, in the cross-multiplication method for the pair $a_1x+b_1y+c_1=0$ and $a_2x+b_2y+c_2=0$:
$x=\frac{b_1c_2-b_2c_1}{a_1b_2-a_2b_1}$
and $y=\frac{a_2c_1-a_1c_2}{a_1b_2-a_2b_1}$
These formulas vary depending on the method used for solving the given linear equations. All the formulas in the entire chapter are equally important to remember. You can also access study materials
like notes from Vedantu’s app. All the resources are free of cost.
xYplot {Hmisc} R Documentation
xyplot and dotplot with Matrix Variables to Plot Error Bars and Bands
A utility function Cbind returns the first argument as a vector and combines all other arguments into a matrix stored as an attribute called "other". The arguments can be named (e.g., Cbind(pressure=
y,ylow,yhigh)) or a label attribute may be pre-attached to the first argument. In either case, the name or label of the first argument is stored as an attribute "label" of the object returned by
Cbind. Storing other vectors as a matrix attribute facilitates plotting error bars, etc., as trellis really wants the x- and y-variables to be vectors, not matrices. If a single argument is given to
Cbind and that argument is a matrix with column dimnames, the first column is taken as the main vector and remaining columns are taken as "other". A subscript method for Cbind objects subscripts the
other matrix along with the main y vector.
The xYplot function is a substitute for xyplot that allows for simulated multi-column y. It uses by default the panel.xYplot and prepanel.xYplot functions to do the actual work. The method argument
passed to panel.xYplot from xYplot allows you to make error bars, the upper-only or lower-only portions of error bars, alternating lower-only and upper-only bars, bands, or filled bands. panel.xYplot
decides how to alternate upper and lower bars according to whether the median y value of the current main data line is above the median y for all groups of lines or not. If the median is above the
overall median, only the upper bar is drawn. For bands (but not 'filled bands'), any number of other columns of y will be drawn as lines having the same thickness, color, and type as the main data
line. If plotting bars, bands, or filled bands and only one additional column is specified for the response variable, that column is taken as the half width of a precision interval for y, and the
lower and upper values are computed automatically as y plus or minus the value of the additional column variable.
When a groups variable is present, panel.xYplot will create a function in frame 0 (.GlobalEnv in R) called Key that when invoked will draw a key describing the groups labels, point symbols, and
colors. By default, the key is outside the graph. For S-Plus, if Key(locator(1)) is specified, the key will appear so that its upper left corner is at the coordinates of the mouse click. For R/
Lattice the first two arguments of Key (x and y) are fractions of the page, measured from the lower left corner, and the default placement is at x=0.05, y=0.95. For R, an optional argument to sKey,
other, may contain a list of arguments to pass to draw.key (see xyplot for a list of possible arguments, under the key option).
When method="quantile" is specified, xYplot automatically groups the x variable into intervals containing a target of nx observations each, and within each x group computes three quantiles of y and
plots these as three lines. The mean x within each x group is taken as the x-coordinate. This will make a useful empirical display for large datasets in which scatterdiagrams are too busy to see
patterns of central tendency and variability. You can also specify a general function of a data vector that returns a matrix of statistics for the method argument. Arguments can be passed to that
function via a list methodArgs. The statistic in the first column should be the measure of central tendency. Examples of useful method functions are those listed under the help file for
summary.formula such as smean.cl.normal.
xYplot can also produce bubble plots. This is done when size is specified to xYplot. When size is used, a function sKey is generated for drawing a key to the character sizes. See the bubble plot
example. size can also specify a vector where the first character of each observation is used as the plotting symbol, if rangeCex is set to a single cex value. An optional argument to sKey, other,
may contain a list of arguments to pass to draw.key (see xyplot for a list of possible arguments, under the key option). See the bubble plot example.
Dotplot is a substitute for dotplot allowing for a matrix x-variable, automatic superpositioning when groups is present, and creation of a Key function. When the x-variable (created by Cbind to
simulate a matrix) contains a total of 3 columns, the first column specifies where the dot is positioned, and the last 2 columns specify starting and ending points for intervals. The intervals are
shown using line type, width, and color from the trellis plot.line list. By default, you will usually see a darker line segment for the low and high values, with the dotted reference line elsewhere.
A good choice of the pch argument for such plots is 3 (plus sign) if you want to emphasize the interval more than the point estimate. When the x-variable contains a total of 5 columns, the 2nd and
5th columns are treated as the 2nd and 3rd are treated above, and the 3rd and 4th columns define an inner line segment that will have twice the thickness of the outer segments. In addition, tick
marks separate the outer and inner segments. This type of display (an example of which appeared in The Elements of Graphing Data by Cleveland) is very suitable for displaying two confidence levels
(e.g., 0.9 and 0.99) or the 0.05, 0.25, 0.75, 0.95 sample quantiles, for example. For this display, the central point displays well with a default circle symbol.
setTrellis sets nice defaults for Trellis graphics, assuming that the graphics device has already been opened if using postscript, etc. By default, it sets panel strips to blank and reference dot
lines to thickness 1 instead of the Trellis default of 2.
numericScale is a utility function that facilitates using xYplot to plot variables that are not considered to be numeric but which can readily be converted to numeric using as.numeric(). numericScale
by default will keep the name of the input variable as a label attribute for the new numeric variable.
xYplot(formula, data = sys.frame(sys.parent()), groups,
subset, xlab=NULL, ylab=NULL, ylim=NULL,
panel=panel.xYplot, prepanel=prepanel.xYplot, scales=NULL,
minor.ticks=NULL, sub=NULL, ...)
panel.xYplot(x, y, subscripts, groups=NULL,
type=if(is.function(method) || method=='quantiles')
'b' else 'p',
method=c("bars", "bands", "upper bars", "lower bars",
"alt bars", "quantiles", "filled bands"),
methodArgs=NULL, label.curves=TRUE, abline,
probs=c(.5,.25,.75), nx=NULL,
cap=0.015, lty.bar=1,
lwd=plot.line$lwd, lty=plot.line$lty, pch=plot.symbol$pch,
cex=plot.symbol$cex, font=plot.symbol$font, col=NULL,
lwd.bands=NULL, lty.bands=NULL, col.bands=NULL,
minor.ticks=NULL, col.fill=NULL,
size=NULL, rangeCex=c(.5,3), ...)
prepanel.xYplot(x, y, ...)
Dotplot(formula, data = sys.frame(sys.parent()), groups, subset,
xlab = NULL, ylab = NULL, ylim = NULL,
panel=panel.Dotplot, prepanel=prepanel.Dotplot,
scales=NULL, xscale=NULL, ...)
prepanel.Dotplot(x, y, ...)
panel.Dotplot(x, y, groups = NULL,
pch = dot.symbol$pch,
col = dot.symbol$col, cex = dot.symbol$cex,
font = dot.symbol$font, abline, ...)
setTrellis(strip.blank=TRUE, lty.dot.line=2, lwd.dot.line=1)
numericScale(x, label=NULL, ...)
...: for Cbind, ... is any number of additional numeric vectors. Unless you are using Dotplot (which allows for either 2 or 4 "other" variables) or xYplot with method="bands", vectors after the first two are ignored. If drawing bars and only one extra variable is given in ..., upper and lower values are computed as described above. If the second argument to Cbind is a matrix, that matrix is stored in the "other" attribute and arguments after the second are ignored. For bubble plots, name an ... argument cex. Also can be other arguments to pass to labcurve.
formula: a trellis formula consistent with xyplot or dotplot
x: x-axis variable. For numericScale, x is any vector such that as.numeric(x) returns a numeric vector suitable for x- or y-coordinates.
y: a vector, or an object created by Cbind for xYplot. y represents the main variable to plot, i.e., the variable used to draw the main lines. For Dotplot the first argument to Cbind will be the main x-axis variable.
data, subset, ylim, subscripts, groups, type, scales, panel, prepanel, xlab, ylab: see trellis.args. xlab and ylab get default values from "label" attributes.
xscale: allows one to use the default scales but specify only the x component of it for Dotplot
method: defaults to "bars" to draw error-bar type plots. See the meaning of other values above. method can be a function. Specifying method=quantile, methodArgs=list(probs=c(.5,.25,.75)) is the same as specifying method="quantile" without specifying probs.
methodArgs: a list containing optional arguments to be passed to the function specified in method
label.curves: set to FALSE to suppress invocation of labcurve to label primary curves where they are most separated or to draw a legend in an empty spot on the panel. You can also set label.curves to a list of options to pass to labcurve. These options can also be passed as ... to xYplot. See the examples below.
abline: a list of arguments to pass to panel.abline for each panel, e.g. list(a=0, b=1, col=3) to draw the line of identity using color 3. To make multiple calls to panel.abline, pass a list of unnamed lists as abline, e.g., abline=list(list(h=0),list(v=1)).
probs: a vector of three quantiles with the quantile corresponding to the central line listed first. By default probs=c(.5, .25, .75). You can also specify probs through methodArgs=list(probs=...).
nx: number of target observations for each x group (see the cut2 m argument). nx defaults to the minimum of 40 and the number of points in the current stratum divided by 4. Set nx=FALSE or nx=0 if x is already discrete and requires no grouping.
cap: the half-width of horizontal end pieces for error bars, as a fraction of the length of the x-axis
lty.bar: line type for bars
lwd, lty, pch, cex, font, col: see trellis.args. These are vectors when groups is present, and the order of their elements corresponds to the different groups, regardless of how many bands or bars are drawn. If you don't specify lty.bands, for example, all band lines within each group will have the same lty.
lty.bands, lwd.bands, col.bands: used to allow lty, lwd, col to vary across the different band lines for different groups. These parameters are vectors or lists whose elements correspond to the added band lines (i.e., they ignore the central line, whose line characteristics are defined by lty, lwd, col). For example, suppose that 4 lines are drawn in addition to the central line. Specifying lwd.bands=1:4 will cause line widths of 1:4 to be used for every group, regardless of the value of lwd. To vary characteristics over the groups use e.g. lwd.bands=list(rep(1,4), rep(2,4)) or list(c(1,2,1,2), c(3,4,3,4)).
minor.ticks: a list with elements at and labels specifying positions and labels for minor tick marks to be used on the x-axis of each panel, if any
sub: an optional subtitle
col.fill: used to override default colors used for the bands in method='filled bands'. This is a vector when groups is present, and the order of the elements corresponds to the different groups, regardless of how many bands are drawn. The default colors for 'filled bands' are pastel colors matching the default colors of superpose.line$col
size: a vector the same length as x giving a variable whose values are a linear function of the size of the symbol drawn. This is used for example for bubble plots.
rangeCex: a vector of two values specifying the range in character sizes to use for the size variable (lowest first, highest second). size values are linearly translated to this range, based on the observed range of size when x and y coordinates are not missing. Specify a single numeric cex value for rangeCex to use the first character of each observation's size as the plotting symbol.
strip.blank: set to FALSE to not make the panel strip backgrounds blank
lty.dot.line: line type for dot plot reference lines (default = 2, dotted)
lwd.dot.line: line thickness for reference lines for dot plots (default = 1)
label: a scalar character string to be used as a variable label after numericScale converts the variable to numeric form
Unlike xyplot, xYplot senses the presence of a groups variable and automatically invokes panel.superpose instead of panel.xyplot. The same is true for Dotplot vs. dotplot.
Cbind returns a matrix with attributes. Other functions return standard trellis results.
Side Effects
plots, and panel.xYplot may create temporary Key and sKey functions in the session frame.
Frank Harrell
Department of Biostatistics
Vanderbilt University
Madeline Bauer
Department of Infectious Diseases
University of Southern California School of Medicine
See Also
xyplot, panel.xyplot, summarize, label, labcurve, errbar, dotplot, reShape, cut2, panel.abline
# Plot 6 smooth functions. Superpose 3, panel 2.
# Label curves with p=1,2,3 where most separated
d <- expand.grid(x=seq(0,2*pi,length=150), p=1:3, shift=c(0,pi))
xYplot(sin(x+shift)^p ~ x | shift, groups=p, data=d, type='l')
# Use a key instead, use 3 line widths instead of 3 colors
# Put key in most empty portion of each panel
xYplot(sin(x+shift)^p ~ x | shift, groups=p, data=d,
type='l', keys='lines', lwd=1:3, col=1)
# Instead of implicitly using labcurve(), put a
# single key outside of panels at lower left corner
xYplot(sin(x+shift)^p ~ x | shift, groups=p, data=d,
type='l', label.curves=FALSE, lwd=1:3, col=1, lty=1:3)
# Bubble plots
x <- y <- 1:8
x[2] <- NA
units(x) <- 'cm^2'
z <- 101:108
p <- factor(rep(c('a','b'),4))
g <- c(rep(1,7),2)
data.frame(p, x, y, z, g)
xYplot(y ~ x | p, groups=g, size=z)
Key(other=list(title='g', cex.title=1.2)) # draw key for colors
sKey(.2,.85,other=list(title='Z Values', cex.title=1.2))
# draw key for character sizes
# Show the median and quartiles of height given age, stratified
# by sex and race. Draws 2 sets (male, female) of 3 lines per panel.
# xYplot(height ~ age | race, groups=sex, method='quantiles')
# Examples of plotting raw data
dfr <- expand.grid(month=1:12, continent=c('Europe','USA'),
                   sex=c('female','male'))
dfr <- upData(dfr,
       y=month/10 + 1*(sex=='female') + 2*(continent=='Europe') +
         runif(48,-.15,.15),
       lower=y - runif(48,.05,.15),
       upper=y + runif(48,.05,.15))
xYplot(Cbind(y,lower,upper) ~ month, subset=sex=='male' & continent=='USA',
       data=dfr)
xYplot(Cbind(y,lower,upper) ~ month|continent, subset=sex=='male',data=dfr)
xYplot(Cbind(y,lower,upper) ~ month|continent, groups=sex, data=dfr); Key()
# add ,label.curves=FALSE to suppress use of labcurve to label curves where
# farthest apart
xYplot(Cbind(y,lower,upper) ~ month,groups=sex,
subset=continent=='Europe', data=dfr)
xYplot(Cbind(y,lower,upper) ~ month, groups=sex, type='b',
       subset=continent=='Europe', keys='lines', data=dfr)
# keys='lines' causes labcurve to draw a legend where the panel is most empty
xYplot(Cbind(y,lower,upper) ~ month,groups=sex, type='b', data=dfr,
xYplot(Cbind(y,lower,upper) ~ month,groups=sex, type='b', data=dfr,
label(dfr$y) <- 'Quality of Life Score'
# label is in Hmisc library = attr(y,'label') <- 'Quality...'; will be
# y-axis label
# can also specify Cbind('Quality of Life Score'=y,lower,upper)
xYplot(Cbind(y,lower,upper) ~ month, groups=sex,
subset=continent=='Europe', method='alt bars',
offset=grid::unit(.1,'inches'), type='b', data=dfr)
# offset passed to labcurve to label .4 y units away from curve
# for R (using grid/lattice), offset is specified using the grid
# unit function, e.g., offset=grid::unit(.4,'native') or
# offset=grid::unit(.1,'inches') or grid::unit(.05,'npc')
# The following example uses the summarize function in Hmisc to
# compute the median and outer quartiles. The outer quartiles are
# displayed using "error bars"
dfr <- expand.grid(month=1:12, year=c(1997,1998), reps=1:100)
month <- dfr$month; year <- dfr$year
y <- abs(month-6.5) + 2*runif(length(month)) + year-1997
s <- summarize(y, llist(month,year), smedian.hilow, conf.int=.5)
xYplot(Cbind(y,Lower,Upper) ~ month, groups=year, data=s,
keys='lines', method='alt', type='b')
# Can also do:
s <- summarize(y, llist(month,year), quantile, probs=c(.5,.25,.75),
               stat.name=c('y','Q1','Q3'))
xYplot(Cbind(y, Q1, Q3) ~ month, groups=year, data=s,
type='b', keys='lines')
# Or:
xYplot(y ~ month, groups=year, keys='lines', nx=FALSE, method='quantile',
       type='b')
# nx=FALSE means to treat month as a discrete variable
# To display means and bootstrapped nonparametric confidence intervals
# use:
s <- summarize(y, llist(month,year), smean.cl.boot)
xYplot(Cbind(y, Lower, Upper) ~ month | year, data=s, type='b')
# Can also use Y <- cbind(y, Lower, Upper); xYplot(Cbind(Y) ~ ...)
# Or:
xYplot(y ~ month | year, nx=FALSE, method=smean.cl.boot, type='b')
# This example uses the summarize function in Hmisc to
# compute the median and outer quartiles. The outer quartiles are
# displayed using "filled bands"
s <- summarize(y, llist(month,year), smedian.hilow, conf.int=.5)
# filled bands: default fill = pastel colors matching solid colors
# in superpose.line (this works differently in R)
xYplot ( Cbind ( y, Lower, Upper ) ~ month, groups=year,
method="filled bands" , data=s, type="l")
# note colors based on levels of selected subgroups, not first two colors
xYplot ( Cbind ( y, Lower, Upper ) ~ month, groups=year,
method="filled bands" , data=s, type="l",
subset=(year == 1998 | year == 2000), label.curves=FALSE )
# filled bands using black lines with selected solid colors for fill
xYplot ( Cbind ( y, Lower, Upper ) ~ month, groups=year,
method="filled bands" , data=s, label.curves=FALSE,
type="l", col=1, col.fill = 2:3)
Key(.5,.8,col = 2:3) #use fill colors in key
# A good way to check for stable variance of residuals from ols
# xYplot(resid(fit) ~ fitted(fit), method=smean.sdl)
# smean.sdl is defined with summary.formula in Hmisc
# Plot y vs. a special variable x
# xYplot(y ~ numericScale(x, label='Label for X') | country)
# For this example could omit label= and specify
# y ~ numericScale(x) | country, xlab='Label for X'
# Here is an example of using xYplot with several options
# to change various Trellis parameters,
# xYplot(y ~ x | z, groups=v, pch=c('1','2','3'),
# layout=c(3,1), # 3 panels side by side
# ylab='Y Label', xlab='X Label',
# main=list('Main Title', cex=1.5),
# par.strip.text=list(cex=1.2),
# strip=function(...) strip.default(..., style=1),
# scales=list(alternating=FALSE))
# Dotplot examples
s <- summarize(y, llist(month,year), smedian.hilow, conf.int=.5)
setTrellis() # blank conditioning panel backgrounds
Dotplot(month ~ Cbind(y, Lower, Upper) | year, data=s)
# or Cbind(...), groups=year, data=s
# Display a 5-number (5-quantile) summary (2 intervals, dot=median)
# Note that summarize produces a matrix for y, and Cbind(y) trusts the
# first column to be the point estimate (here the median)
s <- summarize(y, llist(month,year), quantile,
probs=c(.5,.05,.25,.75,.95), type='matrix')
Dotplot(month ~ Cbind(y) | year, data=s)
# Use factor(year) to make actual years appear in conditioning title strips
# Plot proportions and their Wilson confidence limits
d <- expand.grid(continent=c('USA','Europe'), year=1999:2001, reps=1:100)
# Generate binary events from a population probability of 0.2
# of the event, same for all years and continents
d$y <- ifelse(runif(6*100) <= .2, 1, 0)
s <- with(d,
summarize(y, llist(continent,year),
function(y) {
n <- sum(!is.na(y))
s <- sum(y, na.rm=TRUE)
binconf(s, n)
}, type='matrix'))
Dotplot(year ~ Cbind(y) | continent, data=s, ylab='Year')
# Dotplot(z ~ x | g1*g2)
# 2-way conditioning
# Dotplot(z ~ x | g1, groups=g2); Key()
# Key defines symbols for g2
# If the data are organized so that the mean, lower, and upper
# confidence limits are in separate records, the Hmisc reShape
# function is useful for assembling these 3 values as 3 variables
# a single observation, e.g., assuming type has values such as
# c('Mean','Lower','Upper'):
# a <- reShape(y, id=month, colvar=type)
# This will make a matrix with 3 columns named Mean Lower Upper
# and with 1/3 as many rows as the original data
version 5.1-3
Power Triangle Calculator
• Real Power (P) - Measured in watts, defines the power consumed by the resistive part of a circuit. Also known as true or active power, performs the “real work” within an electrical circuit.
• Reactive Power (Q) - Measured in VAr, the power in an AC circuit that does not perform any useful work; it arises from inductors and capacitors, which alternately store energy in magnetic and electric fields and return it to the circuit.
• Apparent Power (S) - The product of RMS voltage and RMS current flowing into a circuit; it combines real power and reactive power.
• Power Factor (cos θ) - The ratio of real power (P) to apparent power (S), generally expressed as either a decimal or a percentage. The angle θ is the phase angle between the current and voltage waveforms; the larger the phase angle, the greater the reactive power.
• Real Power (P) = VI cos θ, Watts (W)
• Reactive Power (Q) = VI sin θ, Volt-amperes Reactive (VAr)
• Apparent Power (S) = VI, Volt-amperes (VA)
• Power Factor = P/S = cos θ
• VA = W / cos θ
• VA = VAR / sin θ
• VAR = VA * sin θ
• VAR = W * tan θ
• W = VA * cos θ
• W = VAR / tan θ
• Sin(θ) = Opposite / Hypotenuse = Q/S = VAr/VA
• Cos(θ) = Adjacent / Hypotenuse = P/S = W/VA = power factor, p.f.
• Tan(θ) = Opposite / Adjacent = Q/P = VAr/W
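These identities are easy to verify numerically. Below is a minimal Python sketch; the voltage, current, and phase-angle values are made up for illustration and are not taken from the calculator.

```python
import math

# Hypothetical single-phase load; V and I are RMS values, theta is the
# phase angle between the voltage and current waveforms.
V, I = 230.0, 10.0          # volts, amperes (made-up values)
theta = math.radians(30.0)  # 30-degree phase angle

S = V * I                   # apparent power, VA
P = S * math.cos(theta)     # real power, W
Q = S * math.sin(theta)     # reactive power, VAr
pf = P / S                  # power factor = cos(theta)

# The power triangle is a right triangle: S^2 = P^2 + Q^2,
# and the listed conversions all follow from it.
assert math.isclose(S, math.hypot(P, Q))
assert math.isclose(Q, P * math.tan(theta))   # VAR = W * tan(theta)

print(f"S = {S:.0f} VA, P = {P:.1f} W, Q = {Q:.1f} VAr, pf = {pf:.3f}")
```

Since any two of P, Q, S, and θ determine the other two, a calculator like this only needs a pair of inputs.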
An Etymological Dictionary of Astronomy and Astrophysics
Fr.: anticorrelation
Statistics: The correlation coefficient of two random variables X and Y is in general defined as the ratio of Cov(X,Y) to the product of the standard deviations of X and Y. It varies between 1 and -1, corresponding to complete correlation and complete anticorrelation, respectively.
Anticorrelation, from → anti- + → correlation.
Pâdhambâzâneš, from pâd-, → anti-, + hambâzâneš, → correlation.
Fr.: autocorrélation
1) In radio astronomy, a process performed by an → autocorrelator.
2) In statistics, a linear relation between values of a random variable over time.
3) In electronics, a technique used to detect cyclic activity in a complex signal.
Autocorrelation, from → auto- "self" + → correlation.
Xod-hambâzâneš, from xod- "self" + hambâzâneš, → correlation.
autocorrelation function
کریای ِخودهمبازآنش
karyâ-ye xod-hambâzâneš
Fr.: fonction d'autocorrélation
A mathematical function that describes the correlation between two values of the same variable at different points in time.
→ autocorrelation; → function.
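As an illustration of this definition (a Python sketch added here, not part of the dictionary entry), the sample autocorrelation at a given lag can be computed directly as the lag-k autocovariance normalized by the lag-0 autocovariance:

```python
def autocorr(x, lag):
    """Sample autocorrelation of the series x at the given lag:
    the lag-k autocovariance normalized by the lag-0 autocovariance."""
    n = len(x)
    mean = sum(x) / n
    c0 = sum((v - mean) ** 2 for v in x) / n
    ck = sum((x[t] - mean) * (x[t + lag] - mean) for t in range(n - lag)) / n
    return ck / c0

# A perfectly alternating series is strongly anticorrelated at lag 1
# and strongly correlated again at lag 2.
x = [1.0, -1.0] * 50
print(autocorr(x, 1), autocorr(x, 2))  # close to -1 and +1
```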
canonical correlation
همبازآنشِ هنجاروار
hambâzânš-e hanjârvâr
Fr.: correlation canonique
The highest correlation between linear functions of two data sets when specific restrictions are imposed upon them.
→ canonical; → correlation.
Fr.: corrélation
General: The degree to which two or more attributes or measurements on the same group of elements show a tendency to vary together; the state or relation of being correlated.
Statistics: The strength of the linear dependence between two random variables.
From M.Fr. corrélation, from cor- "together," → com- + → relation.
Hambâzâneš , from ham-→ com- + bâzâneš→ relation.
correlation coefficient
همگر ِهمبازآنش
hamgar-e hambâzâneš
Fr.: coefficient de corrélation
A number between -1 and 1 which measures the degree to which two variables are linearly related.
→ correlation; → coefficient.
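A minimal Python sketch of this definition (not part of the dictionary entry): dividing the covariance by the product of the two standard deviations yields a number in [-1, 1].

```python
def pearson_r(x, y):
    """Correlation coefficient: Cov(X, Y) divided by the product of the
    standard deviations of X and Y; the result always lies in [-1, 1]."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = (sum((a - mx) ** 2 for a in x) / n) ** 0.5
    sy = (sum((b - my) ** 2 for b in y) / n) ** 0.5
    return cov / (sx * sy)

x = [1, 2, 3, 4, 5]
print(pearson_r(x, [2 * v + 1 for v in x]))  # complete correlation: ~ 1
print(pearson_r(x, [-3 * v for v in x]))     # complete anticorrelation: ~ -1
```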
cross correlation
همبازآنش ِچلیپایی، ~ خاجی
hambâzâneš-e calipâyi, ~ xâji
Fr.: corrélation croisée
In radio astronomy, the process performed by a → cross correlator or the result of the process.
→ cross; → correlation.
direct correlation
همبازآنش ِسرراست
hambâzâneš-e sarrâst
Fr.: corrélation directe
A correlation between two variables such that as one variable becomes large, the other also becomes large, and vice versa. The correlation coefficient is between 0 and +1. Also called positive correlation.
→ direct; → correlation.
linear correlation
همبازآنش ِخطی
hambâzâneš-e xatti
Fr.: corrélation linéaire
A measure of how well data points fit a straight line. When all the points fall on the line it is called a perfect correlation. When the points are scattered all over the graph there is no correlation.
→ linear; → correlation.
negative correlation
همبازآنش ِناییدار
hambâzâneš-e nâyidâr
Fr.: corrélation négative
A correlation between two variables such that as one variable's values tend to increase, the other variable's values tend to decrease.
→ negative; → correlation.
Orion correlation theory
نگرهی ِهمبازآنش ِاوریون
negare-ye hambâzâneš-e Oryon
Fr.: théorie de la corrélation d'Orion
A controversial proposition according to which a coincidence would exist between the mutual positions of the three stars of → Orion's Belt and those of the main Giza pyramids. More specifically,
Khufu, Khafre, and Menkaure would be the monumental representation of → Alnitak, → Alnilam, and → Mintaka, respectively.
→ Orion; → correlation; → theory.
positive correlation
همبازآنش ِداهیدار
hambâzâneš-e dâhidâr
Fr.: correlation positive
Same as → direct correlation.
→ positive; → correlation.
calf_exact_binary_subset {CALF} R Documentation
Runs Coarse Approximation Linear Function on a random subset of binary data provided, with the ability to precisely control the number of case and control data used.
calf_exact_binary_subset(data, nMarkers, nCase, nControl, times = 1,
                         optimize = "pval", verbose = FALSE)
data Matrix or data frame. First column must contain case/control dummy coded variable.
nMarkers Maximum number of markers to include in creation of sum.
nCase Numeric. A value indicating the number of case data to use.
nControl Numeric. A value indicating the number of control data to use.
times Numeric. Indicates the number of replications to run with randomization.
optimize Criteria to optimize. Indicate "pval" to optimize the p-value corresponding to the t-test distinguishing case and control. Indicate "auc" to optimize the AUC.
verbose Logical. Indicate TRUE to print activity at each iteration to console. Defaults to FALSE.
A data frame containing the chosen markers and their assigned weight (-1 or 1)
The optimal AUC or pval for the classification. If multiple replications are requested, a data.frame containing all optimized values across all replications is returned.
aucHist A histogram of the AUCs across replications, if applicable.
calf_exact_binary_subset(data = CaseControl, nMarkers = 6, nCase = 5, nControl = 8, times = 5)
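For intuition only, here is a rough Python sketch of the underlying idea: greedily assign weights of -1 or 1 to markers so that the weighted sum of the chosen markers best separates case rows from control rows, as measured by a t statistic. This is a toy approximation written for this note, not the CALF package's actual algorithm, and the data are synthetic.

```python
import random

def t_stat(a, b):
    """Welch t statistic comparing two samples (larger |t| ~ smaller p-value)."""
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    v1 = sum((v - m1) ** 2 for v in a) / (n1 - 1)
    v2 = sum((v - m2) ** 2 for v in b) / (n2 - 1)
    return (m1 - m2) / ((v1 / n1 + v2 / n2) ** 0.5)

def greedy_calf(case_rows, ctrl_rows, n_markers):
    """Greedily pick (marker index, +/-1 weight) pairs so that the weighted
    sum of the chosen markers best separates case rows from control rows."""
    p = len(case_rows[0])
    chosen = []
    case_sum = [0.0] * len(case_rows)
    ctrl_sum = [0.0] * len(ctrl_rows)
    for _ in range(n_markers):
        best = None
        for j in range(p):
            if any(jj == j for jj, _ in chosen):
                continue  # each marker is used at most once
            for w in (1, -1):
                cs = [s + w * r[j] for s, r in zip(case_sum, case_rows)]
                ks = [s + w * r[j] for s, r in zip(ctrl_sum, ctrl_rows)]
                t = abs(t_stat(cs, ks))
                if best is None or t > best[0]:
                    best = (t, j, w, cs, ks)
        _, j, w, case_sum, ctrl_sum = best
        chosen.append((j, w))
    return chosen

random.seed(0)
# Synthetic data: marker 0 runs higher in cases; the other 4 are pure noise.
case = [[random.gauss(1, 1)] + [random.gauss(0, 1) for _ in range(4)] for _ in range(30)]
ctrl = [[random.gauss(0, 1)] + [random.gauss(0, 1) for _ in range(4)] for _ in range(30)]
print(greedy_calf(case, ctrl, 2))
```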
version 1.0.17
Math 504: Discrete Mathematics for Teachers - Summer 2016
Instructor Contact and General Information
Instructor: Luís Finotti
Office: Ayres Hall 251
Phone: 974-1321 (don't leave messages! -- e-mail me if I don't answer!)
e-mail: lfinotti@utk.edu
Office Hours: by appointment. We can use Zoom (long distance) or you can come to my office.
Textbook: D. J. Velleman, "How to Prove It: A Structured Approach", 2nd Edition, Cambridge University Press, 2006.
Prerequisite: One year of calculus or equivalent.
Class Meeting Time: Mondays from 2:30pm to 3:30pm and Thursdays from 7pm to 8pm, via Zoom. (Section 301.)
Exams: Midterm: 06/25 (due on Blackboard by 11:59pm).
Final: 07/06 (due on Blackboard by 11:59pm).
Grade: The better of (a) 34% HW average, 33% midterm, 33% final, or (b) 20% HW average, 40% midterm, 40% final. (Lowest HW score dropped in the HW average.)
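One way to read this grading rule, sketched in Python with hypothetical scores (the instructor's actual computation may differ in details):

```python
def course_grade(hw_scores, midterm, final):
    """Best of the two stated weightings, after dropping the lowest HW score.
    Hypothetical helper written for illustration only."""
    hw = sorted(hw_scores)[1:]              # drop the lowest HW score
    hw_avg = sum(hw) / len(hw)
    option1 = 0.34 * hw_avg + 0.33 * midterm + 0.33 * final
    option2 = 0.20 * hw_avg + 0.40 * midterm + 0.40 * final
    return max(option1, option2)

# Strong exam scores make the 20/40/40 weighting win; strong HW favors 34/33/33.
print(course_grade([80, 90, 100, 95], midterm=98, final=97))
```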
Course Information
Summer Course Warning!
This is a summer course, in which 16 weeks are squeezed into 5. So, as you can imagine, the pace is quite fast. Summer Courses are for very motivated students! If you usually study one hour a day
during the regular semester (5 hours a week), the equivalent would be to study three hours a day in the summer semester! If you count (regular) lecture time (7.5 hours a week) and studying time (15
hours a week), that would amount to 22.5 hours a week dedicate to this course!
You cannot just "catch up on the weekends" in a course like this, as by then we will have covered way too much material. You should catch up immediately if you fall behind, as otherwise you will not be able to follow classes, and things accumulate at a faster pace than you will likely be able to match. I strongly recommend that you review, do problems, and study every day!
In the past I've had students take more than one summer course in the same session, and although it is possible, I'd consider it a Herculean task and would usually advise against it. If you decide to do it, just make sure you are prepared! (Tell your loved ones you will see them in July.)
Course Format
This will be a flipped course, i.e., students will learn a lot on their own, by reading the text and watching short related videos, while the times with the instructor will be spent with questions,
solving problems and interactions with students.
You can always request for something you want to see in a video: a problem, some proof in the book, an example, some clarification, etc. If you think it can be done well enough in a lecture (on-line
meeting) save it for then, though! If not, just post you request in the "Q&A - Math Related" forum on Blackboard.
"Lectures" will be on-line, via Zoom. I will assign reading and exercises to be done (or attempted) before our lectures. In lecture I will answer questions, solve problems and perhaps provide a few
more examples. On the other hand, all (or most of) the content of the lecture will be "question driven". (But, questions such as "Can you further explain X?" or "Can you give us examples of Y?" are
more than welcome.) If there are no questions or requests, the lecture will be quite short. It's essential you come to lectures prepared! Otherwise the chances of you getting anything out of this
course (and passing it) are quite slim.
I recommend you attend the lectures even if you don't have any questions about the material, as I will take surveys and ask questions that might be relevant to all. You also might learn different
ways of doing (or viewing) some problems.
It is only the third time I am teaching a course in this format (flipped/online), so your feedback is quite important and will help shape the course.
We will meet online Mondays from 2:30pm to 3:30pm and Thursdays from 7pm to 8pm using Zoom.
To join our class simply click here. (The link is https://tennessee.zoom.us/j/545412677.) There are also a links to access our class in Blackboard, on the Navigation area on top and in the section
Links, called Class Meeting Link (Zoom).
Alternatively, you can just login to Zoom and enter Meeting ID 545 412 677.
I strongly recommend you try it out before our first meeting. Please read the LiveOnline@UT page carefully. In particular, look for Test Flights dates, when you can test Zoom before our first
meeting. (Also, take a close look at Getting Started page.)
In our meetings we will use Sage Math Cloud (SMC) for our discussions. Before classes start, you should receive an invitation to collaborate on a project that I've created for this course (Math 504
-- Summer 2016).
On our meetings you will see me share my browser running SMC to answer your questions. You will be able to see and type in the same document live. (Similar to Google Docs.)
We can enter math in SMC using LaTeX. (More on LaTeX below.) The edited document with questions and answers will be stored in our project and you can look at it whenever you want/need. (I will also
use SMC to post solutions to HW problems.)
Please watch this video for more details: Introduction to SMC and How We Will Use It.
Note: A regular summer course like ours (5 weeks) meets for 7.5 hours a week. We will meet online for only about 2 hours a week. The remaining 5.5 should be spent reading the text and watching the videos. Note that these hours should count as "lecture time", not "studying time". Our 2-hour meetings, though, could count as "studying time". But again, one hour of study time a day (5 hours a week) in a regular semester translates to 15 hours a week in a summer semester, and the meeting times count as only 2 of those!
Piazza (Discussion Board)
We will use Piazza for discussions (except for live meetings). The advantage of Piazza is that it allows us (or simply me) to use math symbols efficiently and with good-looking results (unlike Blackboard).
To enter math, you can use LaTeX code. (See the section on LaTeX below.) The only difference is that you must surround the math code with double dollar signs ($$) instead of single ones ($). Even if you don't take advantage of this, I can use it, making it easier for you to read the answers.
You can access Piazza through the link on the left panel of Blackboard or directly here: https://piazza.com/utk/summer2016/math504/home. (There is also a link at the "Navigation" section on the top
of this page and on the Links section.)
To keep things organized, I've set up a few different folders/labels for our discussions:
• HW Sets and Exams: Each HW set and exam has its own folder. Ask question related to each HW set or exam in the corresponding folder.
• Class Structure: Ask questions about the class, such as "how is the grade computed", "when is the final", etc. in this folder. (Please read the Syllabus first, though!)
• Computers: Ask questions about the usage of Zoom, LaTeX, Sage Math Cloud, Piazza itself and Blackboard using this folder.
• Feedback: Give (possibly anonymous) feedback about the course using this folder.
• Other: In the unlikely event that your question/discussion doesn't fit in any of the above, please use this folder.
I urge you to use Piazza often for discussions! (This is specially true for Feedback!) If you are ever thinking of sending me an e-mail, think first if it could be posted there. That way my answer
might help others that have the same questions as you and will be always available to all. (Of course, if it is something personal (such as your grades), you should e-mail me instead.)
Note that you can post anonymously. (Just be careful to check the proper box!) But please don't post anonymously unless you feel you have to, as it helps me to know you, individually, much better.
Students can (and should!) reply to and comment on posts on Piazza. Discussion is encouraged here! But please be careful with HW questions! You should not answer (or ask) questions about how to do a
HW problem! (You can ask for hints or suggestions, though.) If you are uncertain if you can answer a (math related) post, please e-mail me first!
Also, please don't forget to choose the appropriate folder(s) (you can choose more than one, like a label) for your question. And make sure to choose between Question, Note or Poll.
When replying/commenting/contributing to a discussion, please do so in the appropriate place. If it is an answer to the question, use the Answer area. (Note: The answer area for students can be edited by other students. The idea is to have a collaborative answer. Only one answer will be presented from students and one from the instructor. So, if you want to contribute to an answer already posted, just edit it.) You can also post a Follow Up discussion instead of (or besides) an answer. There can be multiple follow-ups, but don't start a new one if it is the same discussion.
Important: Make sure you set your "Notification Settings" on Piazza to receive notifications for all posts: click on the gear on the top right of the Piazza site, then choose "Account/Email Settings", then "Edit Email Notifications", and check "Automatically follow every question and note". Preferably, also set "Real Time" for both new questions/notes and updates to them. I will consider a post on Piazza an official communication in this course, and I will assume everyone has read every single post there!
You should receive an invitation to join our class in Piazza via your "@tennessee.edu" e-mail address before classes start. If you don't, you can sign up here: https://piazza.com/utk/summer2016/
math504. If you've register with a different e-mail (e.g., @vols.utk.edu) you do not need to register again, but you can consolidate your different e-mails (like @vols.utk.edu and @tennessee.edu) in
Piazza, so that it knows it is the same person. (Only if you want to! It is not required as long as you have access to our course there!) Just click on the gear icon on the top right of Piazza,
beside your name, and select "Account/Email Settings". Then, in "Other Emails" add the new ones.
Course Content
Math 504 is a basically a course on mathematical proofs. A proof is a series of logical steps based on predetermined assumptions to show that some statement is, beyond all doubt, true. Thus, there
are two main goals: to teach you how think in a logical and precise fashion, and to teach how to properly communicate your thoughts. Those are the "ingredients" of a proof.
Thus, the topics of the course themselves play a somewhat secondary role, and there are many different possible choices. On the other hand, since these will be your first steps on
proofs, the topics should be basic enough so that your first proofs are as simple as possible. Therefore, you will be dealing at times with very basic mathematics, and will prove things you've
"known" to be true for a long time. But it is crucial that you do not lose sight of our real goal: do you know how to prove those basic facts? In fact, the truth is that you don't really know if
something is true until you see a proof of it! You might believe it to be true, based on someone else's word or empirical evidence, but only the proof brings certainty.
In any event, the topics to be covered in this course are: logic, set theory, relations and functions, induction and combinatorics. We will use also basic notions of real and integer numbers, but
these will be mostly assumed (without proofs).
Chapters and Topics
The goal would be to cover the following:
• Chapters 1 and 2: all sections, but these will be covered quickly and skipping some parts. These are sections in formal logic, which although crucial, I find better to be introduce in more
concrete settings as the need arises in the following chapters.
• Chapter 3: All sections, except 3.7.
• Chapter 4: All sections, except 4.5.
• Chapter 5: All sections, except 5.4.
• Chapter 6: All sections, except 6.5.
Other topics (and digressions) might also be squeezed in as time allows.
For a break down of videos, outcomes and problems for each individual section, check this page.
Homework Policy
Homeworks and due dates are posted at the section Reading and Homework of this page. (They should also appear in your Blackboard Calendar.)
Homeworks must be turned in via Blackboard. (Please, don't e-mail them to me unless strictly necessary, e.g., Blackboard is not working.) Just click on "Assignments (Submit HW)" on the left panel of
Blackboard and select the correct assignment.
Scanned copies are acceptable, but typed in solutions are preferred. I recommend you learn and use LaTeX. (Resources are provided below.)
Note that you will also be graded on how well it is written, not only on whether it is correct! (Remember, how to communicate your proofs is part of the course.) The same applies to exams and all graded work.
In my opinion, doing the HW is one of the most important parts of the learning process, so the weight for them is quite high.
Also, you should make appointments for office hours if you are having difficulties with the course. I will do my best to help you. Please try to ask questions during class time (online) first! I will not take
appointments from students who don't attend the lectures (unless there is a good excuse, of course).
Finally, you can check all your scores at Blackboard.
E-Mail Policy
I will assume you check your e-mail at least once a day, but preferably you should check your e-mail often. I will use your e-mail (given to me by the registrar's office) to make announcements. (If
that is not your preferred address, please make sure to forward your university e-mail to it!) I will assume that any message that I sent via e-mail will be read in less than twenty four hours, and
it will be considered an official communication.
Moreover, you should receive e-mails when announcements are posted on Blackboard, or when there is a new post in Piazza. (Again, please subscribe to receive notifications in Piazza! Important information may appear in those.)
Please, post all comments and suggestions regarding the course using Piazza. Usually these should be posted as Notes and put in the Feedback folder/labels (and add other labels if relevant). These
can be posted anonymously (or not), just make sure to check the appropriate option. Others students and myself will be able to respond and comment. If you prefer to keep the conversation private
(between us), you can send me an e-mail, but then, of course, it won't be anonymous.
You are invited to post an introduction about yourself on Blackboard. (There is an "Introductions" link.) This is not required at all! But it might make things a bit more social.
Legal Issues
All students should be familiar with and maintain their Academic Integrity: from Hilltopics, pg. 46:
Academic Integrity
The university expects that all academic work will provide an honest reflection of the knowledge and abilities of both students and faculty. Cheating, plagiarism, fabrication of data, providing
unauthorized help, and other acts of academic dishonesty are abhorrent to the purposes for which the university exists. In support of its commitment to academic integrity, the university has
adopted an Honor Statement.
All students should follow the Honor Statement: from Hilltopics, pg. 16:
Honor Statement
"An essential feature of The University of Tennessee is a commitment to maintaining an atmosphere of intellectual integrity and academic honesty. As a student of the University, I pledge that I
will neither knowingly give nor receive any inappropriate assistance in academic work, thus affirming my own personal commitment to honor and integrity."
You should also be familiar with the Classroom Behavior Expectations.
We are in a honor system in this course!
Students with disabilities that need special accommodations should contact the Office of Disability Services and bring me the appropriate letter/forms.
Sexual Harassment and Discrimination
For Sexual Harassment and Discrimination information, please visit the Office of Equity and Diversity.
Campus Syllabus
Please, see also the Campus Syllabus.
Course Goals and Outcomes
Course Relevance
This course is clearly crucial to mathematicians, as our job is to prove things (and find things to be proved). But, this is a course also required for computer scientists, not only here at UT, but
virtually everywhere. The most obvious reason is that computer programs are written using formal logic. Another relevant connection is Artificial Intelligence, where you basically have to "teach" a
machine to come up with its own proofs.
Moreover, the skills taught in this course are universally important, and their benefits cannot be overstated! Everyone should be able to think clearly and logically to make proper choices in life,
and you should be able to communicate your thoughts clearly and concisely if you want to convince, teach, or explain your choices to someone else. In particular, Law Schools are often interested in
Math Majors, as the ability to think logically and clearly develop an argument is (or should be) the essence of a lawyer's job.
For teachers, it is important to help your students, from an early age, to understand the importance of proofs! In my opinion, high school students (at the latest!) should be introduced to formal
proofs, even if in the most simple settings. This is important to foster analytic and critical thinking and to understand what mathematics is really about.
Course Value
The students will:
• develop analytic and critical thinking;
• broaden their problem solving techniques;
• learn how to concisely and precisely communicate arguments and ideas.
Student Learning Outcomes
At the end of the semester students should be able to:
• write coherent, concise and well-written proofs with proper language and terminology;
• use counting arguments for solving concrete numerical problems and as tools in abstract proofs;
• master standard proof techniques such as direct proofs, proofs by contradiction or contrapositive, proofs by induction, proofs of and/or statements, proofs of equivalences, among others;
• master the terminology and notation of basic set theory (such as membership, containment, union, complement, partition, among others);
• master the terminology and notation of basic function theory (such as injective/one-to-one, surjective/onto, bijective, invertible, etc.);
• understand and be familiar with examples of equivalence relations and their relation to partitions.
Learning Environment
• Type: This will be a flipped course, i.e., students will learn a lot on their own, by reading the text and watching short related videos, while time with the instructor will be spent on
questions, problem solving, and interaction with students.
• Where: Students will work from home in activities such as reading, watching video, participating in video conferences and long distance office hours. A lot of the discussions should happen on the
Piazza discussion board.
• Student and Faculty roles:
□ Students will have to be more active in the learning process than in regular courses, as they will do most of the reading and learning on their own.
□ The instructor will be a facilitator, answering questions, offering advice and guidelines, and providing feedback.
• Students Responsibilities:
□ Keep up with the schedule, i.e., read the assigned sections, watch the recommended videos and solve assigned problems according to the schedule. This is crucial in this flipped format!
□ Carefully work on assigned problems.
□ Carefully review graded work to learn from past mistakes.
□ Check the course site often (at the very least once a day) for assignments and announcements.
□ Search for help if having difficulties!
□ Provide feedback to improve the course.
• Instructor Responsibilities:
□ Be available for help.
□ Provide examples and solve problems.
□ Be open to discussions concerning content, format and evaluations.
□ Provide relevant problems and exercises for homework, quizzes and exams.
□ Provide feedback to the students.
LaTeX is the most used software to produce mathematics texts. It is quite powerful and the final result is, when properly used, outstanding! Virtually all professional math text you will ever see is
done with LaTeX, or one of its variants.
LaTeX is available for all platforms and freely available.
The problem is that it has a steep learning curve at first, but after the first difficulties are overcome, it is not bad at all.
One of the first difficulties one encounters is that it is not WYSIWYG ("what you see is what you get"). It resembles a programming language: you first type some code and then this code is processed
to produce a nice document (a non-editable PDF file, for example). Thus, one has to learn how to "code" in LaTeX, but this brings many benefits.
I recommend that anyone with a serious interest in producing math texts learn it! On the other hand, I don't expect all of you to do so. But note that there are processors that can make it
"easier" to create LaTeX documents, by making it "point-and-click" and (somewhat) WYSIWYG.
Here are some that you can use online (no need to install anything and files are available online, but need to register):
We will use the first one, SageMathCloud, in our course, so you have to register for it and thus might as well use it. It is probably the best of the services anyway, and it can do a lot more than
just LaTeX. You should have received, by the first day of classes, an invitation to collaborate on a project that I've created for this course (Math 504 -- Summer 2016).
If you want to install LaTeX in your computer (so that you don't need an Internet connection), check here.
I might need to use some LaTeX symbols when writing in our online meetings, but it should be relatively easy to follow. I will also provide samples and templates that should make it much easier for
you to start.
A few resources:
Reading and Homework (Course Calendar)
Lecture 1: 06/02 from 7pm to 8pm
Reading: Course Info, Introduction.
Lecture 2: 06/06 from 2:30pm to 3:30pm
Reading: Sections 1.1-4.
Lecture 3: 06/09 from 7pm to 8pm
Reading: Sections 1.5 and 2.1-3.
Homework 1: 06/11 by 11:59pm
Section 1.1: Turn in: 3(d), 6(b), 7(b).
Extra Problems: 1, 3(a-c), 6(a), (c), 7(a), (c-d).
Section 1.2: Turn in: 2(b), 12(b).
Extra Problems: 2(a), 12(a), (c).
Section 1.3: Turn in: None.
Extra Problems: 2, 4, 6, 8.
Section 1.4: Turn in: 6(a), 7(a), 9.
Extra Problems: 2, 6(b), 7(b).
Lecture 4: 06/13 from 2:30pm to 3:30pm
Reading: Sections 3.1-4.
Homework 2: 06/14 by 11:59pm
Section 1.5: Turn in: 5, 9.
Extra Problems: 3, 4.
Section 2.1: Turn in: 6.
Extra Problems: 3, 5.
Section 2.2: Turn in: 2(b-c), 7.
Extra Problems: 2(a), (d), 5, 10.
Section 2.3: Turn in: 2(c), 12(a-b).
Extra Problems: 2(a-b), (d), 5, 6, 9, 12(c). (Also, take a look at the statements of 14 and 15.)
Lecture 5: 06/16 from 7pm to 8pm
Reading: Sections 3.5-6, 4.1-2.
Homework 3: 06/18 by 11:59pm
Section 3.1: Turn in: 6, 10.
Extra Problems: 2, 15, 16.
Section 3.2: Turn in: 4, 7.
Extra Problems: 2, 9, 12.
Section 3.3: Turn in: 10, 15.
Extra Problems: 2, 4, 18, 21.
Section 3.4: Turn in: 10, 16.
Extra Problems: 3, 8, 24.
Lecture 6: 06/20 from 2:30pm to 3:30pm
Catch up! No reading, just questions and answers.
Homework 4: 06/21 by 11:59pm
Section 3.5: Turn in: 8, 21.
Extra Problems: 9, 13, 17.
Section 3.6: Turn in: 10.
Extra Problems: 2, 7.
Section 4.1: Turn in: 9, 10.
Extra Problems: 3, 7.
Section 4.2: Turn in: 5, 8.
Extra Problems: 2, 3, 6(b).
Lecture 7: 06/23 from 7pm to 8pm
Reading: Sections 4.3-4, 4.6, 5.1.
Midterm: 06/26 by 11:59pm
Sections: Chapters 1, 2 and 3.
Lecture 8: 06/27 from 2:30pm to 3:30pm
Reading: Sections 5.2-3, 6.1-2.
Homework 5: 06/28 by 11:59pm
Section 4.3: Turn in: 14, 16.
Extra Problems: 2, 4, 9, 12, 21.
Section 4.4: Turn in: 6, 22.
Extra Problems: 2, 3, 9, 15.
Section 4.6: Turn in: 13, 20.
Extra Problems: 4, 8, 16.
Section 5.1: Turn in: 9(b), 17(b).
Extra Problems: 9(a), 11, 13, 17(a).
Lecture 9: 06/30 from 7pm to 8pm
Reading: 6.3-4.
Lecture 10: 07/01 (Friday! -- note the date change) from 3pm to 4pm
Catch up! No reading, just questions and answers.
Homework 6: 07/01 by 11:59pm
Section 5.2: Turn in: 8(b), 9(a).
Extra Problems: 3, 6, 11, 18.
Section 5.3: Turn in: 10, 12.
Extra Problems: 4, 6.
Section 6.1: Turn in: 9(b), 16.
Extra Problems: 4, 9(a).
Section 6.2: Turn in: 3, 6 (here you can use, without proving, the Triangle Inequality: if $a, b \in \mathbb{R}$, then $|a+b| \leq |a|+|b|$).
Extra Problems: 5, 10.
Homework 7: Not to be turned in!
Section 6.3: Turn in: None.
Extra Problems: 2, 5, 9, 12, 16.
Section 6.4: Turn in: None.
Extra Problems: 4, 6, 7, 19.
Final: 07/06 by 11:59pm
Chapters 4, 5 and 6. (No Chapter 1, 2 or 3.)
1.8: Sampling (1 of 2)
Learning Objectives
• For an observational study, critique the sampling plan. Recognize implications and limitations of the plan.
We now focus on observational studies and how to collect reliable and accurate data for an observational study.
We know that an observational study can answer questions about a population. But populations are generally large groups, so we cannot gather data from every individual in the population. Instead, we
select a sample and gather data from the sample. We use the data from the sample to make statements about the population.
Here are two examples:
• A political scientist wants to know what percentage of college students consider themselves conservatives. The population is college students. It would be too time consuming and expensive to poll
every college student, so the political scientist selects a sample of college students. Of course, the sample must be carefully selected to represent the political perspectives that are present
in the population.
• A government agency plans to test airbags from Honda to determine if the airbags work properly. Testing an airbag means it has to be inflated and punctured, which ruins the airbag, so the
researchers certainly cannot test every airbag. Instead, they test a sample of airbags and draw a conclusion about the quality of airbags from Honda.
Important Point
Our goal is to use a sample to make valid conclusions about a population. Therefore, the sample must be representative of the population. A representative sample is a subset of the population that
reflects the characteristics of the population.
A sampling plan describes exactly how we will choose the sample. A sampling plan is biased if it systematically favors certain outcomes.
In our discussion of sampling plans, we focus on surveys. The next example is a famous one that illustrates how biased sampling in a survey leads to misleading conclusions about the population.
The 1936 Presidential Election
In 1936, Democrat Franklin Roosevelt and Republican Alf Landon were running for president. Before the election, the magazine Literary Digest sent a survey to 10 million Americans to determine how
they would vote. More than 2 million people responded to the poll; 60% supported Landon. The magazine published the findings and predicted that Landon would win the election. However, Roosevelt
defeated Landon in one of the largest landslide presidential elections ever.
What happened?
The magazine used a biased sampling plan. They selected the sample using magazine subscriptions, lists of registered car owners, and telephone directories. The sample was not representative of the
American public. In the 1930s, Democrats were much less likely to own a car or have a telephone. The sample therefore systematically underrepresented Democrats. The poll results did not represent the
way people in the general population voted.
Before we discuss a method for avoiding bias, let’s look at some examples of common survey plans that produce unreliable and potentially biased results.
How to Sample Badly
Online polls: The American Family Association (AFA) is a conservative Christian group that opposes same-sex marriage. In 2004, the AFA began a campaign in support of a constitutional amendment to
define marriage as strictly between a man and a woman. The group posted a poll on its website asking AFA members to voice their opinion about same-sex marriage. The AFA planned to forward the results
to Congress as evidence of America’s opposition to same-sex marriage. Almost 850,000 people responded to the poll. In the poll, 60% favored legalizing same-sex marriage.
What happened? Against the wishes of the AFA, the link to the poll appeared in blogs, social-networking sites, and a variety of email lists connected to gay/lesbian/bisexual groups. The AFA claimed
that gay rights groups had skewed its poll. Of course, the results of the poll would have been skewed in the other direction had only AFA members been allowed to participate.
This is an example of a voluntary response sample. The people in a voluntary response sample are self-selected, not chosen. For this reason, a voluntary response sample is biased because only people
with strong opinions make the effort to participate.
Mall surveys: Have you ever noticed someone surveying people at a mall? People shopping at a mall are more likely to be teenagers, retired people, or people who have more money than the typical
American. In addition, unless interviewers are carefully trained, they tend to interview people with whom they are comfortable talking. For these reasons, mall surveys frequently overrepresent the
opinions of white middle-class or retired people. Mall surveys are an example of a convenience sample.
How to Eliminate Bias in Sampling
In a voluntary response sample, people choose whether to respond. In a convenience sample, the interviewer chooses who will be part of the sample. In both cases, personal choice produces a biased
sample. Random sampling is the best way to eliminate bias. Collecting a random sample is like pulling names from a hat (assuming every individual in the population has a name in the hat!). In a
simple random sample everyone in the population has an equal chance of being chosen.
Reputable polling firms use techniques that are more complicated than pulling names out of a hat. But the goal is the same: eliminate bias by using random chance to decide who is in the sample.
Random samples will eliminate bias, even bias that may be hidden or unknown. The next three activities will reveal a bias that most of us have but don’t know that we have! We will see how random
sampling avoids this bias.
Random Samples
Instructions: Use the simulation below for this activity. You will see 60 circles. This is the “population.” Our goal is to estimate the average diameter of these 60 circles by choosing a sample.
1. Choose a sample of five circles that look representative of the population of all 60 circles. Mark your five circles by clicking on each of them. They will turn orange. Record the average
diameter for the five circles. (Make sure you have five orange circles before you record the average diameter.)
2. Reset the simulation.
3. Choose another five circles and record the average diameter for this sample of circles. You can reuse a circle, but the sample should not have all the same circles. You now have the averages for
two samples.
4. Reset and repeat for a total of 10 samples. Record the average diameter for each sample.
Now we estimate the average diameter of the 60 circles using random samples.
Instructions: Use the simulation below for this activity. You will again see the same 60 circles. As before, this is the "population." Our goal is to estimate the average diameter of these 60 circles
by choosing a random sample.
1. Click on the "Generate sample" button to get a random sample of five circles. The simulation randomly chooses five circles. Record the average diameter for
the random sample.
2. Reset the simulation using the reset button.
3. Click on the “Generate sample” button to get another random sample. Record the average diameter for this random sample. You now have the averages for two samples.
4. Reset and repeat for a total of 10 samples. Record the average diameter for each sample.
Random selection also guarantees that the sample results do not change haphazardly from sample to sample. When we use random selection, the variability we see in sample results is due to chance. The
results obey the mathematical laws of probability. We looked at this idea briefly in the Big Picture of Statistics. Probability is the machinery for drawing conclusions about a population on the
basis of samples. To use this machinery, the sample must be chosen by random chance.
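The circle activities above can also be simulated in a few lines of code. This is a minimal sketch, not the simulation from this page: the population of diameters and all variable names are made up for illustration. The point carries over, though: a simple random sample gives every circle an equal chance of selection, so the sample means cluster around the true population mean.

```python
import random

random.seed(42)

# Hypothetical population: 60 circle diameters (units arbitrary).
# Large circles are rare, mimicking the visual trap in the activity:
# eyes are drawn to big circles, so judgment samples overestimate the mean.
population = [random.choice([1, 1, 1, 2, 2, 3, 5, 8]) for _ in range(60)]
true_mean = sum(population) / len(population)

# Ten simple random samples of five circles each.
sample_means = []
for _ in range(10):
    sample = random.sample(population, 5)  # every circle equally likely
    sample_means.append(sum(sample) / len(sample))

print(f"population mean: {true_mean:.2f}")
print(f"mean of the 10 sample means: {sum(sample_means) / 10:.2f}")
```

Note that random.sample draws without replacement, which matches pulling names from a hat: no circle can appear twice in one sample.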
Contributors and Attributions
CC licensed content, Shared previously
The AstroStat Slog
X-ray telescopes generally work by reflecting photons at grazing incidence. As you can imagine, even small imperfections in the mirror polishing will show up as huge roadbumps to the incoming
photons, and the higher their energy, the easier it is for them to scatter off their prescribed path. So X-ray telescopes tend to have sharp peaks and fat tails compared to the much more well-behaved
normal-incidence telescopes, whose PSFs (Point Spread Functions) can be better approximated as Gaussians.
X-ray telescopes usually also have gratings that can be inserted into the light path, so that photons of different energies get dispersed by different angles, and whose actual energies can then be
inferred accurately by measuring how far away on the detector they ended up. The accuracy of the inference is usually limited by the width of the PSF. Thus, a major contributor to the LRF (Line
Response Function) is the aforementioned scattering.
A correct accounting of the spread of photons of course requires a full-fledged response matrix (RMF), but as it turns out, the line profiles can be fairly well approximated with Beta profiles, which
are simply Lorentzians modified by taking them to the power β –
where B(1/2,β-1/2) is the Beta function, and N is a normalization constant defined such that integrating the Beta profile over the real line gives the area under the curve as N. The parameter β
controls the sharpness of the function — the higher the β, the peakier it gets, and the more of it that gets pushed into the wings. Chandra LRFs are usually well-modeled with β~2.5, and XMM/RGS
appears to require Lorentzians, β~1.
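As a rough sketch of this profile (the function names, the numerical-integration helper, and the optional center x0 and width sigma parameters are my own additions, not from the post), one can evaluate a Beta profile and check its normalization with only the Python standard library, computing B(1/2, β−1/2) from gamma functions:

```python
import math

def beta_profile(x, beta, x0=0.0, sigma=1.0, N=1.0):
    """Beta profile: a Lorentzian raised to the power beta.

    Normalized so that integrating over the real line gives N.
    B(1/2, beta - 1/2) is computed via Gamma(a)Gamma(b)/Gamma(a+b).
    """
    B = math.gamma(0.5) * math.gamma(beta - 0.5) / math.gamma(beta)
    return (N / (sigma * B)) * (1.0 + ((x - x0) / sigma) ** 2) ** (-beta)

def integrate(f, lo, hi, n=200_000):
    """Plain trapezoid rule, good enough to check the normalization."""
    h = (hi - lo) / n
    total = 0.5 * (f(lo) + f(hi))
    for i in range(1, n):
        total += f(lo + i * h)
    return total * h

# The area under the curve should come out very close to N = 1.
area = integrate(lambda x: beta_profile(x, beta=2.5), -200, 200)
print(f"area under the beta=2.5 profile: {area:.4f}")

# Higher beta: sharper core; lower beta: fatter wings.
print(beta_profile(0.0, beta=2.5), beta_profile(0.0, beta=1.0))
print(beta_profile(10.0, beta=2.5), beta_profile(10.0, beta=1.0))
```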
The form of the Lorentzian may also be familiar to people as the Cauchy Distribution, which you get for example when the ratio is taken of two quantities distributed as zero-centered Gaussians. Note
that the mean and variance are undefined for that distribution.
The APL Programming Language Source Code
Software Gems: The Computer History Museum Historical Source Code Series
Thousands of programming languages were invented in the first 50 years of the age of computing. Many of them were similar, and many followed a traditional, evolutionary path from their predecessors.
But some revolutionary languages had a slant that differentiated them from their more general-purpose brethren. LISP was for list processing. SNOBOL was for string manipulation. SIMSCRIPT was for
simulation. And APL was for mathematics, with an emphasis on array processing.
What eventually became APL was first invented by Harvard professor Kenneth E. Iverson in 1957 as a mathematical notation, not as a computer programming language. Although other matrix-oriented symbol
systems existed, including the concise tensor notation invented by Einstein, they were oriented more towards mathematical analysis and less towards synthesis of algorithms. Iverson, who was a student
of Howard Aiken’s, taught what became known as “Iverson Notation” to his Harvard students to explain algorithms.
Iverson was hired by IBM in 1960 to work with Adin Falkoff and others on his notation. In his now famous 1962 book “A Programming Language” ^1, he says the notation is for the description of
“procedures…called algorithms or programs”, and that it is a language because it “exhibits considerable syntactic structure”. But at that point it was just a notation for people to read, not a
language for programming computers. The book gives many examples of its use both as a descriptive tool (such as for documenting the definition of computer instruction sets) and as a means for
expressing general algorithms (such as for sorting and searching). Anticipating resistance to something so novel, he says in the preface, “It is the central thesis of this book that the descriptive
and analytical power of an adequate programming language amply repays the considerable effort required for its mastery.” Perhaps he was warning that mastering the language wasn’t trivial. Perhaps he
was also signaling that, in his view, other notational languages were less than “adequate”.
The team, of course, soon saw that the notation could be turned into a language for programming computers. That language, which was called APL starting in 1966, emphasized array manipulation and used
unconventional symbols. It was like no other computer program language that had been invented.
APL became popular when IBM introduced “APL\360” for their System/360 mainframe computer. Unlike most other languages at the time, APL\360 was also a complete interactive programming environment. The
programmer, sitting at an electromechanical typewriter linked to a timeshared computer, could type APL statements and get an immediate response. Programs could be defined, debugged, run, and saved on
a computer that was simultaneously being used by dozens of other people.
Written entirely in 360 assembly language, this version of APL took control of the whole machine. It implemented a complete timesharing operating system in addition to a high-level language.
With the permission of IBM, the Computer History Museum is pleased to make available the source code to the 1969-1972 “XM6” version of APL for the System/360 for non-commercial use.
The text file contains 37,567 lines, which includes code, macros, and global definitions. The 90 individual files are separated by "./ ADD" commands. To access this material, you must agree to the
terms of the license displayed here, which permits only non-commercial use and does not give you the right to license it to third parties by posting copies elsewhere on the web.
Jürgen Winkelmann at ETH Zürich has done an amazing job of turning this source code into a runnable system. For more information, see MVT for APL Version 2.00.
Creating the APL Programming Language
Iverson’s book “A Programming Language” ^1 uses a graphical notation that would have been difficult to directly use as a programming language for computers. He considered it an extension of matrix
algebra, and used common mathematical typographic conventions like subscripts, superscripts, and distinctions based on the weight or font of characters. Here, for example, is a program for sorting
To linearize the notation for use as a computer programming language typed at a keyboard, the APL implementers certainly had to give up the use of labeled arrows for control transfers. But one
feature that they were able to retain, to some extent, was the use of special symbols for primitive functions, as illustrated in this program that creates Huffman codes:
APL uses symbols that are closer to standard mathematics than programming. For example, the symbol for division is ÷, not /. To support the unconventional symbols, APL\360 used a custom-designed
keyboard with special symbols in the upper case.
Even so, there were more special characters than could fit on the keyboard, so some were typed by overstriking two characters. For example, the “grade up” character ⍋, a primitive operator used for
sorting, was created by typing ∆ (shift H), then backspace, then ∣ (shift M). There was no room left for both upper- and lower-case letters, so APL supported only capital letters.
For printing programs, Iverson and Falkoff got IBM to design a special type ball for their 1050 and 2741 terminals, which used the IBM Selectric typewriter mechanism.
Now programs could be both typed in and printed. Here, for example, is the printed version a program from the APL Language manual ^2 that computes the mathematical determinant of a matrix:
A Taste of APL
APL is a concise high-level programming language that differs from most others developed in the 1960s in several respects:
Order of evaluation: Expressions in APL are evaluated right-to-left, and there is no hierarchy of function precedence. For example, typing the expression
causes the computer to immediately type the resulting value
The value is not, as in many other languages that have operator precedence, 11. Of course, parentheses can be used to group a subexpression to change the evaluation order. The general rule is that
the right argument of any function is the value of the expression to its right.
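The right-to-left rule is easy to mimic. Here is a small sketch (in Python, since no APL interpreter is assumed; the original article showed the expression as an image) of an evaluator with no operator precedence. The sample expression 3×4−1 is illustrative: strict right-to-left evaluation gives 9 where precedence rules would give 11.

```python
# Minimal sketch (not the APL implementation): evaluate a flat arithmetic
# expression strictly right to left, with no operator precedence.
def eval_apl_style(tokens):
    """tokens: flat list like [3, '*', 4, '-', 1] (no parentheses)."""
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    # The right argument of any function is the value of everything
    # to its right, so fold from the right end of the token list.
    value = tokens[-1]
    for i in range(len(tokens) - 2, -1, -2):
        value = ops[tokens[i]](tokens[i - 1], value)
    return value

# 3*4-1 is 3*(4-1) = 9 under the APL rule, not (3*4)-1 = 11.
print(eval_apl_style([3, '*', 4, '-', 1]))  # 9
```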
Automatic creation of vectors and arrays: A higher-dimensional structure is automatically created by evaluating an expression that returns it, and scalars can be freely mixed. For example,
A ← 2 + 1 2 3
creates the vector 1 2 3, adds the scalar 2 to it, and creates the variable A to hold the resulting vector, whose value is 3 4 5.
Variables are never declared; they are created automatically and assume the size and shape of whatever expression is assigned to them.
A plethora of primitives: APL has a rich set of built-in functions (and “operators” that are applied to functions to yield different functions) that operate on scalars, vectors, arrays, even
higher-dimensional objects, and combinations of them. For example, the expression to sum the numbers in the vector “A” created above is simply
where / is the “reduction” operator that causes the function to the left to be applied successively to all the elements of the operand to the right. The expression to compute the average of the
numbers in A also uses the primitive function ρ to determine how many elements there are in A:
(+/A) ÷ ρA
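For readers without an APL system, these two expressions can be approximated in Python with NumPy, whose whole-array operations echo this style (using the vector A = 3 4 5 created above):

```python
import numpy as np

A = np.array([3, 4, 5])    # the APL vector A from the earlier example

total = A.sum()            # +/A      : plus-reduction over the vector
count = A.size             # ρA       : number of elements
average = total / count    # (+/A)÷ρA : the average

print(total, average)      # 12 4.0
```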
Here are some tables from the 1970 “APL\360 User’s Manual” ^3 that give a flavor of the power and sophistication of the built-in APL functions and operators.
APL encourages you to think differently about programming, and to use temporary high-dimensional data structures as intermediate values that are then reduced using the powerful primitives. A famous
example is the following short but complete program to compute all the prime numbers up to R.
Here is how this expression is evaluated:
subexpression meaning value if R is 6
⍳R Generate a vector of numbers from 1 to R. 1 2 3 4 5 6
T←1↓ Drop the first element of the vector and assign the rest to the temporary vector T. 2 3 4 5 6
T∘.×T Create the multiplication outer product: a table that holds the result of multiplying each element of T by each element of T.
T∊ Use the “set membership” operator to find which elements of T are in the table. 0 0 1 0 1
~ Negate the result to identify which elements of T are not in the table. These are the integers which do not have any multiples in the table. 1 1 0 1 0
( )/T Select the elements of T which we have identified. These are all the primes less than R. 2 3 5
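A rough NumPy transcription of the same pipeline, step for step, may make the outer-product trick clearer (the function name is mine, not from the article):

```python
import numpy as np

def primes_upto(R):
    T = np.arange(1, R + 1)[1:]        # ⍳R, then 1↓ : the vector 2..R
    table = np.multiply.outer(T, T)    # T∘.×T : multiplication table
    composite = np.isin(T, table)      # T∊    : elements found in the table
    return T[~composite]               # (~…)/T : keep the non-composites

print(primes_upto(6))  # [2 3 5]
```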
Note that there are no loops in this program. The power of APL expressions means that conditional branches and loops are required far less often than in more traditional programming languages.
APL operators can be used in easy ways for all sorts of computations that would usually require loops. For example, an expression that computes the number of elements of the vector X that are greater
than 100 is
It works because X>100 returns a bit vector of 0’s and 1’s showing which elements of X are greater than 100, and +/ adds up all the bits in that vector.
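The same idiom survives almost unchanged in NumPy, where a comparison likewise yields a 0/1 vector that can be summed (the data here is illustrative, not from the text):

```python
import numpy as np

X = np.array([40, 250, 101, 99, 700])  # illustrative data
count = (X > 100).sum()                # +/X>100 : sum the boolean hits
print(count)  # 3
```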
But conditional execution and loops are, of course, sometimes needed. In the light of later developments in structured programming, APL’s only primitive for control transfer, the “GO TO LINE x”
statement →, is particularly weak. Here is an example of a function that computes the greatest common divisor of its two arguments. The last statement creates a loop by branching to the beginning. In
line 2, conditional transfer of control to line 0 causes the function to exit and return the value last assigned to G.
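The APL function itself was shown as an image; as a sketch, assuming the standard remainder-based Euclidean algorithm it describes, the same loop that APL builds from a → branch looks like this in Python:

```python
def gcd(a, b):
    # Loop until the remainder is zero -- the job done in APL by
    # branching back to line 1, and exiting by branching to line 0.
    while b != 0:
        a, b = b, a % b
    return a  # G, the value last assigned

print(gcd(36, 24))  # 12
```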
To learn more about the 1960’s APL language, see the “APL Language” reference manual ^2 and Paul Berry’s 1969 “APL\360 Primer” ^4.
The language has of course evolved over the years, and more recent versions include control structures such as IF-THEN-ELSE.
How APL was Implemented
The first computer implementation of APL notation was a batch-oriented language interpreter written in FORTRAN in 1965 for the IBM 7090 mainframe computer, by Larry Breed at the IBM Research Center
in Yorktown Heights NY and Philip Abrams, then a graduate student at Stanford University.
The first interactive version was written soon after for the 7093 (an experimental 7090 with virtual memory) by Larry Breed and Roger Moore. It ran under the TSM timesharing system and was
whimsically called “IVSYS”, which rhymes with “IBSYS”, the name for the standard 7090 operating system. In a 2012 email Breed says,
IVSYS provided login, logout, immediate execution and function definition; it provided workspaces, both active and stored. Implementation of these was rudimentary; mostly we used whatever the TSM
project offered us for login/logout/saving files. We had only a few weeks to use the 7093 before it was decommissioned and Roger and I started planning for a standalone system on System/360. In those
weeks, Ken and his group saw for the first time what executable APL would be like.
Another implementation of a subset of the language was done in 1967 for the IBM 1130 minicomputer.
The first implementation of APL to get widespread use outside of IBM was for the IBM System/360. Called "APL\360", it went into service first within IBM in November 1966. (The notation "APL\360", since the backslash was the APL "expansion" operator, also had a hidden meaning: "APL expands the 360".)
Breed says of the time just before,
This period, early 1966, was the transitional time from Iverson Notation to APL. (Indeed, Adin [Falkoff] came up with “APL” in Spring ’66.) Refinement and extension of the language and the
environment continued for many years. There was next to no code added to make a commercial version, just paperwork.
By August 1968 APL\360 was available to IBM customers as an unsupported (“Type III”) program in IBM’s “Contributed Program Library” ^5. The principal implementers were Larry Breed, Dick Lathwell, and
Roger Moore; others who had contributed were Adin Falkoff and Luther Woodrum.
Because of the dynamic nature of APL variables, APL\360 was implemented as an interpreter, not as a compiler that generated machine code. Programs were stored in an internal form called a
“codestring” that directly corresponded to what the user had typed. The interpreter would then examine the codestring as the program executed, and dynamically allocate and reconfigure variables as
expressions were evaluated.
The first versions of APL\360 took control of the entire machine. It was thus a combination operating system, file system, timesharing monitor, command interpreter, and programming language. Given
the limited main memory, user workspaces were swapped out to drum or disk as needed. Performance was impressive, which Larry Breed attributes, in his clear and succinct description of the
implementation ^6, to the ability to tailor the operating system to the requirements of the language.
APL\360 was a conversational language that provided fast response and efficient execution for as many as 50 simultaneous users. Each user had an “active workspace” that held programs, variables, and
the state of suspended program execution. System commands like “)LOAD”, “)SAVE”, and “)COPY” maintained the user’s library of stored workspaces. Other system commands controlled language features;
for example, with “)ORIGIN” the programmer could control whether vectors and arrays are numbered starting with 0 or 1.
APL was the first introduction to interactive timesharing for many in the generation of programmers who had suffered through batch programming with punched cards.
Applications of APL
Even before it was a computer programming language, Iverson Notation was useful as a language for documenting algorithms for people. The classic example is the formal definition of the
instruction-set architecture of the new IBM System/360 computer, which was published in an article in the IBM Systems Journal by Adin Falkoff, Ken Iverson, and Ed Sussenguth in 1965 ^7.
The description, which is formal rather than verbal, is accomplished by a set of programs, interacting through common variables, used in conjunction with auxiliary tables… Although the formal
description is complete and self-contained, text is provided as an aid to initial study.
The notation used the graphical style for control transfers that was in Iverson’s book. Here, for example, is the description of a memory access operation.
It was the transition of APL from a notation for publication into an interactive computer programming language that made it flourish. When the APL\360 implementation was available, IBM and others
stimulated use by producing diverse applications such as these:
• Starmap: A set of APL functions to calculate and plot the positions of the stars and planets. ^8 ^9 It was written in 1973 by Paul Berry of IBM and John Thorstensen, then an astronomy student at
Bryn Mawr college, now Professor of Physics and Astronomy at Dartmouth College. It uses classical solutions to Kepler’s equations for a particular date and time and a series of rotations of
coordinates to show where the planets and the stars would appear in the bowl of the sky.
• IBGS: Interactive Business Game Simulation: “A general computer management simulation involving decision making and planning in the functional areas of production, marketing and finance.”
• Zeros and Integrals in APL: “Using both classical methods such as Newton’s and Muller’s and recently developed methods such as Jenkins and Traub’s, it finds real zeros of a real function, real and complex zeros of a polynomial with real or complex coefficients, and complex zeros of a complex function.”
• Graphpak – Interactive Graphics Package for APL\360: “…capabilities which range from graphics interface support at the lowest level to several application areas at higher levels…A plotting component…linear or logarithmic…curve-fitting… A descriptive geometry component allows definition, scaling, magnification, translation, rotation, and projected display of three-dimensional…”
• Graphs and Histograms in APL: “produces curves and barcharts at a typewriter terminal”.
• APL Coordinate Geometry System: “solves coordinate geometry problems interactively at a terminal…for use by surveyors, civil engineers, urban planners…”
• APL/PDTS – Programming Development Tracking System: “…to assist managers and planners in monitoring performance against plan on programming development projects.”
• MINIPERT: “A Critical Path Method (CPM) system for Project Management”
• APL Econometric Planning Language: “The practicing economist, business forecaster or teacher is provided with easy-to-use tools for interactive model building and model solving.”
• APL Financial Planning System: “lets the financial analyst and planner design reports, specify calculation statements, enter and change data, and obtain printed reports with immediate…”
• APL Text Editor and Composer: “This program is designed to process text interactively at a terminal…Functions are included for entering, revising, composing, printing, and storing text…for use by
secretaries, scientists, engineers, administrators or any others who produce papers, letters, reports or specifications.”
Many of these applications emphasized interactivity, which provided a huge productivity increase compared to the batch-job processing more typical at the time. In addition, APL allowed applications
to be developed much more quickly. In a 2012 email, Larry Breed noted,
Across all fields, the speed at which APL programs can be written makes it valuable for modeling and prototyping . … One example: Around 1973, Continental Can needed an inventory system for its 21
manufacturing plants. Their team of FORTRAN programmers had worked for a year, with no success in sight. One STSC salesman, in one weekend, built a usable working model in APL Plus.
The areas in which APL had the greatest penetration were in scientific, actuarial, statistical, and financial applications. For details about the progression of APL in its first 25 years, see the
special 1991 issue of the IBM System Journal ^10 with 12 papers and one essay on the subject.
APL Praise and Criticism
APL was not originally designed as a programming language. As Iverson said,
The initial motive for developing APL was to provide a tool for writing and teaching. Although APL has been exploited mostly in commercial programming, I continue to believe that its most important
use remains to be exploited: as a simple, precise, executable notation for the teaching of a wide range of subjects.
With so many terse and unusual symbols, APL computer programs, like the mathematical notation that inspired them, have a conciseness and elegance many find appealing. APL attracts fanatic adherents.
Alan Perlis (the first recipient of the ACM’s Turing Award, in 1966) was one:
The sweep of the eye across a single sentence can expose an intricate, ingenious and beautiful interplay of operation and control that in other programming languages is observable only in several
pages of text. One begins to appreciate the emergence and significance of style. ^12
Many find the freedom of expression in APL liberating.
I used to describe [Pascal] as a ‘fascist programming language’, because it is dictatorially rigid. …If Pascal is fascist, APL is anarchist. ^13
But APL programs are often cryptic and hard to decode. Some have joked that it is a “write-only language” because even the author of a program might have trouble understanding it later. It inspires
programming trickery. The challenge of writing an APL “one-liner” to implement a complete complex algorithm is hard to resist. Here, for example, are two different APL one-liners that implement
versions of John Conway’s “Game of Life“:
life←{↑1 ⍵∨.∧3 4=+/,¯1 0 1∘.⊖¯1 0 1∘.⌽⊂⍵}
Not for the faint-hearted, clearly. Dutch computer scientist Edsger Dijkstra said,
APL is a mistake, carried through to perfection. It is the language of the future for the programming techniques of the past: it creates a new generation of coding bums. ^14
But fans of APL would say that cryptic APL coding is a bad programming style that can be an affliction with any language. APL provides a richer palette for expressing algorithms, the argument goes,
so you can solve harder problems faster and with less irrelevant syntactic clutter.
Whatever your view, APL and the languages it inspired, such as APL2 and J, are still an active part of the diverse programming language universe.
A Short Biography of Ken Iverson
Kenneth Eugene Iverson was born on December 17, 1920 on a farm near Camrose, Alberta, Canada. He was educated in rural one-room schools until the end of 9th grade, when he dropped out of school
because it was the height of the Depression and there was work to do on the family farm. He later said the only purpose of continuing his schooling would have been to become a schoolteacher, and that
was a profession he decidedly did not want. During the long winter months he studied calculus on his own.
He was drafted in 1942, and during his service he took enough correspondence courses to almost complete high school. After the military service he earned a B.A. in both mathematics and physics from
Queen’s University in Kingston Ontario, and then an M.A. in physics from Harvard University. In 1954 he completed a PhD under computer pioneer Howard Aiken, with a thesis titled “Machine Solutions of
Linear Differential Equations: Applications to a Dynamic Economic Model”.
After completing his doctorate, Iverson joined the Harvard faculty to teach in Aiken’s new automatic data processing program. He was there for one year as an Instructor, and for five years as an
Assistant Professor. He became increasingly frustrated with the inadequacy of conventional mathematical notation for expressing algorithms, so he began to invent his own.
In 1960 Iverson joined the new IBM Research Center in Yorktown Heights, New York, on the advice of Frederick Brooks, who had been one of his teaching fellows at Harvard and was now at IBM. The two
collaborated on the continuing development of the new notation. In 1962 Ken published the now-classic book “A Programming Language” ^1, the title of which gave the name APL to the notation which had
up until then been informally called “Iverson’s notation”.
Iverson continued to work on the development of APL throughout his tenure at IBM. In 1980 he left IBM and returned to Canada to work for I.P. Sharp Associates, which had established an APL-based
timesharing service.
In 1987 he “retired from paid employment” and turned his full attention to the development of a more modern dialect of APL. APL was successfully being used for commercial purposes, but Iverson wanted
to develop a new simple executable notation more suitable for teaching, which would be available at low cost. The first implementation of this language, called J, was announced at the APL90 Users’
Iverson’s ability to create such languages came from his “sheer enjoyment of language and words,” recalls his daughter Janet Cramer. “He read dictionaries like people read novels.” Iverson thought it
was important that language, both English and mathematics, communicate clearly and concisely.
With collaborators that included his son Eric, Iverson continued to work on the development of J, and he continued to publish prolifically. On Saturday, October 16, 2004 he suffered a stroke –while
working on a J tutorial — and died three days later on October 19, at the age of 83.
There are many stories about Ken Iverson. Here are a few:
Ken didn’t get tenure at Harvard. He did his five years as an assistant professor and the Faculty decided not to put him up for promotion. I asked him what went wrong and he said, “Well, the Dean
called me in and said, ‘the trouble is, you haven’t published anything but the one little book’”. The one little book later got [him] the Turing Award. I think that is a comment on the conventional
mindset of promotion procedures rather than a comment on Ken; it’s a comment on academic procedure and on Harvard.
— Fred Brooks, A Celebration of Kenneth Iverson, 2004-11-30
In an early talk Ken was explaining the advantages of tolerant comparison. A member of the audience asked incredulously, “Surely you don’t mean that when A=B and B=C, A may not equal C?” Without
skipping a beat, Ken replied, “Any carpenter knows that!” and went on to the next question.
— Paul Berry
In a social conversation with Ken, I said, “You know, Ken, you are my favorite language designer and Don Knuth is my favorite programmer.” And Ken said immediately, “What’s wrong with my programming?”
— Joey Tuttle, A Celebration of Kenneth Iverson, 2004-11-30
In 1973 or 1974 Ken and I gave a talk at Kodak in Rochester to a group of 40 to 50 programmers who were required to work in PL/I. In the question period a senior staff member said, “If I understand
what you people are saying, you are suggesting that we should adopt a new way of thinking.” And Ken jumped up out of his chair and said, “Yes! That’s exactly what I am saying!”
— Joey Tuttle, A Celebration of Kenneth Iverson, 2004-11-30
Thanks to Michael Karasick, Yvonne Perkins, Steve Selbst, and Ken Edwards of IBM for ending my ten-year odyssey to get permission to release the APL source code. Thanks to Curtis Jones, Larry Breed,
Paul Berry, and Roy Sykes for their comments on an early draft of this article.
— Len Shustek
1. L. M. Breed and R. H. Lathwell, “The Implementation of APL\360,” in ACM Symposium on Experimental Systems for Interactive Applied Mathematics, 1967.
2. A. D. Falkoff, K. E. Iverson and E. H. Sussenguth, “A Formal Description of SYSTEM/360,” IBM Systems Journal, vol. 3, no. 3, pp. 198-261, 1964.
3. P. C. Berry and J. R. Thorstensen, “Starmap,” 1978.
4. E. W. Dijkstra, “How Do We Tell Truths That Might Hurt?,” SIGPLAN Notices, vol. 17, no. 5, May 1982.
5. A. D. Falkoff and K. E. Iverson, “The Design of APL,” IBM Journal of Research and Development, vol. 17, no. 4, 1973.
6. A. D. Falkoff and K. E. Iverson, “The Evolution of APL,” SIGPLAN Notices, vol. 13, no. 8, pp. 45-57, August 1978.
7. “The Origins of APL – 1974“; a wonderful interview with the original developers of APL.
Historical Source Code Releases
Black holes evaporate much faster than Hawking first computed
I compute that black holes have much shorter evaporation times than Hawking et al. first computed. They computed surface vibrations and neglected thickness vibrations due to geometrodynamical field zero point vacuum fluctuations.
On Apr 9, 2014, at 5:02 PM, Paul Zielinski wrote:
On 4/9/2014 4:42 PM, JACK SARFATTI wrote:
According to Einstein’s classical geometrodynamics, our future dark energy generated cosmological horizon is as real, as actualized as the cosmic blackbody radiation we measure in WMAP,
Planck etc.
But doesn't its location depend on the position of the observer? How "real" is that?
Irrelevant, red herring.
Alice has to be very far away from Bob for their respective de Sitter horizons not to have enormous overlap.
We all have same future horizon here on Earth to incredible accuracy.
I assume by "dark energy generated" you simply mean that the FRWL metric expansion is due to Λ, and Λ registers the presence of dark energy.
What else? Obviously.
We have actually measured advanced back-from-the-future Hawking radiation from our future horizon. It’s the anti-gravitating dark energy Einstein cosmological “constant” Λ accelerating the expansion of space.
OK so the recession of our future horizon produces Hawking-like radiation due to the acceleration of our frame of reference
wrt the horizon?
No, static LNIF hovering observers have huge proper accelerations at Lp from the horizon with redshifted Unruh temperature T at us
kBT ~ hc/(Lp^1/2 A^1/4)
use black body law
energy density ~ T^4
to get hc/(A Lp^2)
The static future metric is to good approximation
g00 = (1 - r^2/A)
we are at r = 0
future horizon is g00 = 0
imagine a static LNIF hovering observer at r = A^1/2 - Lp
his proper radial acceleration hovering within a Planck scale of the horizon is
g(r) ~ c^2 (1 - r^2/A)^-1/2 r/A, evaluated at r = A^1/2 - Lp:
= c^2 (1 - (A^1/2 - Lp)^2/A)^-1/2 (A^1/2 - Lp)/A
= c^2 (1 - (1 - 2Lp/A^1/2 + Lp^2/A))^-1/2 (A^-1/2 - Lp/A)
= c^2 (2Lp/A^1/2 - Lp^2/A)^-1/2 (A^-1/2 - Lp/A)
~ c^2 (A^1/4/Lp^1/2) A^-1/2 (1 - Lp/A^1/2)   (dropping O(1) factors)
~ c^2/(Lp^1/2 A^1/4)
f(emit) = c/(Lp^1/2 A^1/4)
1 + z = (1 - (A^1/2 - Lp)^2/A)^-1/2 = A^1/4/Lp^1/2
f(obs) = f(emit)/(1 + z) = (Lp^1/2/A^1/4) c/(Lp^1/2 A^1/4) = c/A^1/2
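A quick numerical check of the acceleration estimate above, with illustrative values (A here is only a stand-in scale; the physical value is fixed by the measured Λ). To dodge catastrophic cancellation in 1 − r²/A at r = A^1/2 − Lp, the algebraically equivalent form 2Lp/A^1/2 − Lp²/A is evaluated instead:

```python
from math import sqrt

c = 3.0e8      # m/s
Lp = 1.6e-35   # Planck length, m
A = 1.0e52     # illustrative horizon area scale, m^2 (assumed value)

r = sqrt(A) - Lp
g00 = 2 * Lp / sqrt(A) - Lp**2 / A      # = 1 - r^2/A, without cancellation
g_exact = c**2 * g00**-0.5 * r / A      # full expression for g(r)
g_approx = c**2 / (sqrt(Lp) * A**0.25)  # the order-of-magnitude estimate

# They agree up to the O(1) factor sqrt(2) that the estimate drops.
print(g_approx / g_exact)  # ~1.414
```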
OK this is the standard low energy Hawking radiation formula from surface horizon modes
However, there is a second high energy quantum thickness radial mode
f'(emit) ~ c/Lp
f’(obs) = (Lp^1/2/A^1/4)c/Lp = c/(Lp^1/2A^1/4)
This advanced Wheeler-Feynman de Sitter blackbody radiation is probably gravity waves not electromagnetic waves.
You seem to be drawing a direct physical analogy between cosmological horizons and black hole horizons.
Gibbons and Hawking did so in 1977; I sent their paper several times.
This requires the anti-Feynman contour for advanced radiation in quantum field theory.
i.e. mirror image of this
so that w = + 1/3 blackbody advanced radiation anti-gravitates
Its energy density is ~ hc/(Lp^2 A)
A = area of future horizon where the future light cone of the detector intersects it.
Quantum Entanglement is Holistic?
*1. What is a Hilbert Space? :
In this blog, I aim to develop a comprehensive understanding of hilbert spaces cutting through the mathematical jargon. — Gnomon
That's nicely done. I suppose my point is that QM is all sophisticated mathematics and equally sophisticated experimental processes.
Plus the class I took was explicitly taught in the Copenhagen interpretation, and a lot of the discussions around here try to differentiate between the interpretations and, at least as I learned
it, there wasn't really a way to differentiate between the interpretations — Moliere
That's remarkably clear.
I've struggled to put together a QM primer before only to put it aside because it's hard to even understand, and so even harder to simplify. Plus the class I took was explicitly taught in the
Copenhagen interpretation, and a lot of the discussions around here try to differentiate between the interpretations and, at least as I learned it, there wasn't really a way to differentiate between
the interpretations. (though maybe that's changing now)
For chemists you're just expected to pick up the math along with the class :D. And then, to top it off, I went biochem since it seemed more employable -- and sometimes QM matters there, but not
often. A rough hand-wavey understanding, or some of the simpler technically not true theories, of bonding are adequate for the predictions there.
— Moliere
Here's a quick look at ground zero in quantum studies by Mark John Fernee for Quora: — jgill
Thanks for that quickie quantum update. I assume the article is interesting to Theoretical/Mathematical Physicists. But, can you tell me, in a few jargon-free words, what that account means -- in the
real world -- to a non-mathematical layman, or to an atom-smashing CERN physicist, or to a matter-molding Chemist?
The Quora excerpt refers to "an abstract space, called a Hilbert space". In the Medium Blog*1, the author says: "Space in 'Hilbert Space' is a mathematical construct and not the 'space' which we normally understand". Not the factual "space" which we normally know as a place for real things. Am I correct to say that as a "mathematical construct" it's not a Real physical space -- housing material objects -- but an Imaginary metaphysical space -- containing non-euclidean "inner products" (mathematical objects)? If so, what significance does it have for a pragmatic non-mathematician? Or for an empirical physicist?
In Werner Heisenberg's Physics and Philosophy, he describes the revolutionary transition from 17th century Classical physics to 20th century Quantum physics. He labels the before & after as "old physics" and "new physics". In David Lindley's Introduction, he says "in the early 1920s, the old quantum theory . . . . had become over-elaborate and unwieldy". Which sounds like descriptions of a previous epochal change in philosophical worldview, from Ptolemaic epicycles to Copernican models of the universe.
Ironically, it was pragmatic Heisenberg who proposed "that one should write down the mechanical laws not as equations for the positions and velocities of the electrons, but as equations for the frequencies and amplitudes of the Fourier expansion". Thus began the transformation of physics from manipulating Matter to imagining Mathematical constructs. And that change of attitude marked the transition of "Physics" from observable Aristotelian (corporeal Nature) to imaginary Quantum (abstract knowledge).
Heisenberg said it's a question of translation: "the conventional language of physics is fashioned according to the world we experience . . . . the quantum world is not a world of waves and particles . . . . the thing measured and the thing doing the measurement are inextricably intertwined". Thus, in the New Physics, Real and Ideal have become entangled, just as illustrated in the Yin Yang symbol.
What is a Hilbert Space?
In this blog, I aim to develop a comprehensive understanding of hilbert spaces cutting through the mathematical jargon. https://medium.com/@brcsomnath/hilbert-space-7a36e3badac2
Here's a quick look at ground zero in quantum studies by Mark John Fernee for Quora:
Quantum mechanics is the governing theory. It's fundamental quality is that a system can be described by a vector in an abstract space, called a Hilbert space. The Hilbert space is the space of
all possible measurement outcomes, so it is distinct from 3D space that describes the position of objects. For instance, the Hilbert space can be, and often is, infinite dimensional. A vector in
Hilbert space has complex-valued coefficients and must be normalised to unity length. For an infinite dimensional space it must be square integrable.
Physical observables are described by hermitean matrices that act on the Hilbert space vector such that measurement outcomes are real-valued. The vector in Hilbert space evolves according to
rotations induced by various interactions described in the Hamiltonian operator (or Lagrangian density). This is called unitary evolution, as the vector is just rotated, preserving the norm.
Following a measurement, the Hilbert space vector is projected onto the measurement outcome. This evolution is considered non-unitary, as it is not a smooth rotation, but a projection.
So that is the underlying theory of quantum physics.
For quantum mechanics, we consider particles as immutable with various properties. This restricts the possible evolution of the associated Hilbert space. However, for fundamental particle
physics, the particles appear to be transmutable. Therefore, the theory required a mechanism to allow for this.
The first transmutable particle was the photon. The quantum theory of the electromagnetic field identified a set of non-hermitian operators that corresponded to the creation and destruction of
photons as energy quanta in the electromagnetic field. This was the first field theory. The key to this theory was the mapping of the electromagnetic field to the quantum simple harmonic
oscillator in order to identify quantum operators that satisfy the Heisenberg uncertainty principle. These field modes can be used to construct any field configuration using the superposition
principle according to the Fourier decomposition of the field. This opened the gates to modern quantum field theories. Other fields were introduced that gave rise to particles as excitations of
the field in a way analogous to the role of the photon in the electromagnetic field.
From here is gets complicated as various symmetries need to be satisfied and self-interaction terms need to be dealt with. However, the theory is essentially the same, just with more widgets
added to satisfy the properties observed in experiments. The Hilbert space is still there. Unitary evolution is still there. Hermitean operators are still there. The measurement procedure is
still there.
With particle physics, one focusses more on the scattering terms in the Hamiltonian (or Lagrangian density). These are generally expanded as a perturbation series with the high order terms
truncated. This allows the calculation of scattering cross sections that are applicable to particle physics experiments.
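The machinery in Fernee's summary -- a unit vector in Hilbert space, norm-preserving unitary evolution, a hermitian observable, and Born-rule measurement probabilities -- can be sketched for a single qubit (an illustrative example, not from the post):

```python
import numpy as np

psi = np.array([1.0, 0.0], dtype=complex)        # state |0>, unit length

theta = np.pi / 4
U = np.array([[np.cos(theta), -np.sin(theta)],   # a rotation: unitary,
              [np.sin(theta),  np.cos(theta)]],  # so evolution preserves
             dtype=complex)                      # the norm
psi = U @ psi
assert np.isclose(np.linalg.norm(psi), 1.0)      # still a unit vector

Z = np.array([[1, 0], [0, -1]], dtype=complex)   # hermitian observable
exp_Z = (psi.conj() @ Z @ psi).real              # expectation: real-valued

probs = np.abs(psi) ** 2                         # Born rule: |coefficient|^2
print(np.round(probs, 3))  # [0.5 0.5]
```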
For math, one starts with calculus, then real and complex analysis, then functional analysis for Hilbert spaces, etc.
My best friend, who passed away seven years ago, was a physics major up until the required introductory senior level course in quantum theory. He switched to mathematics and retired a fellow
professor. A very bright guy - certainly smarter than me - but math made more sense at the time, easier to understand.
I think dropping a physics major at this crucial point of transition in thinking happens fairly frequently. Some become engineers, a profession using physics that moves along Newtonian lines.
Well, maybe not so much electrical engineers.
It's a shame the forum doesn't have quantum physicists who might elucidate better than philosophical minded novices. But this is not a physics forum. Our best is not good enough. — jgill
After Quantum Physics introduced Uncertainty into Science, and substituted Virtual Particles for Real Atoms, I suspect that quite a few disillusioned undergrads dropped-out of their physics programs.
The most famous expression of the "switch" you noted is Feynman's "shut-up and calculate" quip*1. Since then, physics divided into large teams of experimental scientists (atom smashers) and a few
individual philosophical (theoretical) scientists. But both groups are "chasing rainbows" that are more & more elusive. Also, the empiricists are typically distrustful of un-tethered Philosophical
Reasoning for epistemological knowledge of Material Reality.
Theoretical Physicists are mostly mathematicians*2 instead of empiricists. And what they are exploring is the Logical Structure of Reality, not the Material Stuff of classical physics. But the
pioneers of Quantum Theory were primarily classically-trained empirical-lab-laboring scientists, who were baffled when their real-world experiments returned weird results. Instead of just shaking
their heads, and "doing the math", several turned to Eastern philosophy for clues to the mysteries of Reality's foundation. Perhaps they concluded, as I have, that a world with observers is Ideal. Moreover, Quantum Superposition is not just Uncertain, it is inconceivably Complex*3.
Since I am neither a Physicist nor a Mathematician, I turn to the experts for information about the non-classical under-pinnings of Reality. But, although I think it's a Holistic/Statistical state, I don't rely on New Age gurus to explain the nuts & bolts & symbolism of Quantum Superposition. Speaking of quantum experts, I just ordered a copy of Heisenberg's 1958 book : Physics and Philosophy: The Revolution in Modern Science. He was already doing the math, but saw a need for philosophical generalizations to make sense of Quantum non-sensical specifications. What do you think he would make of the juxtaposition of Photon photography and Yin Yang symbol?
Shut-up and Calculate
N. David Mermin coined the phrase "Shut up and calculate!" to summarize Copenhagen-type views, a saying often misattributed to Richard Feynman and which Mermin later found insufficiently nuanced
Theoretical Physicists today
Hawking, Higgs, Guth, Smolin, Weinberg, Penrose, Greene https://www.google.com/search?client=firefox-b-1-d&q=theoretical+physicists+today
Superposition Complexity
The principle of quantum superposition states that if a physical system may be in one of many configurations—arrangements of particles or fields—then the most general state is a combination of all of
these possibilities, where the amount in each configuration is specified by a complex number. https://en.wikipedia.org/wiki/Quantum_superposition
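As a numerical aside, the Wikipedia definition just above ("a combination of all of these possibilities, where the amount in each configuration is specified by a complex number") can be illustrated with a toy two-configuration system. This is only a hedged sketch in plain Python; the configuration labels and amplitude values are invented for illustration, not taken from the article:

```python
import math

# Hypothetical amplitudes for two configurations "A" and "B"
# (invented values, chosen so the state is normalized).
amp_a = complex(1, 1) / 2                   # amplitude for configuration A
amp_b = complex(0, -1) * math.sqrt(2) / 2   # amplitude for configuration B

# Born rule: |amplitude|^2 is the probability of finding the system
# in that configuration when it is measured.
p_a = abs(amp_a) ** 2
p_b = abs(amp_b) ** 2

print(f"P(A) = {p_a:.2f}, P(B) = {p_b:.2f}")  # P(A) = 0.50, P(B) = 0.50
print(f"Total = {p_a + p_b:.2f}")             # Total = 1.00 for a valid state
```

Until measured, neither configuration is "the" state; the complex amplitudes jointly describe the superposition, and only their squared magnitudes behave like ordinary probabilities.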
Holism in science, holistic science, or methodological holism is an approach to research that emphasizes the study of complex systems. https://en.wikipedia.org/wiki/Holism_in_science
Systems theory is the transdisciplinary study of complex systems, i.e. cohesive groups of interrelated, interdependent components that can be natural or human-made
You wouldn't. By the time you can take a picture of something, the quantum superposition has already decohered.
It's good to see you've accepted the arbitrariness of the yin yang symbol in the context of this experiment. — flannel jesus
I agree : picture-taking is an observation/intervention, that -- like silver & vampire -- is incompatible with mystery-shrouded superposition. And the irrational "arbitrariness" of the symbol/article
juxtaposition is exactly why I started this thread. I was not trying to assert --- as some posters have assumed, and the quoted article seems to imply --- that Yin Yang is a "hard" scientific
concept, instead of a "soft" philosophical conjecture.
Instead, I was using this "arbitrary" association of symbol & science to ferret-out some diametrically-opposed worldviews on this forum. One way to define those clashing perspectives on reality is by
labeling them as A> Classical/Deterministic or B> Quantum/Statistical. Ironically, my philosophy straddles those opposite shores : Objective Realism & Subjective Idealism. The latter takes the idealizing Observer as an active component of Reality.
I've recently been looking into Quantum Bayesianism (QBism)*1 as a way to make sense of these antagonistic views of what is Real, as opposed to Ideal. Just today, I found an article online that deals with QBism. On the question whether Quantum
Superposition is real & objective (Classical Physics) or ideal & subjective (Statistical Physics), the author says: "
Objective states . . . . are what classical physics is all about . . . . However, things are very different in quantum mechanics. Quantum states can be “superposed,” meaning a particle can have many values of position and momentum at the same time.
" Hence, superposition of entangled photons is a statistical state, not a deterministic state. Quantum mechanics is unavoidably uncertain. And the interpretation of statistical averages is inherently
subjective. Thus, the need for QB probabilities.
The author proposes, what he calls a "radical conclusion", which agrees with my own understanding of the quantum scale foundation of Reality*2."
Instead, quantum states are about our knowledge of the world. They are descriptions encoding our interactions with particles. QBism would say it’s not the particle’s state — it’s your state about the particle. QBism leads not with ontology — a story about what fundamentally exists independent of us — but with epistemology: a story about our information about the world. That change makes all the difference." {my bold} By "your state" he means a belief or conjecture about the future deposed state of the currently superposed particle.
Classical Physics was fatalistic, in that its progress was rigidly determined --- according to Newton by God --- from the beginning of creation. But the QB author has a different idea : "
The answer QBism produces is as radical as it is mundane. By turning away from an impossible (and paradoxical) God’s-eye view of the Universe, QBism puts human beings squarely in the middle of the
scientific enterprise
". So, our universe is determined by stable Laws of Nature, but it is indeterminate in the degrees of freedom allowed by the uncertain/statistical nature of Quantum Nature.
top-down determined by law,
bottom-up arbitrary & unrestrained by selfish & willful agents.
PS___This may sound off-topic, but since it's my topic, it's right on course. Do you find human arbitrariness compatible with lawful Science?
The most radical interpretation of quantum mechanics ever :
offers a radical interpretation of quantum physics, suggesting that quantum states are not objective realities.
Note --- The author, "
Adam Frank, is an American physicist, astronomer, and writer. His scientific research has focused on computational astrophysics with an emphasis on star formation and late stages of stellar
evolution. His work includes studies of exoplanet atmospheres and astrobiology
". ___Wikipedia
Foundation of Reality
The ancient fundamental particle was the Atom : "hard, massy & solid". But the classical search for that particular Holy Grail has passed through several soft bottoms : aggregate molecules,
Rutherford/Bohr atom, Electrons, Photons, Quarks. Yet, even ghostly mathematical quarks are defined in six delicious flavors, which seem to indicate that even the Quark is not the essential particle
of reality. At the moment the physical foundation of physics is open to question.
flannel jesus
Note --- Speaking of "collapse of wave function", how would you take a picture of Schrödinger's Cat when it's both dead and alive? — Gnomon
You wouldn't. By the time you can take a picture of something, the quantum superposition has already decohered.
It's good to see you've accepted the arbitrariness of the yin yang symbol in the context of this experiment.
Do not assume that just because you managed to misinterpret the meaning of it, that it was MEANT to be misinterpreted. You said yourself that you're not qualified to interpret the paper. — flannel jesus
I didn't "misinterpret the paper", and I didn't "interpret the paper", because I didn't
the technical paper. I did request that others, more qualified, would interpret the significance of the symbol relative to the experiment.
The OP was based on the linked article that was reproduced by several other science publications. All have a big colorful Yin Yang image next to a brief description of a technique that was supposed
to produce a "visualization of the wave function". And there was no indication that the YY image had nothing to do with photons or functions. I hope the articles were not "meant" to be misinterpreted. But the fact that they are misleading, led me
to post the OP. What would you "interpret" if you saw the symbol and the quote below?
PS___TC has portrayed me as a "gullible New Ager", who could, presumably due to hippie-flower-child beliefs, "manage" to misinterpret the juxtaposition of an apple & orange as symbols of Peace &
Love. For the record : I am not now, nor have I ever been a hippie, or "New Ager". I will admit to being an amateur philosopher. And I do think the Yin Yang symbol represents a valid philosophical
concept. I also think -- along with Schrödinger & Heisenberg -- that entanglement is literally Holistic (i.e. an integral System), in the sense that the parts (e.g. photons) are not particular until the
system "collapses". But I don't interpret that scientific notion with any idealistic metaphysical moral. Anyway, it's all irrelevant to this thread about a misleading science article. Which has
devolved into smarter-than-you condescension.
:cool: Researchers at the University of Ottawa, . . . . recently demonstrated a novel technique that allows the visualization of the wave function of two entangled photons, the elementary particles
that constitute light, in real-time. https://phys.org/news/2023-08-visualizing-mysterious-quantum-entanglement-photons.html
Note --- Speaking of "collapse of wave function", how would you take a picture of Schrödinger's Cat when it's both dead and alive?
flannel jesus
So the published Yin Yang image seems to be a Red Herring*1. What does it reveal about entanglement? In what sense is the published image a "visualization of the wave function"? Can you enlighten me? — Gnomon
Do not assume that just because you managed to misinterpret the meaning of it, that it was MEANT to be misinterpreted. You said yourself that you're not qualified to interpret the paper.
I'm not qualified either, but if I had to hazard a guess, I would guess they were trying to use a shape - ANY shape, they just happened to choose yin Yang - to demonstrate some kind of stability of
the measuring device or stability of a signal or something like that. "The input image remains legible in the output, so that means xyz".
Can you enlighten me? — Gnomon
No. The paper is too far over my head to grasp, without reading a lot more than the paper itself, and life is short.
wonderer1
You posted your opinion implying that the common Yin Yang symbol was used as input... — Gnomon
Have you looked at the original paper? (Which T Clark linked early in the thread.)
I just took a look and the caption under the only picture of the Yin-Yang symbol says:
a, Coincidence image of interference between a reference SPDC state and a state obtained by a pump beam with the shape of a Ying and Yang symbol (shown in the inset). The inset scale is the same
as in the main plot. b, Reconstructed amplitude and phase structure of the image imprinted on the unknown pump. — wonderer1
Thanks for that information. I asked TC where he got the information to support his assertion that the Yin Yang image was both input & output, and he did not respond. I guess I was supposed to take
his word for it. But he didn't state his qualifications as an expert on the subject.
No, I didn't look at the original paper. I'm not qualified to interpret such technical data. For example, what is a "pump beam"? So, I was hoping someone else would. This confirms my suspicion that
the Yin Yang symbol had nothing to do with the shape of entangled photons. There are lots of other images in the paper that also don't mean anything to me. So the published Yin Yang image seems to be a Red Herring*1. What does it reveal about entanglement? In what sense is the published image a "visualization of the wave function"? Can you enlighten me?
T Clark seems to be accusing me of being a "gullible New Ager". But I expressed my skepticism in the OP. "
The relevant question here : is the image a mere coincidence or a consonance?
" Apparently, it's neither. Just an irrelevance. But it served the purpose of the OP : "
A secondary question that might be illuminated : is this forum polarized along physics vs metaphysics, eastern vs western, science vs philosophy lines on an evocative topic like this?
" TC seems to have established his polarized position on that question.
*1. Red Herring : "a clue or piece of information that is, or is intended to be, misleading or distracting." ___Oxford
flannel jesus
It appears to me that wonderer has made a very good case that the yin Yang symbol appeared in the output of the experiment because the yin Yang symbol was used as an input to the experiment. I think
you've been just a little bit unfair, gnomon. Take a step back and consider the possibility that you misinterpreted what was going on.
T Clark
Just stop lying about what I wrote and leave me out of this.
I did not say that. Ignore my evidence if you want, make up your own fantasies about little fairies dancing on the taiji, but don't misrepresent what I wrote. I always thought you were a little
goofy, but I didn't think you were dishonest too. — T Clark
What evidence? You posted your opinion implying that the common Yin Yang symbol was used as input for the process of photographing entangled photons (not your literal words). I asked, "for what scientific purpose?" and you gave no response. I asked you to post a quote to support your nonsensical opinion, but you gave no response. Am I supposed to accept your non-expert opinion as evidence to support your own opinion? Talk about "dishonest". Show me the money (er, photon)!
That's nicely done. I suppose my point is that QM is all sophisticated mathematics and equally sophisticated experimental processes. — jgill
Yes. That seems to be the point that Heisenberg was making when he said "
. . . . it's a question of translation : the conventional language of physics is fashioned according to the world we experience
". But most of us don't directly experience the world on the subatomic level. So, it's an abstruse language for sophisticated initiates into the mysteries of the foundations of Reality. And easily
misconstrued*1. The current issue (157) of Philosophy Now magazine has an article about Solving The Mystery of Mathematics. For the purposes of this article, the author -- Jared Warren -- rejects the Ideal definition of math as presented by Plato, and also the Real definition of math as "like the physics of this reality". Instead, he prefers a linguistic definition : "this is the idea that mathematical truths are a byproduct of our linguistic conventions". I like to think that pure & embodied & semantic notions of Math are all true in some contexts. Yet, like some of the Quantum scale objects of Physics, he notes that "
mathematical objects : numbers, points, sets, functions, groups . . . These are non-physical
". Which, for philosophical purposes, makes them Meta-physical*2 (mental concepts). He adds "
Mathematical objects are not out there in any sense of the term
". Moreover, "
clearly, our mathematical knowledge comes to us in special ways. No experiments are performed . . . . no data catalogued, no observations made
." We know math by reason (inference), not by sensory experience. By the same token, current theoretical Physics experiments are not done in a laboratory, but in a mind. For example, Einstein was
once asked where his laboratory was, and he simply held up a pencil.
Modern quantum-scale Physics, unlike Classical Physics, is not mechanical, with direct transfer of force from cog to cog. It is instead, non-local, and involves "
spooky action at a distance
". In place of hard, massy Atoms, it describes the fundamental object as a Quantum Field*3, not a cloud of physical particles, but intangible mathematical Points, which are imaginary objects in
hypothetical space. And it's these spooky improvised definitions that the quantum pioneers found to be weird.
So, Warren focuses, not on the physical substance of physics, but on the conventional language of Math that tends to enshroud its Ontological & Epistemological essence in abstruse technical dialects. He refers to his approach to physics as conventional (based on or in accordance with what is generally done or believed). He notes that "
our conventional rules for using mathematical terms like 'number', 'zero', 'plus', 'set', 'functions', etc., determine our mathematical concepts. Conventional rules are also the source of mathematical truths." Therefore, he concludes : "
Mathematics is mysterious. To solve the mystery we must fit mathematical practices into a reasonable picture of the world.
" {my bold}
That's what the quantum pioneers, like Heisenberg & Schrödinger, tried to do, when they adopted some Eastern cultural concepts --- like "holism", non-separability", "interconnection" --- to describe
the strange entangled behavior of subatomic particles. Some of them transferred such other-worldly concepts into their social world to support religious & mystical intuitions. Therefore, as
Heisenberg warned : "
it's a question of translation
". Some translate quantum reality as Fields of non-spatial mathematical points, while others prefer ghosts of spiritual ectoplasm. Who's to say which conventional translation is True or False? Show
me the evidence!
PS___ This post is not trying to prove anything. Just something to think about, when one man's conventional language & beliefs conflicts with another's. It's not Physics, just Philosophy.
Quantum Mysticism
A signature feature of quantum mysticism is its misappropriation of physics terminology in a wider, every-day context
it draws upon "coincidental similarities of language rather than genuine connections" to quantum mechanics.
Note --- Mystical entities (ghosts) are just as "reasonable" to some people, as Mathematical entities (quarks) are to others. Each community has its own conventional words & ideas.
: not supernatural, but merely non-physical, i.e. mental phenomena
Derived from the Greek meta ta physika ("after the notes about nature"); referring to an idea, doctrine, or posited reality outside of human sense perception. In modern philosophical terminology,
metaphysics refers to the studies of what cannot be reached through objective studies of material reality. https://www.pbs.org/engloss/metaph-body.html
Note --- In modern Western culture, we are comfortable with the concept of invisible Photons & Electrons causing things to appear and to move, but even with sophisticated technology, we never
directly sense the entities referred to by those words.
Quantum Field
According to our best laws of physics, the fundamental building blocks of Nature are not discrete particles at all. Instead they are continuous fluid-like substances, spread throughout all of space.
We call these objects fields. The most familiar examples of fields are the electric and magnetic field. https://www.damtp.cam.ac.uk/user/tong/whatisqft.html
Note --- Doesn't that sound ghostly?
Einstein was once asked where his laboratory was, and he simply held up a pencil. — Gnomon
"this is the idea that mathematical truths are a byproduct of our linguistic conventions". — Gnomon
The interplay is certainly interesting.
"this is the idea that mathematical truths are a byproduct of our linguistic conventions". — Gnomon
The interplay is certainly interesting. — jgill
Yes. I think you could safely say that Mathematics is a mental language that is used by Science to describe its sensory observations precisely. Ancient math concepts were originally devised by desert civilizations -- Egyptians & Mesopotamians -- in order to understand why the stars (gods?) formed patterns that reminded men of terrestrial things & events : Astrology. Later, Greek logicians (e.g. Euclid), with cloudier skies, refined geometry to make it more abstract and less subject to variable interpretations : Astronomy.
Ironically, modern Physics has been transformed from a Classical science into a more Philosophical science. I continue to read Heisenberg's book, as he describes the "
history of quantum theory
". After noting some of the complexities & contradictions & paradoxes that the Q pioneers had to deal with, he noted : "
Bohr was well aware of the fact that the quantum conditions spoil in some way the consistency of Newtonian mechanics
". He went on to suggest that "
asking the right questions is frequently more than halfway to a solution of the problem
". Then he addressed some of those philosophical "how" & "why" questions : "
How could it be that the same radiation that produces interference patterns, and therefore must consist of waves, also produces the photoelectric effect, and therefore must consist of moving particles?" Their compromise solution was not to choose True/False or Either/Or, but to accept that both of those logically incompatible observations must be true, depending on the context. And that productive
Quantum holism led me to my search for a middle ground on contentious questions.
The quantum pioneers in Europe were forced by the paradoxical evidence to engage in a series of philosophical arguments over many years. Heisenberg noted that they gradually became "
accustomed to these difficulties
". Then he summed it up as "
this was not sufficient to form a consistent general picture of what happens in a quantum process, but it changed the minds of the physicists in such a way that they somehow got into the spirit of
quantum theory
". The "spirit of quantum theory" is an attitude of compromise : to meet in the middle. That's why the Copenhagen Interpretation was called a "compromise" or "accord", to allow squabbling
practitioners with different interpretations to "shut-up and calculate"*1. Perhaps that's what you meant by "sophisticated".
Heisenberg also offered his "
impression that Bohr's theory gave a qualitative but not a quantitative description of what happens inside the atom
"*2. Such a
compromise between flakey Philosophy and factual Physics did not go down smoothly for those with strong opposing beliefs. But it did allow all parties to "shut-up" about their incompatible opinions,
and get-on with their calculations. Ironically, I get the strong impression that some posters on
The Philosophy Forum
are not willing to compromise their Black/White & True/False beliefs for the sake of philosophical accord. They won't accept that New Age interpretations of Quantum Physics are philosophical, and not
subject to the same numerical criteria as Classical Mechanics. For example, Holistic Philosophical opinions are not in the same game as Reductive Scientific facts. So, instead of annihilating each
other, they can offset each other's weaknesses, to form a complete system of knowledge, as the Yin/Yang symbol suggests.
PS__Absolute uncompromising Rules require perfect knowledge. For the rest of us, accommodation & adaptation is necessary for survival of a still-evolving species.
Philosophical Physics
The Copenhagen interpretation is a collection of views about the meaning of quantum mechanics, https://en.wikipedia.org/wiki/Copenhagen_interpretation
Qualia vs Quanta
A Quantitative (mathematical) description provides abstract numerical values.
A Qualitative (philosophical) interpretation produces meaningful human values.
Both/And Principle
My coinage for the holistic principle of Complementarity, as illustrated in the Yin/Yang symbol. Opposing or contrasting concepts are always part of a greater whole. Conflicts between parts can be
reconciled or harmonized by putting them into the context of a whole system. https://blog-glossary.enformationism.info/page10.html
BothAnd Dilemma
Binary human nature – animal motives & rational choices – allows us to navigate a course between the Sirens of Good and the rocky shoals of Evil. But long experience indicates that the least bad
solution and the safest good option are usually in the middle range of moderation between extremes.
Fixed Rules for Life can never fit all situations. But Wise Character adapts to changing conditions.
There is no perfect answer to these common dilemmas :
When to be . . .
• General vs Specific
• Figurative vs Literal
• Typical vs Targeted
• Synthetical vs Analytical
• Holistic vs Particular
When to prefer . . .
• Ideal vs Real
• Left vs Right
• Liberal vs Conservative
• Individualism vs Tribalism
• Reasons vs Feelings
• Self vs Others
The BothAnd principle endorses general Wisdom instead of specific Rules.
The interplay is certainly interesting. — jgill
JG, I'm not picking on you by posting long dissertations to your name. It's just that I'm on a roll here, expanding the topic of Quantum Entanglement is Holistic. And your math background may allow you to hold apparent paradoxes (counterintuitive results) in your mind, while keeping an open mind. For example, math has Paradoxes of infinity ; of set theory ; Probability theory. But those "non-commutative" sub-sets don't invalidate the consistency of mathematics in general. Note --- I'm using that term in an unconventional way.
A new forum post on the topic of Explaining Bell violations from a statistical / stochastic quantum interpretation seems to be parallel to this thread, although the lines may never meet :
says, "
even though realism is given up in a way compatible with the requirements of Bell's theorem*1, particles can still retain their definite properties as individuals in a realist way
". responded with the "
conclusion that the violation is strictly an artifact of the statistics. That is, it occurs in the aggregate but not identifiably for any individual
". Taken together, these statements seem to be talking about an apparent
paradox of Statistical
and Sensory
as complementary truths of
both Ideality and Reality
. In Quantum theory the "aggregate" is a statistical (holistic) value, while an "individual" property is the value of a single particle of matter/energy.
Fortunately, as philosophers and mathematicians, we can hold Holistic notions (Fields) & Reductive concepts (Particles) as aspects of a complete / comprehensive set : the Logical / Physical Universe.
I can imagine Quantum Entanglement as a Holistic-Statistical (non-local) state of being -- with unknowable properties --- which is also compatible with the Classical Mechanics concept of interacting
(local) individuals -- with properties of position & momentum. One way to interpret that paradox is to define "Statistics" as an Ideal (Potential ; future) state, and Particles as Real (Actual ; now) states*2.
All of this is just a long way to say, in philosophical terminology, that "
quantum entanglement is holistic
". Therefore, using the Yin/Yang symbol --- as a graphic illustration (not an actual photograph) of entangled photons --- is appropriate. Note that your term "interplay" occurs in the dictionary
definition below*3.
Bell's theorem shows that no theory that satisfies the conditions imposed can reproduce the probabilistic predictions of quantum mechanics under all circumstances. https://plato.stanford.edu/entries/
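The bound mentioned in the Bell's theorem note above can be made concrete with its CHSH form: any local hidden-variable model keeps the CHSH combination at or below 2, while the textbook quantum correlation for a singlet pair, E(a, b) = -cos(a - b), reaches 2√2. A minimal numerical sketch (the analyzer angles are the standard textbook choice, not anything taken from the linked thread):

```python
import math

def E(a, b):
    # Quantum-mechanical correlation for spin measurements at analyzer
    # angles a and b on a singlet (maximally entangled) pair.
    return -math.cos(a - b)

# Standard angle choices that maximize the CHSH combination.
a1, a2 = 0.0, math.pi / 2
b1, b2 = math.pi / 4, 3 * math.pi / 4

S = abs(E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2))

print(f"CHSH S = {S:.3f}")        # CHSH S = 2.828, i.e. 2*sqrt(2)
print("Local-realist bound: 2")   # any local hidden-variable model gives S <= 2
```

Note that each E value is an aggregate over many particle pairs; the violation shows up only in these statistical averages, never in any single detection, which is exactly the "aggregate but not individual" point quoted above.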
Is statistics "real" math?
The way I see it statistics is a practice of expressing data recorded from observations of some kinds of events, so that one may be able to potentially use these expressions to make inferences about
future events, or probability statements. Whereas other math, calculus for instance, deals with measurable parameters which can be used to find definite answers. https://www.reddit.com/r/math/
Note --- Another way to answer that question is to say that "Statistics is Philosophy with numbers". It doesn't produce here & now hard facts, but projections into the future.
Yin Yang : philosophical meaning
the two complementary forces that make up all aspects and phenomena of life. . . .
The two are both said to proceed from the Great Ultimate (taiji), their interplay on one another (as one increases the other decreases) being a description of the actual process of the universe and
all that is in it. In harmony, the two are depicted as the light and dark halves of a circle. https://www.britannica.com/topic/yinyang
SIDE NOTE : Holism is a whole System ; Reductionism is particular Categories
Pinter begins by noting that the categories by which we parse the world are projected onto the field of view, not detected out there in the world. “Contrary to commonsense realism, the physical world
has no pre-existing segmentation”. So, A> the Spaces separating things, and B> the Connections binding things together, and C>the Borders defining composite objects are drawn by the brain in order to
make sense of random energy inputs. http://bothandblog8.enformationism.info/page10.html
Lasers, Technology, and Teleportation with Prof. Magnes
Readings were taken at three distances (1 cm, 30 cm, and 60 cm) from the microwave oven door as well as from the right side (magnetron) of the oven.
Our data was collected entirely with an RF Meter: with it, we tested the EM field strength around different microwaves at various locations around campus. The only other technology used in this experiment were the microwaves tested, which were all of varying models and ages.
We collected our measurable data as a team, with all three people assessing the qualitative variables before beginning measurements, and then one person recording data, and two using the RF meter in conjunction with the microwave to collect data. We allowed the microwave to run for ~30 seconds before starting to collect data to make for more consistent readings at each distance. We first observed the average value at each distance. This required us to be subjective: since the average value switched as the RF meter collected data, every second or so, two people observed the average values for about 10 seconds before deciding on an approximation of where most of the values fell. We then switched to measuring the maximum average, a value that stood constant on the RF meter. We repeated this process at each distance from the microwave. We of course switched roles periodically in order to give each member of the group a better understanding of the overall process!
We got to this streamlined process through trial and error, and toggling with the RF meter, whose manual is not extremely comprehensive. We had to test measuring from different axes and also
realized that we needed to measure the radiation in the general vicinity of the microwave before we measured for microwave radiation, so that we had an idea of the baseline of EM fields in the area.
We also had to experiment with measuring values on different settings. While we began by measuring only the maximum average, we realized this did not give a sufficient idea of the radiation emitted on
average. Furthermore, if the RF meter caught a signal from something like a cell phone receiving a text, that outlying measurement would appear on the meter rather than the measurement from the
microwave. We decided to use both average and maximum average measurements to get an idea of how much radiation was generally emitted, as well as how much could potentially be emitted.
We also collected data by researching the safety standards of microwaves. This data will help us understand what our values mean during analysis/conclusions. We found that the International
Electrotechnical Commission has set a standard of emission limit of 50 Watts per square meter at any point more than five centimeters from the oven surface. The United States Federal Food and Drug
Administration has set stricter standards of 5 milliWatts per square centimeter at any point more than two inches from the surface. Most consumer microwaves report to meet these standards easily.
Further, the dropoff in microwave radiation is significant, with the FDA reporting that “a measurement made 20 inches from an oven would be approximately one one-hundredth of the value measured at 2 inches.”
The conditions under which our data was collected were simply the conditions of the microwaves' habitats: some were found in secluded kitchens without much EM feedback from their surroundings (before
testing each microwave we made a note of the general, ground-level EM reading in the vicinity so we could adjust and compare microwaves after taking that initial radiation into account), and others
were found in areas where wi-fi signals and cell phone usage really bumped up the ground-level readings, requiring us to adjust how we understood the data accordingly.
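As a side check not taken from the report: the FDA's quoted twenty-inch drop-off is what a simple inverse-square falloff with distance would predict, since 20 inches is ten times 2 inches.

```python
# If leakage fell off with the inverse square of distance, going from
# 2 inches to 20 inches would scale the reading by (2/20)^2.
ratio = (2 / 20) ** 2
print(round(ratio, 4))  # prints 0.01, i.e. one one-hundredth
```

Near-field leakage is not guaranteed to follow an inverse-square law, so this is only a plausibility check against the FDA figure.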
1. Distance from the microwave (cm.)
2. Power of Microwave, found on microwave label (Watts)
3. Radiation (µW/m^2)
1) Preliminary Observations
Sample # | Location | Brand / Model | Year, Wear and Tear | Radiation Off
M1 | Strong Kitchen | GE MSES1139BC03 | June 2011, no major wear and tear | AVG: 0.00; MAX AVG: 0.00
M2 | Retreat | LG Orbit LRM1230W | December 2004, in good shape | AVG: 0.00; MAX AVG: 0.00 (*but when measuring not on avg., values did appear)
M3 | Noyes Dorm Room | Microfridge with Safe Plug N060203077 | February 2006, squeaky noises | AVG: 0.00; MAX AVG: 0.00
M4 | UpC | Amana Commercial Microwave RFS11MP2 | February 1999 | AVG: 17.7 µW/m^2; MAX AVG: 18.4 µW/m^2
M5 | South Commons Senior Housing | Emerson MW8999SB | March 2013, new condition | AVG: 0.00; MAX AVG: 0.00
2) Average EM Radiation Values
Sample # | Power (Watts) | Front (1 cm) | Front (30 cm) | Front (60 cm) | Magnetron (1 cm) | Magnetron (30 cm) | Magnetron (60 cm)
M1 | 1,600 W | 300 µW/m^2 | 475 µW/m^2 | 350 µW/m^2 | 800 µW/m^2 | 300 µW/m^2 | 125 µW/m^2
M2 | 1,200 W | 275 µW/m^2 | 200 µW/m^2 | 250 µW/m^2 | 700 µW/m^2 | 1.00 mW/m^2 | 300 µW/m^2
M3 | 700 W | 270 µW/m^2 | 100 µW/m^2 | 90 µW/m^2 | 600 µW/m^2 | 125 µW/m^2 | 130 µW/m^2
M4 | 1,250 W | 1.5 mW/m^2 | 800 µW/m^2 | 300 µW/m^2 | 200 µW/m^2 | 350 µW/m^2 | 300 µW/m^2
M5 | 900 W | 300 µW/m^2 | 275 µW/m^2 | 120 µW/m^2 | 250 µW/m^2 | 80 µW/m^2 | 30 µW/m^2
3) Maximum Average EM Radiation
Sample # | Power (Watts) | Front (1 cm) | Front (30 cm) | Front (60 cm) | Magnetron (1 cm) | Magnetron (30 cm) | Magnetron (60 cm)
M1 | 1,600 W | 628.3 µW/m^2 | 662.5 µW/m^2 | 508.8 µW/m^2 | 1.1 mW/m^2 | 571.3 µW/m^2 | 123.4 µW/m^2
M2 | 1,200 W | 1.2 mW/m^2 | 282.9 µW/m^2 | 482.9 µW/m^2 | 1.8 mW/m^2 | 1.5 mW/m^2 | 1.1 mW/m^2
M3 | 700 W | 457.7 µW/m^2 | 182.2 µW/m^2 | 169.1 µW/m^2 | 726.3 µW/m^2 | 252.2 µW/m^2 | 162.4 µW/m^2
M4 | 1,250 W | 2.5 mW/m^2 | 1.7 mW/m^2 | 1.6 mW/m^2 | 372.0 µW/m^2 | 250.0 µW/m^2 | 477.9 µW/m^2
M5 | 900 W | 798.2 µW/m^2 | 488.8 µW/m^2 | 149.6 µW/m^2 | 209.4 µW/m^2 | 92.3 µW/m^2 | 97.9 µW/m^2
1) Average Radiation from Front
2) Average Radiation from Magnetron
3) Maximum Average Radiation from Front
4) Maximum Average Radiation from Magnetron
Emma Foley; Hunter Furnish; Hannah Tobias
(G8 Project Abstract) How to Cook Yourself: Radiation and Microwaves
Group 8 will measure the amount of electromagnetic radiation that certain appliances give off. We will test a variety of devices, but predominantly focus on and compare microwaves that differ in
size, antiquity, and wear & tear. We will use RF meters to measure the amounts of radiation given off and the Watts Up Pro to measure the amount of power used to identify any correlation between
power and radiation. We will measure radiation with respect to distance from the microwave and direction of the RF meter in the surrounding electromagnetic field and compare our findings to
traditional beliefs about microwave radiation.
Hunter Furnish; Emma Foley; Hannah Tobias
question on maximun likelihood function
Dear all,
I am extremely new to SAS, and I am working on a labor economics problem with SAS.
The question is to generate a fake sample of observations and code the likelihood function associated with the problem.
The MATLAB code of the likelihood function is the following:
function [loglikelihood] = ps1q1_llk(beta, D, X)
pr = normcdf(X*beta);
loglikelihood = -sum(D.*log(pr) + (1-D).*log(1-pr));
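For comparison, here is a standard-library Python sketch of the same probit negative log-likelihood (the function and variable names mirror the MATLAB above but are otherwise my own, and the normal CDF is built from the error function):

```python
import math

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def neg_loglik(beta, D, X):
    """Probit negative log-likelihood; X is a list of rows, D a 0/1 list."""
    total = 0.0
    for d, row in zip(D, X):
        pr = phi(sum(b * x for b, x in zip(beta, row)))
        total -= d * math.log(pr) + (1 - d) * math.log(1.0 - pr)
    return total
```

With beta = 0 every probability is 0.5, so the value on two observations is 2·log 2 — a handy cross-check for the SAS module below.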
my SAS code is the following:
proc iml;
start loglike(param) global (x);  /* param is beta above; x includes D and X */
x1 = x[,{1 2}];                   /* x1 is X, and x3 is D */
x3 = x[,3];
pr = cdf("normal",x1*t(param));
pr2 = 1 - pr;
f = -(t(x3)*log(pr) + t(1-x3)*log(pr2));
return ( f );
I receive the error "Invocation of unresolved module LOGLIK", so I messed up somewhere in the likelihood function. Could someone please help me with it? If you need more info, I would be glad to provide it.
10-05-2015 04:00 PM
LaTeX & Maths: Equations and Environments
Hello :) How are you today?
We are going to talk about an important environment in LaTeX, the equation environment and one alternative to this.
But before, a suggestion
In this article I talk about two important packages. It is a good idea to import these packages in the preamble, to avoid mistakes with symbols and environments, because, remember, these
packages provide most of them.
Do you remember, in this post I talked about the inline and displayed formulas, well today we are going to talk about other environment, the equation environment
What is an equation
Well, in the LaTeX world, an equation is a numbered displayed formula.
Yes, a numbered formula, and for that we need a special environment
The equation Environment
This environment assigns a number to the displayed formula. Its structure is

\begin{equation}
  My Equation
\end{equation}

WARNING: You must NOT leave blank lines within this environment
But, what if I do not want to enumerate my equations?
Do you remember that once I wrote about non-numbered environments? Well, to avoid numbering, just add a * to the environment name: equation*
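A minimal side-by-side example (the starred form, equation*, is provided by the amsmath package):

```latex
% numbered: gets a tag such as (1) that you can \label and \ref
\begin{equation}
  e^{i\pi} + 1 = 0
\end{equation}

% starred form: identical layout, but no number
\begin{equation*}
  e^{i\pi} + 1 = 0
\end{equation*}
```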
But, wait, that is what we checked last post, do you remember? The displayed formulas, let's compare
Then, what is the difference between them?
That is something for the next post.
Thanks. Do not forget to follow me on Twitter @latexteada
How To Calculate The Quantity of Cement Sand And Aggregate In CFT
How To Calculate The Quantity of Cement Sand And Aggregate In Concrete Block
Here, we'll calculate how much cement, sand, and aggregate will go into each concrete block using the exact formula below.
It is very simple to calculate how much material is needed for the construction of the block.

Concrete block floors come in a variety of shapes and can provide excellent thermal comfort and aesthetic benefits. Slabs may rest on or be suspended above the ground. A block is a concrete
structural component used to create horizontal flat surfaces such as floors, roof decks, and ceilings.
Consider a two-way block 20 ft long and 20 ft wide, made of M15-grade concrete.
The thickness of the block is seven inches; we will calculate the amount of cement, sand, and aggregate used in it.
Given data
Length = 20 ft
Width = 20 ft
Thickness = 7 inches (0.583 ft)
Calculate the volume of the block, convert this volume from the wet condition to the dry state, and then determine how much of each material is needed for the block.
The total volume of the block
= L x W x H
= 20 x 20 x 0.583
= 233.2 cubic feet
Convert wet volume to dry volume
Dry volume = wet volume x 1.54
= 233.2 x 1.54
= 359.12 cu.ft
The ratio of the concrete is 1:2:4 (total = 1 + 2 + 4 = 7).
Volume of the cement
= ratio of cement / total ratio x dry volume
= 1 / 7 x 359.12
= 51.30 cft
Convert this volume into bags of cement. One bag of cement holds 1.25 cft.
= 51.30 / 1.25
= 41.04 bags
Volume of sand
= ratio of sand / total ratio x dry volume
= 2 / 7 x 359.12
= 102.61 cft
Volume of the aggregate
= ratio of aggregate / total ratio x dry volume
= 4 / 7 x 359.12
= 205.21 cft
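The whole calculation can be scripted. This sketch uses the exact 7/12 ft thickness rather than a rounded 0.583, so its totals differ slightly from hand-rounded figures; the 1.25 cft per bag and the 1.54 dry-volume factor are the conventional values used in this kind of estimate.

```python
# Block-quantity calculation for an M15 (1:2:4) mix.
length_ft, width_ft, thickness_ft = 20, 20, 7 / 12   # 7 inches
wet_volume = length_ft * width_ft * thickness_ft      # cubic feet
dry_volume = wet_volume * 1.54                        # wet -> dry conversion

ratio = (1, 2, 4)                                     # cement : sand : aggregate
total = sum(ratio)

cement_cft = ratio[0] / total * dry_volume
sand_cft = ratio[1] / total * dry_volume
aggregate_cft = ratio[2] / total * dry_volume
cement_bags = cement_cft / 1.25                       # one bag = 1.25 cft

print(round(cement_cft, 2), round(cement_bags, 2),
      round(sand_cft, 2), round(aggregate_cft, 2))
```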
I appreciate you reading this article. I hope you find it helpful.
What is the process of hypothesis testing ?
Hypothesis testing is a systematic method used to evaluate the validity of a hypothesis based on sample data. It’s a cornerstone of scientific research and plays a vital role in various fields,
including machine learning. Here’s a breakdown of the key steps involved:
1. Formulate the Hypotheses:
□ You define two competing statements:
☆ Null Hypothesis (H₀): This represents the default assumption, often stating no significant difference or effect. For instance, “Playing classical music does not affect plant growth.”
☆ Alternative Hypothesis (Hₐ): This is the opposite of the null hypothesis, what you actually aim to prove or disprove. In this case, “Playing classical music increases plant growth.”
2. Collect Data: You gather a representative sample of data from the population of interest. This sample should be chosen randomly to avoid bias and ensure it reflects the larger population.
3. Choose a Statistical Test: The type of test you choose depends on your data (numerical, categorical) and the hypotheses you formulated. Common tests include:
□ t-tests: Used to compare means between two groups.
□ Chi-square tests: Used to assess relationships between categorical variables.
□ ANOVA (Analysis of Variance): Used to compare means between three or more groups.
4. Analyze the Data: Apply the chosen statistical test to your sample data and calculate a test statistic (e.g., t-value, p-value). This test statistic helps quantify the evidence against the null hypothesis.
5. Interpret the Results:
□ You set a significance level (alpha, usually 0.05 or 0.01) – the maximum acceptable probability of rejecting a true null hypothesis (making a Type I error).
□ You compare the test statistic with a critical value from a statistical table or assess the p-value.
☆ If the test statistic falls outside the critical region or the p-value is less than alpha, you reject the null hypothesis (there’s evidence to support the alternative hypothesis). This
suggests the observed effect is unlikely due to random chance.
☆ Otherwise, you fail to reject the null hypothesis (but don’t necessarily prove it’s true). This could be due to a lack of evidence in the sample data or the need for a larger sample size.
Essentially, hypothesis testing helps you decide whether your data provides enough evidence to cast doubt on the null hypothesis.
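As an illustration of steps 3 through 5, here is a minimal hand-rolled Welch t-test in Python; the data values are invented for the example, and the critical value quoted in the comment is approximate.

```python
import math

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

control = [20.1, 19.8, 20.5, 20.0, 19.9]   # e.g. plant heights, no music
treated = [21.0, 21.4, 20.9, 21.2, 21.1]   # heights with classical music
t = welch_t(treated, control)
# Compare |t| against the critical value (roughly 2.4 at alpha = 0.05 with
# ~7 degrees of freedom here); a larger |t| means we reject the null
# hypothesis of equal means.
print(round(t, 2))
```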
Here are some additional points to remember:
• Hypothesis testing is just one tool in the scientific toolbox. It’s often used alongside other methods like observation and experimentation.
• The outcome of a hypothesis test is not a definitive answer, but it guides you in making evidence-based decisions.
• A well-designed hypothesis test is crucial for its validity. Factors like sample size and randomization of data collection can impact the results.
By following these steps, you can perform a hypothesis test and draw more meaningful conclusions from your data.
Exponential Functions
Then g is onto, and f is not.
Slide 8
Exponential Functions
The exponential function with base b
is the following function from R to R+:
exp_b(x) = b^x
b^0 = 1,  b^(-x) = 1/b^x
b^u · b^v = b^(u+v)
(b^u)^v = b^(uv)
(bc)^u = b^u · c^u
Slide 9
Logarithmic Functions
The logarithmic function with base b
(b > 0, b ≠ 1)
is the following function from R+ to R:
log_b(x) = the exponent to which b must be raised to obtain x.
Symbolically, log_b x = y ⟺ b^y = x.
Slide 10
One-to-one Correspondences
Definition: A one-to-one correspondence
(or bijection) from a set X to a set Y
is a function f:X→Y
that is both one-to-one and onto.
1) Linear functions: f(x) = ax + b when a ≠ 0
(with domain and co-domain R)
2) Exponential functions: f(x) = b^x (b > 0, b ≠ 1)
(with domain R and co-domain R+)
3) Logarithmic functions: f(x) = log_b x (b > 0, b ≠ 1)
(with domain R+ and co-domain R)
Slide 11
Inverse Functions
Suppose F: X→Y is a one-to-one correspondence.
Then there is a function F-1: Y→X defined as follows:
Given any element in Y,
F-1(y) = the unique element x in X
such that F(x)=y .
The function F-1 is called the inverse function for F.
The logarithmic function with base b (b > 0, b ≠ 1)
is the inverse of the exponential function with base b.
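This inverse relationship is easy to check numerically (base 2 and the sample points are arbitrary choices):

```python
import math

b = 2.0
for x in [-3.0, 0.5, 4.0]:
    # log_b undoes exponentiation with base b ...
    assert abs(math.log(b ** x, b) - x) < 1e-9
    # ... and exponentiation with base b undoes log_b
    y = b ** x
    assert abs(b ** math.log(y, b) - y) < 1e-9
print("inverse relationship holds")
```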
What Three Things Could Happen To An Organism If Its Environment Changes?
It could die, it could become overpopulated, or it could find better living circumstances.
Step-by-step explanation:
Plants may die because they are not used to the new environmental change.
Step-by-step explanation:
Animals might lack food and water. If organisms cannot adapt to the changes in their ecosystem, they may move to another location. If they do not move, the species may become threatened, endangered,
or extinct.
The area of the rectangular board is 0.99 square meters.
Given that a rectangular board is 1100 millimeters long and 900 millimeters wide.
We need to find the area of the board,
To find the area of the rectangular board in square meters, we need to convert the given dimensions from millimeters to meters and then calculate the area.
Length of the board = 1100 millimeters
Width of the board = 900 millimeters
To convert millimeters to meters, we divide the given values by 1000:
Length in meters = 1100 mm / 1000 = 1.1 meters
Width in meters = 900 mm / 1000 = 0.9 meters
The formula to calculate the area of a rectangle is:
Area = Length × Width
Substituting the values, we get:
Area = 1.1 meters × 0.9 meters
Area = 0.99 square meters
Therefore, the area of the rectangular board is 0.99 square meters.
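The conversion and multiplication above reduce to three lines of arithmetic:

```python
# mm -> m conversion and rectangle area, matching the worked example above
length_m = 1100 / 1000   # 1.1 m
width_m = 900 / 1000     # 0.9 m
area_m2 = length_m * width_m
print(round(area_m2, 2))  # prints 0.99
```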
Classification of numbers is about identifying which set, or sets, a number might belong to. It might be helpful to remember the different types of numbers as a story about filling in the numbers on
a number line.
But are there numbers between the ones we already have marked on the above number lines? The answer is yes - an infinite amount of numbers between every little mark.
What sort of numbers are these? Well, rational numbers are all numbers that indicate whole numbers as well as parts of whole numbers. So fractions, decimals, and percentages are added to our number
line to create the set of rational numbers.
A rational number is a number that can be expressed as a fraction where both the numerator and the denominator in the fraction are integers, and the denominator is not equal to zero.
Integers together with all fractions (including repeating or terminating decimals) make up the set of rational numbers.
They cannot be listed, but here are some examples:
..., -8, -7.4, -7, -6, -5.33387, -4, -2, 0, 1/2, 75%, 1, 2, 3, 3.5656, ...
But wait, our number line is still not quite full; there are still gaps. These gaps are filled with numbers we call irrational numbers. These are numbers like √21 and π.
Notice that some number sets are entirely contained within larger number sets. For example, all of the whole numbers like 1, 2, 3, 17, 28,736, ... etc. are also integers. But there are
some integers, like -1, -2, -56, -98,324, that are not whole numbers.
Similarly, rational numbers are also real numbers, but the set of real numbers includes all the rational numbers and all the irrational numbers.
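The defining property of a rational number — that it is a ratio of integers — can be checked with Python's fractions module (the particular values are taken from the examples above):

```python
from fractions import Fraction

# Terminating decimals and percentages convert exactly to integer ratios.
assert Fraction("-7.4") == Fraction(-37, 5)
assert Fraction("3.5656") == Fraction(4457, 1250)
assert Fraction(75, 100) == Fraction(3, 4)   # 75%
print("all rational")
```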
Round in circles
Thursday, 15th November 2018 ◆ Something unexpected in grand omelette (6) Maths
Note: this is a follow-on post from follow your dart.
In making games, I often want to randomly place objects inside a circle. I have always done this one of two ways, but I feel unsatisfied with both.
Method 1: Randomly pick a point in the surrounding square, and reject points which are not in the circle.
function tick() {
    do {
        x = random( -radius, radius )
        y = random( -radius, radius )
        d = sqrt( x*x + y*y )
    } while (d > radius)
    drawDot( x, y )
}
This generates an even spread of points. However, I don't like the fact that you are not guaranteed to find a valid point on each iteration. In this example, dots which took more than one iteration
to draw appear in red.
Method 2: Uniformly pick an angle and a distance.
function tick() {
    theta = random( 0, 2 * PI )
    d = random( 0, radius )
    drawDot( sin( theta ) * d, cos( theta ) * d )
}
This generates an uneven spread of points: there is a higher density of points near the centre. However, I like that each iteration will definitely pick a valid point.
I wanted to find the elusive method 3 which generates an even spread of points, but which is also guaranteed to generate a point every time. Using method 2 as the starting point, I will stick with a
uniform distribution for the angle. I just need to find the how to correctly generate the distance from the centre.
Thinking about such a distribution is what instigated a previous blog post: follow your dart. From that, I know the c.d.f. of the distribution: \(F_D(d) = \frac{d^2}{r^2}\). That gives us:
$$ \begin{aligned} \Theta &\sim \text{Uniform}(0, 2\pi) \\ D &\sim F_D \end{aligned} $$
In programming, rand()-esque functions are usually only uniform. I need a way, therefore, to use the uniform distribution to generate samples of \(F_D\). Let us define a new random variable \(X\)
which is the inverse of our target distribution applied to the uniform distribution: \(X = F_D^{-1}(U)\). It has c.d.f. \(F_X\).
$$ \begin{aligned} F_X(x) &= \mathbb{P}(X \le x) \\ &= \mathbb{P}(F_D^{-1}(U) \le x) \\ &= \mathbb{P}(U \le F_D(x)) \end{aligned} $$
For the last step, we apply \(F_D(\cdot)\) to both sides of the probability. This is acceptable since it's an increasing function (a property of the c.d.f.).
Finally, we look at the c.d.f. of the uniform distribution, it's just \(\mathbb{P}(U \le u) = u\). This means we have:
$$ \begin{aligned} F_X(x) &= \mathbb{P}(U \le F_D(x)) \\ &= F_D(x) \end{aligned} $$
Which is exactly the distribution we want to simulate. That is the distribution of \(X\) is exactly the same as that of \(D\). Generating \(X\) is easy, we apply \(F_D^{-1}\) to samples of the
uniform distribution.
$$ \begin{aligned} F_D(d) &= \frac{d^2}{r^2} \\ r^2 F_D(d) &= d^2 \\ \sqrt{r^2 F_D(d)} &= d \\ d &= \sqrt{r^2 F_D(d)} \\ F_D^{-1}(d) &= r\sqrt{d} \end{aligned} $$
Which means to generate \(D\), we simply use the square root of the uniform distribution and multiply by the radius. That's pretty elegant!
Let's put this into practice:
function tick() {
    theta = random( 0, 2 * PI )
    d = radius * sqrt( random( 0, 1 ) )
    drawDot( sin( theta ) * d, cos( theta ) * d )
}
This is certainly a success. Since the square root is notoriously expensive, I'd be curious whether it's actually any quicker than method 1 on average. Either way, I much prefer this method and I will be
using it at every opportunity!
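As a quick empirical check (my addition, not part of the post): under D = r·√U the mean distance from the centre should approach 2r/3, which is the mean distance for a uniformly filled disc.

```python
import random

random.seed(42)   # fixed seed so the run is reproducible
r = 1.0
n = 100_000
mean_d = sum(r * random.random() ** 0.5 for _ in range(n)) / n
# For a uniform disc of radius r, E[D] = 2r/3.
assert abs(mean_d - 2 / 3) < 0.01
print("mean distance consistent with a uniform disc")
```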
Parallel solutions of simple indexed recurrence equations
We define a new type of recurrence equations called 'Simple Indexed Recurrences' (SIR). In this type of equations, ordinary recurrences are generalized to X[g(i)] = op[i](X[f(i)], X[g(i)]), where
f,g: {1...n}→{1...m}, op[i](x,y) is a binary associative operator and g is distinct. This enables us to model certain sequential loops as a sequence of SIR equations. A parallel algorithm that solves
a set of SIR equations will, in fact, parallelize sequential loops of the above type. Such a parallel SIR algorithm must be efficient enough to compete with the O(n) work complexity of the
original loop. We show why efficient parallel algorithms for the related problems of List Ranking and Tree Contraction, which require O(n) work, cannot be applied to solving SIR. A sequence of
experiments was performed to test the effect of synchronous and asynchronous executions on the actual performance of the algorithm. An efficient solution is given for the special case where we know
how to compute the inverse of op[i], and finally, useful applications of SIR to the well-known Livermore Loops benchmark are presented.
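A minimal sketch of the kind of sequential loop the abstract describes (the names and the choice of operator are illustrative, not from the paper). With op = addition, f(i) = i-1 and g(i) = i, the recurrence computes prefix sums:

```python
def run_sir(X, f, g, op):
    """Sequentially evaluate X[g[i]] = op(X[f[i]], X[g[i]]) for each i."""
    for i in range(len(f)):
        X[g[i]] = op(X[f[i]], X[g[i]])
    return X

# Toy instance: op = + (associative), f shifts left by one, g is the identity
# on indices 1..3, so the loop accumulates running totals.
result = run_sir([1, 2, 3, 4], [0, 1, 2], [1, 2, 3], lambda a, b: a + b)
print(result)  # prints [1, 3, 6, 10]
```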
Incorrect Division and Remainder
I am using the following code and looking at the results in the inspector at runtime:
var number = 0.0;
var quotient = 0.0;
var remainder = 0.0;
function Update () {
    quotient = Mathf.Floor(number / 12);
    remainder = number % 12;
}
It should divide the varable ‘number’ by 12 giving the ‘quotient’ rounded down to the nearest whole number, and the ‘remainder’.
But, if I set ‘number’ to equal 12.99, the remainder given is 0.9899998, instead of 0.99.
Can anyone see where i am going wrong?
Any help is appreciated.
Hi, I guess this is a float precision issue. Here is a good article about it: http://www.gamasutra.com/view/feature/1965/visualizing_floats.php?print=1
In your case, you may be able to increase precision using: remainder = number - (quotient * 12). You could also try using double-precision floats; they behave the same as regular floats but with
greater accuracy.
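The effect is easy to reproduce outside Unity. This Python sketch round-trips the value through 32 bits with the struct module to mimic Unity's single-precision floats:

```python
import struct

x = 12.99
print(x % 12)      # close to 0.99 but not exact, even in 64-bit floats

# Simulate single precision by packing to and unpacking from 32 bits.
x32 = struct.unpack('f', struct.pack('f', 12.99))[0]
print(x32 % 12)    # noticeably off, like the 0.9899998 in the question
```

The takeaway is that 12.99 has no exact binary representation, and the error is simply larger at 32-bit precision.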
You can't divide by 0.
I think that this is the error.
Excel Essentials: Excel IF Formula
Excel IF Formula.
The Excel IF formula, or more correctly the IF function, checks a condition that must be either true or false.
If the condition is true, the Excel IF formula carries out whatever instructions are in the true section. If the condition is false, it carries out whatever instructions are in the
false section.
The IF function has three arguments: the condition you want to check, the instructions if the condition is true, and the instructions if the condition is false.
Here is the Excel IF()Syntax:
IF(Logical_test, Action_if_true, Action_if_false)
Logical_test
The logical_test evaluates an expression to see if it passes the test, i.e. is TRUE, or does not pass the test, i.e. is FALSE.
Action_if_true can be a value or an operation. Whichever, the result is placed in the cell that contains the IF ( ) Function if the logical_test is true.
Action_if_false can be a value or an operation. Whichever, the result is placed in the cell that contains the IF ( ) Function if the logical_test is false.
Excel IF Formula Example.
Let's look at an example for calculating bonuses based on total sales.
A company offers its salesmen a 10% bonus if the value of total sales is over 4,000; otherwise the sales reps get no bonus. We will put the bonus breakpoint in cell C1 and the bonus rate in cell C2.
When translated into the IF() function, the Excel IF formula will look like the following:
=IF(B5>=$C$1,B5*$C$2,"No Bonus")
Note the importance of using cell references for the variables. If the commission rate changes, I only have to change the value in cell reference C2 and not change maybe hundreds of individual formulas.
Also note the importance of making C1 and C2 absolute references. For an understanding of the different types of cell references, see our essential primer on Relative & Absolute references.
You can build this Excel IF function and other functions using the Function help box. Type the name of the function into a cell, make sure you include the first (, and then click the fx symbol to
the left of the formula bar.
This Function helps box assist you in giving the correct data or arguments to the Function.
Nested IF Function Example:
We have placed the following IF function in cell G15
=IF (F15=1,1.5, IF(F15=2, 1.4,1))
In this example, the Excel IF formula checks whether F15 equals 1; if it does, G15 is set to 1.5.
If not, the second IF is evaluated: if F15 equals 2, G15 is set to 1.4; otherwise G15 is set to 1.
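Rendered as ordinary code, the nested IF is just an if/elif/else chain — a hypothetical Python translation to make the branching explicit:

```python
def nested_if(f15):
    """Equivalent of =IF(F15=1, 1.5, IF(F15=2, 1.4, 1))."""
    if f15 == 1:
        return 1.5
    elif f15 == 2:
        return 1.4
    else:
        return 1

assert nested_if(1) == 1.5
assert nested_if(2) == 1.4
assert nested_if(3) == 1
```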
In Excel 2003, you can have up to a maximum of 7 nested IF statements. From Excel 2007, that limit rose to 64.
Analytic and Geometric Approaches to Machine Learning
26/07/2021 - 30/07/2021
Dr Matthew Thorpe (The University of Manchester)
Dr Patricia Vitoria Carrera (Universitat Pompeu Fabra, Barcelona)
Dr Bamdad Hosseini (California Institute of Technology)
Funding: International Centre for Mathematical Sciences (ICMS)
Sponsor: Institute for Mathematical Innovation (IMI)
The aim of the workshop is to bring together researchers that apply mathematical methodology to machine learning. We particularly want to emphasise how mathematical theory can inform applications and
vice versa.
This workshop is the first of two workshops on this topic. The second will be an in-person workshop to be held at the University of Bath, in December 2021. In this first workshop, invited speakers
are encouraged to present open problems and explore interesting directions for potential research as part of their talk. The schedule allows participants time to initiate conversations and
collaborations that can be developed at the winter workshop.
The two workshops in this series follow on from the LMS-Bath Symposium on the Mathematics of Machine Learning (https://mathml2020.github.io/index), held 3-7 August 2020.
Speaker list
Kostas Zygalakis
Talk title
Connection Between Optimization and Sampling Algorithms
Optimization and sampling problems lie at the heart of Bayesian inverse problems. The ability to solve such inverse problems depends crucially on the efficient calculation of quantities relating to the
posterior distribution, giving thus rise to computationally challenging high dimensional optimization and sampling problems. In this talk, we will connect the corresponding optimization and sampling
problems to the large time behaviour of solutions to (stochastic) differential equations. Establishing such a connection allows to utilise existing knowledge from the field of numerical analysis of
differential equations. In particular, two very important concepts are numerical stability and numerical contractivity. In the case of linear differential equations these two concepts coincide, but
with the exception of some very simple Runge-Kutta methods, such as the Euler method, in the non-linear case numerical stability does not imply numerical contractivity [1]. However, the recently introduced
framework of integral quadratic constraints and Lyapunov functions [2, 3] allows for bridging this gap between linearity and non-linearity in the case of (strongly) convex functions. We will use this
framework, to study a large class of strongly convex optimization methods and give an alternative explanation for the good properties of Nesterov method, as well as highlight the reasons behind the
failure of the heavy ball method [2]. In addition, using similar ideas [4], we will present a general framework for the non-asymptotic study of the 2-Wasserstein distance between the invariant
distribution of an ergodic stochastic differential equation and the distribution of its numerical approximation in the strongly log-concave case. This allows us to study in a unified way a number of
different integrators proposed in the literature for the overdamped and underdamped Langevin dynamics. If time allows, we will also talk about the application of these ideas to imaging inverse
problems [5, 6].
Future research
In (strongly log-concave) sampling should you go explicit or implicit? I will describe some very recent (and perhaps incomplete) work that aims to study the computational efficiency of explicit methods
for sampling problems vs implicit integrators. In particular, the class of strongly convex potentials is ideal as it allows for the explicit computation of the number of function evaluations that one
needs to do in order to solve the optimization problem related to performing an implicit numerical step.
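As a concrete toy version of the explicit-integrator side of this question, the sketch below (my illustration, not the speaker's code) runs the unadjusted Langevin algorithm, i.e. the explicit Euler-Maruyama discretisation of the overdamped Langevin SDE, on the strongly convex potential U(x) = x^2/2, whose invariant law is N(0, 1):

```python
import numpy as np

def ula(grad_U, x0, step, n_steps, rng):
    """Unadjusted Langevin algorithm: explicit Euler-Maruyama scheme for
    dX_t = -grad U(X_t) dt + sqrt(2) dW_t, whose invariant law is prop. to exp(-U)."""
    x = float(x0)
    out = np.empty(n_steps)
    for k in range(n_steps):
        x = x - step * grad_U(x) + np.sqrt(2.0 * step) * rng.standard_normal()
        out[k] = x
    return out

rng = np.random.default_rng(0)
# Strongly convex potential U(x) = x**2 / 2, so the target is N(0, 1)
samples = ula(lambda x: x, 0.0, step=0.01, n_steps=200_000, rng=rng)
burned = samples[10_000:]   # after burn-in: mean near 0, variance near 1, up to O(step) bias
```

The explicit scheme is cheap per step but carries an O(step) bias in the invariant distribution, which is exactly the trade-off against implicit integrators raised above.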
Yves van Gennip
Talk Title
Gradient Flows and Thresholding Schemes on Graphs
We present unconstrained and constrained gradient flows and thresholding schemes on graphs, which can be used in applications such as image segmentation and data clustering. We study connections
between the flows and schemes and investigate discrete-to-continuum limit properties.
Future Research
Besides clustering and the maximum-cut problem, which other combinatorial problems on graphs are amenable to relaxation via differential equations?
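A minimal illustration of such a thresholding scheme is an MBO-type iteration (my sketch, not code from the talk): run the graph heat flow for a short time, then threshold back to {0, 1}. On two well-separated clusters this denoises corrupted labels:

```python
import numpy as np

# Toy data: two well-separated 1-D clusters of 20 points each
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 0.1, 20), rng.normal(5.0, 0.1, 20)])
n = x.size

# Gaussian similarity weights and unnormalised graph Laplacian L = D - W
W = np.exp(-(x[:, None] - x[None, :]) ** 2)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W

true = (x > 2.5).astype(float)        # ground-truth cluster labels
u = true.copy()
flip = rng.choice(n, size=4, replace=False)
u[flip] = 1.0 - u[flip]               # corrupt four labels

# MBO scheme: a few explicit heat-flow steps, then threshold
dt = 0.01
for _ in range(30):
    for _ in range(10):
        u = u - dt * (L @ u)          # diffusion under the graph heat flow
    u = (u > 0.5).astype(float)       # thresholding back to {0, 1}
```

Diffusion pulls each node towards its cluster average, and thresholding restores a binary labelling, so the corrupted labels are repaired after a few outer iterations.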
Bernhard Schmitzer
Linearization of Balanced and Unbalanced Optimal Transport
Optimal transport provides a geometrically intuitive Lagrangian way of comparing distributions by mass rearrangement. The metric can be approximated by representing each sample as deformation of a
reference distribution. Formally this corresponds to a local linearization of the underlying Riemannian structure. When combined with subsequent data analysis and machine learning methods this new
embedding usually outperforms the standard Eulerian representation. We show how the framework can be extended to the unbalanced Hellinger–Kantorovich distance to improve robustness to noise.
Future Research
For the balanced case the properties of the linearized optimal transport embedding are already understood quite well. In the unbalanced case we seek to confirm the well-posedness of linearized
interpolations and to understand the range in which we can expect the approximation to be accurate. In both cases we look for ways to go beyond a simple one-point linearization.
Youssef Marzouk
Density Estimation and Conditional Simulation Using Triangular Transport
Triangular transformations of measures, such as the Knothe–Rosenblatt rearrangement, underlie many new computational approaches for density estimation and conditional simulation. This talk will
discuss several aspects of such constructions.
First we present new approximation results for triangular maps, focusing on analytic densities with bounded support. We show that there exist sparse polynomial approximations and deep ReLU network
approximations of such maps that converge exponentially, where error is assessed on the pushforward distributions of such maps in a variety of distances and divergences. We also give an explicit a
priori description of the polynomial ansatz space guaranteeing a given error bound.
Second, we discuss the problem of estimating a triangular transformation given a sample from a distribution of interest—and hence, transport-driven density estimation. We present a general functional
framework for representing monotone triangular maps between distributions on unbounded domains, and analyze properties of maximum likelihood estimation in this framework. We demonstrate that the
associated optimization problem is smooth and, under appropriate conditions, has no spurious local minima. This result provides a foundation for a greedy semi-parametric estimation procedure.
Time permitting, we may also discuss a conditional simulation method that employs a specific composition of maps, derived from the Knothe–Rosenblatt rearrangement, to push forward a joint distribution
to any desired conditional. We show that this composed-map approach reduces variability in conditional density estimates and reduces the bias associated with any approximate map representation. For
context, and as a pointer to an interesting application domain, we elucidate links between conditional simulation with composed maps and the ensemble Kalman filter used in geophysical data assimilation.
This is joint work with Ricardo Baptista (MIT), Olivier Zahm (INRIA), and Jakob Zech (Heidelberg).
Future Research
Open questions:
1. Removal of dependence, i.e., estimating and minimizing mutual information over a given space of transformations. In other words, how to find a transformation T such that Z = T (Y, X) has minimal
dependence with Y ? This turns out to be the essential underlying problem of conditional simulation, in my view.
2. Transport-based methods for (approximate) inference are a natural complement to ensemble methods for data assimilation. Yet embedding an approximate inference step into a dynamical system presents
many new challenges and considerations. In what sense should inference be made accurate, given the subsequent evolution of the approximate posterior under the dynamics? How do errors in inference
affect stability of the associated interacting particle system? How can we use this understanding to design better algorithms?
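In the Gaussian case the Knothe-Rosenblatt rearrangement can be written down in closed form, which allows a small self-contained check (my sketch, not the speaker's code): the lower-triangular Cholesky factor of the target covariance is exactly the triangular map pushing N(0, I) forward to N(0, Sigma):

```python
import numpy as np

# Target covariance; its lower-triangular Cholesky factor is the
# Knothe-Rosenblatt map from N(0, I) to N(0, Sigma) in the Gaussian case.
Sigma = np.array([[2.0, 0.6],
                  [0.6, 1.0]])
Lmap = np.linalg.cholesky(Sigma)      # lower triangular: KR structure

def kr_map(z):
    # First output depends on z1 only, second on (z1, z2): triangular map.
    return z @ Lmap.T

rng = np.random.default_rng(0)
z = rng.standard_normal((200_000, 2))
x = kr_map(z)
emp = np.cov(x.T)                     # empirical covariance of the pushforward
```

The empirical covariance of the mapped samples matches Sigma up to Monte-Carlo error, and the triangular structure (zero above the diagonal) is the defining property of the rearrangement.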
Coloma Ballester
Generative Methods for Some Inverse Problems in Imaging
Generative approaches aim to model the probability distribution of certain data so that we can draw new samples from the distribution. In this talk, we will discuss them together with their use to
tackle several imaging problems. They include a method for out-of-distribution detection that leverages the learning of the probability distribution of normal data through generative adversarial networks while
simultaneously keeping track of the states of the learning to finally estimate an efficient anomaly detector. Also, a method for image colorization will be discussed. It is based on an adversarial
strategy that captures geometric, perceptual and semantic information. Finally, the use of generative methods for inpainting will be discussed.
Future Research
An open/interesting problem related to this work is the suitability of generative methods to obtain multiple solutions of problems that do not have a unique solution. Another open problem is how the
learning of the underlying distribution of color images is affected by the color space in which those images are given.
Andres Almansa
Bayesian Estimators for Inverse Imaging Problems with Decoupled Learned Priors
Decoupled methods derive Minimum Mean Square Error (MMSE) or Maximum A Posteriori (MAP) estimators for inverse problems in imaging by combining an explicit likelihood function with a prior that was
previously trained on a large natural image database. In this talk we present recent developments on such decoupled methods for two kinds of learned priors.
Part I: Concentrates on so called Plug & Play (PnP) methods, where the prior is implicitly defined by an image denoising algorithm. In [1] we introduce theory, methods, and a provably convergent
algorithm for performing Bayesian inference with PnP priors. We introduce two algorithms: 1) PnP-ULA (Plug & Play Unadjusted Langevin Algorithm) for Monte Carlo sampling and MMSE inference; and 2)
PnP-SGD (Plug & Play Stochastic Gradient Descent) for MAP inference. Using recent results on the quantitative convergence of Markov chains, we establish detailed convergence guarantees for these two
algorithms under realistic assumptions on the denoising operators used, with special attention to denoisers based on deep neural networks. We also show that
decision-theoretically optimal Bayesian model that is well-posed. The proposed algorithms are demonstrated on several canonical problems such as image deblurring, inpainting, and denoising, where
they are used for point estimation as well as for uncertainty visualisation and quantification.
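To make the PnP-ULA idea concrete, here is a deliberately simple one-dimensional sketch of my own (not the authors' code): the learned denoiser is replaced by the exact MMSE denoiser for a N(0, 1) prior, so the prior score recovered via Tweedie's formula is nearly exact and the chain targets a Gaussian posterior whose mean we can check:

```python
import numpy as np

def mmse_denoiser(x, eps):
    # Exact MMSE denoiser for a N(0, 1) prior at noise level eps; it stands in
    # for the learned deep denoiser that PnP-ULA would normally use.
    return x / (1.0 + eps)

def pnp_ula(y, sigma2, eps, step, n_steps, rng):
    x = 0.0
    chain = np.empty(n_steps)
    for k in range(n_steps):
        grad_lik = (y - x) / sigma2                      # Gaussian likelihood score
        prior_score = (mmse_denoiser(x, eps) - x) / eps  # Tweedie's formula
        x += step * (grad_lik + prior_score) + np.sqrt(2.0 * step) * rng.standard_normal()
        chain[k] = x
    return chain

rng = np.random.default_rng(0)
chain = pnp_ula(y=2.0, sigma2=1.0, eps=0.01, step=0.005, n_steps=200_000, rng=rng)
post_mean = chain[20_000:].mean()   # exact posterior mean here is y / 2 = 1.0
```

With prior N(0, 1) and likelihood N(x, 1) at observation y = 2, the exact posterior mean is 1, and the chain's long-run average recovers it up to discretisation and Monte-Carlo error.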
Part II: Concentrates on generative models where the prior in the image domain is expressed as the pushforward measure of a Gaussian distribution in latent space. Whereas previous MAP-based
approaches to this problem lead to highly non-convex optimization algorithms, the JPMAP approach proposed in [2] computes the joint (space-latent) MAP that naturally leads to alternate optimization
algorithms and to the use of a stochastic encoder to accelerate computations. We show theoretical and experimental evidence that the proposed objective function is quite close to bi-convex. Indeed it
satisfies a weak bi-convexity property which is sufficient to guarantee that our optimization scheme converges to a stationary point. Experimental results also show the higher quality of the solutions
obtained by our joint posterior maximization approach with respect to other non-convex MAP approaches which more often get stuck in spurious local optima. Current work also explores how the
quasi-bi-convexity property of the joint posterior could be exploited to provide efficient posterior sampling schemes that lead to faster MMSE and uncertainty estimators.
Mario González, Rémi Laumont, Jean Prost, Valentin de Bortoli, Julie Delon, Alain Durmus, Antoine Houdard, Pablo Musé, Nicolas Papadakis, Marcelo Pereyra, Pauline Tan.
Future Research
We are currently exploring theoretical, methodological and applied generalizations of the methods and algorithms described in the abstract.
Resizeable generative priors: PnP priors are usually implemented as convolutional neural network denoisers that can be trained on image patches of a certain size and then used on images of any
size. Generative priors, in contrast, are usually trained and used on images of a fixed size. This issue significantly limits the applicability of generative priors for general inverse problems in
imaging. A few solutions to this issue have been explored: (a) In [3] a generative prior is trained on natural image patches and then the regularization term is applied patch-wise, but this is
computationally inefficient and results are not competitive with other PnP approaches. (b) In [4] the architecture of the generative model is fully convolutional which allows the generative model to be
resized. The authors show a few relatively convincing applications of this resizability feature but no in-depth theoretical explanation was provided to justify that this operation is well-founded.
Faster posterior sampling schemes: The PnP-ULA scheme only provides stability and convergence guarantees as long as the step-size satisfies certain limits which makes it quite slow. In order to allow
for larger step-sizes and faster schemes we can consider semi-implicit or higher-order schemes. These introduce, however, the need for denoisers D such that the operator (D + a Id) can be efficiently
inverted. For generative priors, the pCN scheme [5] can perform posterior sampling based on the generator network only. We are exploring whether this scheme can be accelerated when an encoder network
is also available like in the JPMAP approach [2].
Applications: On the applications side, we are exploring the use of PnP posterior sampling techniques for semi-blind restoration in the context of camera shake, as well as for uncertainty quantification.
Johannes Schmidt-Hieber
Overparametrization and the Bias-Variance Dilemma
For several machine learning methods such as neural networks, good generalisation performance has been reported in the overparametrized regime. In view of the classical bias-variance trade-off, this
behaviour is highly counterintuitive. The talk summarizes recent theoretical results on overparametrization and the bias-variance trade-off. This is joint work with Alexis Derumigny.
Future Research
In the talk we present universal lower bounds on the bias-variance trade-off. These bounds have also to be obeyed in the overparametrized regime. It is an open problem to derive rate optimal bounds on
bias and variance in the overparametrized regime for standard nonparametric and high-dimensional models. Another challenging problem is to extend the results to the trade-off between bias and mean
absolute deviation, see the paper arXiv:2006.00278 for more details and a first result in this direction.
Omiros Papaspiliopoulos
Scalable Computation for Bayesian Hierarchical Models
I will provide an overview of very recent theory that establishes conditions under which coordinate-wise updating algorithms, in particular Gibbs samplers for posterior sampling, coordinate ascent
variational inference for posterior approximation, and backfitting algorithms for MAP estimation, are scalable. Scalability here means that their computational complexity in order to achieve a certain
approximation precision scales linearly in the size of the data and the size of the model. The results refer to a very important family of statistical models, known as crossed-effect hierarchical
models, which are the canonical predictive/inference models for regressing against categorical predictors. The theory brings together multigrid decompositions, concentration inequalities and random graph theory.
Mathew Penrose
Optimal Cuts of Random Geometric Graphs
Given a “cloud” of n points sampled independently uniformly at random from a Euclidean domain D, one may form a geometric graph by connecting nearby points using a distance parameter r(n). We
consider the problem of partitioning the cloud into two pieces to minimise the number of “cut edges” of this graph, subject to a penalty for an unbalanced partition. The optimal score is known as the
Cheeger constant of the graph. We discuss convergence of the Cheeger constant (suitably rescaled) for large n with suitably chosen r(n), towards an analogous quantity defined for the original domain
D, and the related problem of optimal bisection into two pieces of equal size.
Future Research
These results are for when the mean degree nr(n)d grows faster than logn; open problems include finding analogous results (i) when the mean degree grows more slowly than this, or (ii) for the
k-nearest neighbour graph with k = k(n) tending to infinity, or (iii) extending the results to manifolds.
Partitioning problems are of interest in themselves when dealing with multidimensional data, and moreover the Cheeger constants provide bounds on Laplacian eigenvalues, both for graphs and in the continuum.
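The quantity in question can be computed directly on a small example (a sketch of mine, using two clusters rather than a uniform sample so that the good cut is obvious): build the geometric graph and compare the Cheeger ratio cut(S, S^c) / min(|S|, |S^c|) of the geometric split against a geometry-blind balanced split:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal([0.0, 0.0], 0.1, size=(50, 2))   # first cluster
B = rng.normal([2.0, 0.0], 0.1, size=(50, 2))   # second cluster, far away
pts = np.vstack([A, B])
n, r = len(pts), 0.5

# Adjacency of the geometric graph: connect points closer than r
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=2)
adj = (d < r) & ~np.eye(n, dtype=bool)

def cheeger_ratio(mask):
    """cut(S, S^c) / min(|S|, |S^c|) for the vertex set S given by mask."""
    cut = adj[mask][:, ~mask].sum()
    return cut / min(mask.sum(), (~mask).sum())

geometric = np.arange(n) < 50          # split along the two blobs
interleaved = np.arange(n) % 2 == 0    # balanced but geometry-blind split
```

The geometric split cuts no edges at all (the clusters are farther apart than r), while the interleaved split severs many within-cluster edges, so its Cheeger ratio is strictly larger.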
Franca Hoffmann
Discrete-Continuum Interplay: Formulations for Supervised and Semi-Supervised Learning
Graph Laplacians encode geometric information contained in data, via the eigenfunctions associated with their small eigenvalues. These spectral properties provide powerful tools in data clustering
and data classification. When a large number of data points are available one may consider continuum limits of the graph Laplacian, both to give insight and, potentially, as the basis for numerical
methods. We summarize recent insights into the properties of a family of weighted elliptic operators arising in the large data limit of different graph Laplacian normalizations, and the role these
operators play, both in the discrete and in the continuum, for clustering and classification algorithms. We show consistency of optimization-based techniques for graph-based semi-supervised learning
(SSL), in the limit where the labels have small noise and the underlying unlabelled data is well clustered. This formalism suggests continuous versions of SSL algorithms, making use of these
differential operators.
Future Research
How to leverage continuum formulations for algorithm design and implementations? How to evaluate and compare these implementations?
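One of the optimization-based SSL techniques alluded to above, Laplace learning (harmonic extension), fits in a few lines; the example below is my own minimal sketch on two clusters with one label each, not an implementation from the talk:

```python
import numpy as np

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 0.3, (30, 2)),
               rng.normal([4, 0], 0.3, (30, 2))])
n = len(X)
W = np.exp(-np.linalg.norm(X[:, None] - X[None, :], axis=2) ** 2)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W        # unnormalised graph Laplacian

labeled = np.array([0, 30])           # one labelled point per cluster
labels = np.array([0.0, 1.0])
unlabeled = np.setdiff1d(np.arange(n), labeled)

# Laplace learning / harmonic extension: minimise u^T L u subject to the
# labels, i.e. solve L_uu u_u = -L_ul y_l on the unlabelled nodes.
u = np.empty(n)
u[labeled] = labels
u[unlabeled] = np.linalg.solve(L[np.ix_(unlabeled, unlabeled)],
                               -L[np.ix_(unlabeled, labeled)] @ labels)
pred = (u > 0.5).astype(int)
```

Because the data are well clustered, the harmonic extension is nearly constant on each cluster and thresholding recovers the true partition from just two labels.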
Jeff Calder
PDE Continuum Limits for Prediction with Expert Advice
Prediction with expert advice refers to a class of machine learning problems that is concerned with how to optimally combine advice from multiple experts whose prediction qualities may vary greatly.
We study a stock prediction problem with history-dependent experts and an adversarial (worst-case) market, posing the problem as a repeated two-player game. The game is a generalization of the
Kohn-Serfaty two-player game for curvature motion. We prove that when the game is played for a long time, the discrete value functions converge in the continuum to the solution of a nonlinear
parabolic partial differential equation (PDE) and the optimal strategies are characterized by the solution of a Poisson equation over a De Bruijn graph, arising from the history-dependence of the
experts. Joint work with Nadejda Drenska (UMN).
Future Research
PDE continuum limits for adversarial multi-armed bandits. Multi-armed bandits is a machine learning problem centered around exploration vs exploitation in an online learning environment, and is used
to evaluate strategies for managing large research projects or allocating funds to researchers, among other problems. The user has many possible “arms” to query (the language relates to pulling an
arm of a slot machine), each of which gives the player a randomized reward (following a different distribution for each arm). The user wants to maximize their rewards over the long run, which requires
trading off exploration (pulling new arms that may end up giving poor rewards) and exploitation (repeatedly pulling the same arm that is known to give a good reward). The adversarial version of the
multi-armed bandit problem, where the rewards are controlled by an adversary whose goal is to minimize the player’s rewards, is very closely related to the prediction from expert advice problem
covered in my talk, except that the player has only partial knowledge in the multi-armed bandit setting. An interesting problem would center around determining a PDE continuum limit for the value
function of an adversarial multi-armed bandit problem, and using this to construct asymptotically optimal strategies for the player and adversary.
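For the full-information expert-advice problem that the bandit setting relaxes, the classical exponential-weights (Hedge) forecaster is a useful reference point; the sketch below (my code, not from the talk) checks its standard regret guarantee of sqrt(T log(n) / 2) against the best expert on random losses:

```python
import numpy as np

def hedge(losses, eta):
    """Exponential-weights forecaster; losses has shape (T, n_experts), values in [0, 1]."""
    T, n = losses.shape
    w = np.ones(n)
    total = 0.0
    for t in range(T):
        p = w / w.sum()               # play the normalised weights
        total += p @ losses[t]        # expected loss incurred this round
        w *= np.exp(-eta * losses[t]) # downweight experts that did badly
    return total

rng = np.random.default_rng(7)
T, n = 5000, 10
losses = rng.random((T, n))
eta = np.sqrt(8 * np.log(n) / T)      # standard tuning for known horizon T
alg = hedge(losses, eta)
best = losses.sum(axis=0).min()       # cumulative loss of the best expert
regret = alg - best
bound = np.sqrt(T * np.log(n) / 2)    # worst-case regret guarantee
```

The bound holds for any loss sequence, adversarial or not, which is what makes continuum (PDE) limits of the value function meaningful in the worst-case setting.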
Jonas Latz
Stochastic Gradient Descent in Continuous Time: Discrete and Continuous Data
Optimisation problems with discrete and continuous data appear in statistical estimation, machine learning, functional data science, robust optimal control, and variational inference. The ‘full’
target function in such optimisation problems is given by the integral over a family of parameterised target functions with respect to a discrete or continuous probability measure. Such problems
can often be solved by stochastic optimisation methods: performing optimisation steps with respect to the parameterised target function with randomly switched parameter values. In this talk, we
discuss a continuous-time variant of the stochastic gradient descent algorithm. This so-called stochastic gradient process couples a gradient flow minimising a parameterised target function and a
continuous-time ‘index’ process which determines the parameter.
We first briefly introduce the stochastic gradient process for finite, discrete data, which uses pure jump index processes. Then, we move on to continuous data. Here, we allow for very general index
processes: reflected diffusions, pure jump processes, as well as other Lévy processes on compact spaces. Thus, we study multiple sampling patterns for the continuous data space. We show that the
stochastic gradient process can approximate the gradient flow minimising the full target function at any accuracy. Moreover, we give convexity assumptions under which the stochastic gradient process
with constant learning rate is geometrically ergodic. In the same setting, we also obtain ergodicity and convergence to the minimiser of the full target function when the learning rate decreases over
time sufficiently slowly. We illustrate the applicability of the stochastic gradient process in a simple polynomial regression problem with noisy functional data, as well as in physics-informed neural
networks approximating the solution to certain partial differential equations.
Future Research
We propose a continuous-time optimisation algorithm that needs to be discretised to be used in practice. Hence, we need to find a suitable time-stepping method, for which we have two constraints: (1)
a suitable time-stepping method should retain the same/similar ergodic behaviour as the stochastic gradient process; (2) the method should be computationally cheap to be applicable in large scale
estimation and learning problems.
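The discretisation question raised above can already be explored on a toy problem; the sketch below (my own, with an index process that simply jumps at every step) is a forward-Euler discretisation of the stochastic gradient process for the quadratic target f(x) = mean_i (x - a_i)^2 / 2 with a decreasing learning rate:

```python
import numpy as np

rng = np.random.default_rng(4)
a = np.array([-1.0, 0.0, 3.0])      # finite, discrete data
minimiser = a.mean()                # minimiser of the full target f

x = 0.0
for k in range(100_000):
    h = 0.5 / (1.0 + 0.01 * k)      # learning rate decreasing sufficiently slowly
    i = rng.integers(len(a))        # pure-jump index process: resample the index
    x -= h * (x - a[i])             # gradient step on the current f_i only
```

With a constant learning rate the iterate would fluctuate around the minimiser (geometric ergodicity); with the decreasing rate above it converges to the minimiser of the full target, as the abstract describes for the continuous-time process.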
Yury Korolev
Approximation Properties of Two-Layer Neural Networks with Values in a Banach Space
Approximation properties of infinitely wide neural networks have been studied by several authors in the last few years. New function spaces have been introduced that consist of functions that can be
efficiently (i.e., with dimension-independent rates) approximated by neural networks of finite width. Typically, these functions act between Euclidean spaces, with a
high-dimensional input space and a lower-dimensional output space. As neural networks gain popularity in inherently infinite-dimensional settings such as imaging, it becomes necessary to analyse the
properties of neural networks as nonlinear operators acting between infinite-dimensional spaces. In this talk, I will present dimension-independent Monte-Carlo rates for neural networks acting between
Banach spaces with a partial order (vector lattices), where the ReLU nonlinearity will be interpreted as the lattice operation of taking the positive part.
Future Research
1. Studying infinitely wide/deep vector-valued networks with a more complex architecture: multilayer networks, ResNets, CNNs. Perhaps also other types of networks such as recurrent neural networks (I
know very little about them);
2. Neural networks with a ‘regularisation parameter’ that could be used to solve inverse problems for different noise levels without retraining (e.g., variable depth, scaling of the bias terms).
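The Monte-Carlo flavour of these rates can be seen in a scalar toy example (my sketch, not from the talk): a function with the integral "infinite-width" representation f(x) = integral over b in [0, 1] of relu(x - b), which equals x^2 / 2 on [0, 1], is approximated by a finite-width two-layer ReLU network with randomly sampled biases, and the error decays at the dimension-independent Monte-Carlo rate N^(-1/2):

```python
import numpy as np

def f_exact(x):
    # f(x) = integral_0^1 relu(x - b) db = x**2 / 2 for x in [0, 1]
    return 0.5 * x ** 2

def random_feature_net(x, biases):
    # Finite-width two-layer ReLU network: a Monte-Carlo average of neurons
    return np.maximum(x[:, None] - biases[None, :], 0.0).mean(axis=1)

rng = np.random.default_rng(5)
xs = np.linspace(0.0, 1.0, 101)
err = {}
for N in (100, 10_000):
    biases = rng.random(N)                     # N random neurons
    err[N] = np.sqrt(np.mean((random_feature_net(xs, biases) - f_exact(xs)) ** 2))
```

Increasing the width by a factor of 100 reduces the root-mean-square error by roughly a factor of 10, as the N^(-1/2) rate predicts.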
Matthias Ehrhardt
Equivariant Neural Networks for Inverse Problems
In recent years the use of convolutional layers to encode an inductive bias (translational equivariance) in neural networks has proven to be a very fruitful idea. The successes of this approach have
motivated a line of research into incorporating other symmetries into deep learning methods, in the form of group equivariant convolutional neural networks. Much of this work has been focused on
rototranslational symmetry of Rd, but other examples are the scaling symmetry of Rd and rotational symmetry of the sphere. In this work, we demonstrate that group equivariant convolutional
operations can naturally be incorporated into learned reconstruction methods for inverse problems that are motivated by the variational regularisation approach. Indeed, if the regularisation
functional is invariant under a group symmetry, the corresponding proximal operator will satisfy an equivariance property with respect to the same group symmetry. As a result of this observation, we
design learned iterative methods in which the proximal operators are modelled as group equivariant convolutional neural networks. We use rototranslationally equivariant operations in the proposed
methodology and apply it to the problems of low-dose computerised tomography reconstruction and subsampled magnetic resonance imaging reconstruction. The proposed methodology is demonstrated to
improve the reconstruction quality of a learned reconstruction method with a little extra computational cost at training time but without any extra cost at test time.
Future Research
In this talk we will make use of the learned proximal gradient method where the “proximal operator” is replaced by a neural network which can be trained either before or after being inserted
into the algorithm. If these neural networks are indeed proximal operators with the correct scaling, then classical convergence results can be utilized. More interesting in applications is to relax
these conditions as much as possible in order to allow for more flexibility and adaptation to data. Some results for averaged operators exist in this direction but these are not yet fully
satisfactory. Beyond the convergence of the algorithm it is of interest to mathematically analyze its limit and its regularization properties in an inverse problems context.
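The key observation (an invariant regularisation functional has an equivariant proximal operator) can be sanity-checked with the simplest possible prox; below is my toy check, not the authors' code, using soft-thresholding, the prox of the rotation-invariant functional lambda * ||x||_1, and a 90-degree image rotation:

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of lam * ||.||_1, a functional invariant under
    # rotations and flips of the image grid.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

rng = np.random.default_rng(8)
img = rng.standard_normal((16, 16))

lhs = soft_threshold(np.rot90(img), 0.3)   # rotate, then apply the prox
rhs = np.rot90(soft_threshold(img, 0.3))   # apply the prox, then rotate
```

The two orders agree exactly; in the paper's setting the same equivariance property motivates modelling the proximal operator as a group equivariant CNN rather than an unconstrained network.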
Philipp Peterson
Structured Singularities in Deep Learning
We will discuss various instances of structured singularities appearing in modern machine learning applications. A structured singularity is a discontinuity or a region where a function is
non-smooth, which is itself a (not necessarily smooth) manifold. Such structured singularities appear naturally in classification problems as class boundaries. It can be shown that in this case, the
regularity of the singularities is the decisive factor in describing the hardness of the underlying learning problem. In addition, learning classification problems with smooth boundaries is one of the
few examples where deep neural network architectures provably outperform classical methods in learning applications. We will describe some partial results quantifying the effect of singularities and
discuss the regularity assumption. The second example of structured singularities in learning will be discussed in the framework of inverse problems appearing in imaging. In many inverse problems,
based on pseudodifferential operators, one has a precise characterisation of how the forward and inverse operators transform singularities. Due to this, it seems promising to understand how deep
neural network architectures that attempt to approximate the inverse operator transform singularities through the layers. Here, however, one runs into the fundamental problem that singularities are a
phenomenon of the continuum, whereas neural networks acting on digital images are inherently discrete. We will discuss this question in detail for the inverse tomography problem.
Future Research
1. How appropriate is the smoothness assumption for classification problems? Is it possible to estimate the regularity of classification problems in real-world scenarios? Is it possible to set up a
meaningful numerical example? Can additional implicit assumptions on the decision boundary, such as locally minimal perimeter, be used to obtain reasonable regularity estimates?
2. Assuming a CNN is used to approximate an operator. What is the natural continuum analogue to a CNN when singularity propagation is concerned? What is the correct notion of a singularity in a
digital image in a learning-theoretical context? How is this notion of a digital singularity affected by an application of a neural network?
Christoph Brune
Geometric DL - From Learning ODE Dynamics towards Graph Neural Diffusion
In recent years the area of geometric deep learning has attracted significant attention in machine learning theory as well as in data science applications.
Two reasons for that development are that graph neural networks can effectively address different non-Euclidean geometric data domains, and that geometric priors can induce appealing analytical
properties like symmetry and scale separation for representing physically structured data.
Although deep learning has been effectively analyzed and used for learning and discovering dynamics from data, e.g., in the form of neural ODEs, physics-informed NNs or mean field control models, the
combination of PDEs with graph neural networks to explain spatio-temporal dynamics has hardly been explored.
This talk presents new methods for learning normal forms directly from dynamic data and new mesh graph neural networks useful for physics-informed 4D medical imaging, with an outlook towards graph
neural diffusion and beyond.
Future Research
Model-Informed Graph Neural Networks - Are graph neural networks more generalizable for model-informed machine learning problems compared to classical neural networks?
1. Modeling: How to combine graph neural networks properly with PDEs to achieve model-informed GNN?
2. Expressiveness: Are flow-induced function spaces more expressive for GNNs with model-informed constraints than Barron spaces?
3. Generalization: How does the double descent phenomenon show up in GNNs when we know more about the underlying dynamics?
Abderrahim Elmoataz
The Nonlocal p-Laplacian Evolution Problem on Sparse Graphs: The Continuum Limit
The non-local p-Laplacian evolution equation, governed by a given kernel, has applications in various areas of science and engineering. In particular, it is a modern tool for massive data processing
(including signals, images, geometry), and machine learning tasks such as classification. In practice, however, this model is implemented, in both time and space, in discrete form as a numerical approximation
to a continuous problem, where the kernel is replaced by the adjacency matrix of a graph. In this work, we propose a far-reaching generalization of the results in [2] to a much more general class of
kernels and initial data, and to the case p = 1, which was not handled in [2]. Combining tools from graph theory, convex analysis, non-linear semigroup theory and evolution equations, we give a
rigorous interpretation to the continuous limit of the discrete non-local p-Laplacian evolution on sparse graphs. More specifically, we consider a sequence of graphs converging to a so-called limit
object known as the Lq-graphon [3, 4]. If the continuous p-Laplacian evolution is properly discretized on this graph sequence, we prove that the solutions of the sequence of discrete problems
converge to the solution of the continuous problem governed by the Lq-graphon, when the number of graph vertices grows to infinity. Along the way, we provide a consistency/error estimate between the
solutions of two evolution problems governed by two kernels and two initial data. For random graph sequences, using sharp deviation inequalities, we deliver nonasymptotic convergence rates in
probability and exhibit the different regimes depending on p and the regularity of the Lq-graphon and the initial condition.
Future Research
By iterating the p-Laplacian operator (for p = 2), we get the so-called nonlocal bilaplacian operator [5], which can be extended to a general family of operators called the nonlocal p-bilaplacian, for p ∈
(1, +∞). It will be very interesting to study the well-posedness of different problems governed by this family of operators, e.g. evolution, variational and boundary value problems associated with
these operators. It will also be interesting to study their continuum limits on L1-graphons.
Daniel Cremers
Self-supervised Learning for 3D Shape Analysis
While neural networks have swept the field of computer vision and replaced classical methods in most areas of image analysis and beyond, extending their power to the domain of 3D shape analysis
remains an important open challenge. In my presentation, I will focus on the problems of shape matching, correspondence estimation and shape interpolation and develop suitable deep learning
approaches to tackle these challenges. In particular, I will focus on the difficult problem of computing correspondence and interpolation for pairs of shapes from different classes – say a human and
a horse – where traditional isometry assumptions no longer hold.
Future Research
Open questions:
1. How can we extend the power of deep networks to the field of 3D shape analysis?
2. How can deep learning help to perform camera-based 3D reconstruction of static and dynamic scenes?
3. How can deep learning be applied to analyzing 3D shapes – for problems such as estimating correspondence, computing interpolations between shapes and measuring shape similarity?
4. What are the strengths and weaknesses of different representations of shape in the age of deep learning?
Pablo Musé
Non-Uniform Motion Blur Kernel Estimation via Adaptive Decomposition
Motion blur estimation remains an important task for scene analysis and image restoration. In recent years, the removal of motion blur in photographs has seen impressive progress in the hands of deep
learning-based methods, trained to map directly from blurry to sharp images. Characterization of the motion blur, on the other hand, has received less attention, and progress in model-based methods
for deblurring lags behind that of data-driven end-to-end approaches.
In this work we revisit the problem of characterizing dense, non-uniform motion blur in a single image and propose a general non-parametric model for this task. Given a blurry image, a neural network
is trained to estimate a set of image-adaptive basis motion kernels as well as the mixing coefficients at the pixel level, producing a per-pixel motion blur field.
When applied to real motion-blurred images, a variational non-uniform blur removal method fed with the estimated blur kernels produces high-quality restored images. Qualitative and quantitative
evaluation shows that these results are competitive or superior to results obtained with existing end-to-end deep learning (DL) based methods, thus bridging the gap between model-based and
data-driven approaches.
Future Research
We are interested in deriving image restoration methods that combine model-driven and data-driven approaches. In this application, the image formation model is used to learn the local blur kernels
from data. Deblurring is performed in a second stage, using a classic non-blind deblurring method. The next step would be to include kernel estimation and deblurring into the same pipeline.
Dr Matthew Thorpe (The University of Manchester)
Matthew Thorpe is a lecturer in Applied Mathematics at the University of Manchester and works on optimal transport and large data limits of graph based variational problems that arise in
semi-supervised and unsupervised learning. Previously he was a research fellow at the University of Cambridge and a Postdoctoral Research Associate at Carnegie Mellon University. He completed his PhD
in 2015 at the University of Warwick.
Dr Patricia Vitoria Carrera (Universitat Pompeu Fabra, Barcelona)
Patricia Vitoria submitted her PhD thesis at the Image Processing Group of the Universitat Pompeu Fabra in Barcelona under the supervision of Coloma Ballester. During her PhD studies, she actively
collaborated with the University of Cambridge and Universidad de la Republica. She is currently working at Huawei Zurich Research Centre. She obtained a bachelor degree in Audiovisual Systems
Engineering from the University Pompeu Fabra, after which she completed an MSc. degree in Informatics at the Technical University of Munich.
Her research interests are in the field of image restoration and computer vision, more specifically camera artefacts, image and video in-painting, colorization and de-blurring using both model-based
and machine learning approaches.
Dr Bamdad Hosseini (California Institute of Technology)
Bamdad Hosseini is a Senior Postdoctoral Fellow in Computing and Mathematical Sciences at California Institute of Technology (Caltech). His research interests lie at the intersection of applied
mathematics, probability theory, and statistics focusing on the analysis and development of methods for extracting meaningful information from data. Hosseini has previously served as a von Karman
instructor at Caltech and received a PhD in Mathematics from Simon Fraser University.
09:15 - 09:30
Opening Remarks
09:30 - 10:10
Kostas Zygalakis - Connection Between Optimization and Sampling Algorithms
Talk 1: Talk
10:10 - 10:15
Kostas Zygalakis - Connection Between Optimization and Sampling Algorithms
Talk 1: Q&A
10:15 - 10:30
Kostas Zygalakis - Connection Between Optimization and Sampling Algorithms
Talk 1: Future Research
11:00 - 11:40
Yves van Gennip - Gradient Flows and Thresholding Schemes on Graphs
Talk 2: Talk
11:40 - 11:45
Yves van Gennip - Gradient Flows and Thresholding Schemes on Graphs
Talk 2: Q&A
11:45 - 12:00
Yves van Gennip - Gradient Flows and Thresholding Schemes on Graphs
Talk 2: Future Research
12:00 - 13:30
Lunch Break
13:30 - 14:10
Bernhard Schmitzer - Linearization of Balanced and Unbalanced Optimal Transport
Talk 3: Talk
14:10 - 14:15
Bernhard Schmitzer - Linearization of Balanced and Unbalanced Optimal Transport
Talk 3: Q&A
14:15 - 14:30
Bernhard Schmitzer - Linearization of Balanced and Unbalanced Optimal Transport
Talk 3: Future Research
15:40 - 15:45
Youssef Marzouk - Density Estimation and Conditional Simulation Using Triangular Transport
Talk 4: Q&A
15:45 - 16:00
Youssef Marzouk - Density Estimation and Conditional Simulation Using Triangular Transport
Talk 4: Future Research
16:00 - 17:00
Welcome Reception on Gathertown
09:30 - 10:10
Coloma Ballester - Generative Methods for Some Inverse Problems in Imaging
Talk 5: Talk
10:10 - 10:15
Coloma Ballester - Generative Methods for Some Inverse Problems in Imaging
Talk 5: Q&A
10:15 - 10:30
Coloma Ballester - Generative Methods for Some Inverse Problems in Imaging
Talk 5: Future Research
11:40 - 11:45
Andres Almansa - Bayesian Estimators for Inverse Imaging Problems with Decoupled Learned Priors
Talk 6: Q&A
11:45 - 12:00
Andres Almansa - Bayesian Estimators for Inverse Imaging Problems with Decoupled Learned Priors
Talk 6: Future Research
12:00 - 13:30
Lunch Break
13:30 - 14:10
Johannes Schmidt-Hieber - Overparametrization and the Bias-Variance Dilemma
Talk 7: Talk
14:10 - 14:15
Johannes Schmidt-Hieber - Overparametrization and the Bias-Variance Dilemma
Talk 7: Q&A
14:15 - 14:30
Johannes Schmidt-Hieber - Overparametrization and the Bias-Variance Dilemma
Talk 7: Future Research
15:00 - 15:40
Omiros Papaspiliopoulos - Scalable Computation for Bayesian Hierarchical Models
Talk 8: Talk
15:40 - 15:45
Omiros Papaspiliopoulos - Scalable Computation for Bayesian Hierarchical Models
Talk 8: Q&A
15:45 - 16:00
Omiros Papaspiliopoulos - Scalable Computation for Bayesian Hierarchical Models
Talk 8: Future Research
13:00 - 13:40
Mathew Penrose - Optimal Cuts of Random Geometric Graphs
Talk 9: Talk
13:40 - 13:45
Mathew Penrose - Optimal Cuts of Random Geometric Graphs
Talk 9: Q&A
13:45 - 14:00
Mathew Penrose - Optimal Cuts of Random Geometric Graphs
Talk 9: Future Research
15:10 - 15:15
Franca Hoffmann - Discrete-Continuum Interplay: Formulations for Supervised and Semi-Supervised Learning
Talk 10: Q&A
15:15 - 15:30
Franca Hoffmann - Discrete-Continuum Interplay: Formulations for Supervised and Semi-Supervised Learning
Talk 10: Future Research
15:30 - 17:00
Dinner Break
17:00 - 17:40
Jeff Calder - PDE Continuum Limits for Prediction with Expert Advice
Talk 11: Talk
17:40 - 17:45
Jeff Calder - PDE Continuum Limits for Prediction with Expert Advice
Talk 11: Q&A
17:45 - 18:00
Jeff Calder - PDE Continuum Limits for Prediction with Expert Advice
Talk 11: Future Research
18:30 - 19:00
Andrea Bertozzi - TBC
Talk 12: Talk
19:10 - 19:15
Andrea Bertozzi - TBC
Talk 12: Q&A
19:15 - 19:30
Andrea Bertozzi - TBC
Talk 12: Future Research
9:30 - 10:10
Jonas Latz - Stochastic Gradient Descent in Continuous Time: Discrete and Continuous Data
Talk 13: Talk
10:10 - 10:15
Jonas Latz - Stochastic Gradient Descent in Continuous Time: Discrete and Continuous Data
Talk 13: Q&A
10:15 - 10:30
Jonas Latz - Stochastic Gradient Descent in Continuous Time: Discrete and Continuous Data
Talk 13: Future Research
11:00 - 11:40
Yury Korolev - Approximation Properties of Two-Layer Neural Networks with Values in a Banach Space
Talk 14: Talk
11:40 - 11:45
Yury Korolev - Approximation Properties of Two-Layer Neural Networks with Values in a Banach Space
Talk 14: Q&A
11:45 - 12:00
Yury Korolev - Approximation Properties of Two-Layer Neural Networks with Values in a Banach Space
Talk 14: Future Research
12:00 - 13:30
Lunch Break
13:30 - 14:10
Matthias Ehrhardt - Equivariant Neural Networks for Inverse Problems
Talk 15: Talk
14:10 - 14:15
Matthias Ehrhardt - Equivariant Neural Networks for Inverse Problems
Talk 15: Q&A
14:15 - 14:30
Matthias Ehrhardt - Equivariant Neural Networks for Inverse Problems
Talk 15: Future Research
15:00 - 15:40
Philipp Peterson - Structured Singularities in Deep Learning
Talk 16: Talk
15:40 - 15:45
Philipp Peterson - Structured Singularities in Deep Learning
Talk 16: Q&A
15:45 - 16:00
Philipp Peterson - Structured Singularities in Deep Learning
Talk 16: Future Research
10:10 - 10:15
Christoph Brune - Geometric DL - From Learning ODE Dynamics towards Graph Neural Diffusion
Talk 17: Q&A
10:15 - 10:30
Christoph Brune - Geometric DL - From Learning ODE Dynamics towards Graph Neural Diffusion
Talk 17: Future Research
11:00 - 11:40
Abderrahim Elmoataz - The Nonlocal p-Laplacian Evolution Problem on Sparse Graphs: The Continuum Limit
Talk 18: Talk
11:40 - 11:45
Abderrahim Elmoataz - The Nonlocal p-Laplacian Evolution Problem on Sparse Graphs: The Continuum Limit
Talk 18: Q&A
11:45 - 12:00
Abderrahim Elmoataz - The Nonlocal p-Laplacian Evolution Problem on Sparse Graphs: The Continuum Limit
Talk 18: Future Research
12:00 - 13:30
Lunch Break
13:30 - 14:10
Daniel Cremers - Self-supervised Learning for 3D Shape Analysis
Talk 19: Talk
14:10 - 14:15
Daniel Cremers - Self-supervised Learning for 3D Shape Analysis
Talk 19: Q&A
14:15 - 14:30
Daniel Cremers - Self-supervised Learning for 3D Shape Analysis
Talk 19: Future Research
15:40 - 15:45
Pablo Musé - Non-Uniform Motion Blur Kernel Estimation via Adaptive Decomposition
Talk 20: Q&A
15:45 - 16:00
Pablo Musé - Non-Uniform Motion Blur Kernel Estimation via Adaptive Decomposition
Talk 20: Future Research
16:00 - 16:30
Final Remarks and Closing
09:15-09:30 Opening Remarks 09:30-10:10 Talk 1: Talk - Kostas Zygalakis 10:10-10:15 Talk 1: Q&A - Kostas Zygalakis 10:15-10:30 Talk 1: Future Research - Kostas Zygalakis 10:30-11:00 Break 11:00-11:40 Talk 2: Talk - Yves van Gennip 11:40-11:45 Talk 2: Q&A - Yves van Gennip 11:45-12:00 Talk 2: Future Research - Yves van Gennip 12:00-13:30 Lunch Break 13:30-14:10 Talk 3: Talk - Bernhard Schmitzer 14:10-14:15 Talk 3: Q&A - Bernhard Schmitzer 14:15-14:30 Talk 3: Future Research - Bernhard Schmitzer 14:30-15:00 Break 15:00-15:40 Talk 4: Talk - Youssef Marzouk 15:40-15:45 Talk 4: Q&A - Youssef Marzouk 15:45-16:00 Talk 4: Future Research - Youssef Marzouk 16:00-17:00 Welcome Reception on Gathertown

9:30-10:10 Talk 5: Talk - Coloma Ballester 10:10-10:15 Talk 5: Q&A - Coloma Ballester 10:15-10:30 Talk 5: Future Research - Coloma Ballester 10:30-11:00 Break 11:00-11:40 Talk 6: Talk - Andres Almansa 11:40-11:45 Talk 6: Q&A - Andres Almansa 11:45-12:00 Talk 6: Future Research - Andres Almansa 12:00-13:30 Lunch Break 13:30-14:10 Talk 7: Talk - Johannes Schmidt-Hieber 14:10-14:15 Talk 7: Q&A - Johannes Schmidt-Hieber 14:15-14:30 Talk 7: Future Research - Johannes Schmidt-Hieber 14:30-15:00 Break 15:00-15:40 Talk 8: Talk - Omiros Papaspiliopoulos 15:40-15:45 Talk 8: Q&A - Omiros Papaspiliopoulos 15:45-16:00 Talk 8: Future Research - Omiros Papaspiliopoulos

13:00-13:40 Talk 9: Talk - Mathew Penrose 13:40-13:45 Talk 9: Q&A - Mathew Penrose 13:45-14:00 Talk 9: Future Research - Mathew Penrose 14:00-14:30 Break 14:30-15:10 Talk 10: Talk - Franca Hoffmann 15:10-15:15 Talk 10: Q&A - Franca Hoffmann 15:15-15:30 Talk 10: Future Research - Franca Hoffmann 15:30-17:00 Dinner Break 17:00-17:40 Talk 11: Talk - Jeff Calder 17:40-17:45 Talk 11: Q&A - Jeff Calder 17:45-18:00 Talk 11: Future Research - Jeff Calder 18:00-18:30 Break 18:30-19:10 Talk 12: Talk - Andrea Bertozzi 19:10-19:15 Talk 12: Q&A - Andrea Bertozzi 19:15-19:30 Talk 12: Future Research - Andrea Bertozzi

9:30-10:10 Talk 13: Talk - Jonas Latz 10:10-10:15 Talk 13: Q&A - Jonas Latz 10:15-10:30 Talk 13: Future Research - Jonas Latz 10:30-11:00 Break 11:00-11:40 Talk 14: Talk - Yury Korolev 11:40-11:45 Talk 14: Q&A - Yury Korolev 11:45-12:00 Talk 14: Future Research - Yury Korolev 12:00-13:30 Lunch Break 13:30-14:10 Talk 15: Talk - Matthias Ehrhardt 14:10-14:15 Talk 15: Q&A - Matthias Ehrhardt 14:15-14:30 Talk 15: Future Research - Matthias Ehrhardt 14:30-15:00 Break 15:00-15:40 Talk 16: Talk - Philipp Peterson 15:40-15:45 Talk 16: Q&A - Philipp Peterson 15:45-16:00 Talk 16: Future Research - Philipp Peterson

9:30-10:10 Talk 17: Talk - Christoph Brune 10:10-10:15 Talk 17: Q&A - Christoph Brune 10:15-10:30 Talk 17: Future Research - Christoph Brune 10:30-11:00 Break 11:00-11:40 Talk 18: Talk - Abderrahim Elmoataz 11:40-11:45 Talk 18: Q&A - Abderrahim Elmoataz 11:45-12:00 Talk 18: Future Research - Abderrahim Elmoataz 12:00-13:30 Lunch Break 13:30-14:10 Talk 19: Talk - Daniel Cremers 14:10-14:15 Talk 19: Q&A - Daniel Cremers 14:15-14:30 Talk 19: Future Research - Daniel Cremers 14:30-15:00 Break 15:00-15:40 Talk 20: Talk - Pablo Musé 15:40-15:45 Talk 20: Q&A - Pablo Musé 15:45-16:00 Talk 20: Future Research - Pablo Musé 16:00-16:30 Final Remarks and Closing
Simulation and Implementation of Solar Powered Electric Vehicle
Circuits and Systems, 2016, 7, 643-661
Published Online May 2016 in SciRes. http://www.scirp.org/journal/cs
How to cite this paper: Sankar, A.B. and Seyezhai, R. (2016) Simulation and Implementation of Solar Powered Electric Vehicle. Circuits and Systems, 7, 643-661. http://dx.doi.org/10.4236/cs.2016.76055
Simulation and Implementation of Solar Powered Electric Vehicle
A. Bharathi Sankar, R. Seyezhai
Department of EEE, SSN College of Engineering, Chennai, India
Received 7 March 2016; accepted 6 May 2016; published 11 May 2016
Copyright © 2016 by authors and Scientific Research Publishing Inc.
This work is licensed under the Creative Commons Attribution International License (CC BY).
http://creativecommons.org/licenses/by/4.0/
The rise in the price of oil and pollution issues has increased the interest in the development of electric vehicles. This paper discusses the application of solar energy to power up the vehicle. The basic principle of a solar based electric vehicle is to use energy that is stored in a battery to drive the motor and move the vehicle in the forward or reverse direction. The Photo Voltaic (PV) modules may be connected either in parallel or series, and the charge controllers direct this solar power to the batteries. The DC voltage from the PV panel is then boosted up using a boost DC-DC converter, and then an inverter, where DC power is converted to AC power, ultimately runs the Brushless DC motor which is used as the drive motor for the vehicle application. This paper focuses on the design, simulation and implementation of the various components, namely: solar panel, charge controller, battery, DC-DC boost converter, DC-AC power converter (inverter circuit) and BLDC motor for the vehicle application. All these components are modeled in MATLAB/SIMULINK and, in real time, the hardware integration of the system is developed and tested to verify the simulation results.
Photo Voltaic Panel, DC-DC Converter, Brushless DC Motor, Electric Vehicle
1. Introduction
The renewable energy is vital for today's world as the non-renewable sources that we are using are going to get exhausted. The solar vehicle is a step towards saving these non-renewable sources of energy [1]-[3]. A solar powered electric vehicle is advantageous because of less noise, less pollution and reduced carbon dioxide emissions [4]-[6]. It consists of a PV panel, charge controller, battery, inverter and BLDC motor. The basic principle of the proposed vehicle is that the energy drawn from the solar panel is used to charge a battery which in turn runs the motor of the vehicle. A boost converter is used as an interface between the solar panel and the battery to obtain
A. B. Sankar, R. Seyezhai
the required voltage and to extract maximum power from the PV. The BLDC motor is preferred over the DC motor because of high efficiency, low maintenance, long life, low weight and compact construction. The conventional DC motor is relatively more expensive and needs maintenance due to the brushes and commutator, whereas the BLDC motor has a rotor and a stator, which is connected to a power electronic switching circuit [7]-[9]. This paper focuses on the modeling of the solar cell and battery, and implements a boost converter for the solar vehicle driven by a BLDC motor. The simulations are carried out in MATLAB software. The hardware prototype is built and the results are verified.
2. Modeling of Solar Cell
A solar cell is the building block of a photovoltaic panel. A photovoltaic panel is developed by connecting many solar cells in series and parallel. A single photovoltaic cell can be modeled by utilizing a current source, a diode and two resistors, as shown in Figure 1 [10] [11].
The equation for a photovoltaic cell is given by:

I = Ilg − Ios·[exp(q·(V + I·Rs)/(A·k·T)) − 1] − (V + I·Rs)/Rsh (1)

Ios = Ior·(T/Tr)^3·exp[(q·Ego/(A·k))·(1/Tr − 1/T)] (2)

Ilg = [Iscr + Ki·(T − 25)]·λ/1000 (3)

For a panel with Ns cells in series and Np cells in parallel, the panel current is:

I = Np·Ilg − Np·Ios·[exp(q·(V/Ns + I·Rs/Np)/(A·k·T)) − 1] − (Np·V/Ns + I·Rs)/Rsh (4)
where I & V: Photovoltaic cell output current and voltage;
Ios: PV cell reverse saturation current;
T: Solar cell temperature in Celsius;
k: Boltzmann's constant, 1.38 × 10^−23 J/K;
q: Electron charge, 1.6 × 10^−19 C;
Ki: Short circuit current temperature coefficient at Iscr;
λ: Solar cell irradiation in W/m2;
Iscr: Short circuit current at 25 degrees Celsius;
Ilg: Light-generated current;
Ego: Band gap for silicon;
A: Ideality factor;
Tr: Reference temperature;
Figure 1. A single solar cell circuit model.
Ior: Cell saturation current at Tr;
Rsh: Shunt resistance;
Rs: Series resistance.
It can be seen that the photovoltaic cell operates as a constant current source at low values of operating voltage and as a constant voltage source at low values of operating current. The electrical specifications of the solar panel are: open circuit voltage: 32.6 V, short circuit current: 8.5 A, total number of cells in series: 72 and solar cell temperature: 30 degrees Celsius. The SIMULINK model of the photovoltaic panel is shown in Figure 2.
The P-V output characteristics with varying irradiation at constant temperature are shown in Figure 3. The I-V output characteristics of the PV module with varying irradiation at constant temperature are shown in Figure 4. When the irradiation increases, the current output increases and the voltage output also increases. This results in a net increase in output power with increase in irradiation at constant temperature.
The P-V output characteristics with varying temperature at constant irradiation are shown in Figure 5. The I-V output characteristics of the PV module with varying temperature at constant irradiation are shown in Figure 6. When the operating temperature increases, the current output increases marginally but the voltage output decreases drastically, resulting in a net reduction in power output with rise in temperature.
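The behaviour described above can be reproduced numerically from the single-diode relations. The following Python sketch (standing in for the authors' MATLAB/SIMULINK model) solves the implicit cell equation by bisection. Apart from the short-circuit current, cell count and operating temperature taken from the text, all parameter values (Ki, Ior, Tr, A, Rs) are illustrative assumptions, and the shunt resistance is neglected for brevity.

```python
import math

# Illustrative constants; only ISC, NS and the 30 C operating temperature
# come from the text. Everything else is an assumption.
Q_E = 1.602e-19   # electron charge (C)
K_B = 1.381e-23   # Boltzmann constant (J/K)
A   = 1.5         # diode ideality factor (assumed)
NS  = 72          # cells in series (from the text)
ISC = 8.5         # short-circuit current at 25 C (A, from the text)
KI  = 0.0017      # Isc temperature coefficient (A/C, assumed)
IOR = 2.1e-6      # saturation current at reference temperature (A, assumed)
TR  = 301.18      # reference temperature (K, assumed)
EGO = 1.1         # band gap of silicon (eV)
RS  = 0.01        # series resistance per cell (ohm, assumed)

def cell_current(v_cell, t_c=30.0, irrad=1000.0):
    """Solve I = Ilg - Ios*[exp(q(V + I*Rs)/(A*k*T)) - 1] for I by bisection."""
    t_k = t_c + 273.15
    vt = A * K_B * t_k / Q_E                       # modified thermal voltage
    ilg = (ISC + KI * (t_c - 25.0)) * irrad / 1000.0
    ios = IOR * (t_k / TR) ** 3 * math.exp(
        Q_E * EGO / (A * K_B) * (1.0 / TR - 1.0 / t_k))
    lo, hi = -1.0, ilg + 1.0                       # residual is monotone in I
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        g = ilg - ios * math.expm1((v_cell + mid * RS) / vt) - mid
        lo, hi = (mid, hi) if g > 0 else (lo, mid)
    return 0.5 * (lo + hi)

# Sweep the panel voltage (72 cells in series) and locate the knee of the
# P-V curve, mirroring the qualitative shape of Figures 3-6.
best = max((v * cell_current(v / NS), v) for v in [0.1 * k for k in range(340)])
print("approx Pmax = %.1f W at %.1f V" % best)
```

With these assumed parameters the sweep peaks in the low hundreds of watts, and the current stays roughly constant up to the knee before collapsing, matching the constant-current/constant-voltage behaviour described above.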
Figure 2. SIMULINK model of the photovoltaic panel.
Figure 3. P-V characteristics of a typical PV module for varying solar irradiance.
Figure 4. I-V characteristics of a typical PV module for varying solar irradiance.
Figure 5. P-V characteristics of a typical PV module for varying temperature.
Figure 6. I-V characteristics of a typical PV module for varying temperature.
3. DC-DC Boost Converter
A single photovoltaic cell produces a voltage of about 0.6 V. In order to boost up the voltage, a DC-DC boost converter is used. It steps up the input voltage to a required output voltage without the use of a transformer. The control strategy lies in the manipulation of the duty cycle of the switch, which results in a variable DC output voltage. The circuit diagram of the boost converter is shown in Figure 7 [12].
The active switch in the boost converter is a MOSFET. A fast recovery diode is used as the freewheeling diode. The input and output capacitors are selected as C1 = 470 μF and C2 = 330 μF, 450 V. The inductance value is 2 mH, 15 A. For a DC-DC boost converter, the conversion gain for continuous conduction mode is given by:

Vo/Vin = 1/(1 − D) (5)

where Vo is the output voltage of the converter, Vin is the input voltage of the converter and D is the duty cycle of the switch.
Boost Inductor and Output Capacitor
The boost inductor L is calculated based on parameters such as the switching frequency fs, the input and output voltages Vin and Vout, and the inductor current ripple ΔIL:

L = Vin·(Vout − Vin)/(ΔIL·fs·Vout) (6)

The output capacitor is calculated using the formula below:

C = D/(fs·R·(ΔVo/Vo)) (7)

where ΔVo is the output voltage ripple.
The DC-DC boost converter output voltage is about 60 V and the input current is about 8 A, as shown in Figure 8 & Figure 9.
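The sizing relations above can be exercised at the converter's operating point (≈30 V panel input, ≈60 V output, fs = 50 kHz). The ripple targets and load resistance below are illustrative assumptions, so the computed L and C are minimum values for those targets, not the authors' chosen components.

```python
# Boost-stage sizing from the CCM relations above. Operating point from the
# text; ripple targets and load resistance are assumptions.
V_IN, V_OUT = 30.0, 60.0       # input / output voltage (V)
F_S = 50e3                     # switching frequency (Hz)
DI_L = 0.8                     # allowed inductor current ripple (A, assumed)
DV_REL = 0.01                  # allowed output ripple, 1% of Vo (assumed)
R_LOAD = 15.0                  # load resistance (ohm, assumed)

D = 1.0 - V_IN / V_OUT                            # from Vo = Vin / (1 - D)
L = V_IN * (V_OUT - V_IN) / (DI_L * F_S * V_OUT)  # boost inductor minimum
C = D / (F_S * R_LOAD * DV_REL)                   # output capacitor minimum

print(f"D = {D:.2f}, L >= {L * 1e3:.3f} mH, C >= {C * 1e6:.1f} uF")
```

Under these assumed ripple targets the minima (≈0.375 mH and ≈67 μF) sit well below the 2 mH / 330 μF components listed for the prototype, i.e. the chosen parts leave ripple headroom.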
4. Battery Modeling
The lead-acid battery are studied in an intensive way for automotive and the renewable energy sectors. In this
paper, the principle of the lead-acid battery is presented. A simple, fast, and effective equivalent circuit model
structure for lead-acid batteries is implemented [13]. The identificatio n of the parameters and the battery model
is validated by performing the simulation in MATL AB/SIMULINK.
The general equation for modeling the battery, during discharge, is:

V = Eo − K·(Q/(Q − it))·it − K·(Q/(Q − it))·i − R·i + C (8)
Figure 7. Circuit diagram of boost converter.
Figure 8. Input current and output voltage of boost converter.
Figure 9. Output current of boost converter.
During charge:

V = Eo − K·(Q/(Q − it))·it − K·(Q/(it + 0.1·Q))·i − R·i + C (9)
where
V: Battery voltage (V);
Eo: Nominal voltage (V);
K: Polarization resistance (Ω);
Q: Battery capacity (A·h);
it: Actual battery charge (A·h);
A: Exponential zone amplitude (V);
B: Exponential zone time constant inverse (A·h^−1);
R: Battery internal resistance (Ω);
C: Exponential voltage (V).
From the manufacturer's datasheet, the above parameters can be obtained, but the polarization resistance, exponential zone amplitude and exponential zone time constant inverse should be calculated from the discharge curve of the battery, as shown in Figure 10.
A = Vfull − Vexp (10)
Figure 10. Discharging characteristics of battery.
Eo = Vfull + K + R·i − A (11)
The value of exponential voltage for chargi ng and disc hargi ng are
( )
CBiCA=⋅ ⋅− +
Char ge:
( )
C BiC
=⋅ ⋅−
4.1. Current Block
The charging and discharging of the battery are alternated depending upon the state of charge of the battery. When the state of charge reaches a certain maximum level, the battery begins to discharge, down to the minimum value, as shown in Figure 11. The value of the state of charge can be fixed depending upon the battery specifications.
The charge of the battery, Q is calculated as
dQ it=∫
The above equation gives the result in Ampere-seconds. In order to get th e st andard value i n Ampere-hours, the
result is divided by 3600 and compared with the nominal battery capacity to get the present state of charge is
sho wn in Figure 12.
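The discharge law and the coulomb-counting state-of-charge computation described above can be combined into a small simulation. The parameter values below (Eo, K, Q, R and the exponential-zone constants) are illustrative assumptions for a small lead-acid block, not the authors' identified values.

```python
import math

# Illustrative Shepherd-type lead-acid parameters (assumed).
E0, K, Q, R = 12.4, 0.047, 7.0, 0.05   # V, ohm, Ah, ohm
A_EXP, B_EXP = 0.8, 3.0                # exponential zone amplitude (V), 1/Ah

def terminal_voltage(i, it):
    """Discharge law: V = E0 - K*Q/(Q-it)*(it + i) - R*i + A*exp(-B*it)."""
    pol = K * Q / (Q - it)
    return E0 - pol * (it + i) - R * i + A_EXP * math.exp(-B_EXP * it)

# Constant 1 A discharge for 5 h, integrating current into Ah (the SOC block).
i_dis, dt = 1.0, 1.0                   # amps, seconds
it = 0.0
for _ in range(5 * 3600):
    it += i_dis * dt / 3600.0          # ampere-seconds -> ampere-hours
soc = 100.0 * (1.0 - it / Q)
print(f"it = {it:.2f} Ah, SOC = {soc:.1f} %, "
      f"V = {terminal_voltage(i_dis, it):.2f} V")
```

The exponential zone is evaluated here in closed form as A·exp(−B·it); the SIMULINK blocks above instead integrate the exponential-voltage dynamics, which reduces to the same expression for a constant-sign discharge current starting from a full battery.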
4.3. Polarization Voltage Block
The polarization voltage is calculated, as shown in Figure 13, as

Vpol = K·(Q/(Q − it))·it (15)
4.4. Polarization Resistor Block
The polarization resistance is calculated according to the charging and discharging modes, as shown in Figure 14.
Charge:

Rpol = K·Q/(it + 0.1·Q) (16)
Figure 11. SIMULINK model of current block.
Figure 12. SIMULINK model of state of charge block.
Figure 13. SIMULINK model of polarization voltage block.
Figure 14. SIMULINK model of polarization resistor block.
Discharge:

Rpol = K·Q/(Q − it) (17)
4.5. Exponential Block
The SIMULINK model of the exponential block is shown in Figure 15.
By calculating the battery parameters using the mathematical blocks and using Equations (8) and (9), the voltage of the battery is plotted. The modeling is done in such a way that the charging current and discharging current are alternated according to the state of charge of the battery. In this way, both the charging and discharging characteristics are obtained. This is shown in Figure 16.
The discharging characteristic of the lead acid battery is shown in Figure 17. The characteristics were taken by connecting a resistive load across the battery. It can be seen that as the resistance increases, the time taken to discharge completely also increases. The SOC characteristics are shown in Figure 18.
For different charging currents, the charging characteristics were observed as shown in Figure 19. It can be found that as the charging current increases, the time taken by the battery to attain full voltage decreases. The SOC characteristics are shown in Figure 20.
Figure 15. SIMULINK model of exponential block.
Figure 16. Simulation results of battery characteristics.
Figure 17. Simulation results for battery voltage for various R load.
Figure 18. Simulation results of SOC for various R load.
Figure 19. Simulation results for battery voltage for various charging current.
Figure 20. Simulation results of SOC for various charging current.
5. Electric Vehicle Dynamic Performance
Dynamic behavior analysis of electric motors is required in order to accurately evaluate the performance of
electric vehicles. Simulation tools for electric vehicles are divided into steady state and dynamic models. For the
accurate prediction of electric vehicle performance, dynamic modeling of the motor and other components is
necessary. The dynamic performance of the motor for electric vehicles is investigated [14]-[19]. For this purpose
a BLDC motor with its electrical drive is modeled and simulated first, and then the other components of a series
electric vehicle, such as battery charger, multilevel inverter are designed and linked with the electric motor.
The first step in vehicle performance modeling is to obtain an equation for calculating the required vehicle propelling force. This force must overcome the road load and accelerate the vehicle. The tractive force, Ftot, available from the propulsion system can be written as:

Ftot = froll + fAD + fgrade + facc (18)
The rolling resistance, aerodynamic drag, and climbing resistance are known as the road load. The rolling resistance is due to the friction of the vehicle tires on the road and can be written as:

froll = fr·M·g (19)
where M, fr and g are the vehicle mass, rolling resistance coefficient and gravity acceleration, respectively.
The aerodynamic drag is due to the friction of the vehicle body moving through the air. The formula for this component is:

fAD = (1/2)·ξ·CD·A·V^2 (20)
where V, ξ, CD and A are the vehicle speed, air mass density, aerodynamic coefficient and the frontal area of the
vehicle, respectively.
The climbing resistance is due to the slope of the road and is expressed by:

fgrade = M·g·sin α (21)

where α is the angle of the slope.
If the velocity of the vehicle is changing, then clearly a force will need to be applied in addition to the above forces. This force provides the linear acceleration of the vehicle, and is given by:

facc = M·a = M·(dV/dt) (22)
Wheels and axle convert Ftot and the vehicle speed into the torque and angular speed required at the differential as follows:

Twheel = Ftot·Rwheel + Iwheel·(dωwheel/dt) + Tloss (23)

ωwheel = V/((1 − s)·Rwheel) (24)

where Twheel, Rwheel, Iwheel, Tloss, ωwheel, and s are the tractive torque, radius of the wheel, wheel inertia, wheel loss torque, angular velocity of the wheels and wheel slip, respectively.
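The road-load terms above can be sketched as follows; all vehicle parameters here are illustrative assumptions, not measurements from the authors' prototype.

```python
import math

# Assumed vehicle parameters, for illustration only.
M = 300.0        # mass including payload (kg)
G = 9.81         # gravitational acceleration (m/s^2)
F_R = 0.015      # rolling resistance coefficient
RHO = 1.2        # air mass density (kg/m^3)
C_D = 0.5        # aerodynamic drag coefficient
AREA = 1.5       # frontal area (m^2)
R_WHEEL = 0.28   # wheel radius (m)

def tractive_force(v, accel=0.0, slope_deg=0.0):
    """F_tot = f_roll + f_AD + f_grade + f_acc."""
    f_roll = F_R * M * G
    f_ad = 0.5 * RHO * C_D * AREA * v ** 2
    f_grade = M * G * math.sin(math.radians(slope_deg))
    f_acc = M * accel
    return f_roll + f_ad + f_grade + f_acc

v = 40.0 / 3.6                       # the 40 km/hr rated speed, in m/s
f_tot = tractive_force(v)
t_wheel = f_tot * R_WHEEL            # steady state: inertia and loss terms zero
print(f"F_tot = {f_tot:.1f} N, wheel torque = {t_wheel:.1f} N*m at 40 km/hr")
```

With these assumed numbers the wheel torque at rated speed comes out near 28 N·m; if parameters of this order held for the prototype, the roughly 3 N·m motor torque reported in Figure 22 would imply a sizeable gear reduction between the motor shaft and the axle.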
Figure 21 shows that the acceleration of the electric vehicle is calculated using mainly the position of the accelerator, which is between −100% and +100%, and the measured electric vehicle speed. Note that a negative accelerator position represents a brake position. Figure 22 shows that the starting motor torque of the electric vehicle is about 3 Nm.
Figure 23 shows that the maximum motor speed (2500 rpm) is achieved in the BLDC motor drive.
Figure 24 shows the stator current for the electric vehicle drive. It is observed from the results that the stator current ripple settles in the running condition.
Figure 25 shows that the maximum vehicle speed (40 km/hr) is achieved by the BLDC motor drive.
The simulation results of motor acceleration, motor torque, stator current, motor speed and vehicle speed for the electric vehicle are obtained. The electromagnetic torque of the BLDC motor is varied from 0 to 4 seconds. Then the rated torque is reached at 8 s. The rated torque can be seen to be 3 Nm, as shown in Figure 22. The stator current
Figure 21. Acceleration of electric vehicle.
Figure 22. Motor torque of electric vehicle.
Figure 23. Motor speed of electric vehicle.
Figure 24. Stator current of electric vehicle.
Figure 25. Vehicle speed of electric vehicle.
responses of the BLDC motor are shown in Figure 24. The stator current is about 10 A and it fluctuates between 4 and 8 s. With respect to Figures 23-25, the rotor speed is gradually increased to the rated speed. The rated speed is 2500 rpm and it is reached at nearly 13 s; the vehicle speed is likewise gradually increased to its rated value. The vehicle speed is 40 km/hr and it is reached at nearly 12 seconds.
The experimental P-V and V-I characteristics are shown in Figure 26. Table 1 shows the specifications of the PV panel and boost converter.
The dynamic characteristics of the PV array are measured using a scope corder (advanced DSO) and are shown in Figures 27-29 (VOC = 32.5 V and ISC = 8.5 A).
The input to the converter is about 31.9 V and the output voltage obtained is about 64.6 V, as shown in Figure 30. The MOSFET switches at 50% duty cycle.
The output voltage and input current ripple of the DC-DC boost converter, measured using a PQ analyzer, are about 0.9% and 1.3%, as shown in Figure 31.
The dynamic charging and discharging characteristics of battery are measured using scope corder and they are
sho wn in Fig ure 32 & Figure 33.
The solar powered electric vehicle using BLDC Drive is shown in Figure 34. The vehicle was designed with
forward and backward driving system which was able to achieve a speed of 40 K mph. The differential rear axle
Figure 26. Experimental P-V & V-I characteristics.
Figure 27. Experimental results for PV voltage and current.
Table 1. Specifications of PV panel & boost converter.
Parameters: Values
Voc: 31.16 V
Isc: 8.57 A
Pmax: 250 W
Insolation: 1000 W/m2
System Efficiency: 76.72%
Output Capacitance: C1 = 330 μF
Inductance: L1 = 2 mH, 15 A
Switching Frequency: fs = 50 kHz
Figure 28. Experimental results for voltage characteristics of PV.
Figure 29. Experimental results for current characteristics of PV.
of the vehicle is connected to the driving shaft of the BLDC motor through the gear coupling. With the change
in motor, which has high torque, the vehicle would be capable of being driven with a heavy load. The current from
the batteries flows to the controller, which controls the whole control system of the vehicle. With respect to the
movement of the accelerator, the controller sends forth the current, thereby increasing or decreasing the speed of
the vehicle. Table 2 shows the electric vehicle specification.
Figure 35 shows experimental values of actual speed and stator current with respect to battery voltage.
Figure 30. Experimental setup for PV interface boost converter.
Figure 31. Output voltage and input current ripple of boost converter for 50% duty cycle.
Figure 32. Experimental results for charging characteristics.
Figure 33. Experimental results for discharging characteristics.
Figure 34. Solar powered electric vehicle.
Figure 35. Experimental values of motor speed and stator current vs. battery voltage.
Table 2. Electric vehicle specification.
Vehicle capacity: 2-seater; Maximum mileage: 60 km
Overall dimension: 2750 mm (L) * 1200 mm (W) * 1800 mm (H); Forward & reverse speed: 10 - 40 km/hour
Ground clearance: 114 mm; Braking distance: <2 meters
Tread distance: front 870 mm, rear 980 mm; Turning radius: 2.9 meters
Net weight (without battery): 290 kg; Climbing ability: safe climbing 25%
6. Conclusion
The importance of the utilization of solar power in electric vehicle applications is discussed in this paper. The proposed electric vehicle will be fuel efficient, reduce pollution and provide noiseless operation. The drive
range of the proposed electric vehicle powered by solar is improved compared with the conventional one. Selection
of the BLDC drive for the vehicle provides high efficiency, long operating life, good torque/speed characteristics,
a high output power to size ratio and noiseless operation. The design of the DC-DC boost converter is investigated
and the input and output voltage ripple is reduced, which is verified experimentally. Therefore, the solar powered
electric vehicle will reduce pollution and improve the economy of the country.
References
[1] Spina, M.A., de la Vega, R.J., Rossi, S.R., et al. (2012) Source Issues on the Design of a Solar Vehicle Based on Hybrid Energy System. International Journal of Energy Engineering, 2, 15-21.
[2] Lalouni, S., Rekioua, D., Rekioua, T. and Matagne, E. (2009) Fuzzy Logic Control of Standalone Photovoltaic System with Battery Storage. Journal of Power System, 193, 899-907.
[3] Mangu, R., Prayaga, K., Nadimpally, B. and Nicaise, S. (2010) Design, Development and Optimization of Highly Efficient Solar Cars: Gato Del Sol I-IV. Proceedings of 2010 IEEE Green Technologies Conference, Grapevine, 15-16 April 2010, 1-6.
[4] Husain, I. (2005) Electrical and Hybrid Vehicles Design Fundamentals. CRC Press, Boca Raton, London, New York and Washington DC.
[5] Miller, T.J.E. (1989) Brushless Permanent Magnet and Reluctance Motor Drive. Clarendon Press, Oxford.
[6] Trembly, O., Dessaint, L.A. and Dekkiche, A.-I. (2007) A Generic Battery Model for the Dynamic Simulation of Hybrid Electric Vehicles. 2007 IEEE Vehicle Power and Propulsion Conference, Arlington, 9-12 September 2007, 284-289. http://dx.doi.org/10.1109/vppc.2007.4544139
[7] Bellur, D.M. and Kazimierczuk, M.K. (2007) DC-DC Converters for Electric Vehicle Applications. 2007 Electrical Insulation Conference and Electrical Manufacturing Expo, Nashville, 22-24 October 2007, 286-293.
[8] Shmilovitz, D. (2005) On the Control of Photovoltaic Maximum Power Point Tracking via Output Parameters. IEE Proceedings—Electric Power Applications, 152, 239-248. http://dx.doi.org/10.1049/ip-epa:20040978
[9] Chiang, S.J., Chang, K.T. and Yen, C.Y. (1998) Residential Photovoltaic Energy Storage System. IEEE Transactions on Industrial Electronics, 45, 385-394. http://dx.doi.org/10.1109/41.678996
[10] Underland, N.M.T.M. and Robinson, W.P. (2002) Power Electronics Converters Application and Design. 3rd Edition, John Wiley & Sons, Inc., Hoboken.
[11] Shimizu, T. and Hirakata, M. (2001) Generation Control Circuit for Photovoltaic Modules. IEEE Transactions on Power Electronics, 16, 293-300. http://dx.doi.org/10.1109/63.923760
[12] Beno, J., Thompson, R. and Hebner, R. (2002) Flywheel Batteries for Vehicles. Proceedings of the 2002 Workshop on Autonomous Underwater Vehicles, Piscataway, 99-101. http://dx.doi.org/10.1109/AUV.2002.1177211
[13] Zhu, Z.Q. and Howe, D. (2007) Electrical Machines and Drives for Electric, Hybrid, and Fuel Cell Vehicles. Proceedings of the IEEE, 95, 746-765. http://dx.doi.org/10.1109/JPROC.2006.892482
[14] Karden, E., Ploumen, S., Fricke, B., Miller, T. and Snyder, K. (2007) Energy Storage Devices for Future Hybrid Electric Vehicles. Journal of Power Sources, 168, 2-11. http://dx.doi.org/10.1016/j.jpowsour.2006.10.090
[15] Giannouli, M. and Yianoulis, P. (2012) Study on the Incorporation of Photovoltaic Systems as an Auxiliary Power Source for Hybrid and Electric Vehicles. Solar Energy, 86, 441-451. http://dx.doi.org/10.1016/j.solener.2011.10.019
[16] Zhang, X., Chau, K.T., Yu, C. and Chan, C.C. (2008) An Optimal Solar-Thermoelectric Hybrid Energy System for Hybrid Electric Vehicles. Proceedings of IEEE Vehicle Power and Propulsion Conference, Harbin, 3-5 September 2008, 1-6.
[17] Zhang, X., Chau, K.T. and Chan, C.C. (2010) Overview of Power Networks in Hybrid Electric Vehicles. Journal of Asian Electric Vehicles, 8, 1371-1377. http://dx.doi.org/10.4130/jaev.8.1371
[18] Arsie, I., Rizzo, G. and Sorrentino, M. (2010) Effect of Engine Thermal Transients on the Energy Management of Series Hybrid Solar Vehicles. Control Engineering Practice, 18, 1231-1238.
[19] Matsumoto, S. (2005) Advancement of Hybrid Vehicle Technology. Proceedings of IEEE European Conference on Power Electronics and Applications, Dresden, 11-14 September 2005, 1-7. http://dx.doi.org/10.1109/epe.2005.219775
|
{"url":"https://file.scirp.org/Html/66357.html","timestamp":"2024-11-01T20:57:02Z","content_type":"text/html","content_length":"167357","record_id":"<urn:uuid:c850e481-2c62-4d2c-a35c-d7c69d3f1062>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00632.warc.gz"}
|
OpenStax College Physics, Chapter 28, Problem 12 (Problems & Exercises)
A spaceship, 200 m long as seen on board, moves by the Earth at 0.970c. What is its length as measured by an Earth-bound observer?
The question is licensed under CC BY 4.0.
Solution video
OpenStax College Physics, Chapter 28, Problem 12 (Problems & Exercises)
Video Transcript
This is College Physics Answers with Shaun Dychko. A spaceship is measured to be 200 meters long according to a person that's on the ship this is going to be labeled L naught— the proper length—
because the person on the ship is at rest with respect to the two end points that are being measured between so the spaceship has one end point here at the tip and another end point at the booster
rocket I guess and then the person on the ship is at rest with respect to those end points and so the length they measure is the proper length. The ship is zipping past the Earth at a speed of 0.970c
and the question is what length will an Earth-based observer measure for this ship? So the ship is zipping past the Earth at a speed v. Okay! This is going to be a shorter length, it's going to be a
contracted length according to the Earth-based observer and that length will be the proper length times the square root of 1 minus v squared over c squared. You might also see this as L equals L
naught divided by γ and if you expand γ and replace it, you'll end up with this formula here. So we have 200 meters times the square root of 1 minus 0.970c squared divided by c squared and this is
48.6 meters.
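The arithmetic in the transcript can be checked in a few lines (a quick sketch; the function name is mine, not from the solution):

```python
import math

# Length contraction: L = L0 * sqrt(1 - v^2/c^2). With v expressed as a
# fraction of c (beta = v/c), the factors of c cancel.
def contracted_length(proper_length_m, beta):
    """Length of the ship as measured by the observer it moves past."""
    return proper_length_m * math.sqrt(1.0 - beta ** 2)

L = contracted_length(200.0, 0.970)
print(round(L, 1))  # 48.6, matching the worked solution
```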
|
{"url":"https://collegephysicsanswers.com/openstax-solutions/spaceship-200-m-long-seen-board-moves-earth-0970c-what-its-length-measured-earth","timestamp":"2024-11-08T17:34:30Z","content_type":"text/html","content_length":"161822","record_id":"<urn:uuid:f3eab49a-d77c-4754-be5b-5d5ac171eee2>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00597.warc.gz"}
|
Minkowski's Theorem
This is a development version of this entry. It might change over time and is not stable. Please refer to release versions for citations.
Minkowski's theorem relates a subset of ℝ^n, the Lebesgue measure, and the integer lattice ℤ^n: it states that any convex subset of ℝ^n that is symmetric about the origin and has volume greater than 2^n contains at least one lattice
point from ℤ^n\{0}, i.e. a non-zero point with integer coefficients.
A related theorem which directly implies this is Blichfeldt's theorem, which states that any subset of ℝ^n with a volume greater than 1 contains two different points whose difference vector has
integer components.
The entry contains a proof of both theorems.
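Stated formally (with λ denoting the Lebesgue measure on ℝ^n; the notation is mine, not the entry's):

```latex
% Blichfeldt's theorem: a set of volume > 1 contains two points whose
% difference is integral.
\lambda(S) > 1 \;\Longrightarrow\; \exists\, x, y \in S,\ x \neq y,\ x - y \in \mathbb{Z}^n

% Minkowski's theorem: a convex, origin-symmetric set of volume > 2^n
% contains a non-zero lattice point.
S \text{ convex},\ S = -S,\ \lambda(S) > 2^n \;\Longrightarrow\; S \cap \bigl(\mathbb{Z}^n \setminus \{0\}\bigr) \neq \emptyset
```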
Session Minkowskis_Theorem
|
{"url":"https://devel.isa-afp.org/entries/Minkowskis_Theorem.html","timestamp":"2024-11-04T11:05:42Z","content_type":"text/html","content_length":"10514","record_id":"<urn:uuid:4cf34a71-987a-4e2d-ae78-870dc2c94d18>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00020.warc.gz"}
|
Algebra Learning Resource Usability Evaluation
Before starting Morae, open the three resources in IE and minimize to task bar.
Have paper & pencil available.
General Instructions:
For the following tasks, imagine that you are a student in a math course that uses the
following resources [see list] to help students learn math concepts. Use these resources
to help you complete the following assignments.
Task A: Combining Like Terms
1. Open the ' Algebra Basics ' page.
2. Use the site to learn about Combining Like Terms and then do the
activity on the 'workout' page on the website.
Post-task instructions:
1. Did participant complete the assignment?
2. Have participant fill out subjective survey
3. Ask participant to describe experience with the web site (record comments, both
positive and negative)
Task B: Word Problems
1. Open the 'A Simple Word Problem' page.
2. Use the site to learn about Word Problems.
3. Review the examples of 'Problems of class x/a = b/c'.
4. Find a list of word problems you can try to solve.
[path: problems of class x/a = b/c > bottom of page > 'discussed & solved' link]
5. Solve the 'Crab's Weight' problem.
Post-task instructions:
a. Did participant complete the assignment?
b. Have participant fill out subjective survey
c. Ask participant to describe experience with the web site (record comments, both
positive and negative)
Task C: Exponents
1. Open the 'Exponents' page.
2. Use the site to learn about the Laws of Exponents.
3. Solve the problems on the handout, using the site as needed.
Post-task instructions:
a. Did participant complete the assignment?
b. Have participant fill out subjective survey
c. Ask participant to describe experience with the web site (record comments, both
positive and negative)
Task D: Graphing Equations
1. Open the 'SparkNotes: Graphing Equations' page.
2. Explore the Graphing Equations pages.
3. Solve two of the slope problems on the website.
Post-task instructions:
a. Did participant complete the assignment?
b. Have participant fill out subjective survey
c. Ask participant to describe experience with the web site (record comments, both
positive and negative)
Task E: Solving Linear Equations
1. Open the 'Solver SOLVE Linear Equations' page.
2. Explore the site for a few minutes to learn about solving linear equations.
3. Try two of the practice problems.
4. Follow along through the solution. How does it match your
thinking as you solved the problem?
Post-task instructions:
a. Did participant complete the assignment?
b. Have participant fill out subjective survey
c. Ask participant to describe experience with the web site (record comments, both
positive and negative)
Task F: Factoring
1. Open the 'Factoring Numbers' page.
2. Use the site to learn about factoring.
3. Solve two of the problems on the site.
4. What was it like to use this site to answer the factoring questions?
Post-task instructions:
a. Did participant complete the assignment?
b. Have participant fill out subjective survey
c. Ask participant to describe experience with the web site (record comments, both
positive and negative)
1. Simplify the following:
x^8/x^2 = _____
2. Solve the following:
2^-3 = _____
3. Solve the following:
(4^2)^3 = _
Combining Like Terms Homework
1. Simplify this expression by combining like terms:
3x^2 - 6y + 4x^2 + 3y
2. Simplify this expression by combining like terms:
4m - 7mn + 7 + 8m
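For reference, the handout answers can be spot-checked numerically. This is my own quick check, not part of the original worksheet; the like-terms simplifications are verified by evaluating both forms at sample points:

```python
from fractions import Fraction

# Exponent problems: x^8 / x^2 = x^6, 2^-3 = 1/8, (4^2)^3 = 4^6.
x = 5
assert x**8 // x**2 == x**6
assert Fraction(2) ** -3 == Fraction(1, 8)
assert (4**2)**3 == 4**6 == 4096

# Like-terms problems, checked at several sample points:
# 3x^2 - 6y + 4x^2 + 3y  ->  7x^2 - 3y
# 4m - 7mn + 7 + 8m      ->  12m - 7mn + 7
for a, b in [(1, 2), (-3, 5), (0, 7)]:
    assert 3*a**2 - 6*b + 4*a**2 + 3*b == 7*a**2 - 3*b
    assert 4*a - 7*a*b + 7 + 8*a == 12*a - 7*a*b + 7
print("all checks pass")
```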
|
{"url":"https://www.softmath.com/tutorials-3/reducing-fractions/algebra-learning-resource.html","timestamp":"2024-11-10T11:53:23Z","content_type":"text/html","content_length":"35616","record_id":"<urn:uuid:e3acc606-9125-42b8-871a-f9d5ab5a34f9>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00242.warc.gz"}
|
98 is 112% of what number?
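The page shows this first question without its answer; the standard percent computation (my own check, using exact rationals to avoid floating-point noise) gives 87.5:

```python
from fractions import Fraction

# "98 is 112% of what number?"  ->  solve (112/100) * x = 98
x = Fraction(98) / Fraction(112, 100)
print(float(x))  # 87.5
```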
Who united and expanded the Mongol Empire? A. Genghis Khan
Which of the following would NOT be considered characteristic of Genghis Khan? A. illiterate B. tolerant of other religions C. peaceful ruler D. peaceful conqueror
Tolerant of other religions would NOT be considered characteristic of Genghis Khan.
Added 8/31/2019 1:09:21 AM
This answer has been confirmed as correct and helpful.
|
{"url":"https://www.weegy.com/?ConversationId=ABE5936C&Link=i&ModeType=2","timestamp":"2024-11-08T15:05:06Z","content_type":"application/xhtml+xml","content_length":"61151","record_id":"<urn:uuid:9390f2a6-e4dc-44b9-a92b-ad2d02081182>","cc-path":"CC-MAIN-2024-46/segments/1730477028067.32/warc/CC-MAIN-20241108133114-20241108163114-00874.warc.gz"}
|
Z94.1 - Analytical Techniques & Operations Research Terminology
| A | B | C | D | E | F | G | H | I | J | K | L | M | N | O | P | Q | R | S | T | U | V | W | X | Y | Z |
MAINTAINABILITY. A characteristic of design and installation which is expressed as the probability that an item will be retained in or restored to a specified condition within a given period of time,
when the maintenance is performed in accordance with prescribed procedures and resources. [28]
MAINTENANCE. All actions necessary for retaining an item in or restoring it to a specified condition. [28]
MAINTENANCE, CORRECTIVE. The actions performed, as a result of failure, to restore an item to specified condition. [28]
MAINTENANCE, PREVENTIVE. The actions performed in an attempt to retain an item in a specified condition by providing systematic inspection, detection and prevention of incipient failure. [29]
MAN-FUNCTION. The function allocated to the human component of a system. [28]
MASTER PROGRAM. The part of a decomposed problem which expresses the extreme point solutions of the subproblems as convex combinations and which also satisfies the constraints in the class C0 (the class
of constraints C0 which can include all the variables). (See DECOMPOSITION PRINCIPLE.) [19]
MATHEMATICAL PROGRAMMING. The general problem of optimizing a function of several variables subject to a number of constraints. If the function and constraints are linear in the variables and a
subset of the constraints restrict the variables to be nonnegative, we have a linear programming problem. [19]
MATRIX ELEMENT. One of the mn numbers of a matrix. [19]
MATRIX GAME. A zero-sum two-person game. The payoff (positive, negative or zero) of player one to player two when player one plays his ith strategy and player two his jth strategy is denoted by
aij. The set of payoffs aij can be arranged into an m x n matrix. [19]
MAX-FLOW MIN-CUT THEOREM. A theorem which applies to a maximal flow network problem which states that the maximal flow from the source to the sink is equal to the minimal cut capacity of all cuts
separating the source and sink nodes. [14]
MAXIMAL NETWORK FLOW PROBLEM. A problem involving a connected linear network in which goods must flow from an origin (source) to a final destination (sink) over the arcs of the network in such a
fashion as to maximize the total flow that the network can support. The arcs are connected at intermediate nodes and each branch has a given flow capacity that cannot be exceeded. The flow into an
intermediate node must equal the flow out of that node. [15]
MEAN. The expected value of a random variable.
MEAN LIFE. The arithmetic mean (q.v.) of the times-to-failure of the units of a given item. [20]
MEAN-MAINTENANCE-TIME. The total preventive and corrective maintenance time divided by the total number of preventive and corrective maintenance actions during a specified period of time. [28]
MEAN SERVICE RATE. The conditional expectation of the number of services completed in one time unit, given that service is going throughout the entire time unit.
MEAN TIME BETWEEN FAILURES(MTBF). For a particular interval, the total functioning life of a population of an item divided by the total number of failures within the population during the measurement
interval. [28]
MEAN TIME TO FAILURE. This expression applies to components of a system and not to systems. The expected length of time that a component is in use in a system in operation from the moment the
component is put into the system to the moment it fails. It is the expected length of life of the components. The mean-time-to-failure is sometimes confused with the mean-time-between-failures (MTBF)
which applies to systems which experience subsystem or component failures which can be replaced or repaired, putting the system into operation again. The confusion occurs in part because for the
Poisson failure distributions the two quantities have the same distribution.
MEAN TIME TO REPAIR (MTTR). The total corrective maintenance time divided by the total number of corrective maintenance actions during a given period of time. [28]
MEDIAN. A measure of central tendency with the number of data points above and below it equal; the 50th percentile value. [28]
MINIMAL COST FLOW PROBLEM. A network flow problem in which costs cij are the cost of shipping one unit of a homogeneous commodity from node i to node j. A given quantity F of the commodity must be
shipped from the origin node to the destination at minimum cost. [19]
MINIMAX PRINCIPLE. A principle introduced into decision-function theory by Wald (1939). It supposes that decisions are taken subject to the condition that the maximum risk in taking a wrong decision
is minimized. The principle has been criticized on the grounds that decisions in real life are scarcely ever made by such a rule, which enjoins "that one should never walk under a tree for fear of
being killed by its falling. “ In the theory of games it is not open to the same objection, a prudent player being entitled to assume that his adversary will do his worst. [22]
MISSION. The objective or task, together with the purpose, which clearly indicates the action to be taken. [28]
MIXED-INTEGER PROGRAMMING. Integer programs in which some, but not all, of the variables are restricted to integral values. [19]
MODE. The value(s) of a random variable such that the probability mass (discrete random variable) or the probability density (continuous random variable) has a local maximum for this value (or
these values). Note: If there is one mode, the probability distribution of the random variable is said to be “unimodal”; if there is more than one mode the probability distribution is said to be
“multimodal” (bimodal if there are two modes).
MODEL. A mathematical or physical representation of a system (q.v.) often used to explore the many variables (q.v.) influencing the system.
MONOTONE. A monotonic (or monotone) non-decreasing quantity is a quantity which never decreases (the quantity may be a function, sequence, etc., which either increases or remains the same, but never
decreases). A sequence of sets E1, E2, ... is monotonic increasing if En is contained in En+1 for each n. A monotonic (or monotone) non-increasing quantity is a quantity which never increases (the
quantity may be a function, sequence, etc., which either decreases or remains the same, but never increases). A sequence of sets E1, E2, ... is monotonic decreasing if En contains En+1 for each n. A
monotonic (or monotone) system of sets is a system of sets such that, for any two sets of the system, one of the sets is contained in the other. A mapping of a topological space A onto a topological
space B is said to be monotone if the inverse image of each point of B is a continuum. A mapping of an ordered set A onto an ordered set B is monotone provided x* precedes (or equals) y* whenever x*
and y* are the images in B of points x and y of A for which x precedes y. [21]
MONOTONIC INCREASING FUNCTION. A function f is monotonic increasing on an interval (a, b) if f (y) > f (x) for any two numbers x and y (of this interval) for which x < y. [21]
MONTE CARLO METHOD. A simulation technique by which approximate evaluations are obtained in the solution of mathematical expressions so as to determine the range or optimum value. The technique
consists of simulating an experiment to determine some probabilistic property of a system or population of objects or events by the use of random sampling applied to the components of the system,
objects, or events.
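As an illustrative sketch (my own, not from the glossary), the classic Monte Carlo estimate of π samples random points in the unit square and counts those falling inside the quarter circle:

```python
import random

def estimate_pi(n_samples, seed=0):
    rng = random.Random(seed)  # fixed seed for a reproducible estimate
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    # (area of quarter circle) / (area of unit square) = pi / 4
    return 4.0 * inside / n_samples

print(estimate_pi(100_000))  # close to 3.14159, within sampling error
```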
MOVING AVERAGE. An unweighted average of the latest n observations where the current observations has replaced the oldest of the previous n observations.
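A minimal sketch of that definition (function name is mine): keep only the latest n observations and average them, so each new observation replaces the oldest one.

```python
from collections import deque

def moving_average(stream, n):
    """Yield the unweighted average of the latest n observations."""
    window = deque(maxlen=n)  # oldest observation is dropped automatically
    for x in stream:
        window.append(x)
        if len(window) == n:  # emit only once the window is full
            yield sum(window) / n

print(list(moving_average([2, 4, 6, 8, 10], 3)))  # [4.0, 6.0, 8.0]
```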
MTBF. (See MEAN TIME BETWEEN FAILURES.) [20]
MULTICOMMODITY NETWORK PROBLEM. A network problem in which more than one commodity can flow through the network at the same time. The capacity constraints on the arcs apply to the sum of the flows of
the commodities. [19]
MULTICRITERIA OPTIMIZATION. A model and associated algorithm which attempts to find strategies which optimize several criterion measures instead of one. For example, there may be objectives and
criterion measurements on economic, social, and environmental issues. The criterion measurements would be in different units.
MULTIPERSON GAME. A game where a number of players are each competitively striving towards a specific objective which any one player only partially controls.
MULTIPLE REGRESSION. A statistical procedure to determine a relationship between the mean of a dependent random variable and the given value of one or more independent variables.
< Previous | Next >
|
{"url":"https://www.iise.org/Print/?id=1602&Site=Main","timestamp":"2024-11-09T03:18:27Z","content_type":"application/xhtml+xml","content_length":"15974","record_id":"<urn:uuid:69c114f6-d5be-4809-b583-60af60d079fb>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00657.warc.gz"}
|
Math Forum :: View topic - How the friendship? - www.mathdb.org
emily chung wrote:
I think you can use MI (mathematical induction) to prove it; it seems the claim is true whenever the number of students is odd. It is easy to see it is true for n (the number of students of the school) = 1, and also for n = 3. Then you
assume n = 2m+1 is true; you only need to consider the cases n = 2m+2 and n = 2m+3, and you can find that there should be at least one student with an even number of friends when n = 2m+3. I am not sure whether
this method applies, but it seems to work.
let me post another question here, it is quite interest.
Show that in any group of two or more persons there are at least two having the same number of friends (the friendship relation is symmetric)
Let n be the number of persons in that group. Then the number of friends of any person can only be 0, 1, ..., (n-1). We divide it into 2 cases. (1) If a person has n-1 friends, then none of the remaining n-1 persons has
0 friends, since that person is friends with all of them. Thus the number of friends can only be 1, 2, ..., (n-1). By the pigeonhole principle, at least 2 of them have the same number of friends.
(2) If no person has n-1 friends, then the number of friends can only be 0, 1, ..., (n-2). Again, by the pigeonhole principle, at least 2 of them have the same number of friends, so we are done.
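The claim can also be brute-force checked for small groups (a quick sketch of my own; it enumerates every possible friendship graph on up to 5 people):

```python
from itertools import combinations

def has_degree_repeat(n, edges):
    """True if at least two of the n people have the same number of friends."""
    deg = [0] * n
    for a, b in edges:
        deg[a] += 1
        deg[b] += 1
    return len(set(deg)) < n  # n people, fewer than n distinct degrees

# Exhaustively check every friendship graph on groups of size 2 to 5.
for n in range(2, 6):
    pairs = list(combinations(range(n), 2))
    for mask in range(2 ** len(pairs)):
        edges = [p for i, p in enumerate(pairs) if mask >> i & 1]
        assert has_degree_repeat(n, edges)
print("verified for all groups of size 2-5")
```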
|
{"url":"https://www.mathdb.org/phpbb2/viewtopicphpp4277amp/","timestamp":"2024-11-06T01:27:47Z","content_type":"text/html","content_length":"28259","record_id":"<urn:uuid:a66c4551-d239-470f-9c70-7c347647b752>","cc-path":"CC-MAIN-2024-46/segments/1730477027906.34/warc/CC-MAIN-20241106003436-20241106033436-00246.warc.gz"}
|
Precision NNLO determination of alpha_s(M_Z) using an unbiased global parton set
Abstract / Description of output
We determine the strong coupling alpha_s at NNLO in perturbative QCD using the global dataset input to the NNPDF2.1 NNLO parton fit: data from neutral and charged current deep-inelastic scattering,
Drell-Yan, vector boson production and inclusive jets. We find alpha_s(M_Z) = 0.1173 +/- 0.0007 (stat), where the statistical uncertainty comes from the underlying data and uncertainties due to the
analysis procedure are negligible. We show that the distribution of alpha_s values preferred by different experiments in the global fit is statistically consistent, without need for rescaling
uncertainties by a "tolerance" factor. We show that if deep-inelastic data only are used, the best-fit value of alpha_s is somewhat lower, but consistent within one sigma with the global
determination. We find that the shift between the NLO and NNLO values of alpha_s is Delta alpha_s^pert = 0.0018, and we estimate the uncertainty from higher-order corrections to be Delta
alpha_s^NNLO ~ 0.0009. (C) 2011 Elsevier B.V. All rights reserved.
• Ball, R., Berera, A., Boyle, P., Del Debbio, L., Figueroa-O'Farrill, J., Gardi, E., Horsley, R., Kennedy, A., Kenway, R., Pendleton, B. & Simon Soler, J.
1/10/11 → 30/09/15
Project: Research
|
{"url":"https://www.research.ed.ac.uk/en/publications/precision-nnlo-determination-of-alphasm-z-using-an-unbiased-globa","timestamp":"2024-11-02T08:03:37Z","content_type":"text/html","content_length":"62244","record_id":"<urn:uuid:710641ca-004a-43c6-9f2f-81880518389e>","cc-path":"CC-MAIN-2024-46/segments/1730477027709.8/warc/CC-MAIN-20241102071948-20241102101948-00553.warc.gz"}
|
English National Curriculum, Programme Of Study For Year 6 Mathematics
|
{"url":"https://transum.org/Maths/National_Curriculum/Topics.asp?ID_Statement=195","timestamp":"2024-11-13T19:02:39Z","content_type":"text/html","content_length":"19838","record_id":"<urn:uuid:519e167a-4465-4dd0-a577-58296feda645>","cc-path":"CC-MAIN-2024-46/segments/1730477028387.69/warc/CC-MAIN-20241113171551-20241113201551-00786.warc.gz"}
|
Reverse Concept Drift (RCD) | NannyML Cloud
How to use the Reverse Concept Drift Algorithm
Reverse Concept Drift (RCD) belongs to the NannyML Cloud family of algorithms and can be accessed through a NannyML Cloud license or as a standalone algorithm.
How Reverse Concept Drift Works
Concept Drift
We call concept the relationship between the model inputs and the model targets. This is what a machine learning model learns when we train it. When this relationship changes, we say we have concept
drift. Mathematically, we can express a concept as $\mathrm{P}(y|X)$.
Concept drift and covariate shift may occur separately or together. Covariate shift is a change in the distributions of model inputs, $\mathrm{P}(X)$. One does not exclude the presence of the other.
They both affect the performance of a model. Not only that but their effect on performance is coupled. We describe that in more detail in our Understanding Data Shift: Impact on Machine Learning
Model Performance blog post.
The Reverse Concept Drift (RCD) algorithm focuses on the concept drift's impact on the model's performance. This is to keep the method simpler and to provide results that are easier to interpret. For
the impact of all factors in the model's performance, we need to look no further than the actual realized performance.
When we have concept drift, we know there is a new concept in the monitored data compared to what we have in our reference data. We can train a new machine learning model and learn the new concept in
order to compare it with the existing one. But how do we make a meaningful comparison?
As mentioned, performance change is a combination of covariate shift and concept drift. We can factor out covariate shift impact, as well as its interaction with concept drift, by focusing on the
reference dataset. How can we do that? We use the concept we learnt on the monitored data to make predictions on the reference dataset and treat them as ground truth. This allows us to estimate how
the monitored model would perform under the monitored data's concept on the reference data.
The impact of concept drift on performance is calculated based on the following steps:
Train a LightGBM model on a chunk of the monitored data.
Use the learned concept, to make predictions on the reference dataset.
Estimate Model Performance on reference data assuming the monitored concept model's predictions are the ground truth. A key detail here is that we are using the predicted scores, not the
predicted labels. This allows us to have a more accurate calculation but adds more complexity. The calculation uses CBPE in an inverse way, where the fractional results come from the y_true
column rather than the y_pred_proba column.
The actual model's performance on reference is subtracted from the estimated performance result. This results in a performance number that is the performance impact on the model only because of
the concept drift. To compare, the full performance impact under both concept drift and covariate shift is the performance change between the performance of the model in the chunk data minus its
performance on the reference data. This is why those results are also labeled with the Performance Impact Estimation (PIE) acronym.
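The "inverse CBPE" idea in step 3 can be sketched in a few lines: treat the concept model's predicted scores as probabilistic ground truth and compute the expected value of a metric — here expected accuracy — of the monitored model's predictions under those probabilities. This is a simplified illustration of my own, not NannyML's actual implementation:

```python
def expected_accuracy(monitored_preds, concept_scores):
    """Expected accuracy of binary predictions when the 'ground truth' is
    probabilistic: concept_scores[i] = P(y_i = 1) under the new concept.

    A prediction of 1 is correct with probability p; a prediction of 0 is
    correct with probability (1 - p).
    """
    total = 0.0
    for pred, p in zip(monitored_preds, concept_scores):
        total += p if pred == 1 else 1.0 - p
    return total / len(monitored_preds)

# Toy example: monitored model's predictions on reference rows, and the
# monitored-concept model's scores for the same rows.
preds = [1, 1, 0, 0]
scores = [0.9, 0.6, 0.2, 0.4]
print(expected_accuracy(preds, scores))  # (0.9 + 0.6 + 0.8 + 0.6) / 4 = 0.725
```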
However, Reverse Concept Drift (RCD) also offers another approach. It substitutes steps 3 and 4 with the following step:
Assumptions and Limitations
RCD rests on some assumptions in order to give accurate results. Those assumptions are also its limitations. Let's see what they are:
1. The data available are large enough.
We need enough data to be able to accurately train a density ratio estimation model and be able to properly multi-calibrate.
RCD will likely fail if there is covariate shift to previously unseen regions in the model input space. Mathematically, we can say that the support of the analysis data needs to be a subset of the
support of the reference data. If not, density ratio estimation is theoretically not defined. Practically if we don't have data from an analysis region in the reference data, we can't account for
that shift with a weighted calculation from reference data.
2. Our machine learning algorithm is able to capture the concept accurately.
We are using a LightGBM model to learn the concept. In cases where that algorithm cannot perform well enough, RCD will not provide accurate results.
Array Utility Functions
Part 1. Test Cases Worksheet
You should print out the following worksheet and think about use/edge cases on paper before beginning. This worksheet will not be handed in, but you should bring it with you to office hours to aid the
discussion. Thought put into this worksheet will pay dividends when you get to writing tests in code and implementing the functions. We've gone ahead and shared the edge cases the grader will test.
Ps2 Test Cases
Below are 9 function names and English descriptions of their purpose. Read the description of each and write two example use cases. The use case should be a valid call to the function, with necessary
arguments, and the expected return value. Then write at least one edge case. A second edge case is optional but encouraged if there are edge cases related to more than one parameter.
1. count
Given an array of numbers and a number to search for, returns a count of the number of times the number occurs in the array.
Edge cases the grader tests:
count([], 1) -> 0
count([1], 2) -> 0
2. max
Given an array of numbers, returns the largest number in the array. When the array is empty, returns Number.MIN_VALUE.
Note: Number.MIN_VALUE is the number closest to zero that our language can represent (5 x 10^-324). It is not negative. Negative numbers are smaller than Number.MIN_VALUE.
Edge cases the grader tests:
max([]) -> Number.MIN_VALUE
max([-12, -10, -11]) -> -10
max([-1, 0.5, -2]) -> 0.5
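The Number.MIN_VALUE facts noted above are easy to verify directly in Node or a browser console; this snippet assumes nothing beyond standard JavaScript numbers.

```typescript
// Sanity checks for the Number.MIN_VALUE note: it is the smallest positive
// representable number (5e-324), not the most negative one.
const minVal: number = Number.MIN_VALUE;
console.log(minVal > 0);         // true: MIN_VALUE is positive
console.log(-10 < minVal);       // true: every negative number is smaller
console.log(minVal === 5e-324);  // true: the value closest to zero
```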
3. has
Given an array of numbers and a number to search for, returns true when the number is an element of the array or false otherwise.
Edge cases the grader tests:
has([], 1) -> false
4. all
Given an array of numbers and a number to search for, returns true when every element in the array is equal to that number or false otherwise.
Edge cases the grader tests:
all([], 1) -> false
5. equals
Given two arrays of numbers, returns true when two arrays have the same elements in the same order. Two empty arrays are equal to one another.
Edge cases the grader tests:
equals([], [1]) -> false
equals([1], []) -> false
equals([1, 2], [1]) -> false
equals([1], [1, 2]) -> false
equals([], []) -> true
6. cycle
Given a number as an upper bound, and a count of elements to generate, generate an array that cycles from 1 to the upper bound with the correct length.
For example, if the cycle function is given 3 as an upper bound and a count of 7 would generate the following array:
[1, 2, 3, 1, 2, 3, 1]
When either the upper bound or the count is zero or negative, cycle returns an empty array.
Hint: How might the remainder operator help you here?
Edge cases the grader tests:
cycle(0, 3) -> []
cycle(3, -3) -> []
7. concat
Given two arrays, return a single array that contains all of the elements of the first array followed by all of the elements of the second array.
Concat should not modify either array parameter it is given.
Edge cases the grader tests:
concat([], []) -> []
concat([1], []) -> [1]
concat([], [1]) -> [1]
concat([1, 2], []) -> [1, 2]
concat([], [1, 2]) -> [1, 2]
8. sub
Given an array, a starting index, and an ending index, return an array that contains only the elements of the input array from start index to (end index - 1).
When the start index is negative, start from the beginning of the array.
When the end index is greater than the length of the array, end with the end of the array.
Edge cases the grader tests:
sub([1], -1, 0) -> []
sub([1], -1, 1) -> [1]
sub([1], 1, 2) -> []
sub([1], 2, 2) -> []
sub([], 0, 1) -> []
9. splice
Given an array we'll refer to as the first array, an integer index, and another array we'll refer to as second array, splice or "insert" the elements of the second array at the integer index of the
first array.
For example, if the first array is [1, 9], the index is 1, and the second array is [4, 5], the splice function will return [1, 4, 5, 9].
If the index is less than zero, insert the second array before the first array.
If the index is greater than the length of the first array, append the second array to the first array.
Splice should not modify either array parameter it is given.
Hint: consider how you might call upon other functions you've written to simplify your job in splice.
Note: the order of the function parameters should be (1) the first array, (2) the integer index, and (3) the other array.
Edge cases the grader tests:
splice([1, 2], 0, []) -> [1, 2]
splice([1, 2], 1, []) -> [1, 2]
splice([1, 2], 2, []) -> [1, 2]
splice([], 0, [1, 2]) -> [1, 2]
splice([], 1, [1, 2]) -> [1, 2]
splice([], 2, [1, 2]) -> [1, 2]
Part 2. Implementation
Do not begin Part 2 until after you have completed the test cases worksheet for Part 1!
The UTAs are instructed not to help anyone with Part 2 unless they have example test cases to work with as is required in Part 1.
2.0. Starting the Dev Environment
As in lecture, begin by opening up VSCode, ensuring your project is still open, and then running the following two commands in the VSCode Integrated Terminal:
This project's starter code can be found in ps02-array-utils. Your work will be completed in two files:
1. test-runner-app.ts - This is the file where you will add your test cases.
2. array-utils.ts - This is the file where you will define and export each of the functions from part 1.
2.1 Honor Code
Special honor code rule for this assignment! You are only allowed to use capabilities we've talked about in class. Specifically, you are not allowed to use any built-in methods of an array. You are
not supposed to know what the preceding sentence means at this point in the semester, but if you attempt to find solutions to these functions on-line it is likely they will involve concepts we have
not discussed in class yet. As such, as long as you only make use of capabilities we have talked about in class you will be just fine.
Add the following honor code header, and fill in your name and ONYEN, in both of the files: test-runner-app.ts and array-utils.ts:
* Author:
* ONYEN:
* UNC Honor Pledge: I certify that no unauthorized assistance has been received
* or given in the completion of this work. I certify that I understand and
* could now rewrite on my own, without assistance from course staff,
* the problem set code I am submitting.
2.3 Implement the count Function
You will notice in test-runner-app.ts that the test cases provided as examples in the worksheet of Part 1 have been turned into actual test cases in the main function.
Additionally, in array-utils.ts, you'll notice we have defined and exported a skeleton count function for you. Your first task is to implement a version of the count function given the description in
part 1.1.
Once you are passing all cases in your development environment and believe you have a working implementation of the count function, publish your project and submit for grading. You should pass the
tests for count before continuing forward.
2.4 Implement the max function
Now you're ready to work on the max function!
Step 1) Begin in array-utils.ts by defining and exporting a max function with a skeleton implementation. Remember, a skeleton implementation of a function is a correct declaration whose body simply
returns a dummy value of the correct return type.
Step 2) Switch back over to test-runner-app.ts. Import your max function so that you can call it from this file. Add the test cases you wrote out by hand in the Part 1 worksheet for the max function.
Refer to count's test cases for inspiration.
Step 3) Now switch back to array-utils.ts and work on implementing a correct max function as per its description in Part 1.2. Once you have your test cases passing, try submitting to the autograder.
If the autograder is failing on either use or edge cases, you should try to think of additional use or edge cases to test in your main function.
2.5 Implement the remaining functions
For each of the remaining functions in parts 1.3 through 1.9, follow the three steps above.
Note that for parts 1.3 through 1.5 the functions return booleans. To write a test for a boolean function, update your import statement in test-runner-app.ts to include the testBoolean function:
import { testNumber, testBoolean, testArray } from "./test-util";
Note that for any function that returns an array (for example, the cycle function) you will need to call the testArray function rather than the testNumber function to test your actual returned values
against your expected return values.
Fair Division | Brilliant Math & Science Wiki
Given a cake, how can a group of people divide the cake among themselves so that everyone is happy with their share? The problem of dividing goods has a long and rich history. This page explores the
mathematical results on fair division since the 1940s, when Hugo Steinhaus began the mathematically rigorous study of this problem. Fair division touches upon many different topics and has surprising
connections with the fields of combinatorics, mathematical induction, graph theory, algorithms, and topology.
We begin by considering the case of fair division between two people: given a cake, how can two people divide the cake so that both people are happy with their share? What does it mean for both
people to be happy with their share? We will study these questions by analyzing a well known method known as the cut-and-choose method.
2 Person Cut-and-choose Method: One person divides the cake into two parts and the other person chooses one of the resulting parts.
To understand whether or not this results in a division that everyone is happy with, we must first consider the preferences of the players. In general, people may have different preferences and the
value of different portions of the cake may vary. For example, one person may prefer the piece at the center of the cake with the strawberry topping, while another person may love chocolate frosting
and wants the corner piece of the cake with the most frosting.
In addition to different preferences, there are several ways to interpret fairness. A division of cake among \(2\) people is a proportional division if each person believes she or he has received at
least \(\frac{1}{2}\) of the cake. A division of cake among \(2\) people is an envy-free division if each person believes she or he received the piece of largest value (or one of the pieces of
largest value if there is a tie).
If Alice and Bob carry out the 2 Person Cut-and-choose Method, is the resulting division a proportional division? Is it an envy-free division?
Solution: Suppose Alice is the cutter and Bob is the chooser. From Alice's point of view, she knows that Bob will have the choice of either of the two pieces in her division, so her best strategy
is to cut the cake into two pieces she believes have equal value. Then whichever piece she receives in the end, she believes this piece to have value exactly \(\frac{1}{2}\) of the cake and does
not have a stronger preference for the other piece. From Bob's point of view, he is allowed to choose the piece he prefers, so in the end, he will not have a stronger preference for Alice's
piece, and he receives a piece he believes to be at least \(\frac{1}{2}\) of the cake. This shows the result is both a proportional and envy-free division of the cake.
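The argument above can be made concrete with a small simulation. In this sketch (the discrete-atom model and the function name are illustrative, not from the text) the cake is a list of atoms, each carrying a player's own value for it; Alice cuts at the prefix that splits her value most evenly, and Bob takes the side he prefers.

```python
# Discrete sketch of 2-person cut-and-choose. The cake is a list of atoms and
# each player assigns their own value to every atom.
# Returns (alice_side, alice_value, bob_side, bob_value).

def cut_and_choose(alice_vals, bob_vals):
    total_a = sum(alice_vals)
    # Alice cuts at the prefix whose Alice-value is closest to half her total.
    best_k, best_gap, running = 0, float("inf"), 0
    for k in range(len(alice_vals) + 1):
        gap = abs(2 * running - total_a)
        if gap < best_gap:
            best_k, best_gap = k, gap
        if k < len(alice_vals):
            running += alice_vals[k]
    left_bob = sum(bob_vals[:best_k])
    right_bob = sum(bob_vals[best_k:])
    # Bob chooses the side he values more; Alice gets the other side.
    if left_bob >= right_bob:
        return ("right", sum(alice_vals[best_k:]), "left", left_bob)
    return ("left", sum(alice_vals[:best_k]), "right", right_bob)
```

With a uniform Alice and a Bob who cares only about the last atom, both players end up with at least half the cake by their own measure, as the proof predicts.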
We can generalize the definitions above to say that a division of cake among \(n\) people is a proportional division if each person believes she or he has received at least \(\frac{1}{n}\) of the
cake, and is an envy-free division if each person believes she or he received the piece of largest value. Another way to think about an envy-free division is that no person has a strictly stronger
preference for any other person's piece of cake.
Moving Knife Procedures
Now that we have considered the problem of dividing a cake among two people, how do we divide a cake among three people, Alice, Bob, and Carol? We begin with the problem of proportional division:
how do we divide a cake into thirds so that all three people are satisfied they have received at least \(\frac{1}{3}\) of the cake? The moving knife procedure works as follows. One person moves
a knife slowly over the cake, so that the amount of cake on the left of the knife increases continuously from zero to the entire cake. As soon as either Alice, Bob, or Carol believes that the
piece of cake to the left of the knife is equal to \(\frac{1}{3}\) of the cake, this person shouts "Cut!" The first person to shout out gets the piece to the left of the knife (if multiple people
shout out at the same time, then any one of them can be chosen at random to receive this piece.) Then the person who has received a piece of cake goes off to get a glass of milk for their cake
and the moving knife procedure continues with the remaining two people and the remaining portion of cake.
We can analyze this procedure as follows:
□ In both the first and second stages, the two people who shout "Cut!" receive pieces of cake they believe to be exactly \(\frac{1}{3}\) of the cake
□ For the person who does not shout out during the process, this person believes that the piece to be cut from the cake in both the first and second stages are each less than \(\frac{1}{3}\) of
the cake. Then the piece this person receives is at least \(1 - \frac{1}{3} - \frac{1}{3} = \frac{1}{3}\) of the cake.
This shows that the moving knife procedure gives a proportional division for \(3\) people. If Alice, Bob, and Carol would like to invite their friends to join the party, does the moving knife
procedure also give a proportional division for everyone? Suppose \(n \geq 4\) people are dividing a cake with a knife moving continuously across the cake, and as soon as any person believes that
the piece of cake to the left of the knife is equal to \(\frac{1}{n}\) of the cake, he/she shouts "Cut!" Now, the first person receives what they believe to be \(\frac{1}{n}\) of the cake and
everyone else believes the remaining piece is at least \(1 - \frac{1}{n} = \frac{n-1}{n}\) of the cake. Now, by mathematical induction, the problem reduces to dividing \(\frac{n-1}{n}\) of the
cake among \(n-1\) people and by the induction hypothesis, the moving knife procedure will give each of the remaining people at least \(\frac{1}{n-1}\) of the remaining cake, so each person
receives what she/he believes to be at least \(\frac{1}{n-1} \times \frac{n-1}{n} = \frac{1}{n}\) of the original cake. This gives a proportional division for \(n\) people.
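The induction above translates into a short simulation. The discrete-atom cake and the valuations in the checks are illustrative assumptions; "believes the piece equals \(1/k\)" is implemented as the first atom at which a player's swept value meets that threshold of the remaining cake.

```python
# Discrete moving-knife sketch for n players. The cake is a list of atoms and
# each player values each atom in their own way. With k players remaining, the
# knife sweeps left to right and the first player whose value of the swept
# piece reaches 1/k of their value of the remaining cake calls "Cut!".

def moving_knife(valuations):
    players = list(valuations)
    atoms = len(next(iter(valuations.values())))
    pieces, start = {}, 0
    while len(players) > 1:
        k = len(players)
        remaining = {p: sum(valuations[p][start:]) for p in players}
        swept = {p: 0 for p in players}
        for cut in range(start, atoms):
            for p in players:
                swept[p] += valuations[p][cut]
            shouters = [p for p in players if k * swept[p] >= remaining[p]]
            if shouters:
                winner = shouters[0]          # ties broken by insertion order
                pieces[winner] = (start, cut + 1)
                players.remove(winner)
                start = cut + 1
                break
    pieces[players[0]] = (start, atoms)
    return pieces
```

Running this on uniform or heterogeneous valuations, each player's value for their own piece is at least \(1/n\) of their value for the whole cake, i.e. the division is proportional.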
Does the moving knife procedure give an envy-free division of cake for all \(n\) people? In other words, if the procedure is carried out and in the end, all the pieces are placed on the table,
would each person choose the piece they received during the moving knife process?
Solution: We show that the moving knife procedure gives a division that is proportional but not necessarily envy-free. Suppose Alice, Bob, and Carol are dividing a cake and Alice is the first to
yell "Cut!" Then she takes the piece to the left of the knife, which she believes to have value \(\frac{1}{3}\) of the cake. Now, when the moving knife procedure continues with the remaining
cake, there is a point where Alice believes the knife has moved over another \(\frac{1}{3}\) of cake, and she would yell "Cut!" (except that according to the procedure, she has now left the room
to get a glass of milk and is no longer involved in the moving knife process). However, suppose Bob and Carol both believe the moving knife has not yet moved past the \(\frac{1}{3}\) value mark,
so neither of them yell for the knife to stop. Then in the end, Alice believes the second piece of cake is larger than the first piece, so she does not receive what she believes to be the largest
piece. This shows the moving knife procedure is not necessarily envy-free.
Observe that the moving knife procedure is not discrete (since the knife is assumed to move continuously across the cake in order to cut at any location) and does not necessarily result in an
envy-free division. This leads us to the following questions: does there exist an envy-free division procedure for \(n\) people? Are there procedures that are discrete and require only a finite
number of cuts?
Envy-free division using discrete methods was first solved for the 3 player case in 1960 by John Selfridge of Northern Illinois University and John Horton Conway at Cambridge University. The method
requiring fewest cuts is the Selfridge–Conway discrete procedure, which uses at most 5 cuts.
For general \(n\), Brams and Taylor gave the first envy-free division method for four or more players in 1995. Other methods are due to Robertson and Webb, and Brams and Kilgour. The Brams and
Kilgour method, called the gap procedure, uses the concept of an auction for each participant to place bids on the pieces in the proposed division. The procedure then finds an allocation of the items
according to the participant's bids. However, these algorithms for general \(n\) have the property that the number of cuts required is unbounded and an arbitrarily large number of steps may be
required to find the division, depending on player preferences. A longstanding open question is whether there is a procedure with a finite number of steps for envy-free cake division among \(n\) players.
Open Problem: Does there exist an envy-free fair division procedure for \(n\) players with a finite number of steps?
The fair division problem also has a counterpart, in which an undesirable good is to be distributed among \(n\) people, such as in dividing chores or splitting rent. Martin Gardner introduced the
problem of dividing a set of chores (indivisible, undesirable items) among multiple people. In the rental division problem, a group of housemates must decide a fair allocation of rents for rooms
which may have different features: Alice may prefer windows and Bob may like hardwood floors and the goal is to find a way to decide how best to split the rent based on their preferences.
There are also issues that require other notions of fairness. Consider a cake that is half strawberry and half chocolate. Suppose Alice likes only chocolate, and Bob only likes strawberry, and
they apply the 2-person cut-and-choose method. If Alice is the cutter and does not know Bob's preference, then she would divide the cake so that each piece contains half chocolate, half
strawberry. But then, both players receive pieces that contain only half of what they would like. On the other hand, if Alice knows Bob's preference, then she would divide the cake into the
chocolate piece and the strawberry piece, so they both get their entire preference. This illustrates the notion of Pareto efficiency; the first solution, in which Alice does not know Bob's preference, is
Pareto inefficient because it is possible to find another division that makes one person better off without making another person worse off.
In addition to these problems, there are many applications of fair division in auctions, economics, social choice theory, and game theory. Fair division algorithms can be used to resolve disputes
over the splitting up of goods by taking into account preferences of all the people involved.
Advanced Topic: Combinatorial Topology
We now journey into the world of combinatorial topology and show how fixed point theorems from topology can be applied to solve envy-free fair division problems. We first give the following
definitions. The \(n\)-dimensional simplex is the set
\[\Delta_n = \{ (x_0, x_1, x_2, \ldots x_n) \in \mathbb{R}^{n+1} : 0 \leq x_i, \sum x_i = 1 \}.\]
Let \(e_i \in \mathbb{R}^{n+1}\) denote the vector with \(i\)th coordinate equal to 1 and all other coordinates equal to 0. Then \(e_i\) are vertices of \(\Delta_n\). Intuitively, each point on the
simplex can be thought of as a division of the cake into \(n+1\) pieces, where coordinate \(i\) is the fraction of the cake for piece \(i\). Vertex \(e_i\) of \(\Delta_n\) is then the division of the
cake where piece \(i\) is the entire cake and all other pieces have size \(0\). The simplices for \(n=0,1,\) and \(2\) can be visualized geometrically as shown:
We will focus on the case \(n=2,\) which is a triangle. Label the vertices of this triangle by \(1,2, 3\) and consider a decomposition of this triangle into smaller triangles, where the smallest
triangles are called cells. Suppose the vertices along the side connecting \(1\) and \(2\) have labels either \(1\) or \(2\), the vertices along the side connecting \(2\) and \(3\) have labels either
\(2\) or \(3\), the vertices along the side connecting \(1\) and \(3\) have labels either \(1\) or \(3\), and the vertices in the interior are labeled by any one of \(1,2,\) or \(3\). A labeling
satisfying these conditions is called a Sperner labeling:
We first show the following result from graph theory.
In a graph \(G\), the degree of a vertex \(v\) is the number of edges adjacent to \(v\). The number of odd degree vertices in any graph must be even.
Solution: Consider the sum of degrees of all vertices of the graph. Because each edge has two endpoints, the sum of degrees of all vertices is equal to twice the number of edges. Since the sum of
the degrees is even and the sum of the degrees of vertices with even degree is even, the sum of the degrees of vertices with odd degree must be even. This shows there must be an even number of
vertices with odd degree.
We give a statement and proof of Sperner's Lemma for \(n=2,\) which corresponds to a triangle; the proof for higher dimensional simplices follows the same reasoning with a few modifications.
Sperner's Lemma: If a triangle is decomposed into cells and all vertices are labeled by a Sperner labeling, then there are an odd number of cells with all different labels.
Proof: Create an auxiliary graph by introducing a vertex for each cell of the triangulation and a special vertex \(v^*\) for the outside of the triangle. Now, connect any two vertices by an edge
if the intersection of their boundaries is an edge of a cell with endpoints labeled by \(1\) and \(2\). The vertices adjacent to vertex \(v^*\) are the cells on the boundary whose endpoints have
labels \(1\) and \(2\), as shown in this example:
Now, when traveling along the boundary from vertex \(1\) to vertex \(2\) of the large triangle, there are an odd number of edges with labels \(1\) and \(2\), since we start from a vertex labeled by \(1
\) and end at a vertex labeled by \(2\). This implies the degree of \(v^*\) is odd (in the figure above, the degree of \(v^*\) is equal to three). By the result above, the graph has an even
number \(k\) of odd degree vertices, and since \(v^*\) has odd degree, there are \(k-1\) odd degree vertices remaining in the interior of the triangle. Now, the vertices in the interior of the
triangle can have degree one, two, or three, so the only possible odd degree for an interior vertex is degree one. If a vertex has degree one, then it corresponds to a cell \(c\) such that \(c\)
has an edge labeled by \(1,2\) and the other edges of \(c\) cannot be labeled by \(1,2\). This implies the other edges of \(c\) must have labels \((1,3)\) and \((2,3)\), so \(c\) is a cell with
labels \(1,2,3\). This shows there are \(k-1\) cells labeled by \(1,2,3,\) which is an odd number, proving Sperner's Lemma.
(Check that in the above figure, each of the five interior vertices of degree one corresponds to a cell with labels \(1,2,\) and \(3\).)
For general \(n\), Sperner's Lemma proceeds by induction on \(n\).
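The \(n=2\) case is also easy to check computationally on the standard subdivision of a triangle of side \(n\) into \(n^2\) cells. The labeling used below (index of the first positive barycentric coordinate) is just one convenient valid Sperner labeling, chosen for simplicity.

```python
# Computational check of Sperner's Lemma on the triangle of side n divided
# into n^2 cells. label(i, j) returns the index of the first positive
# barycentric coordinate of vertex (i, j) -- a simple valid Sperner labeling.

def sperner_count(n):
    def label(i, j):
        for idx, coord in enumerate((i, j, n - i - j)):
            if coord > 0:
                return idx + 1

    def fully_labeled(cell):
        return {label(i, j) for i, j in cell} == {1, 2, 3}

    count = 0
    for i in range(n):                      # upward-pointing cells
        for j in range(n - i):
            count += fully_labeled([(i, j), (i + 1, j), (i, j + 1)])
    for i in range(n - 1):                  # downward-pointing cells
        for j in range(n - 1 - i):
            count += fully_labeled([(i + 1, j), (i, j + 1), (i + 1, j + 1)])
    return count
```

For every \(n\) the count comes out odd, as the lemma promises; with this particular labeling it is in fact exactly one.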
In particular, Sperner's Lemma shows there exists at least one triangle with labels \(1,2,3\) in any Sperner labeling. We now prove the Envy-Free Fair Division Theorem for three people (the proof for
the general case can be obtained in the same manner with a few modifications).
Envy-Free Fair Division Theorem for three people (Simmons): For three people, there exists a division of cake such that each player receives a piece she/he believes to be of largest value.
Proof: Given \(\epsilon > 0,\) construct a triangulation of the simplex such that distances between cell vertices is at most \(\epsilon,\) and consider a labeling \(L\) of the triangulation such
that each cell has labels \(A, B,\) and \(C\).
Now, construct a second labeling of the triangulation as follows. For each vertex \(x\), consider the label given by triangulation \(L\) and ask the person corresponding to this label which piece
they prefer if the cake is divided into parts \(x = (x_1, x_2, x_3)\) corresponding to the coordinates of this point in the simplex. Label the vertex with \(1\) if the person prefers piece \(1\),
\(2\) if the person prefers piece \(2\), and \(3\) if the person prefers piece \(3\). The result is a Sperner labeling, since along each boundary \((e_i, e_j)\), only pieces \(i\) and \(j\) are
nonempty, so only they will be preferred. Then by Sperner's Lemma, there exists a cell labeled by \(1, 2, 3\), which corresponds to a cell where each player chooses a different piece of the cake.
Now, we recurse on this cell to obtain a sequence of cells with vertices closer and closer (\(\epsilon\) tending to 0) such that the corresponding cells have nonempty intersection and each player
chooses the same piece each time his name occurs as a label of a cell. In the limit, a point in the intersection is a division in which each player prefers a different piece of the divided cake.
Note that this proof gives an approximate algorithm for finding fixed points and fair divisions. For the problem of dividing rents or splitting chores, Francis Su developed an algorithm that
repeatedly queries each housemate in turn on which room she or he would prefer if the total rent were divided in certain ways among the rooms (or chores were distributed in certain ways). Then a
procedure based on Sperner's lemma determines room rents to propose after each housemate's turn. These methods have been implemented in several online tools:
Spliddit: Online Calculator for Sharing Rent, Dividing Goods, Assigning Credit
Sperner's Lemma also implies a well-known theorem in combinatorial topology known as Brouwer's Fixed-Point Theorem.
Brouwer Fixed-Point Theorem: Any continuous map \(f\) from the simplex to itself has a fixed point \(x\), i.e., \(f(x) = x\).
Other results from combinatorial topology also apply to fair division problem. For example, the Ham Sandwich Theorem shows that for \(n\) people with possibly different preferences, there is always a
single cut of the cake so that all \(n\) people believe both sides of the cake have equal value.
As we have shown, the cake cutting problem touches upon many different areas, ranging from combinatorics and graph theory, to algorithms and topology. It also has many applications in economics, game
theory, and decision theory, and there has been a great deal of mathematical study on this problem over the past few decades. Many open questions still remain -- perhaps you will be the person to
think of new ideas to solve these problems!
S. Brams, A. Taylor, "An Envy-Free Cake Division Protocol". The American Mathematical Monthly 102 (1): 9–18 (1995).
M. Gardner, "Aha! Insight", W.F. Freeman and Co., New York (1978).
F. Su, “Rental Harmony: Sperner’s Lemma in Fair Division”, American Mathematical Monthly 106, pp. 930–942 (December 1999).
F. Su, Fair Division Page.
Mixing Lemonade Poster
This poster is based on the problem Mixing Lemonade.
Student Solutions
Answer: the first glass tasted stronger
How do you know?
Method 1: making the same amount of lemon juice
First glass lemon:water 60:200
Second glass lemon:water 100:350
60$\times$5 = 100$\times$3 = 300
200$\times$5 = 1000, so first glass lemon:water 300:1000
350$\times$3 = 1050, so second glass lemon:water 300:1050
First glass has less water for the same amount of lemon juice so the first glass tastes stronger.
Method 2: making the same amount of water
First glass lemon:water 60:200
Second glass lemon:water 100:350
200$\times$7 = 350$\times$4 = 1400
60$\times$7 = 420, so first glass lemon:water 420:1400
100$\times$4 = 400, so second glass lemon:water 400:1400
First glass has more lemon juice for the same amount of water so the first glass tastes stronger.
Method 3: scaling lemon juice to water
First glass: scale factor from lemon juice to water is $200 \div 60 \approx 3.33$
Second glass: scale factor from lemon juice to water is $350 \div 100 = 3.5$
There is more water compared to lemon juice in the second glass, so the first glass tastes stronger.
Method 4: fractions
First glass: $\frac{60}{260} =\frac6{26} = \frac3{13}$ lemon juice
Second glass: $\frac{100}{450} = \frac{10}{45} = \frac29$ lemon juice
Common denominator: $13\times9$
First glass: $\frac3{13} = \frac{27}{13\times9}$
Second glass: $\frac29 = \frac{26}{13\times9}$
The first glass has a greater fraction of lemon juice so it tastes stronger.
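Method 4 can be checked exactly with Python's `fractions` module:

```python
# Exact check of Method 4: compare the lemon-juice fraction of each glass.
from fractions import Fraction

first = Fraction(60, 60 + 200)     # first glass: 60 lemon, 200 water
second = Fraction(100, 100 + 350)  # second glass: 100 lemon, 350 water

print(first, second)   # 3/13 2/9
print(first > second)  # True: the first glass tastes stronger
```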
The Failures of Mathematical Anti-Evolutionism
[The following is a copy of a book review, written by the present author, which appeared in the Reports of the National Center for Science Education.]
The Failures of Mathematical Anti-Evolutionism, Cambridge University Press, 2022, 292 pages. Reviewed by David H. Bailey
Although modern science has uncovered a universe that is far vaster and more awe-inspiring than ever imagined before, some writers, mostly of the creationist and intelligent design schools, prefer
instead to combat science, particularly on traditional topics such as evolution. One common line of argumentation is that certain biological structures are so unlikely, according to simple
back-of-the-envelope reckonings based on probability or information theory, that they could never have been produced by a purely natural, “random” evolutionary process, even assuming millions of
years of geologic time. Thus evolutionary theory must be false.
Biologists have never taken these writings seriously, mainly because the empirical evidence for evolution is so overwhelming. Mathematicians and statisticians have never taken these writings
seriously, mainly because they have deemed them unworthy of detailed refutation. As a result, there has been a dearth of reliable, readable information on the topic.
Mathematician Jason Rosenhouse’s new book The Failures of Mathematical Anti-Evolutionism addresses this specific topic. Rosenhouse is very well-qualified for the task. He has previously published
Among the Creationists: Dispatches from the Anti-Evolutionist Front Line, describing his experiences attending numerous creationist and intelligent design conferences. He has also published several
books explaining probability paradoxes to a mainstream audience. His books clearly demonstrate a talent for science writing.
His new book respectfully but firmly explains why various anti-evolution arguments based on probability and information theory are without merit. Many of these are some variation of what Rosenhouse
terms the “Basic Argument from Improbability”: (a) identify a complex biological structure; (b) model its evolution as a random selection from a large space of equally probable outcomes; (c) use
elementary combinatorics to enumerate this space; and then assert that the resulting “probability” is too remote for the structure to have evolved.
As Rosenhouse observes, such arguments fall to several well-known fallacies: First of all, they presume that all outcomes are equally probable, which is utterly false in real-world biology — some
structures are very likely to appear, while vast numbers of others are not biologically possible at all. Further, these arguments presume that the structure appeared via a single-shot “random”
selection among all combinatorial possibilities, whereas real biological structures arose from a long string of earlier steps over the eons, each useful in an earlier context. Finally, these
arguments ignore the crucial role of natural selection in finding a “path” through biological space.
In general, such arguments are dead ringers for the post-hoc probability fallacy, namely reckoning a probability after the fact, and then claiming that the event could not have happened naturally. As
Rosenhouse explains, we should not be surprised at a seemingly improbable outcome, because some outcome had to happen.
Rosenhouse illustrates this type of fallacious reasoning with the following story (pg. 128-129):
Suppose you and a friend are in the downtown area of a major American city, and you both decide you want a slice of pizza. You pick a direction and start walking. Within just two blocks you find
a pizza parlor. Your friend now says, “Incredible! The surface of the Earth is enormous, and almost none of it is covered with pizza parlors. Yet somehow we were able to find one of the few
places on Earth that has a pizza parlor. How can you explain something so remarkable?"
As Rosenhouse explains, the surface area of the Earth is irrelevant because it was only necessary to search the tiny portion near their location, which, because it is in a major city, has numerous
pizza parlors. Rosenhouse then points out that the Basic Argument from Improbability “is guilty of precisely the same oversights, except applied to protein space rather than to the surface of the
Earth.” He adds that “the mathematical model on which the [improbability] argument relies is far too unrealistic to produce meaningful results.”
Rosenhouse also addresses arguments based on information theory, entropy and the Second Law of Thermodynamics. Although such arguments are superficially more sophisticated than probability arguments,
in the end he finds them equally vacuous — they either rely on intuitive lines of thinking that do not stand up to rigorous analysis, or else they feature profound-looking mathematical analyses,
which, because they are based on deeply flawed idealistic models, are irrelevant.
Rosenhouse’s book is a major step forward, and will be greatly appreciated in the community. But as Rosenhouse himself acknowledges, much remains to be done. Hopefully his nicely crafted book will
serve as a template for additional contributions in this arena.
Circular Motion & Gravity
Regents Physics - Circular Motion & Gravity
Now that we've talked about linear and projectile kinematics, as well as fundamentals of dynamics and Newton's Laws, we have the skills and background to analyze circular motion. Of course, this has
the obvious applications such as cars moving around a circular track, roller coasters moving in loops, and toy trains chugging around a circular track under the Christmas tree. Less obvious, however,
is the application to turning objects. Any object that is turning can be thought of as moving through a portion of a circular path, even if it's just a small portion of that path.
With this realization, analysis of circular motion will allow us to explore a car speeding around a corner on an icy road, a running back cutting upfield to avoid a blitzing linebacker, and the
orbits of planetary bodies. The key to understanding all of these phenomena starts with the study of uniform circular motion.
1. Explain the acceleration of an object moving in a circle at constant speed.
2. Define centripetal force and recognize that it is not a special kind of force, but that it is provided by forces such as tension, gravity, and friction.
3. Solve problems involving calculations of centripetal force.
4. Determine the direction of a centripetal force and centripetal acceleration for an object moving in a circular path.
5. Calculate the period, frequency, speed and distance traveled for objects moving in circles at constant speed.
6. Analyze and solve problems involving objects moving in vertical circles.
7. Determine the acceleration due to gravity near the surface of Earth.
8. Utilize Newton's Law of Universal Gravitation to determine the gravitational force of attraction between two objects.
9. Explain the difference between mass and weight.
10. Explain weightlessness for objects in orbit.
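The quantitative objectives above (3 through 5) reduce to a few formulas; here is a minimal Python sketch of them (the car scenario and all numbers are invented for illustration):

```python
import math

def centripetal_acceleration(v, r):
    """a_c = v**2 / r, always directed toward the center of the circle."""
    return v ** 2 / r

def centripetal_force(m, v, r):
    """F_c = m * v**2 / r; supplied by tension, gravity, friction, etc."""
    return m * centripetal_acceleration(v, r)

def speed_from_period(r, T):
    """One revolution covers the circumference 2*pi*r in one period T."""
    return 2 * math.pi * r / T

# Hypothetical example: a 1000 kg car rounding a 50 m radius curve at 10 m/s.
print(centripetal_acceleration(10, 50))  # 2.0 m/s^2
print(centripetal_force(1000, 10, 50))   # 2000.0 N
```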
Topics of Study
Logic programming explained
Logic programming is a programming, database and knowledge representation paradigm based on formal logic. A logic program is a set of sentences in logical form, representing knowledge about some
problem domain. Computation is performed by applying logical reasoning to that knowledge, to solve problems in the domain. Major logic programming language families include Prolog, Answer Set
Programming (ASP) and Datalog. In all of these languages, rules are written in the form of clauses:
A :- B<sub>1</sub>, ..., B<sub>n</sub>.
and are read as declarative sentences in logical form:
A if B<sub>1</sub> and ... and B<sub>n</sub>.
A is called the head of the rule, B<sub>1</sub>, ..., B<sub>n</sub> is called the body, and the B<sub>i</sub> are called literals or conditions. When n = 0, the rule is called a fact and is written
in the simplified form:
Queries (or goals) have the same syntax as the bodies of rules and are commonly written in the form:
?- B<sub>1</sub>, ..., B<sub>n</sub>.
In the simplest case of Horn clauses (or "definite" clauses), all of the A, B<sub>1</sub>, ..., B<sub>n</sub> are atomic formulae of the form p(t<sub>1</sub>, ..., t<sub>m</sub>), where p is a predicate symbol naming a relation, like "motherhood", and the t<sub>i</sub> are terms naming objects (or individuals). Terms include both constant symbols, like "charles", and variables, such as X, which start with an upper case letter.
Consider, for example, the following Horn clause program:

mother_child(elizabeth, charles).
father_child(charles, william).
father_child(charles, harry).
parent_child(X, Y) :- mother_child(X, Y).
parent_child(X, Y) :- father_child(X, Y).
grandparent_child(X, Y) :- parent_child(X, Z), parent_child(Z, Y).

Given a query, the program produces answers. For instance, for the query ?- parent_child(X, william), the single answer is X = charles. Various queries can be asked. For instance, the program can be queried both to generate grandparents and to generate grandchildren. It can even be used to generate all pairs of grandchildren and grandparents, or simply to check whether a given pair is such a pair:

?- grandparent_child(X, william).
X = elizabeth

?- grandparent_child(elizabeth, Y).
Y = william;
Y = harry.

?- grandparent_child(X, Y).
X = elizabeth, Y = william;
X = elizabeth, Y = harry.

?- grandparent_child(william, harry).
no

?- grandparent_child(elizabeth, harry).
yes
Although Horn clause logic programs are Turing complete,^[1] ^[2] for most practical applications, Horn clause programs need to be extended to "normal" logic programs with negative conditions. For
example, the definition of sibling uses a negative condition, where the predicate = is defined by the clause X = X:

sibling(X, Y) :- parent_child(Z, X), parent_child(Z, Y), not(X = Y).

Logic programming languages that include negative conditions have the knowledge representation capabilities of a non-monotonic logic.
In ASP and Datalog, logic programs have only a declarative reading, and their execution is performed by means of a proof procedure or model generator whose behaviour is not meant to be controlled by
the programmer. However, in the Prolog family of languages, logic programs also have a procedural interpretation as goal-reduction procedures. From this point of view, clause A :- B<sub>1</sub>, ..., B<sub>n</sub> is understood as:
to solve A, solve B<sub>1</sub>, and ... and solve B<sub>n</sub>.
Negative conditions in the bodies of clauses also have a procedural interpretation, known as negation as failure: A negative literal not B is deemed to hold if and only if the positive literal B
fails to hold.
Much of the research in the field of logic programming has been concerned with trying to develop a logical semantics for negation as failure and with developing other semantics and other
implementations for negation. These developments have been important, in turn, for supporting the development of formal methods for logic-based program verification and program transformation.
The use of mathematical logic to represent and execute computer programs is also a feature of the lambda calculus, developed by Alonzo Church in the 1930s. However, the first proposal to use the
clausal form of logic for representing computer programs was made by Cordell Green.^[3] This used an axiomatization of a subset of LISP, together with a representation of an input-output relation, to
compute the relation by simulating the execution of the program in LISP. Foster and Elcock's Absys, on the other hand, employed a combination of equations and lambda calculus in an assertional
programming language that places no constraints on the order in which operations are performed.^[4]
Logic programming, with its current syntax of facts and rules, can be traced back to debates in the late 1960s and early 1970s about declarative versus procedural representations of knowledge in
artificial intelligence. Advocates of declarative representations were notably working at Stanford, associated with John McCarthy, Bertram Raphael and Cordell Green, and in Edinburgh, with John Alan
Robinson (an academic visitor from Syracuse University), Pat Hayes, and Robert Kowalski. Advocates of procedural representations were mainly centered at MIT, under the leadership of Marvin Minsky and
Seymour Papert.^[5]
Although it was based on the proof methods of logic, Planner, developed by Carl Hewitt at MIT, was the first language to emerge within this proceduralist paradigm.^[6] Planner featured
pattern-directed invocation of procedural plans from goals (i.e. goal-reduction or backward chaining) and from assertions (i.e. forward chaining). The most influential implementation of Planner was
the subset of Planner, called Micro-Planner, implemented by Gerry Sussman, Eugene Charniak and Terry Winograd. Winograd used Micro-Planner to implement the landmark, natural-language understanding
program SHRDLU.^[7] For the sake of efficiency, Planner used a backtracking control structure so that only one possible computation path had to be stored at a time. Planner gave rise to the
programming languages QA4,^[8] Popler,^[9] Conniver,^[10] QLISP,^[11] and the concurrent language Ether.^[12]
Hayes and Kowalski in Edinburgh tried to reconcile the logic-based declarative approach to knowledge representation with Planner's procedural approach. Hayes (1973) developed an equational language,
Golux, in which different procedures could be obtained by altering the behavior of the theorem prover.^[13]
In the meanwhile, Alain Colmerauer in Marseille was working on natural-language understanding, using logic to represent semantics and using resolution for question-answering. During the summer of
1971, Colmerauer invited Kowalski to Marseille, and together they discovered that the clausal form of logic could be used to represent formal grammars and that resolution theorem provers could be
used for parsing. They observed that some theorem provers, like hyper-resolution,^[14] behave as bottom-up parsers and others, like SL resolution (1971)^[15] behave as top-down parsers.
It was in the following summer of 1972, that Kowalski, again working with Colmerauer, developed the procedural interpretation of implications in clausal form. It also became clear that such clauses
could be restricted to definite clauses or Horn clauses, and that SL-resolution could be restricted (and generalised) to SLD resolution. Kowalski's procedural interpretation and SLD were described in
a 1973 memo, published in 1974.^[16]
Colmerauer, with Philippe Roussel, used the procedural interpretation as the basis of Prolog, which was implemented in the summer and autumn of 1972. The first Prolog program, also written in 1972
and implemented in Marseille, was a French question-answering system. The use of Prolog as a practical programming language was given great momentum by the development of a compiler by David H. D.
Warren in Edinburgh in 1977. Experiments demonstrated that Edinburgh Prolog could compete with the processing speed of other symbolic programming languages such as Lisp.^[17] Edinburgh Prolog became
the de facto standard and strongly influenced the definition of ISO standard Prolog.
Logic programming gained international attention during the 1980s, when it was chosen by the Japanese Ministry of International Trade and Industry to develop the software for the Fifth Generation
Computer Systems (FGCS) project. The FGCS project aimed to use logic programming to develop advanced Artificial Intelligence applications on massively parallel computers. Although the project
initially explored the use of Prolog, it later adopted the use of concurrent logic programming, because it was closer to the FGCS computer architecture.
However, the committed choice feature of concurrent logic programming interfered with the language's logical semantics^[18] and with its suitability for knowledge representation and problem solving
applications. Moreover, the parallel computer systems developed in the project failed to compete with advances taking place in the development of more conventional, general-purpose computers.
Together these two issues resulted in the FGCS project failing to meet its objectives. Interest in both logic programming and AI fell into world-wide decline.^[19]
In the meanwhile, more declarative logic programming approaches, including those based on the use of Prolog, continued to make progress independently of the FGCS project. In particular, although
Prolog was developed to combine declarative and procedural representations of knowledge, the purely declarative interpretation of logic programs became the focus for applications in the field of
deductive databases. Work in this field became prominent around 1977, when Hervé Gallaire and Jack Minker organized a workshop on logic and databases in Toulouse.^[20] The field was eventually
renamed as Datalog.
This focus on the logical, declarative reading of logic programs was given further impetus by the development of constraint logic programming in the 1980s and Answer Set Programming in the 1990s. It
is also receiving renewed emphasis in recent applications of Prolog.^[21]
The Association for Logic Programming (ALP) was founded in 1986 to promote Logic Programming. Its official journal until 2000, was The Journal of Logic Programming. Its founding editor-in-chief was
J. Alan Robinson.^[22] In 2001, the journal was renamed The Journal of Logic and Algebraic Programming, and the official journal of ALP became Theory and Practice of Logic Programming, published by
Cambridge University Press.
Logic programs enjoy a rich variety of semantics and problem solving methods, as well as a wide range of applications in programming, databases, knowledge representation and problem solving.
Algorithm = Logic + Control
The procedural interpretation of logic programs, which uses backward reasoning to reduce goals to subgoals, is a special case of the use of a problem-solving strategy to control the use of a
declarative, logical representation of knowledge to obtain the behaviour of an algorithm. More generally, different problem-solving strategies can be applied to the same logical representation to
obtain different algorithms. Alternatively, different algorithms can be obtained with a given problem-solving strategy by using different logical representations.^[23]
The two main problem-solving strategies are backward reasoning (goal reduction) and forward reasoning, also known as top-down and bottom-up reasoning, respectively.
In the simple case of a propositional Horn clause program and a top-level atomic goal, backward reasoning determines an and-or tree, which constitutes the search space for solving the goal. The
top-level goal is the root of the tree. Given any node in the tree and any clause whose head matches the node, there exists a set of child nodes corresponding to the sub-goals in the body of the
clause. These child nodes are grouped together by an "and". The alternative sets of children corresponding to alternative ways of solving the node are grouped together by an "or".
Any search strategy can be used to search this space. Prolog uses a sequential, last-in-first-out, backtracking strategy, in which only one alternative and one sub-goal are considered at a time. Alternatively, subgoals can be solved in parallel, and clauses can also be tried in parallel. The first strategy is called and-parallelism and the second strategy is called or-parallelism. Other search strategies, such as intelligent backtracking,^[24] or best-first search to find an optimal solution,^[25] are also possible.
In the more general, non-propositional case, where sub-goals can share variables, other strategies can be used, such as choosing the subgoal that is most highly instantiated or that is sufficiently
instantiated so that only one procedure applies.^[26] Such strategies are used, for example, in concurrent logic programming.
In most cases, backward reasoning from a query or goal is more efficient than forward reasoning. But sometimes with Datalog and Answer Set Programming, there may be no query that is separate from the
set of clauses as a whole, and then generating all the facts that can be derived from the clauses is a sensible problem-solving strategy. Here is another example, where forward reasoning beats
backward reasoning in a more conventional computation task, where the goal ?- fibonacci(n, Result) is to find the n^th fibonacci number:

fibonacci(0, 0).
fibonacci(1, 1).
fibonacci(N, Result) :- N > 1, N1 is N - 1, N2 is N - 2, fibonacci(N1, F1), fibonacci(N2, F2), Result is F1 + F2.
Here the relation fibonacci(N, M) stands for the function fibonacci(N) = M, and the predicate N is Expression is Prolog notation for the predicate that instantiates the variable N to the value of Expression.
Given the goal of computing the fibonacci number of n, backward reasoning reduces the goal to the two subgoals of computing the fibonacci numbers of n-1 and n-2. It reduces the subgoal of computing
the fibonacci number of n-1 to the two subgoals of computing the fibonacci numbers of n-2 and n-3, redundantly computing the fibonacci number of n-2. This process of reducing one fibonacci subgoal to
two fibonacci subgoals continues until it reaches the numbers 0 and 1. Its complexity is of the order 2^n. In contrast, forward reasoning generates the sequence of fibonacci numbers, starting from 0
and 1 without any recomputation, and its complexity is linear with respect to n.
Prolog cannot perform forward reasoning directly. But it can achieve the effect of forward reasoning within the context of backward reasoning by means of tabling: Subgoals are maintained in a table,
along with their solutions. If a subgoal is re-encountered, it is solved directly by using the solutions already in the table, instead of re-solving the subgoals redundantly.^[27]
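The effect of tabling can also be sketched outside Prolog; in the Python sketch below (illustrative, not tied to any Prolog system), a cache plays the role of the table of solved subgoals:

```python
from functools import lru_cache

def fib_naive(n):
    """Plain backward reasoning: shared subgoals are re-solved, O(2^n) calls."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_tabled(n):
    """Tabled version: each subgoal is solved once and stored, O(n) calls."""
    if n < 2:
        return n
    return fib_tabled(n - 1) + fib_tabled(n - 2)

print(fib_tabled(30))  # 832040
```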
Relationship with functional programming
Logic programming can be viewed as a generalisation of functional programming, in which functions are a special case of relations.^[28] For example, the function mother(X) = Y (every X has only one mother Y) can be represented by the relation mother(X, Y). In this respect, logic programs are similar to relational databases, which also represent functions as relations.
Compared with relational syntax, functional syntax is more compact for nested functions. For example, in functional syntax the definition of maternal grandmother can be written in the nested form:

maternal_grandmother(X) = mother(mother(X)).

The same definition in relational notation needs to be written in the unnested, flattened form:

maternal_grandmother(X, Y) :- mother(X, Z), mother(Z, Y).
However, nested syntax can be regarded as syntactic sugar for unnested syntax. Ciao Prolog, for example, transforms functional syntax into relational form and executes the resulting logic program using the standard Prolog execution strategy.^[29] Moreover, the same transformation can be used to execute nested relations that are not functional. For example:

grandparent(X) := parent(parent(X)).
parent(X) := mother(X).
parent(X) := father(X).

mother(charles) := elizabeth.
father(charles) := phillip.
mother(harry) := diana.
father(harry) := charles.

?- grandparent(X, Y).
X = harry, Y = elizabeth.
X = harry, Y = phillip.
Relationship with relational programming
The term relational programming has been used to cover a variety of programming languages that treat functions as a special case of relations. Some of these languages, such as miniKanren and relational linear programming,^[30] are logic programming languages in the sense of this article.
However, the relational language RML is an imperative programming language^[31] whose core construct is a relational expression, which is similar to an expression in first-order predicate logic.
Other relational programming languages are based on the relational calculus^[32] or relational algebra.^[33]
Semantics of Horn clause programs
See main article: Syntax and semantics of logic programming.
Viewed in purely logical terms, there are two approaches to the declarative semantics of Horn clause logic programs: One approach is the original logical consequence semantics, which understands
solving a goal as showing that the goal is a theorem that is true in all models of the program.
In this approach, computation is theorem-proving in first-order logic; and both backward reasoning, as in SLD resolution, and forward reasoning, as in hyper-resolution, are correct and complete
theorem-proving methods. Sometimes such theorem-proving methods are also regarded as providing a separate proof-theoretic (or operational) semantics for logic programs. But from a logical point of
view, they are proof methods, rather than semantics.
The other approach to the declarative semantics of Horn clause programs is the satisfiability semantics, which understands solving a goal as showing that the goal is true (or satisfied) in some
intended (or standard) model of the program. For Horn clause programs, there always exists such a standard model: It is the unique minimal model of the program.
Informally speaking, a minimal model is a model that, when it is viewed as the set of all (variable-free) facts that are true in the model, contains no smaller set of facts that is also a model of
the program.
For example, the following facts represent the minimal model of the family relationships example in the introduction of this article. All other variable-free facts are false in the model:
mother_child(elizabeth, charles).
father_child(charles, william).
father_child(charles, harry).
parent_child(elizabeth, charles).
parent_child(charles, william).
parent_child(charles, harry).
grandparent_child(elizabeth, william).
grandparent_child(elizabeth, harry).
The satisfiability semantics also has an alternative, more mathematical characterisation as the least fixed point of the function that uses the rules in the program to derive new facts from existing
facts in one step of inference.
Remarkably, the same problem-solving methods of forward and backward reasoning, which were originally developed for the logical consequence semantics, are equally applicable to the satisfiability
semantics: Forward reasoning generates the minimal model of a Horn clause program, by deriving new facts from existing facts, until no new additional facts can be generated. Backward reasoning, which
succeeds by reducing a goal to subgoals, until all subgoals are solved by facts, ensures that the goal is true in the minimal model, without generating the model explicitly.^[34]
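As an illustration of forward reasoning, the following Python sketch computes the minimal model of the family program from the introduction by naive fixed-point iteration (the tuple encoding of facts and the hard-coded rules are my own illustrative choices):

```python
# Derive new facts from the program's rules until a fixed point is reached.
facts = {("mother_child", "elizabeth", "charles"),
         ("father_child", "charles", "william"),
         ("father_child", "charles", "harry")}

def step(facts):
    new = set(facts)
    # parent_child(X, Y) :- mother_child(X, Y).  / ... :- father_child(X, Y).
    for (p, x, y) in facts:
        if p in ("mother_child", "father_child"):
            new.add(("parent_child", x, y))
    # grandparent_child(X, Y) :- parent_child(X, Z), parent_child(Z, Y).
    for (p, x, z) in facts:
        if p == "parent_child":
            for (q, z2, y) in facts:
                if q == "parent_child" and z2 == z:
                    new.add(("grandparent_child", x, y))
    return new

while True:
    new = step(facts)
    if new == facts:
        break  # least fixed point reached: this is the minimal model
    facts = new

print(("grandparent_child", "elizabeth", "william") in facts)  # True
```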
The difference between the two declarative semantics can be seen with the definitions of addition and multiplication in successor arithmetic, which represents the natural numbers 0, 1, 2, ... as a
sequence of terms of the form 0, s(0), s(s(0)), .... In general, the term s(X) represents the successor of X, namely X + 1. Here are the standard definitions of addition and multiplication in
functional notation:
X + 0 = X.
X + s(Y) = s(X + Y).
i.e. X + (Y + 1) = (X + Y) + 1
X × 0 = 0.
X × s(Y) = X + (X × Y).
i.e. X × (Y + 1) = X + (X × Y).
Here are the same definitions as a logic program, using add(X, Y, Z) to represent X + Y = Z, and multiply(X, Y, Z) to represent X × Y = Z:

add(X, 0, X).
add(X, s(Y), s(Z)) :- add(X, Y, Z).

multiply(X, 0, 0).
multiply(X, s(Y), W) :- multiply(X, Y, Z), add(X, Z, W).
The two declarative semantics both give the same answers for the same existentially quantified conjunctions of addition and multiplication goals. For example 2 × 2 = X has the solution X = 4; and X ×
X = X + X has two solutions X = 0 and X = 2:
?- multiply(s(s(0)), s(s(0)), X).
X = s(s(s(s(0)))).

?- multiply(X, X, Y), add(X, X, Y).
X = 0, Y = 0.
X = s(s(0)), Y = s(s(s(s(0)))).
However, with the logical-consequence semantics, there are non-standard models of the program, in which, for example, add(s(s(0)), s(s(0)), s(s(s(s(s(0)))))), i.e. 2 + 2 = 5 is true. But with the
satisfiability semantics, there is only one model, namely the standard model of arithmetic, in which 2 + 2 = 5 is false.
In both semantics, the goal ?- add(s(s(0)), s(s(0)), s(s(s(s(s(0)))))) fails. In the satisfiability semantics, the failure of the goal means that the truth value of the goal is false. But in the
logical consequence semantics, the failure means that the truth value of the goal is unknown.
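As a cross-check on the standard model, the successor-arithmetic clauses can be transcribed into Python in their functional direction (encoding s(X) as X + 1 and 0 as 0; this captures only the mode where the first two arguments are known, whereas the relational program also runs "backwards"):

```python
def add(x, y):
    # add(X, 0, X).  add(X, s(Y), s(Z)) :- add(X, Y, Z).
    return x if y == 0 else add(x, y - 1) + 1

def multiply(x, y):
    # multiply(X, 0, 0).  multiply(X, s(Y), W) :- multiply(X, Y, Z), add(X, Z, W).
    return 0 if y == 0 else add(x, multiply(x, y - 1))

print(multiply(2, 2))  # 4, matching X = s(s(s(s(0)))) above
```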
Negation as failure
See main article: Negation as failure. Negation as failure (NAF), as a way of concluding that a negative condition not p holds by showing that the positive condition p fails to hold, was already a
feature of early Prolog systems. The resulting extension of SLD resolution is called SLDNF. A similar construct, called "thnot", also existed in Micro-Planner.
The logical semantics of NAF was unresolved until Keith Clark^[35] showed that, under certain natural conditions, NAF is an efficient, correct (and sometimes complete) way of reasoning with the
logical consequence semantics using the completion of a logic program in first-order logic.
Completion amounts roughly to regarding the set of all the program clauses with the same predicate in the head, say:
A :- Body<sub>1</sub>.
...
A :- Body<sub>k</sub>.
as a definition of the predicate:
A iff (Body<sub>1</sub> or ... or Body<sub>k</sub>)
where iff means "if and only if". The completion also includes axioms of equality, which correspond to unification. Clark showed that proofs generated by SLDNF are structurally similar to proofs
generated by a natural deduction style of reasoning with the completion of the program.
Consider, for example, the following program:

should_receive_sanction(X, punishment) :- is_a_thief(X), not should_receive_sanction(X, rehabilitation).
should_receive_sanction(X, rehabilitation) :- is_a_thief(X), is_a_minor(X), not is_violent(X).
is_a_thief(tom).

Given the goal of determining whether tom should receive a sanction, the first rule succeeds in showing that tom should be punished:

?- should_receive_sanction(tom, Sanction).
Sanction = punishment.
This is because tom is a thief, and it cannot be shown that tom should be rehabilitated. It cannot be shown that tom should be rehabilitated, because it cannot be shown that tom is a minor.
If, however, we receive new information that tom is indeed a minor, the previous conclusion that tom should be punished is replaced by the new conclusion that tom should be rehabilitated:
?- should_receive_sanction(tom, Sanction).
Sanction = rehabilitation.
This property of withdrawing a conclusion when new information is added, is called non-monotonicity, and it makes logic programming a non-monotonic logic.
But, if we are now told that tom is violent, the conclusion that tom should be punished will be reinstated:
?- should_receive_sanction(tom, Sanction).
Sanction = punishment.
The completion of this program is:
should_receive_sanction(X, Sanction) iff
    Sanction = punishment, is_a_thief(X), not should_receive_sanction(X, rehabilitation)
    or Sanction = rehabilitation, is_a_thief(X), is_a_minor(X), not is_violent(X).

is_a_thief(X) iff X = tom.
is_a_minor(X) iff X = tom.
is_violent(X) iff X = tom.
The notion of completion is closely related to John McCarthy's circumscription semantics for default reasoning,^[36] and to Ray Reiter's closed world assumption.^[37]
The completion semantics for negation is a logical consequence semantics, for which SLDNF provides a proof-theoretic implementation. However, in the 1980s, the satisfiability semantics became more
popular for logic programs with negation. In the satisfiability semantics, negation is interpreted according to the classical definition of truth in an intended or standard model of the logic program.
In the case of logic programs with negative conditions, there are two main variants of the satisfiability semantics: In the well-founded semantics, the intended model of a logic program is a unique,
three-valued, minimal model, which always exists. The well-founded semantics generalises the notion of inductive definition in mathematical logic.^[38] XSB Prolog^[39] implements the well-founded
semantics using SLG resolution.^[40]
In the alternative stable model semantics, there may be no intended models or several intended models, all of which are minimal and two-valued. The stable model semantics underpins answer set
programming (ASP).
Both the well-founded and stable model semantics apply to arbitrary logic programs with negation. However, both semantics coincide for stratified logic programs. For example, the program for
sanctioning thieves is (locally) stratified, and all three semantics for the program determine the same intended model:
should_receive_sanction(tom, punishment).
is_a_thief(tom).
is_a_minor(tom).
is_violent(tom).
Attempts to understand negation in logic programming have also contributed to the development of abstract argumentation frameworks.^[41] In an argumentation interpretation of negation, the initial
argument that tom should be punished because he is a thief, is attacked by the argument that he should be rehabilitated because he is a minor. But the fact that tom is violent undermines the argument
that tom should be rehabilitated and reinstates the argument that tom should be punished.
Metalogic programming
Metaprogramming, in which programs are treated as data, was already a feature of early Prolog implementations.^[42] ^[43] For example, the Edinburgh DEC10 implementation of Prolog included "an interpreter and a compiler, both written in Prolog itself".^[43] The simplest metaprogram is the so-called "vanilla" meta-interpreter:
solve(true).
solve((B,C)) :- solve(B), solve(C).
solve(A) :- clause(A, B), solve(B).
where true represents an empty conjunction, and (B,C) is a composite term representing the conjunction of B and C. The predicate clause(A,B) means that there is a clause of the form A :- B.
Metaprogramming is an application of the more general use of a metalogic or metalanguage to describe and reason about another language, called the object language.
Metalogic programming allows object-level and metalevel representations to be combined, as in natural language. For example, in the following program, the atomic formula attends(Person, Meeting)
occurs both as an object-level formula, and as an argument of the metapredicates prohibited and approved.
prohibited(attends(Person, Meeting)) :- not(approved(attends(Person, Meeting))).
should_receive_sanction(Person, scolding) :- attends(Person, Meeting), lofty(Person), prohibited(attends(Person, Meeting)).
should_receive_sanction(Person, banishment) :- attends(Person, Meeting), lowly(Person), prohibited(attends(Person, Meeting)).
approved(attends(alice, tea_party)).
attends(mad_hatter, tea_party).
attends(dormouse, tea_party).
?- should_receive_sanction(Person, Sanction).
Person = mad_hatter, Sanction = scolding.
Person = dormouse, Sanction = banishment.
In his popular Introduction to Cognitive Science,^[44] Paul Thagard includes logic and rules as alternative approaches to modelling human thinking. He argues that rules, which have the form IF
condition THEN action, are "very similar" to logical conditionals, but they are simpler and have greater psychological plausibility (page 51). Among other differences between logic and rules, he
argues that logic uses deduction, but rules use search (page 45) and can be used to reason either forward or backward (page 47). Sentences in logic "have to be interpreted as universally true", but
rules can be defaults, which admit exceptions (page 44).
He states that "unlike logic, rule-based systems can also easily represent strategic information about what to do" (page 45). For example, "IF you want to go home for the weekend, and you have bus fare, THEN you can catch a bus". He does not observe that the same strategy of reducing a goal to subgoals can be interpreted, in the manner of logic programming, as applying backward reasoning to a
logical conditional:
can_go(you, home) :- have(you, bus_fare), catch(you, bus).
All of these characteristics of rule-based systems - search, forward and backward reasoning, default reasoning, and goal-reduction - are also defining characteristics of logic programming. This
suggests that Thagard's conclusion (page 56) that:
Much of human knowledge is naturally described in terms of rules, and many kinds of thinking such as planning can be modeled by rule-based systems.
also applies to logic programming.
Other arguments showing how logic programming can be used to model aspects of human thinking are presented by Keith Stenning and Michiel van Lambalgen in their book, Human Reasoning and Cognitive Science.^[45] They show how the non-monotonic character of logic programs can be used to explain human performance on a variety of psychological tasks. They also show (page 237) that "closed-world reasoning in its guise as logic programming has an appealing neural implementation, unlike classical logic." In The Proper Treatment of Events,^[46] Michiel van Lambalgen and Fritz Hamm investigate
the use of constraint logic programming to code "temporal notions in natural language by looking at the way human beings construct time".
Knowledge representation
The use of logic to represent procedural knowledge and strategic information was one of the main goals contributing to the early development of logic programming. Moreover, it continues to be an
important feature of the Prolog family of logic programming languages today. However, many applications of logic programming, including Prolog applications, increasingly focus on the use of logic to
represent purely declarative knowledge. These applications include both the representation of general commonsense knowledge and the representation of domain specific expertise.
Commonsense includes knowledge about cause and effect, as formalised, for example, in the situation calculus, event calculus and action languages. Here is a simplified example, which illustrates the main features of such formalisms. The first clause states that a fact holds immediately after an event initiates (or causes) the fact. The second clause is a frame axiom, which states that a fact that holds at a time continues to hold at the next time unless it is terminated by an event that happens at the time. This formulation allows more than one event to occur at the same time:
holds(Fact, Time2) :- happens(Event, Time1), Time2 is Time1 + 1, initiates(Event, Fact).
holds(Fact, Time2) :- happens(Event, Time1), Time2 is Time1 + 1, holds(Fact, Time1), not(terminated(Fact, Time1)).
terminated(Fact, Time) :- happens(Event, Time), terminates(Event, Fact).
Here holds is a meta-predicate, similar to solve above. However, whereas solve has only one argument, which applies to general clauses, the first argument of holds is a fact and the second argument
is a time (or state). The atomic formula holds(Fact, Time) expresses that the Fact holds at the Time. Such time-varying facts are also called fluents. The atomic formula happens(Event, Time)
expresses that the Event happens at the Time.
The following example illustrates how these clauses can be used to reason about causality in a toy blocks world. Here, in the initial state at time 0, a green block is on a table and a red block is stacked on the green block (like a traffic light). At time 0, the red block is moved to the table. At time 1, the green block is moved onto the red block. Moving an object onto a place terminates the fact that the object is on any place, and initiates the fact that the object is on the place to which it is moved:
holds(on(green_block, table), 0).
holds(on(red_block, green_block), 0).
happens(move(red_block, table), 0).
happens(move(green_block, red_block), 1).
initiates(move(Object, Place), on(Object, Place)).
terminates(move(Object, Place2), on(Object, Place1)).
?- holds(Fact, Time).
Fact = on(green_block, table), Time = 0.
Fact = on(red_block, green_block), Time = 0.
Fact = on(green_block, table), Time = 1.
Fact = on(red_block, table), Time = 1.
Fact = on(green_block, red_block), Time = 2.
Fact = on(red_block, table), Time = 2.
Forward reasoning and backward reasoning generate the same answers to the goal holds(Fact, Time). But forward reasoning generates fluents progressively in temporal order, and backward reasoning generates fluents regressively, as in the domain-specific use of regression in the situation calculus.^[47]
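The forward-reasoning reading of the two event-calculus clauses can be sketched in Python. This is an illustration only, not code from the article: the function names and the tuple encodings of fluents and events are my own, and the move-specific effect rules are hard-coded rather than derived from initiates/terminates facts.

```python
# Fluents are tuples ("on", Object, Place); events are tuples (Object, Place)
# standing for move(Object, Place).

def initiates(event):
    # move(Object, Place) initiates on(Object, Place)
    obj, place = event
    return ("on", obj, place)

def terminates(event, fluent):
    # move(Object, Place2) terminates on(Object, Place1) for any Place1
    obj, _ = event
    return fluent[0] == "on" and fluent[1] == obj

def progress(holds, happens, max_time):
    """Compute the set of fluents holding at each time, forward in time."""
    states = {0: set(holds)}
    for t in range(max_time):
        events = happens.get(t, [])
        # frame axiom: a fluent persists unless some event terminates it
        nxt = {f for f in states[t]
               if not any(terminates(e, f) for e in events)}
        # effect axiom: each event initiates a fluent at the next time
        nxt |= {initiates(e) for e in events}
        states[t + 1] = nxt
    return states

states = progress(
    holds={("on", "green_block", "table"), ("on", "red_block", "green_block")},
    happens={0: [("red_block", "table")], 1: [("green_block", "red_block")]},
    max_time=2,
)
# states[2] == {("on", "red_block", "table"), ("on", "green_block", "red_block")}
```

As in the Prolog version, the states are generated progressively in temporal order, which is what makes the forward direction natural for simulation.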
Logic programming has also proved to be useful for representing domain-specific expertise in expert systems.^[48] But human expertise, like general-purpose commonsense, is mostly implicit and tacit,
and it is often difficult to represent such implicit knowledge in explicit rules. This difficulty does not arise, however, when logic programs are used to represent the existing, explicit rules of a
business organisation or legal authority.
For example, here is a representation of a simplified version of the first sentence of the British Nationality Act, which states that a person who is born in the UK becomes a British citizen at the time of birth if a parent of the person is a British citizen at the time of birth:
initiates(birth(Person), citizen(Person, uk)) :-
    time_of(birth(Person), Time),
    place_of(birth(Person), uk),
    parent_child(Another_Person, Person),
    holds(citizen(Another_Person, uk), Time).
Historically, the representation of a large portion of the British Nationality Act as a logic program in the 1980s^[49] was "hugely influential for the development of computational representations of legislation, showing how logic programming enables intuitively appealing representations that can be directly deployed to generate automatic inferences".^[50]
More recently, the PROLEG system,^[51] initiated in 2009 and consisting of approximately 2500 rules and exceptions of civil code and supreme court case rules in Japan, has become possibly the largest
legal rule base in the world.^[52]
Variants and extensions
Prolog
See main article: Prolog.
The SLD resolution rule of inference is neutral about the order in which subgoals in the bodies of clauses can be selected for solution. For the sake of efficiency, Prolog restricts this order to the order in which the subgoals are written. SLD is also neutral about the strategy for searching the space of SLD proofs. Prolog searches this space, top-down, depth-first, trying different clauses for solving the same (sub)goal in the order in which the clauses are written.
This search strategy has the advantage that the current branch of the tree can be represented efficiently by a stack. When a goal clause at the top of the stack is reduced to a new goal clause, the
new goal clause is pushed onto the top of the stack. When the selected subgoal in the goal clause at the top of the stack cannot be solved, the search strategy backtracks, removing the goal clause
from the top of the stack, and retrying the attempted solution of the selected subgoal in the previous goal clause using the next clause that matches the selected subgoal.
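The clause-order, depth-first strategy can be sketched for the propositional case in a few lines of Python. This is my own illustration, not Prolog's implementation: a program maps each proposition to a list of clause bodies, and failure of every clause for a subgoal causes backtracking to the previous choice point (here, simply the previous loop iteration on the call stack).

```python
# A tiny propositional program: a :- b, c.  a :- d.  b.  d.
# "c" has no clauses, so the subgoal c fails.
program = {
    "a": [["b", "c"], ["d"]],
    "b": [[]],
    "c": [],
    "d": [[]],
}

def solve(goals):
    """Depth-first, clause-order search, as in Prolog (propositional case)."""
    if not goals:
        return True                       # empty goal clause: success
    first, rest = goals[0], goals[1:]
    for body in program.get(first, []):   # try clauses in written order
        if solve(body + rest):            # reduce the goal to subgoals
            return True                   # first success is returned
    return False                          # all clauses failed: backtrack

print(solve(["a"]))  # -> True (the first clause fails on c; the second succeeds)
```

The recursion stack here plays the role of the goal-clause stack described above: returning False from a call is exactly the act of popping a goal clause and retrying the next matching clause.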
Backtracking can be restricted by using a subgoal, called cut, written as !, which always succeeds but cannot be backtracked. Cut can be used to improve efficiency, but can also interfere with the logical meaning of clauses. In many cases, the use of cut can be replaced by negation as failure. In fact, negation as failure can be defined in Prolog, by using cut, together with any literal, say fail, that unifies with the head of no clause:
not(P) :- P, !, fail.
not(P).
Prolog provides other features, in addition to cut, that do not have a logical interpretation. These include the built-in predicates assert and retract for destructively updating the state of the
program during program execution.
For example, the toy blocks world example above can be implemented without frame axioms using destructive change of state:
on(green_block, table).
on(red_block, green_block).
move(Object, Place2) :- retract(on(Object, Place1)), assert(on(Object, Place2)).
The sequence of move events and the resulting locations of the blocks can be computed by executing the query:
?- move(red_block, table), move(green_block, red_block), on(Object, Place).
Object = red_block, Place = table.
Object = green_block, Place = red_block.
Various extensions of logic programming have been developed to provide a logical framework for such destructive change of state.^[53] ^[54] ^[55]
The broad range of Prolog applications, both in isolation and in combination with other languages, is highlighted in the Year of Prolog Book,^[21] celebrating the 50-year anniversary of Prolog in 2022.
Prolog has also contributed to the development of other programming languages, including ALF, Fril, Gödel, Mercury, Oz, Ciao, Visual Prolog, XSB, and λProlog.
Constraint logic programming
See main article: Constraint logic programming. Constraint logic programming (CLP) combines Horn clause logic programming with constraint solving. It extends Horn clauses by allowing some predicates,
declared as constraint predicates, to occur as literals in the body of a clause. Constraint predicates are not defined by the facts and rules in the program, but are predefined by some
domain-specific model-theoretic structure or theory.
Procedurally, subgoals whose predicates are defined by the program are solved by goal-reduction, as in ordinary logic programming, but constraints are simplified and checked for satisfiability by a
domain-specific constraint-solver, which implements the semantics of the constraint predicates. An initial problem is solved by reducing it to a satisfiable conjunction of constraints.
Interestingly, the first version of Prolog already included a constraint predicate dif(term1, term2), from Philippe Roussel's 1972 PhD thesis, which succeeds if both of its arguments are different
terms, but which is delayed if either of the terms contains a variable.^[52]
The following constraint logic program represents a toy temporal database of john's history as a teacher:
teaches(john, hardware, T) :- 1990 ≤ T, T < 1999.
teaches(john, software, T) :- 1999 ≤ T, T < 2005.
teaches(john, logic, T) :- 2005 ≤ T, T ≤ 2012.
rank(john, instructor, T) :- 1990 ≤ T, T < 2010.
rank(john, professor, T) :- 2010 ≤ T, T < 2014.
Here ≤ and < are constraint predicates, with their usual intended semantics. The following goal clause queries the database to find out when john both taught logic and was a professor:
?- teaches(john, logic, T), rank(john, professor, T).
The solution 2010 ≤ T, T ≤ 2012 results from simplifying the constraints 2005 ≤ T, T ≤ 2012, 2010 ≤ T, T < 2014.
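The simplification step that a constraint solver performs on these bound constraints amounts to interval intersection. The following Python sketch is my own illustration of that one step, treating T as an integer so that T < 2014 becomes T ≤ 2013; a real CLP solver works on symbolic constraints, not just intervals.

```python
def intersect(a, b):
    """Intersect two closed integer intervals (lo, hi); None if empty."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

teaches_logic = (2005, 2012)   # 2005 <= T, T <= 2012
professor     = (2010, 2013)   # 2010 <= T, T < 2014, i.e. T <= 2013
print(intersect(teaches_logic, professor))  # -> (2010, 2012)
```

The result (2010, 2012) corresponds to the simplified answer constraint 2010 ≤ T, T ≤ 2012 above.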
Constraint logic programming has been used to solve problems in such fields as civil engineering, mechanical engineering, digital circuit verification, automated timetabling, air traffic control, and
finance. It is closely related to abductive logic programming.
Datalog
See main article: Datalog. Datalog is a database definition language, which combines a relational view of data, as in relational databases, with a logical view, as in logic programming.
Relational databases use a relational calculus or relational algebra, with relational operations, such as union, intersection, set difference and cartesian product to specify queries, which access a
database. Datalog uses logical connectives, such as or, and and not in the bodies of rules to define relations as part of the database itself.
It was recognized early in the development of relational databases that recursive queries cannot be expressed in either relational algebra or relational calculus, and that this deficiency can be
remedied by introducing a least-fixed-point operator.^[56] ^[57] In contrast, recursive relations can be defined naturally by rules in logic programs, without the need for any new logical connectives
or operators.
Datalog differs from more general logic programming by having only constants and variables as terms. Moreover, all facts are variable-free, and rules are restricted, so that if they are executed
bottom-up, then the derived facts are also variable-free.
For example, consider the family database:
mother_child(elizabeth, charles).
father_child(charles, william).
father_child(charles, harry).
parent_child(X, Y) :- mother_child(X, Y).
parent_child(X, Y) :- father_child(X, Y).
ancestor_descendant(X, Y) :- parent_child(X, Y).
ancestor_descendant(X, Y) :- ancestor_descendant(X, Z), ancestor_descendant(Z, Y).
Bottom-up execution derives the following set of additional facts and terminates:
parent_child(elizabeth, charles).
parent_child(charles, william).
parent_child(charles, harry).
ancestor_descendant(elizabeth, charles).
ancestor_descendant(charles, william).
ancestor_descendant(charles, harry).
ancestor_descendant(elizabeth, william).
ancestor_descendant(elizabeth, harry).
Top-down execution derives the same answers to the query:
?- ancestor_descendant(X, Y).
But then it goes into an infinite loop. However, top-down execution with tabling gives the same answers and terminates without looping.
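Bottom-up execution of the family database can be sketched as a naive fixed-point loop in Python. This is an illustration under my own encoding (facts as tuples, each rule as a function from the current fact set to derivable heads); real Datalog engines use semi-naive evaluation to avoid re-deriving old facts.

```python
def bottom_up(facts, rules):
    """Apply every rule to the facts derived so far until nothing new appears."""
    facts = set(facts)
    while True:
        new = {head for rule in rules for head in rule(facts)} - facts
        if not new:
            return facts      # least fixed point reached: terminate
        facts |= new

base = {("mother_child", "elizabeth", "charles"),
        ("father_child", "charles", "william"),
        ("father_child", "charles", "harry")}

def parent_rules(facts):
    # parent_child(X, Y) :- mother_child(X, Y).  parent_child(X, Y) :- father_child(X, Y).
    return {("parent_child", x, y) for (p, x, y) in facts
            if p in ("mother_child", "father_child")}

def ancestor_base(facts):
    # ancestor_descendant(X, Y) :- parent_child(X, Y).
    return {("ancestor_descendant", x, y) for (p, x, y) in facts
            if p == "parent_child"}

def ancestor_trans(facts):
    # ancestor_descendant(X, Y) :- ancestor_descendant(X, Z), ancestor_descendant(Z, Y).
    anc = {(x, y) for (p, x, y) in facts if p == "ancestor_descendant"}
    return {("ancestor_descendant", x, y2)
            for (x, y) in anc for (z, y2) in anc if y == z}

derived = bottom_up(base, [parent_rules, ancestor_base, ancestor_trans])
```

Because the facts are variable-free and the constants are finite, the fact set cannot grow forever, so the loop terminates, in contrast to untabled top-down execution of the recursive rule.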
Answer set programming
See main article: Answer Set Programming.
Like Datalog, answer set programming (ASP) is not Turing-complete. Moreover, instead of separating goals (or queries) from the program to be used in solving the goals, ASP treats the whole program as a goal, and solves the goal by generating a stable model that makes the goal true. For this purpose, it uses the stable model semantics, according to which a logic program can have zero, one or more intended models. For example, the following program represents a degenerate variant of the map colouring problem of colouring two countries red or green:
country(oz).
country(iz).
adjacent(oz, iz).
colour(C, red) :- country(C), not(colour(C, green)).
colour(C, green) :- country(C), not(colour(C, red)).
The problem has four solutions represented by four stable models:
country(oz). country(iz). adjacent(oz, iz). colour(oz, red). colour(iz, red).
country(oz). country(iz). adjacent(oz, iz). colour(oz, green). colour(iz, green).
country(oz). country(iz). adjacent(oz, iz). colour(oz, red). colour(iz, green).
country(oz). country(iz). adjacent(oz, iz). colour(oz, green). colour(iz, red).
To represent the standard version of the map colouring problem, we need to add a constraint that two adjacent countries cannot be coloured the same colour. In ASP, this constraint can be written as a
clause of the form:
:- country(C1), country(C2), adjacent(C1, C2), colour(C1, X), colour(C2, X).
With the addition of this constraint, the problem now has only two solutions:
country(oz). country(iz). adjacent(oz, iz). colour(oz, red). colour(iz, green).
country(oz). country(iz). adjacent(oz, iz). colour(oz, green). colour(iz, red).
The addition of constraints of the form :- Body. eliminates models in which Body is true.
Confusingly, constraints in ASP are different from constraints in CLP. Constraints in CLP are predicates that qualify answers to queries (and solutions of goals). Constraints in ASP are clauses that
eliminate models that would otherwise satisfy goals. Constraints in ASP are like integrity constraints in databases.
This combination of ordinary logic programming clauses and constraint clauses illustrates the generate-and-test methodology of problem solving in ASP: The ordinary clauses define a search space of
possible solutions, and the constraints filter out unwanted solutions.^[58]
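The generate-and-test reading of the map colouring program can be sketched directly in Python. This is an illustration of the methodology, not of how an ASP solver works internally: it brute-forces every assignment of colours to countries (the search space defined by the ordinary clauses) and filters out those violating the adjacency constraint.

```python
from itertools import product

countries = ["oz", "iz"]
adjacent = [("oz", "iz")]
colours = ["red", "green"]

# Generate: every total assignment of a colour to each country.
models = [dict(zip(countries, assignment))
          for assignment in product(colours, repeat=len(countries))]

# Test: the constraint  :- adjacent(C1, C2), colour(C1, X), colour(C2, X).
# eliminates models in which two adjacent countries share a colour.
solutions = [m for m in models
             if not any(m[c1] == m[c2] for (c1, c2) in adjacent)]

print(solutions)  # -> [{'oz': 'red', 'iz': 'green'}, {'oz': 'green', 'iz': 'red'}]
```

The two surviving assignments correspond to the two stable models listed above.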
Most implementations of ASP proceed in two steps: First they instantiate the program in all possible ways, reducing it to a propositional logic program (known as grounding). Then they apply a
propositional logic problem solver, such as the DPLL algorithm or a Boolean SAT solver. However, some implementations, such as s(CASP)^[59] use a goal-directed, top-down, SLD resolution-like procedure without grounding.
Abductive logic programming
See main article: Abductive logic programming. Abductive logic programming^[60] (ALP), like CLP, extends normal logic programming by allowing the bodies of clauses to contain literals whose
predicates are not defined by clauses. In ALP, these predicates are declared as abducible (or assumable), and are used as in abductive reasoning to explain observations, or more generally to add new
facts to the program (as assumptions) to solve goals.
For example, suppose we are given an initial state in which a red block is on a green block on a table at time 0:
holds(on(green_block, table), 0).
holds(on(red_block, green_block), 0).
Suppose we are also given the goal:
?- holds(on(green_block, red_block), 3), holds(on(red_block, table), 3).
The goal can represent an observation, in which case a solution is an explanation of the observation. Or the goal can represent a desired future state of affairs, in which case a solution is a plan
for achieving the goal.^[61]
We can use the rules for cause and effect presented earlier to solve the goal, by treating the happens predicate as abducible:
holds(Fact, Time2) :- happens(Event, Time1), Time2 is Time1 + 1, initiates(Event, Fact).
holds(Fact, Time2) :- happens(Event, Time1), Time2 is Time1 + 1, holds(Fact, Time1), not(terminated(Fact, Time1)).
terminated(Fact, Time) :- happens(Event, Time), terminates(Event, Fact).
initiates(move(Object, Place), on(Object, Place)).
terminates(move(Object, Place2), on(Object, Place1)).
ALP solves the goal by reasoning backwards and adding assumptions to the program, to solve abducible subgoals. In this case there are many alternative solutions, including:
happens(move(red_block, table), 0). happens(tick, 1). happens(move(green_block, red_block), 2).
happens(tick, 0). happens(move(red_block, table), 1). happens(move(green_block, red_block), 2).
happens(move(red_block, table), 0). happens(move(green_block, red_block), 1). happens(tick, 2).
Here tick is an event that marks the passage of time without initiating or terminating any fluents.
There are also solutions in which the two move events happen at the same time. For example:
happens(move(red_block, table), 0). happens(move(green_block, red_block), 0). happens(tick, 1). happens(tick, 2).
Such solutions, if not desired, can be removed by adding an integrity constraint, which is like a constraint clause in ASP:
:- happens(move(Block1, Place), Time), happens(move(Block2, Block1), Time).
Abductive logic programming has been used for fault diagnosis, planning, natural language processing and machine learning. It has also been used to interpret negation as failure as a form of
abductive reasoning.^[62]
Inductive logic programming
See main article: Inductive logic programming.
Inductive logic programming (ILP) is an approach to machine learning that induces logic programs as hypothetical generalisations of positive and negative examples. Given a logic program representing
background knowledge and positive examples together with constraints representing negative examples, an ILP system induces a logic program that generalises the positive examples while excluding the
negative examples.
ILP is similar to ALP, in that both can be viewed as generating hypotheses to explain observations, and as employing constraints to exclude undesirable hypotheses. But in ALP the hypotheses are
variable-free facts, and in ILP the hypotheses are general rules.^[63] ^[64]
For example, given only background knowledge of the mother_child and father_child relations, and suitable examples of the grandparent_child relation, current ILP systems can generate the definition
of grandparent_child, inventing an auxiliary predicate, which can be interpreted as the parent_child relation:^[65]
grandparent_child(X, Y) :- auxiliary(X, Z), auxiliary(Z, Y).
auxiliary(X, Y) :- mother_child(X, Y).
auxiliary(X, Y) :- father_child(X, Y).
Stuart Russell^[66] has referred to such invention of new concepts as the most important step needed for reaching human-level AI.
Recent work in ILP, combining logic programming, learning and probability, has given rise to the fields of statistical relational learning and probabilistic inductive logic programming.
Concurrent logic programming
See main article: Concurrent logic programming. Concurrent logic programming integrates concepts of logic programming with concurrent programming. Its development was given a big impetus in the 1980s
by its choice for the systems programming language of the Japanese Fifth Generation Project (FGCS).^[67]
A concurrent logic program is a set of guarded Horn clauses of the form:
H :- G1, ..., Gn | B1, ..., Bn.
The conjunction G1, ..., Gn is called the guard of the clause, and | is the commitment operator. Declaratively, guarded Horn clauses are read as ordinary logical implications:
H if G1 and ... and Gn and B1 and ... and Bn.
However, procedurally, when there are several clauses whose heads H match a given goal, then all of the clauses are executed in parallel, checking whether their guards G1, ..., Gn hold. If the guards of more than one clause hold, then a committed choice is made to one of the clauses, and execution proceeds with the subgoals B1, ..., Bn of the chosen clause. These subgoals can also be executed in parallel. Thus concurrent logic programming implements a form of "don't care nondeterminism", rather than "don't know nondeterminism".
For example, the following concurrent logic program defines a predicate shuffle(Left, Right, Merge), which can be used to shuffle two lists Left and Right, combining them into a single list Merge
that preserves the ordering of the two lists Left and Right:
shuffle([], [], []).
shuffle(Left, Right, Merge) :- Left = [First | Rest] | Merge = [First | ShortMerge], shuffle(Rest, Right, ShortMerge).
shuffle(Left, Right, Merge) :- Right = [First | Rest] | Merge = [First | ShortMerge], shuffle(Left, Rest, ShortMerge).
Here, [] represents the empty list, and [Head | Tail] represents a list with first element Head followed by list Tail, as in Prolog. (Notice that the first occurrence of | in the second and third clauses is the list constructor, whereas the second occurrence of | is the commitment operator.) The program can be used, for example, to shuffle the lists [ace, queen, king] and [1, 4, 2] by invoking the goal clause:
shuffle([ace, queen, king], [1, 4, 2], Merge).
The program will non-deterministically generate a single solution, for example Merge = [ace, queen, 1, king, 4, 2].
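The ordering property that shuffle preserves can be sketched in Python by enumerating every order-preserving merge of the two lists. This is my own illustration and deliberately differs from committed-choice execution: a concurrent logic program commits to a single one of these merges, whereas the generator below exhibits the whole "don't know" space.

```python
def shuffles(left, right):
    """Yield every merge of left and right that preserves the order of each."""
    if not left:
        yield list(right)
    elif not right:
        yield list(left)
    else:
        # take the head of the left list first, or the head of the right list
        for rest in shuffles(left[1:], right):
            yield [left[0]] + rest
        for rest in shuffles(left, right[1:]):
            yield [right[0]] + rest

merges = list(shuffles(["ace", "queen", "king"], [1, 4, 2]))
# The solution in the text is one of the C(6, 3) = 20 possible merges:
# ["ace", "queen", 1, "king", 4, 2]
```

Picking any single element of merges corresponds to one committed-choice run of the concurrent program.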
Carl Hewitt has argued^[68] that, because of the indeterminacy of concurrent computation, concurrent logic programming cannot implement general concurrency. However, according to the logical
semantics, any result of a computation of a concurrent logic program is a logical consequence of the program, even though not all logical consequences can be derived.
Concurrent constraint logic programming
See main article: Concurrent constraint logic programming.
Concurrent constraint logic programming^[69] combines concurrent logic programming and constraint logic programming, using constraints to control concurrency. A clause can contain a guard, which is a
set of constraints that may block the applicability of the clause. When the guards of several clauses are satisfied, concurrent constraint logic programming makes a committed choice to use only one.
Higher-order logic programming
Several researchers have extended logic programming with higher-order programming features derived from higher-order logic, such as predicate variables. Such languages include the Prolog extensions
HiLog^[70] and λProlog.^[71]
Linear logic programming
Basing logic programming within linear logic has resulted in the design of logic programming languages that are considerably more expressive than those based on classical logic. Horn clause programs
can only represent state change by the change in arguments to predicates. In linear logic programming, one can use the ambient linear logic to support state change. Some early designs of logic
programming languages based on linear logic include LO,^[72] Lolli,^[73] ACL,^[74] and Forum.^[75] Forum provides a goal-directed interpretation of all linear logic.
Object-oriented logic programming
F-logic^[76] extends logic programming with objects and the frame syntax.
Logtalk^[77] extends the Prolog programming language with support for objects, protocols, and other OOP concepts. It supports most standard-compliant Prolog systems as backend compilers.
Transaction logic programming
Transaction logic^[53] is an extension of logic programming with a logical theory of state-modifying updates. It has both a model-theoretic semantics and a procedural one. An implementation of a
subset of Transaction logic is available in the Flora-2^[78] system. Other prototypes are also available.
See also
General introductions
Other sources
Further reading
External links
Notes and References
1. Tärnlund . S.Å. . 1977 . Horn clause computability . . 17 . 2 . 215–226 . 10.1007/BF01932293. 32577496 .
2. Andréka . H. . Németi . I. . 1978 . The generalised completeness of Horn predicate-logic as a programming language . Acta Cybernetica . 4 . 1 . 3–10.
3. Cordell. Green. Application of Theorem Proving to Problem Solving. IJCAI 1969.
4. J.M.. Foster. E.W.. Elcock. ABSYS 1: An Incremental Compiler for Assertions: an Introduction. Fourth Annual Machine Intelligence Workshop. Machine Intelligence. 4. Edinburgh, UK. Edinburgh
University Press. 1969. 423–429.
5. 10.1145/35043.35046 . Kowalski . R. A. . The early years of logic programming . Communications of the ACM . 31 . 38–43 . 1988 . 12259230 .
6. Carl. Hewitt. Carl Hewitt. Planner: A Language for Proving Theorems in Robots. IJCAI 1969.
7. Terry. Winograd. Terry Winograd. Understanding natural language. Cognitive Psychology. 3. 1. 1972. 1–191. 10.1016/0010-0285(72)90002-3.
8. Jeff Rulifson . Jeff Rulifson . Jan Derksen . Richard Waldinger . QA4, A Procedural Calculus for Intuitive Reasoning . SRI AI Center Technical Note 73 . November 1973 .
9. Davies, J.M., 1971. POPLER: a POP-2 planner. Edinburgh University, Department of Machine Intelligence and Perception.
10. McDermott . D.V. . Drew McDermott . Sussman . G.J. . Gerald Jay Sussman . May 1972 . The Conniver reference manual . Artificial Intelligence Memo No. 259 .
11. Reboh . R. . Sacerdoti . E.D. . August 1973 . A preliminary QLISP manual . Artificial Intelligence Center, SRI International .
12. Kornfeld . W.A. . Hewitt . C.E. . Carl Hewitt . 1981 . The scientific community metaphor . IEEE Transactions on Systems, Man, and Cybernetics . 11 . 1 . 24–33. 10.1109/TSMC.1981.4308575 . 1721.1/
5693 . 1322857 . free .
13. Pat. Hayes. Computation and Deduction. Proceedings of the 2nd MFCS Symposium. Czechoslovak Academy of Sciences. 1973. 105–118.
Find the vertex for the graph of each quadratic function. $$y=x^{2}-x+1$$
Short Answer
The vertex is \( (\frac{1}{2}, \frac{3}{4}) \).
Step by step solution
Identify the coefficients
The quadratic function is given in standard form: \( y = ax^2 + bx + c \). Here, the coefficients are \( a = 1 \), \( b = -1 \), and \( c = 1 \).
Use the vertex formula for x-coordinate
The formula for the x-coordinate of the vertex is \( x = \frac{-b}{2a} \). Plugging in the values of a and b: \( x = \frac{-(-1)}{2(1)} = \frac{1}{2} \)
Find the y-coordinate of the vertex
Substitute \( x = \frac{1}{2} \) back into the original equation to find y: \( y = \left( \frac{1}{2} \right)^2 - \frac{1}{2} + 1 = \frac{1}{4} - \frac{1}{2} + 1 = \frac{1}{4} - \frac{2}{4} + \frac{4}{4} = \frac{3}{4} \)
Write the vertex coordinates
The vertex of the quadratic function \( y = x^2 - x + 1 \) is at \( (\frac{1}{2}, \frac{3}{4}) \)
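The step-by-step solution above can be double-checked with a short script. This is an illustrative sketch only — the helper name `vertex` is mine, not part of the textbook exercise:

```python
def vertex(a, b, c):
    """Vertex (x, y) of y = a*x**2 + b*x + c, using x = -b / (2a)."""
    x = -b / (2 * a)
    y = a * x**2 + b * x + c
    return x, y

# y = x^2 - x + 1  ->  vertex at (1/2, 3/4)
print(vertex(1, -1, 1))  # -> (0.5, 0.75)
```

The same helper works for any quadratic with a ≠ 0, since the formula x = -b/(2a) does not depend on c.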
Key Concepts
These are the key concepts you need to understand to accurately answer the question.
quadratic function
A quadratic function is a type of polynomial function that is characterized by the presence of a squared term. The general form of a quadratic function is given by:
y = ax^2 + bx + c
* a, b, and c are constants (with a ≠ 0)
* x is the variable
The term 'ax^2' makes this function quadratic. These functions create a parabola when graphed. Parabolas can open upwards or downwards depending on the sign of the coefficient a.
If a is positive, the parabola opens upwards, resembling a U-shape. Conversely, if a is negative, the parabola opens downwards, making a ∩-shape. Quadratic functions are fundamental in algebra and
appear frequently in various mathematical and real-life applications.
Now, let's learn how to find the vertex, which is a crucial point on the graph of a quadratic function.
vertex formula
The vertex of a quadratic function is the highest or lowest point on its graph. To find the vertex, you can use the vertex formula. The vertex formula finds the coordinates (x, y) of the vertex:
For the quadratic function y = ax^2 + bx + c, the x-coordinate of the vertex is given by:
\( x = \frac{-b}{2a} \)
Here, b and a are the coefficients from the quadratic equation. Once you have the x-coordinate, you can find the y-coordinate by substituting the x-value back into the original quadratic equation.
For example, in the quadratic function y = x^2 - x + 1, the coefficients are:
* a = 1
* b = -1
* c = 1
Using the vertex formula:
\( x = \frac{-(-1)}{2(1)} = \frac{1}{2} \)
To find the y-coordinate, substitute x = 1/2 back into the original equation:
\( y = (\frac{1}{2})^2 - \frac{1}{2} + 1 \)
\( y = \frac{1}{4} - \frac{2}{4} + \frac{4}{4} \)
\( y = \frac{3}{4} \)
Thus, the vertex of y = x^2 - x + 1 is at \( \left( \frac{1}{2}, \frac{3}{4} \right) \). The vertex formulas provide a simple, direct way to locate the vertex quickly.
graphing quadratics
Graphing quadratic functions is an essential skill in algebra. The graph of a quadratic function is always a parabola. Here's how you can graph a quadratic function in three steps:
1. **Find the vertex:** Use the vertex formula as explained in the previous section. For y = x^2 - x + 1, the vertex is \( \left(\frac{1}{2}, \frac{3}{4}\right) \).
2. **Determine the axis of symmetry:** The axis of symmetry is a vertical line that passes through the vertex. For y = x^2 - x + 1, the axis of symmetry is x = 1/2. This line helps to plot
symmetrical points on the graph around this axis.
3. **Plot additional points:** Choose x-values on either side of the axis of symmetry and calculate the corresponding y-values to get more points. For instance, if x = 0 and x = 1:
* For x = 0, y = (0)^2 - 0 + 1 = 1.
* For x = 1, y = (1)^2 - 1 + 1 = 1.
Now, plot these points on the graph and draw a smooth curve through them, ensuring it passes through the vertex.
By following these steps, you can accurately graph any quadratic function and understand its shape and properties.
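The three graphing steps above can be verified numerically. A small sketch (variable names are mine) that confirms the axis of symmetry x = 1/2 and the equal y-values at x = 0 and x = 1:

```python
a, b, c = 1, -1, 1          # y = x^2 - x + 1

def f(x):
    return a * x**2 + b * x + c

axis = -b / (2 * a)         # axis of symmetry: x = 0.5
# Points equidistant from the axis must have equal y-values.
assert f(0) == f(1) == 1
assert all(abs(f(axis - d) - f(axis + d)) < 1e-12 for d in (0.5, 1.0, 1.5))
```

Because every pair of mirrored points checks out, plotting a few x-values on either side of x = 1/2 and joining them with a smooth curve reproduces the parabola.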
Range Estimation Formula (Army)
The 27.77 is a "constant"; as such, it will never change. The basic mil relation formula goes like this:

Size of target (units of measure) × 1000 / Size of target (mils) = Range (units of measure)

More importantly to us, a milliradian represents 1 unit of measure at 1000 units of measure, so any object of known size — the side of a building, a tree, a window, a traffic sign — can be used for ranging. To range a target to a tenth of a mil we need to build a stable position, just as if we were going to send a shot downrange. Unfortunately, getting much more accurate than 0.1 mil is very difficult, and usually not worth the effort.

The same idea works in minutes of angle:

Range (yards) × Size (MOA) / 95.5 = Size (inches)

Of note, 34.38 can (and often is) rounded to 34.4. For a man-sized target 2 yards tall that we mil at 3.4 mils, we plug the numbers into our formula and get:

2 yards × 1000 / 3.4 mils ≈ 588 yards

The vehicle commander must be aware that light, weather, and terrain conditions can make a target look nearer or farther than it is. The driver and gunner can also use this method to determine ranges to close-in targets, and since the distance on most maps is marked in meters, a commander who constantly tracks his own location can rapidly determine the location of a target vehicle using six-digit grid coordinates. Too thick of a reticle will obscure too much of the target at high magnification.

On the logistics side of estimation, unit replenishment from the ammunition transfer and holding point to each battalion's units is accomplished through expenditure reports, and each module will dictate the national stock number, nomenclature, quantity, and unit of issue for a given defensive combat configured load (CCL). Multiplying the number of aircraft by the gallons per hour and the air hours allows planners to compute the estimated fuel needed. The final estimate under the triangular method was 23, compared to 23.5 using the PERT method.
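The mil and MOA relations above can be sketched in Python. This is an illustrative sketch only — the function names are mine, and the 2-yard/3.4-mil target reuses the worked example's numbers rather than anything from an Army manual:

```python
def range_from_mils(target_size, mils):
    """Range in the same units as target_size: size * 1000 / mils."""
    return target_size * 1000 / mils

def apparent_size_inches(range_yards, size_moa):
    """Target size in inches from range (yards) and angular size (MOA):
    range * MOA / 95.5, since 1 MOA subtends about 1.047 in at 100 yd."""
    return range_yards * size_moa / 95.5

# A 2-yard-tall man milled at 3.4 mils is roughly 588 yards away.
print(round(range_from_mils(2, 3.4)))  # -> 588
```

Because the mil relation is unit-agnostic, the target size and the returned range come out in the same units — yards in, yards out; meters in, meters out.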
Hopefully your scope came with literature showing the scale of the different portions of your reticle; using the horizontal ranging bracket's 0.1 mil fine hash marks, we can measure shoulder width, or quickly and accurately mil a standing man at 3.4 mils tall. The target width W comes from Table 4-1, Mil Relation for Various Targets, or other vehicle identification aids (GTA 17-2-13 or FM 23-1), and is expressed in meters — for example, a BMP is 6.75 meters long. Many long-range paper targets are 21" × 21", so we can use this known size to estimate range. Modern-day range estimation has become a lot easier for the common man thanks to technology, but when sat behind a real rifle it still pays to judge distance with your own eyes, as former Army Delta operator John McPhee demonstrates out to 600 yards. The map method should be used during planning by predetermining the location of engagement areas and suspected enemy positions, providing the vehicle commander reference points to determine range.

For logistics planners, forecasting requirements begins during mission analysis and is the most important mental process in planning. Weapon density, the number of personnel, and specific mission requirements determine the ammunition requirements. Planners forecast meals, such as meals ready-to-eat (MREs) and unitized group rations (UGRs), and water to sustain the force; the brigade support operations section and brigade and battalion S-4s can calculate available water capabilities based on asset availability to understand the maximum water capability of each unit, and planners must be cognizant of where a unit's assault or containerized kitchen is located in relation to the forward line of troops. Historical data is valuable only when an operation has matured enough for the data to be applicable to the situation.

For three-point estimates, PERT combines probability theory and statistics to derive a formula for the average activity duration from the optimistic, most likely, and pessimistic times, where the pessimistic time (P) is the longest amount of time given to finish the activity. Notice that the resulting estimate is pulled a little toward the far extreme of the pessimistic estimate, but not by much.
Mathematics teacher educators have suggested that approximations of practice provide preservice mathematics teachers (PMTs) with opportunities to engage with, develop, and demonstrate subdomains of
Mathematical Knowledge for Teaching ([MKT], Ball et al., 2008) because MKT provides a way for PMTs to understand how to contextualize their discipline-specific content knowledge for effective
mathematics teaching and learning. However, the affordances and limitations of commonly used forms of approximations of practice (i.e., lesson planning and peer teaching) coupled with reflective
practices to engage PMTs in subdomains of MKT are still being explored. In this study, I investigated how lesson planning, peer teaching, and associated reflections individually and collectively
afforded opportunities for PMTs to demonstrate and develop the MKT subdomains. Eleven PMTs enrolled in a secondary mathematics methods course at a large Midwestern University participated in the
study. My dissertation comprises three sub-studies (Sub-study “1”, “2”, and “3”), and I produced three manuscripts to individually report findings from those sub-studies. I investigated how lesson
planning, peer teaching, and reflections afforded opportunities for PMTs to demonstrate and describe MKT subdomains in Sub-studies 1, 2, and 3, respectively. The findings across the sub-studies
suggested that several MKT subdomains (e.g., Knowledge of Content and Teaching, Knowledge of Content and Students) were evidenced in the PMTs’ planned teacher and student actions (e.g., selecting
mathematical tasks, formulating and sequencing questions), and in-the-moment actions and decisions (e.g., mathematically representing students’ responses, implementing mathematical tasks). Several
aspects of MKT subdomains (e.g., evaluate the diagnostic potential of tasks) were strongly evidenced only in the PMTs’ lesson plans whereas other aspects (e.g., modifying tasks based on students’
responses) were evidenced only in peer teaching. These findings suggested that various forms of approximations of practice (planned and enacted actions) created unique opportunities for the PMTs to
engage with and demonstrate MKT. I also found that the PMTs reflected on some subdomains of MKT that were not evidenced in their approximated practices, indicating that how PMTs describe the MKT
subdomains is not entirely a result of what subdomains they engage in during approximations of practice. My findings also revealed limitations of using approximations of practice to engage PMTs with
MKT subdomains. The MKT subdomains that required the PMTs to think about students’ alternative mathematical concepts, big mathematical ideas, and non-standard mathematics problem-solving strategies
were least evidenced across the approximations of practice and reflections. These findings have two primary implications for mathematics teacher educators. First, I invite mathematics teacher
educators to engage PMTs in multiple forms of approximations of practice to optimize their opportunities to engage with, demonstrate, and develop the MKT subdomains. Second, I suggest potential
instructional activities (e.g., inviting PMTs to reflect on their roles as students and teachers during peer teaching) that could be incorporated into approximations of practice to address the
existing limitations. Broadly, I invite mathematics teacher educators to design instructional activities at the intersection of mathematics content and pedagogy, collaborating with colleagues to
enhance these opportunities across programs.
• Curriculum and Instruction
Advisor/Supervisor/Committee Chair
Jill Newton
Additional Committee Member 2
Lynn Bryan
Additional Committee Member 3
Rachael Kenney
Additional Committee Member 4
Hala Ghousseini
Liberty PSYC 354 SPSS Homework 2 (2015) – Frequency Tables and Graphs
Homework 2
Frequency Tables and Graphs
Be sure you have reviewed this module/week’s lessons and presentations along with the practice data analysis before proceeding to the homework exercises. Complete all analyses in SPSS, then copy and
paste your output and graphs into your homework document file. Answer any written questions (such as the text-based questions or the APA Participants section) in the appropriate place within the same file.
Part I: Concepts
All Questions
These questions are based on the Nolan and Heinzen reading and end-of-chapter questions.
Use the following table to answer Question 1. This table depicts the scores of 83 students on an exam worth 65 points.
1) Use the information in the table to determine the percentages for each interval.
Table: Grouped Frequency Table
Exam score
Percentages for each Interval
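SPSS computes these percentages for you, but the arithmetic is simple. Here is a sketch; the interval counts below are invented placeholders — the real frequencies come from the textbook's table of 83 exam scores:

```python
# Hypothetical frequencies for illustration only (they sum to 83,
# matching the number of students, but are not the real table values).
freqs = {"60-65": 5, "50-59": 20, "40-49": 30, "30-39": 18, "20-29": 10}

n = sum(freqs.values())
percentages = {interval: round(100 * count / n, 1)
               for interval, count in freqs.items()}
print(n, percentages["40-49"])  # -> 83 36.1
```

Each interval's percentage is its frequency divided by the total count, times 100; the rounded percentages should sum to roughly 100.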
2) When constructing a histogram and labeling the x- and y-axes, the lowest number on each axis should ideally be ________ .
3) A frequency distribution that is bell-shaped, symmetrical, and unimodal is ______ _____ .
4) A frequency distribution that has a tail trailing off to the right of the distribution is ______ ______ .
5) A frequency distribution of ages of residents at a senior citizen home is clustered around 83 with a long tail to the left. This distribution is _ ___________ .
6) When a variable cannot take on values above a certain level, this is known as a(n) ___ ____ effect.
7) A grouped frequency table has the following intervals: 30–44, 45–59, and 60–74.
If converted into a histogram, what would the midpoints be?
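Since a midpoint is simply the average of an interval's endpoints, the answer to Question 7 can be checked in one line:

```python
intervals = [(30, 44), (45, 59), (60, 74)]
midpoints = [(lo + hi) / 2 for lo, hi in intervals]
print(midpoints)  # -> [37.0, 52.0, 67.0]
```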
8) Do the data below show a linear relation, non-linear relation, or no relation at all?
9) Do the data below show a linear relation, non-linear relation, or no relation at all?
Part I:
Questions 10a-10e
· At this website , (http://projects.newyorker.com/story/subway/) you will find an interactive graph concerning New York City’s geography and income.
· Read the introduction and click on different “subway lines” to see how the interactive graph works.
· Note that the abbreviations stand for the four different boroughs:
§ MAN = Manhattan
§ BRX = Bronx
§ BRK = Brooklyn
§ QNS = Queens
· Also note that hovering your mouse over the dots on the graph displays the subway stop and the median income of households in that area.
10-a) In which of the four boroughs is the median household income highest? (This is made evident as you click on the different lines.)
10-b) Click on the “A” line. Does the line graph for Manhattan show high or low variability? What does this mean in terms of household income in this area of Manhattan?
10-c) Click on line 2. Which borough (not a street) shows the least variability in median household income?
10-d) On line 2, find the following two subway stops: Park Place (the first of the highest Manhattan stops) and E 180 St. (one of the lowest Bronx stops, located about halfway across the BRX
section). What is the difference (calculate) between the median household incomes of the two areas?
10-e) Click on the “D” line. Which subway stop in Brooklyn seems to be an outlier?
Part II: SPSS Analysis
Green and Salkind, Lesson 20
· Open the “Lesson 20 Exercise File 1” document (found in the course’s Assignment Instructions folder) in order to complete these exercises.
· Always use the Blackboard files instead of the files on the Green and Salkind website as some files have been modified for the purposes of this course.
· Reminder: For Exercise 1, be sure to paste in the SPSS output and write out the answers for A, B, and C beneath it.
Part II:
Questions 1-4
· Ann wants to describe the demographic characteristics of a sample of 25 individuals who completed a large-scale survey.
· She has demographic data on the participants’:
§ Gender (two categories)
§ Educational level (four categories)
§ Marital status (three categories)
§ Community population size (eight categories).
Questions 1a-1c
1) Conduct a frequency analysis on the gender and marital status variables. From the output, identify the following:
a. Percentage of men
b. Mode for marital status
c. Frequency of divorced people in the sample
Answer- Table- Gender: (paste Table in this cell)
Answer- Table- Marital Status: (paste Table in this cell)
1-a) Percentage of men:
1-b) Mode for marital status:
1-c) Frequency of divorced people in the sample:
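Outside SPSS, the same frequency analysis can be sketched with Python's standard library. The demographic values below are invented stand-ins for Ann's 25-person data file, chosen only to make the three answers concrete:

```python
from collections import Counter

# Hypothetical data: 25 participants (not Ann's real file).
gender = ["M"] * 11 + ["F"] * 14
marital = ["married"] * 12 + ["single"] * 9 + ["divorced"] * 4

gender_pct = {g: 100 * cnt / len(gender) for g, cnt in Counter(gender).items()}
marital_counts = Counter(marital)

print(gender_pct["M"])                      # -> 44.0  (percentage of men)
print(marital_counts.most_common(1)[0][0])  # -> married  (mode)
print(marital_counts["divorced"])           # -> 4  (frequency of divorced)
```

`Counter` reproduces what SPSS's Frequencies procedure reports: raw counts, from which percentages and the mode follow directly.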
Questions 2-4
2) Create a frequency table to summarize the data on the educational level variable. Answer- Table- Education Level: (paste Table in this cell)
3) Create a bar chart to summarize the data from the community population variable. Answer- Bar chart- Community Population: (paste Bar chart in this cell)
4) Write a Participants section describing the participants in Ann’s sample.
Part II:
Questions 5-7
Open the “Lesson 20 Exercise File 2” document (found in the course’s Assignment Instructions folder) in order to complete these exercises.
· Julie asks 50 men and 50 women to indicate what type of books they typically read for pleasure. She codes the responses into 10 categories: drama, mysteries, romance, historical nonfiction, travel
books, children’s books, poetry, autobiographies, political science, and local interest books.
· She also asks the participants how many books they read in a month. She categorizes their responses into four categories: nonreaders (no books a month), light readers (1—2 books a month), moderate
readers (3—5 books a month), and heavy readers (more than 5 books a month).
· Julie’s SPSS data file contains two variables: book (a 10-category variable for the type of books read) and reader (a 4-category variable indicating the number of books read per month).
5) Create a table to summarize the types of books that people report reading. Answer- Table- Book Type:
(paste Table in this cell)
6) Create a pie chart to describe how many books per month Julie’s sample reads. Answer- Pie Chart- Books Read per Month:
(paste Pie Chart in this cell)
7) Write a Participants section describing your results on the three variables.
Part III: SPSS Data Entry and Analysis
The steps will be the same in Part III as the ones you have been practicing in Part I of the assignment; the only difference is that you are now responsible for creating the data file as well.
Remember to do the following:
· Name and define your variables under the “Variable View,” then return to the “Data View” to enter the data; and
· Paste all SPSS output and graphs into your homework file at the appropriate place.
Part III:
Questions 1a-1c
· This question is based on the data in the end-of-chapter Question 2.30 of the Nolan and Heinzen textbook.
· Create a variable called “num_years” in a new SPSS file.
· Enter the data given in #2.30.
o Remember to enter the data into 1 column (variable).
1-a) Run a frequencies analysis that includes descriptive statistics for these scores (central tendency, dispersion, and distribution) and create a frequency table in SPSS for these data. Answer-
Descriptive Statistics Table – Number of Years:
(paste Table in this cell)
Answer- Frequency statistics Table- Number of Years:
(paste Table in this cell)
1-b) Create a histogram for these data. Answer- Histogram – Number of Years:
(paste Figure in this cell)
1-c) How many schools have an average completion time of 8 years or less?
An average completion time of 10 years or more?
Answer: 29
Answer: 8
Liberty PSYC 354 SPSS Homework 2 (2015) – Frequency Tables and Graphs
What is the Bradford Factor? - SkyHR
What is the Bradford Factor?
The Bradford Factor is a way to measure how much unplanned time off an employee has had, and how much of an impact it had.
The basic idea behind the Bradford Factor is that it is disruptive to your company if an employee has lots of short unplanned time off. It is less disruptive if the employee has fewer unplanned
absences, even if the total time is longer.
The reason for this, is that no matter how much time off is had, every time an employee has some unplanned time off, the company has to adjust to function without that employee doing their bit. This
adjustment takes time and effort so the more often it happens, the more overall impact.
The Bradford Factor is an attempt to summarise the impact of unplanned time off into a single score: The Bradford Factor Score.
Once you have calculated the Bradford Factor Score for an employee, you can then get an idea of how disruptive their unplanned time off has been.
Note: Most HR software systems can automatically tell you your Bradford Factor score.
How do you calculate the Bradford Factor Score?
To calculate the Bradford Factor Score for an employee you use the following simple formula:
BFS = S x S x D
In this formula, the following variables are used:
• BFS is the resulting Bradford Factor Score
• S is the number of separate unplanned absences by the employee in the last 12 months
• D is the total number of days the employee has had of unplanned absence in the last 12 months
Examples of calculating the Bradford Factor Score
We will take 4 examples to show how to calculate the score. This will help show you how the Bradford Factor really highlights the number of times an employee has unplanned time off, rather than the
total time they have been off.
Example 1: Employee is off for a total of 6 days, spread across 2 different unplanned absences.
2 x 2 x 6 = Bradford Factor Score of 24
Example 2: Employee is off for a total of 6 days, spread across 4 different unplanned absences.
4 x 4 x 6 = Bradford Factor Score of 96
Example 3: Employee is off for a total of 6 days, spread across 6 different unplanned absences.
6 x 6 x 6 = Bradford Factor Score of 216
Example 4: Employee is off for 12 days, all in a single time off.
1 x 1 x 12 = Bradford Factor Score of 12
Let’s look at these examples in a simple table to make it easier to compare:
Example Number | Ex.1 | Ex.2 | Ex.3 | Ex.4
Total Days Off | 6 | 6 | 6 | 12
Number of Separate Absences | 2 | 4 | 6 | 1
Bradford Factor Score | 24 | 96 | 216 | 12
As you can clearly see, the Bradford Factor Score really starts to get big as the number of separate absences rises. So much so that the score for Example 4 is actually the lowest, even though it has double the total number of days off of all the other examples.
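The examples above can be reproduced in a few lines of Python (a sketch, using only the formula given in this article):

```python
# Bradford Factor Score: BFS = S * S * D, where S is the number of separate
# unplanned absences in the last 12 months and D is the total days absent.
def bradford_score(separate_absences: int, total_days: int) -> int:
    return separate_absences ** 2 * total_days

# The four examples from the table above: (S, D) pairs.
examples = [(2, 6), (4, 6), (6, 6), (1, 12)]
scores = [bradford_score(s, d) for s, d in examples]
print(scores)  # [24, 96, 216, 12]
```

Squaring S is what makes many short absences dominate the score, as the table shows.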
What do the different Bradford Factor Scores mean?
In a nutshell, the higher the Bradford Factor Score for an employee, the more disruption their unplanned time off has caused your company.
Most HR teams will have a set of guidelines for scores and what they mean. As an example, the scores could be broken down into bands like this:
• Below 25: No concern with the absences
• 26 – 45: Some concern
• 46 – 100: Proactive action required to find out why the employee is having unplanned absence
• 100 – 900: Consider disciplinary action
• 901: Serious disciplinary action required
Using these guidelines with the examples above we can see that only Example 1 and Example 4 are considered scores where there is no concern.
Example 3 (The 6 days across 6 separate unplanned absences one) actually has a score that would encourage the company to consider disciplinary action against the employee.
How is that fair? Example 3, with 6 days off, should start disciplinary action, but Example 4, with twice as much time off, is totally fine?
How should a company correctly use the Bradford Factor?
The major problem with the Bradford Factor and its scores is not the formula. It’s how blindly the scores are used.
Too many companies use the Bradford Factor and Scores in conjunction with Trigger Points at which disciplinary action is started. There is an example above, for employees with a score of over 100.
However, what far too many companies fail to act on is the Trigger Point above that one. Here is where the company should take some proactive action first. They need to work with the employee to find out if anything can be done to reduce the number of times the employee is having unplanned absences.
There could be all kinds of reasons why an employee has unplanned absences. The Bradford Factor can not be used as a stick to beat employees with. It should only be used to highlight the situation.
The company can work with the employee to improve things for both the employee and the company.
Without understanding the individual circumstances of an employee, you can never just use the score as a reason for disciplinary action.
Bradford Factor Trigger Points: Adjusting per Employee
Every employee is different, so you shouldn’t apply the same Bradford Factor trigger points to them. An employee with an ongoing medical condition is far more likely to have a larger Bradford Factor Score than normal.
However, unplanned absences are not just about sickness. Often, unplanned absences are a result of unforeseen circumstances such as needing to care for dependants at short notice.
Understanding each employee and the reasons for each unplanned absence is critical when using the Bradford Factor.
For each of your employees, we recommend that you include them in any discussion regarding Bradford Factor trigger points. Give them the opportunity to explain any circumstances that you should be
aware of that could result in unplanned absences. This should be an ongoing process. Situations change, and so too should the trigger points for each employee.
Reducing the impact of unplanned absences
Discuss with your employees how to minimise the impact that an unplanned absence would have.
If you know that an employee has an ongoing medical condition, you can put a process in place to transfer their work as fast as possible. Done well, there will be minimal impact caused by such an
absence. Therefore the number of different times the employee is off is no longer as important.
Bonus: Adapting the Bradford Factor Formula
A well documented process for adapting to an unplanned absence lets you adjust how you calculate their Bradford Factor Score.
Let’s say that the process you have worked out makes the impact of an unplanned absence only half that of normal. With this example you could use the following formula instead of the regular one:
BFS = (S x S x D) x 1/2
Bradford Factor and COVID-19
The global pandemic of COVID-19 is a great example of how the Bradford Factor can not be used as a rigid rule regarding disciplinaries over unplanned absences.
With quarantine, forced isolation, the loss of loved ones and everything else being thrown at us all, it is more important than ever to show compassion and empathy when dealing with unplanned absences.
I want to state this again because it really is incredibly important. The Bradford Factor should never be used as a rigid benchmark to beat employees with.
Do you use the Bradford Factor Score?
Do you think your company is using the Bradford Factor Score too rigidly? If so, please send them a link to this page to help point them start using it correctly.
This Thermal House
[A parallel treatment of some of this material appears in Chapter 6 of the Energy and Human Ambitions on a Finite Planet (free) textbook.]
If you want to make your house more efficient at repelling the unpleasantness outdoors (whether hot or cold), what should you do first? Insulate the walls? Insulate the ceiling? The roof? Better
windows? Draft elimination? What has the biggest effect? While I have regrettably little practical experience tightening up a house (it’s on my bucket list), I at least do understand heat transfer
from a physics/engineering perspective, and can walk through some insightful calculations. So let’s build a fantasy house and evaluate thermal tradeoffs at 1234 Theoretical Lane.
Heat Transport
There are only three ways for heat to travel: conduction, convection, and radiation. No other options.
The power (energy per unit time) flowing across a material by conduction sensibly depends on the material properties (thermal conductivity, κ), the thickness of the material, t, the area, A
participating in the conduction (between the cold and hot environments), and the temperature difference, ΔT. Without much thought, you could construct the correct relationship for the power
transported by conduction by figuring out how it should scale as we change one variable or the other: P[cond] = κAΔT/t, where κ is the thermal conductivity of the material, taking on units of W/m/°C
in the metric system. For many building materials, κ is in the range of 0.1–1 W/m/°C. A sheet of plywood, at the lower end of the range (κ ≈ 0.12, measuring 4×8 feet, or 3 m²; t = 0.019 m, or 0.75
inches, thick) would conduct about 19 W per degree Celsius presented across it.
The building industry characterizes materials by their R-value, which in the U.S. has the unfortunate units of ft²·°F·hr/Btu. The SI equivalent is a slightly more tidy m²·°C/W. The R-value builds the
thickness, t, into the measure, so the same material in twice the thickness will earn twice the R-value.
Relating to intrinsic properties of the material, κ and t, R[US] = 5.7×t/κ in the U.S., or more simply, R[SI] = t/κ overseas. Our plywood from before would be characterized as R = 0.9 in the U.S., or
0.16 internationally. Note that the R-value is independent of area. To get the power flow across a surface, in Watts, we replace the relation two paragraphs back with P[cond] = 5.7×AΔT/R[US], or P
[cond] = AΔT/R[SI].
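A quick numerical check of the plywood example and the R-value conversions (all numbers taken from the text):

```python
# Conduction through the plywood sheet: P_cond = kappa * A * dT / t,
# with the R-value defined as R_US = 5.7 * t / kappa (R_SI = t / kappa).
kappa = 0.12   # W/m/degC, plywood thermal conductivity
A = 3.0        # m^2 (a 4x8 ft sheet)
t = 0.019      # m (0.75 inch)

watts_per_degC = kappa * A / t   # ~19 W per degC across the sheet
R_US = 5.7 * t / kappa           # ~0.9 (US units)
R_SI = t / kappa                 # ~0.16 (SI units)

# Cross-check: P = 5.7 * A * dT / R_US reproduces the direct conduction formula.
dT = 1.0
P_from_R = 5.7 * A * dT / R_US
print(watts_per_degC, R_US, R_SI, P_from_R)
```

Both routes give the same power, as they must: the R-value is just a repackaging of κ and t.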
Convection is at its core just conduction into a moving fluid, which then carries the heat away by simply wafting it along. Adjacent to any surface in a fluid flow is a boundary layer of fluid that
clings to the surface, so that the thermal flow is controlled by conduction across the boundary layer. For air, κ ≈ 0.02 W/m/°C, and boundary layer thickness is often in the neighborhood of a few
millimeters, putting the effective R-value (US) in the neighborhood of 1.
Boundary layers aside, convection power should be proportional to the area exposed and to the temperature difference between the skin and the surrounding air. A constant of proportionality, h governs
how vigorous the coupling is, and is effectively capturing the physics of the boundary layer (which depends on the flow rate, surface details, etc.). In any case, we get a relation P[conv] = hAΔT.
Typical situations might see h ≈ 2 W/m²/°C at indoor surfaces (“still” air), h ≈ 5 W/m²/°C for light airs outdoors, and perhaps 10 or 20 in windy conditions. If our 3 m² piece of plywood is at room
temperature (20°C) and is placed in a freezing breeze with an h value of 5, each surface would lose energy at a rate of 300 W.
Note that we can relate h to the R-value in a generic equation that looks just like the conduction relation: P = hAΔT = 5.7×AΔT/R[US], in which case we can identify h = 5.7/R[US] = 1/R[SI]. In this
case, the light airs of the outdoors (h = 5) may be associated with R[US] ≈ 1.
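The 300 W convection figure is easy to check (a sketch using the h and area values from the text):

```python
# Convection from each surface of the plywood sheet: P = h * A * dT.
h = 5.0     # W/m^2/degC, light outdoor airs
A = 3.0     # m^2
dT = 20.0   # room-temperature sheet (20 degC) in a freezing (0 degC) breeze

P_conv = h * A * dT   # 300 W per surface

# Equivalent R-value: h = 5.7 / R_US, so h = 5 corresponds to R_US ~ 1.1.
R_US = 5.7 / h
print(P_conv, R_US)
```

Note the R-value comes out a bit above 1, consistent with the "R[US] ≈ 1" rule of thumb in the text.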
Every object radiates electromagnetically. At familiar temperatures, this all transpires in the mid-infrared, peaking at a wavelength of 10 microns and petering out completely by 2 μm (meanwhile,
human vision is 0.4–0.7 μm). The net flow, naturally, is from hot to cold, and obeys the relation: P[rad] = Aσ(ε[h]T^4[h] − ε[c]T^4[c]), where σ = 5.67×10^−8 W/m²/K^4. The ε factors are emissivity
values, ranging from 0.0 (shiny) to 1.0 (dull). The temperatures must be expressed in Kelvin, as the amount of radiation depends on the absolute temperature of the object. Subscripts denote hot and
cold objects. We’ll ignore complications from non-uniform environments.
So our piece of plywood, at room temperature (293 K) in radiative contact with a surrounding world at 0°C (273 K) would see about 300 W pouring off each surface if emissivities are assumed to be
nearly 1.0. Pretty similar to convection (a good rule of thumb).
A word about emissivity. Most things have very high emissivity. Anything organic (wood, skin, plastics, paint of any color) is likely to have an emissivity in the neighborhood of 0.95. Even glass,
with a semi-shiny (partially reflective) surface is 0.87. Only shiny metals dip low, which is why ducts, some insulation, and thermos bottles employ shiny surfaces: to knock out the radiative heat
loss channel.
Annoyingly, radiation is not just proportional to ΔT, being instead proportional to the difference between the fourth powers of the temperatures. However, for small temperature differences on the
absolute scale (fortunately commonplace), we can linearize the relation (here assuming unit emissivity) to P[rad] ≈ 4AσT³ΔT, where the T in the cubic term is a representative temperature, perhaps
between the hot and cold. Notice that the form now looks just like convection, with 4σT³ replacing h. For the foregoing examples, if we pick T = 283 K, we find an equivalent h-value of 4σT³ ≈ 5.1.
Again, this illustrates the similar magnitude of radiation and convection, in ordinary circumstances. In this example, the linearized approximation is well within a percent of the correct answer when
the midpoint is chosen as the “reference” temperature, deviating by ~10% if one of the endpoints is used instead. Because radiation may be linearized in this way and expressed as an h-value, it, too,
can be cast in terms of an equivalent R-value.
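We can verify both the exact radiation figure and the linearized approximation numerically (values from the text; emissivities taken as 1):

```python
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4

# Exact net radiation from each surface of the 3 m^2 plywood sheet:
# 293 K sheet facing 273 K surroundings, emissivities ~1.
A, T_hot, T_cold = 3.0, 293.0, 273.0
P_exact = A * sigma * (T_hot**4 - T_cold**4)   # ~300 W

# Linearized form: P ~ 4*A*sigma*T^3*dT, with T at the midpoint (283 K).
T_mid = 0.5 * (T_hot + T_cold)
h_rad = 4 * sigma * T_mid**3                   # ~5.1, the equivalent h-value
P_lin = h_rad * A * (T_hot - T_cold)

print(P_exact, h_rad, P_lin)
```

With the midpoint as reference temperature, the linearized answer lands well within a percent of the exact one, as claimed.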
The Whole Enchilada
In a real-world situation, we typically must deal with all three thermal paths simultaneously. So let’s consider a wall situated between a toasty interior and a cold, breezy exterior. From
experience, the wall will be a little cool to the touch, so we have thermal flow from the room to the wall via convection and radiation. The wall itself conducts heat to the outside surface. Then
convection and radiation carry heat away from there. In equilibrium (and since thermal energy is not being created or destroyed in the wall), we have a balance of equations such that P[conv,in] + P
[rad,in] = P[cond] = P[conv,out] + P[rad,out].
If we don’t care to analyze the surface temperatures of the wall on the inside and outside, we can lump all the conduits together into a single entity. It may help to think of each path in terms of a
resistance to thermal flow (itself akin to current in a circuit). That’s the origin of the term “R-value” in the first place. Convection and radiation operate like two resistors in parallel, in
series with the conduction piece.
Note that when two processes operate in parallel, sharing the same area and ΔT, the effective R-value is given by P[tot] = AΔT/R[eff] = P[1] + P[2] = AΔT(1/R[1] + 1/R[2]), so that 1/R[eff] = (1/R[1]
+ 1/R[2]). Conversely, when two processes are in series, sharing the same power flow and the same area, but piecewise-different ΔT values, we have that P = AΔT[1]/R[1] = AΔT[2]/R[2], so that the
total ΔT = ΔT[1] + ΔT[2] works out to P(R[1] + R[2])/A, or P = AΔT/(R[1] + R[2]), so that R[eff] = (R[1] + R[2]). In other words, the R-values simply add together in series, while their inverses add
when in parallel—just like resistors in an electrical circuit. Note that for the sake of tidiness I have left off the annoying 5.7 conversion factor in the above relations, which can be added back in
if desired.
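These combination rules can be captured in two one-line helpers (a sketch; works in either unit system as long as you are consistent):

```python
# R-values combine like electrical resistances: add directly in series,
# add inverses in parallel (parallel paths share the same area and dT).
def r_series(*rs):
    return sum(rs)

def r_parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)

# Quick sanity checks.
print(r_series(1.0, 2.0))    # 3.0
print(r_parallel(1.0, 1.0))  # 0.5
```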
For an explicit example of how all this works, let’s construct a wall out of a single sheet of plywood (κ = 0.12 W/m/°C; t = 0.019 m; so R[US] = 0.9). We’ll have an inside environment with h = 2 W/m²/
°C, T = 20°C, and assume the inside wall temperature is close to the same, so that I can use T = 293 K in the radiation approximation term. In this case, I compute R values (US) of 2.85 and 1 for
convection and radiation, respectively (for the still air inside, radiation is here the more important channel). In parallel, these add to an effective R-value of 0.74. If the outside of our “wall”
is near the ambient temperature of, say, 273 K, and a bit of wind gives us h = 10 W/m²/°C, we have R-values of 0.57 and 1.2 for convection and radiation (note the role reversal in more active air, so
that convection dominates). The outside combination is R = 0.39.
Our total transfer through the wall therefore has three R-values in series: 0.74 to get heat into the wall, 0.9 to get heat through the wall, and 0.39 to get it off the outside surface. Summing
these, we have R[US] ≈ 2.03 in total. For an inside-outside ΔT = 20°C, each square meter of this wall would conduct 5.7×20/2.03 ≈ 56 W.
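The whole wall calculation, as a sketch in Python (all R-values from the preceding paragraph):

```python
# End-to-end heat flow through the plywood "wall": convection and radiation
# act in parallel at each surface, in series with conduction through the wood.
def parallel(*rs):
    return 1.0 / sum(1.0 / r for r in rs)

R_inside = parallel(2.85, 1.0)    # still indoor air: convection R=2.85, radiation R=1
R_wall = 0.9                      # conduction through the plywood
R_outside = parallel(0.57, 1.2)   # breezy outdoor air: convection R=0.57, radiation R=1.2

R_total = R_inside + R_wall + R_outside   # ~2.03 (US units)
P_per_m2 = 5.7 * 20 / R_total             # ~56 W per m^2 at dT = 20 degC
print(R_total, P_per_m2)
```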
Get Real
Now that we have some sense for how to handle conduction, convection, and radiation in the R-value context, we can find and use relevant R-values for common building materials. I get most of my
information from this very useful site, many values also being available at the Wikipedia site.
To compute the effective R-value for a composite surface like a wall with studs inside, one simply combines paths in parallel, weighted by the fractional area of each. For instance, a wall with studs
has 15% of the area covered by studs, with a total end-to-end R-value (including convection/radiation, called “air film”) of 7.1. The other 85% is insulated bay, with an R-value of 15.7. The
effective R-value is given by 1/R = (0.15/R[stud] + 0.85/R[bay]), calculating to R = 13.3. If I left out the insulation, I would replace the R=13 fiberglass batting with two “air film” layers
carrying values of 0.68 (very similar to our value of 0.74 from above). In this case, we have 1/R = (0.15/7.1 + 0.85/4.1), or R = 4.3. Note that for uninsulated walls, the studs are more insulating
than the air space between.
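The area-weighted parallel combination can be sketched as (values from the text):

```python
# Composite surface: 1/R_eff = sum(area_fraction_i / R_i).
def r_composite(fractions_and_rs):
    return 1.0 / sum(f / r for f, r in fractions_and_rs)

# Insulated wall: 15% studs (R=7.1 end to end), 85% insulated bay (R=15.7).
R_insulated = r_composite([(0.15, 7.1), (0.85, 15.7)])   # ~13.3

# Uninsulated wall: the bay drops to R=4.1 (two air films plus the air gap).
R_uninsulated = r_composite([(0.15, 7.1), (0.85, 4.1)])  # ~4.4

print(R_insulated, R_uninsulated)
```

The uninsulated result comes out at about 4.4, which the text rounds down to 4.3; either way, the studs out-insulate the empty bay.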
Let’s now assemble a table of values for relevant building blocks. Divide R[US] by 5.7 to get R[SI].
Structure | % Framing | Elements | R[US]
Uninsulated Wall | 15% | air; drywall; stud/bay; plywood; siding; air | 4.1
Insulated Wall | 15% | replace bay with insulation | 13.3
Uninsulated Ceiling | 8% | air; drywall; rafter/open; air | 1.65
Insulated Ceiling | 8% | replace open with insulation | 13.0
Uninsulated Floor | 15% | air; tile; plywood; joists/open; air | 2.5
Insulated Floor | 15% | replace open with insulation | 12.7
Uninsulated Roof | 8% | air; framing/open; plywood; shingles; air | 1.85
Insulated Roof | 8% | replace open with insulation | 13.2
Single-Pane Window | — | no coatings | 0.9
Dual-Pane Window | — | half-inch air space | 2.0
Best Window | — | suspended film, low E | 4.0
Door | — | wood, solid core | 3.0
Our Boring House
For the sake of simplicity, we’re going to make a one-story house with a square footprint. We’ll have a pitched roof with attic space, and will look at raised foundations with a crawl space
underneath, and also slab foundations. We’ll adorn each side of the house with two moderate-sized windows and a front and back door. For size, we’ll go with something close to the American average of
2700 ft² and take the opportunity to go metric by making our house 15 m on a side, resulting in an area of 225 m² or 2422 ft². The walls will be 2.5 m (8 ft) high. For windows, we’ll make each one
1.5 m² (equivalent to 16 ft², or 4×4 feet). Our doors will take up 2 m² each.
The wall area therefore totals 134 m², floor and ceiling each 225 m², windows 12 m², and doors 4 m².
We will compute the thermal snugness of a house in terms of W/°C, and call this thermal admittance. Each component adds some bit of thermal admittance according to Q = P/ΔT = 5.7×A/R[US]. These can
then be added for each component of the house.
Using the uninsulated values for everything and single-pane windows, I get Q values, in W/°C, for the walls of 186; ceiling (assumes ample attic ventilation puts it at ambient temperature): 777;
raised floor: 513; single-pane windows: 75; doors: 8. The total is 1560 W/°C.
Let’s pause to put this number in perspective. Maintaining room temperature when the outside is at freezing would require 31 kW of power, or 20 space heaters. A furnace rated at 75,000 Btu/hr is
equivalent to 22 kW and would not be able to keep up. And we have not even considered drafts yet.
Now we’ll look at the other extreme and put R-13 insulation in the walls, ceiling, under the floor, and use the best windows we can buy. We will again let the attic be fully ventilated and at the
outdoor ambient temperature. Now we get walls: 57; ceiling: 99; floor: 103; windows: 17, and doors still at 4. The total is 280 W/°C, and about a fifth of what it was previously. The cost of heating/
cooling will likewise improve by at least a factor of five (won’t be needed as often in milder conditions). In our case, 53% of the improvement came from insulating the ceiling, 32% from the floor,
10% from the walls, and 5% from the windows. This suggests an order of priority. Of course even larger gains are possible with greater amounts of insulation—until other factors dominate.
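The component-by-component admittance sums can be checked in a few lines (areas and R-values from the text; this is a sketch, not a substitute for a real heat-load calculation):

```python
# Thermal admittance per component: Q = 5.7 * A / R_US, in W/degC.
def Q(area_m2, R_US):
    return 5.7 * area_m2 / R_US

components = {          # (area m^2, R uninsulated, R insulated)
    "walls":   (134, 4.1, 13.3),
    "ceiling": (225, 1.65, 13.0),
    "floor":   (225, 2.5, 12.7),
    "windows": (12, 0.9, 4.0),    # single-pane vs best windows
    "doors":   (4, 3.0, 3.0),     # doors unchanged in both scenarios
}

Q_before = sum(Q(a, r0) for a, r0, r1 in components.values())
Q_after = sum(Q(a, r1) for a, r0, r1 in components.values())
print(round(Q_before), round(Q_after))   # ~1560 vs ~280 W/degC
```

The sums land at about 1560 and 282 W/°C, matching the article's figures to within rounding and confirming the factor-of-five improvement.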
The floor loss is slightly exaggerated here, as the simple numbers assume the crawl space is as cold as the exterior. To the degree that this is not true, the numbers soften a bit, in proportion to
the relative temperature rise. It is also the case that the air near the floor is likely to be cooler than the air near the ceiling, unless the interior air is being well mixed. This also reduces
heat loss through the floor in the case that it’s colder outside than inside. Still, it is likely that insulating the floor will bring a pretty noticeable improvement.
Roof Considerations
Perhaps the assumption of a fully ventilated attic caused consternation. Had I assumed a sealed attic (the other extreme), the ceiling and roof would act in series to produce an R-value of 3.5 in the
uninsulated case or 26.2 in the insulated case. The thermal admittance values would then be 366 W/°C and 49 W/°C, respectively. Our totals would go from 1150 W/°C to 232 W/°C. The biggest single gain
would then stem from insulating the floor. But in reality, the attic tends to be closer to ambient than to interior, so that ceiling insulation is likely to remain the most important step.
Assuming the attic is ventilated, most of the temperature difference between interior and exterior will appear across the ceiling, rendering the roof’s insulating qualities of secondary importance.
But this neglects solar load onto the roof. Anyone who has experienced a hot attic knows that attic ventilation is inadequate to prevent the roof from heating the space. Therefore insulating the roof
may become an important step in environments where cooling is a large energy sink. For places where heating is more important than cooling, it may actually be better to leave the roof insulation off so that the winter sun provides some heating benefit by warming the attic a bit.
Slab Floors
For slab floors, the evaluation is somewhat more complicated than for raised floors. A six-inch slab of concrete itself has an R-value of around 0.5. But below the slab is dirt. Cobbling together
information from a few sources (here and here), I gather that dry soil has a thermal conductivity around 0.8 W/m/°C, and an effective thermal thickness (length scale over which temperature gradient
exists) around 0.2 m. This would give it an R-value around 1.4 for a combined slab-ground R-value of 1.9, or 2.6 once factoring in the radiative/conductive coupling. But all this may not matter
because the ground temperature is pretty stable throughout the year, and may reach approximate equilibrium with your house temperature—at least away from the slab edge. To address leakage out the
sides of the slab (air and ground), the Washington State site implies a loss rate of 1.2 W/°C per meter of perimeter, or 72 W/°C for our lovely house, which is not too different from what we computed
for the insulated raised floor.
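A quick check of the slab numbers (all values are the rough assumptions quoted above, not measurements):

```python
# Rough slab-on-grade R-value: concrete slab in series with the soil below.
R_slab = 0.5                  # six-inch concrete slab
R_soil = 5.7 * 0.2 / 0.8      # t = 0.2 m effective depth, kappa = 0.8 W/m/degC -> ~1.4
R_combined = R_slab + R_soil  # ~1.9 (US); ~2.6 once surface coupling is added

# Edge losses: ~1.2 W/degC per meter of perimeter, 15 m x 15 m footprint.
perimeter = 4 * 15
Q_edge = 1.2 * perimeter      # ~72 W/degC
print(R_combined, Q_edge)
```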
I Feel a Draft
Some time ago, I evaluated the thermal performance of my house (which is a slab house about two-thirds the size we’re considering in this post) in the context of heating, and in doing so computed
that my house requires 610 W/°C to heat. A bit later, I looked at the cooling performance and in the process recognized a shortcoming in my previous method of analysis. A more complete method ended
up suggesting 1465 W/°C. Big difference! But not only that, it seems that my house performs worse than our example house—despite being smaller, having insulation in the walls, varying degrees of
insulation in the ceiling (some is very old and mashed thin), and double-pane windows virtually everywhere. In my case, the disappointing thermal performance does not translate into wasted energy,
since I normally do not heat or cool the house. But a snugger house would be more comfortable. So what’s the deal?
I suspect drafts. We have ventilation fans in several rooms with minimal sealing, can lights all over the ceiling, possibly leaky door frames, and a damper in our unused fireplace that I just now
checked and found open—which has probably been that way since we bought the house a few years back!
How important could drafts be? Air has a heat capacity of about 1000 J/kg/°C. Each cubic meter of air (1000 L) has about 1.25 kg of mass, and therefore holds 1250 J of energy per degree of
temperature difference. Thus if air were to enter with a 10°C temperature difference at a rate of 0.1 m³/s (210 cfm, or cubic feet per minute), the corresponding thermal transport rate would be 1250 W.
Recommended flow rates call for something in excess of 4 air exchanges per hour. In our pretend house, this means 225×2.5×4 = 2250 m³ per 3600 seconds, or 0.625 m³/s, carrying about 0.8 kg/s, or 780
W/°C. That’s a lot! Another source recommends a minimum flow of 1 cfm per 100 ft² of floorspace, plus another 7.5 cfm times the number of bedrooms plus one. For our model house, assuming
three bedrooms, we get a minimum requirement of 54 cfm, translating to just 0.026 m³/s, or one complete exchange every six hours. Now we’re at 32 W/°C and competitive with our insulated walls, etc. I
believe the latter source is more likely correct.
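Both ventilation figures can be verified numerically (a sketch; the cfm-to-m³/s conversion factor is the only constant not quoted in the text):

```python
# Each m^3/s of air exchange carries ~1250 J per degC of temperature
# difference: 1.25 kg/m^3 of air times 1000 J/kg/degC heat capacity.
J_PER_M3_DEGC = 1.25 * 1000

volume = 225 * 2.5                    # house volume, m^3

# High recommendation: 4 air changes per hour.
flow_high = volume * 4 / 3600         # m^3/s
Q_high = flow_high * J_PER_M3_DEGC    # ~780 W/degC

# Minimum: 1 cfm per 100 ft^2, plus 7.5 cfm per (bedrooms + 1), 3 bedrooms.
CFM_TO_M3S = 0.000471947              # 1 cubic foot per minute in m^3/s
flow_min_cfm = 2422 / 100 + 7.5 * (3 + 1)           # ~54 cfm
Q_min = flow_min_cfm * CFM_TO_M3S * J_PER_M3_DEGC   # ~32 W/degC
print(Q_high, Q_min)
```

The spread between the two recommendations is a factor of ~25, which is why the assumed air exchange rate dominates any draft estimate.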
I found the following information from this site to be very useful:
The national average of air change rates, for existing homes, is between one and two per hour, and is dropping with tighter building practices and more stringent building codes. Standard homes
built today usually have air change rates from .5 to 1.0. Extremely tight new construction can achieve air change rates of .35 or less. Most homes with such low air change rates have some form of
mechanical ventilation to bring in fresh outside air and exchange heat between the two air streams.
To get an idea of what your home’s air change rate might be, consider that a tight, well sealed newly constructed home usually achieves .6 air changes per hour or less. A reasonably tight, well
constructed older home typically has an air change rate of about 1 per hour. A somewhat loose older home with no storm windows and caulk missing in spots has an air change rate of about 2. A
fairly loose, drafty house with no caulk or weatherstripping and entrances used might have an air change rate as high as 4, and a very drafty, dilapidated house might have an air change rate of
as high as 8.
Draft Dodging
I am motivated to do a blower-door test to check the draftiness of my house. The idea is to seal up the house, install a large fan on the front door that pulls air out of the house, and measure the
difference in pressure as a function of air exhaust rate. Also, once the house is under negative pressure, leaks can be hunted down by listening for whistles or hissing, using a smoke source, and
partitioned by alternately closing/sealing parts of the house to isolate where the biggest problems lie. How can that not be fun?!
Another technique worth mentioning is that after tightening up a house, one can still manage to provide adequate ventilation without incurring the full thermal hit by using a heat recovery ventilator
. The idea is to pass the incoming air past the outgoing air in a heat exchanger (air is separated by a thin metal membrane, for instance). By the time the air emerges from either side, the incoming
air has acquired the temperature of the house’s ambient air, while the exhaust air becomes much like the exterior air before emerging. The thermal losses associated with air exchange can be cut by a
factor of four or more using such an approach. This would bring the previously calculated 32 W/°C down to well less than 10, and into the same ballpark as high-performance windows.
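The factor-of-four figure can be sketched with a simple effectiveness model. The 32 W/°C raw loss is the figure quoted in the text; the 75% exchanger effectiveness is an assumed typical value, not from the text:

```python
# A heat recovery ventilator of effectiveness e returns fraction e of the
# exhaust-stream heat to the incoming air, leaving (1 - e) of the raw
# ventilation loss. raw_loss comes from the text; effectiveness is assumed.
raw_loss = 32.0          # W/degC, ventilation loss without heat recovery
effectiveness = 0.75     # assumed HRV effectiveness

net_loss = raw_loss * (1 - effectiveness)
print(net_loss)  # 8.0 W/degC -- "well less than 10," as the text says
```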
Lessons Learned
The thermal performance of a house is not that hard to understand, given a bit of background and some relevant numbers. The tools developed here allow exploration of the relative merits of new
windows, insulation projects, ventilation management, etc. Of primary importance is the ability to lump all three thermal pathways into an R-value framework so that composite structures may be
evaluated and compared. By adopting units of W/°C, we can quickly understand the heating requirements for a given temperature difference, or simply use the number as an indication of thermal quality.
I encourage you to try computing the thermal admittance of your house, given its geometry and construction. If you know how many kWh or Therms per day you use to maintain a particular ΔT, you can
compare the theoretical performance to measured reality.
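The suggested theory-versus-bills comparison can be sketched as follows; the example inputs (30 kWh/day at a 15 °C average indoor-outdoor difference) are assumptions for illustration:

```python
# Back out a W/degC thermal admittance from measured daily heating energy
# and the average temperature difference maintained over that day.
def admittance_from_usage(kwh_per_day, delta_t_c):
    """Average heat-loss rate in W/degC implied by daily energy use."""
    avg_watts = kwh_per_day * 1000.0 / 24.0  # kWh/day -> average watts
    return avg_watts / delta_t_c

print(round(admittance_from_usage(30.0, 15.0), 1))  # 83.3 W/degC
```

Comparing this number to the value computed from the house geometry and R-values reveals how much the ventilation wildcard (or an optimistic construction assumption) is costing.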
Of course things are never as simple in practice as they are on Theoretical Lane. My house, for instance, appears to be three times worse than the value I compute in ignorance of ventilation. Airflow is the wildcard here, and may indeed account for the discrepancy in my case—something I need to follow up.
25 thoughts on “This Thermal House”
1. I have done some quick & cheap blower door tests simply by turning up the exhaust fan speed in the kitchen. In one house where I did the test, an exhauster in the bathroom also helped. An incense stick is a good tool as a smoke source.
2. I came across a neat empirical method (Jagnow/Wolff method) for calculating the required capacity of your heating system based on previous monthly energy consumption (your gas meter readings) and
average temperatures for your location (obtained from the meteorological office).
It works by making a scatterplot of temperatures and energy consumption, with temperatures on the x-axis and energy consumption on the y-axis. What you will see is a curve that goes downward up
to a certain temperature, and then becomes flat. The flat part corresponds to times when the house is not heated, and energy is used only for warm water, cooking and the pilot light. The downward
part corresponds to the times when your house is heated.
To arrive at the required capacity of your heating, extend the downward curve upwards to the left until you come to the minimum outside temperature that you can come to expect in the area where
you live (again use data from your meteorological office). The energy needed at this point will be what you will need to heat the house at that temperature (where I live this would be around
As this method uses energy input (gas consumption), not energy output, it may overestimate your power requirement if you switch to a more efficient technology, say a condensating boiler.
But in all probability, it will be much lower than the capacity of your existing boiler or the boiler recommended by your heating contractor. Heating contractors have every incentive to install
boilers which are too powerful because
a) it costs more and up maximises their profit,
b) they like to install many identical boilers to simplify maintenance, and
c) it minimises their risk: people may complain if their house is too cold, but they will never complain about unused heating capacity.
However, a properly dimensioned boiler runs more efficiently than an oversized one. If you get a modulating boiler (with variable power output), also pay attention that the power output can be
reduced so much to keep your house warm during spring and autumn without too many on/off cycles, which are bad for efficiency.
I find this method superior to the “Swiss” method, which divides annual energy consumption by 3000 hours (or a different number, depending on location). And it is certainly more practical than a
calculation with r-values which are almost impossible to estimate accurately for an existing house. In addition, being empirically based, the Jagnow/Wolff method takes full account of solar gain
and heat gain from adjacent buildings.
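The Jagnow/Wolff procedure described above amounts to fitting a line through the heated-month points and extrapolating. A minimal sketch; the monthly data points and the −15 °C design temperature are made-up values for illustration:

```python
# Ordinary least-squares fit of (monthly average temperature, energy use)
# for the heated months, extrapolated to the design (coldest expected)
# temperature. All data below are illustrative assumptions.
temps = [-2.0, 1.0, 5.0, 9.0, 13.0]   # degC, monthly averages (assumed)
energy = [3.2, 2.8, 2.2, 1.6, 1.0]    # MWh per month used (assumed)

n = len(temps)
mean_t = sum(temps) / n
mean_e = sum(energy) / n
num = sum((t - mean_t) * (e - mean_e) for t, e in zip(temps, energy))
den = sum((t - mean_t) ** 2 for t in temps)
slope = num / den                      # MWh/month per degC (negative)
intercept = mean_e - slope * mean_t

design_temp = -15.0                    # coldest expected temp (assumed)
required = slope * design_temp + intercept
print(round(required, 2))              # MWh/month needed at design temp
```

Dividing the result by the hours in a month gives the boiler capacity in MW, which is the number to compare against your contractor's quote.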
3. The W/°C value for the house is a nice, simple result, but it can be made more practical, by introducing a few more numbers:
– Number of days of the heating season at the location of the house
– The average outside temperature during the heating season at the location of the house
– $/kWh price of different energy sources
Given the area of the house, you can also calculate the energy requirement per m2 per year, which is usually used to compare buildings.
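Those extra numbers turn the W/°C figure into something billable. A sketch with assumed inputs (a 150 W/°C house, a 180-day season at 20 °C inside and 5 °C average outside, and $0.10/kWh — none of these come from the comment):

```python
# Seasonal heating energy and cost from a house's W/degC figure.
# Every input here is an illustrative assumption.
def season_kwh(w_per_degc, heating_days, indoor_c, avg_outdoor_c):
    hours = heating_days * 24.0
    return w_per_degc * (indoor_c - avg_outdoor_c) * hours / 1000.0

kwh = season_kwh(150.0, 180, 20.0, 5.0)
cost = kwh * 0.10                        # assumed $0.10 per kWh
print(round(kwh), round(cost))           # kWh per season and its cost
```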
4. Hi Tom,
A few thoughts without getting too verbose
The “effective R-value for a composite surface” is commonly known as “whole-wall R-value”.
“Roof insulation” over a vented attic is not normally prescribed as a means of reducing attic temperatures. In many hot sunny climates “radiant barrier” sheathing is often used to help mitigate solar heat gain through the roof decking – incidentally, this is about the only practical use for a “radiant barrier” (another exception being “low-E” windows).
The “tightness” of a building assembly is an important factor not only in terms of energy efficiency but also in terms of durability (air transport of water vapour being a major liability).
Any fibrous insulation (fibreglass, cellulose, mineral wool, etc.) will only perform to the rated R-value if it is protected from air infiltration (ie fibreglass is a filter and not insulation if
air is moving through it).
Generally speaking, the advice given to anyone who is interested in improving the energy performance of their home is to have an “energy audit” done. Include blower door testing in conjunction
with thermal imaging as part of the audit, seal cracks at the upper and lower parts of the house (cracks at the neutral pressure plane having a lower rate of infiltration than those that are
higher or lower), and then insulate.
□ Some details on “protected from air infiltration”.
There are two kinds of air infiltration to handle:
– Cold air washing through fibrous insulation. A vapor-permeable foil at the external surface of the insulation prevents it.
– Warm inside air going out through the insulation causes a bigger problem. On its way out, it cools down and vapor condenses. First, wet insulation is bad insulation. Condensation also causes wooden structures to rot, and if it freezes, brick and concrete are damaged.
This kind of air transport has to be avoided by applying a vapor barrier layer inside the house.
If there is a heat recovery ventilation system installed, it is always tuned to keep the inside pressure lower than the outside pressure, to avoid any outgoing air leakage through flaws in the vapor barrier layer.
☆ Likewise, for air-conditioned houses in hot, humid climates, a vapor barrier may be required on the outside of the wall to prevent condensation inside the wall during the summer. That raises an interesting question: “What is the best approach for climates that have both significant heating and cooling seasons?”
5. I really enjoyed this post. It’s a great way to tie together a lot of the basic physics you’ve spoken about in the past with practical considerations for construction.
One problem I’ve been thinking about is the effect of the shape of a building on its thermal admittance. I would assume that it’s best to maximize the volume to surface area ratio, and
therefore a spherical house would be optimal. However, people require flat floors, so a cylindrical or cubical design is probably better. Your theoretical house would be more energy efficient
with a second floor and a roof with no gables (four-sided pyramids are probably better). This ends up being close to the “American Foursquare” home style. This is true regardless of the quality
of insulation available at some point in time, so it’s worth considering geometry when designing a building.
Multi-family housing has bigger gains. An apartment with only one external wall has very small thermal losses, as ΔT = 0 on all other walls. Duplexes eliminate loss on one side, row or
terrace homes eliminate loss on two sides.
Given that some people like having a yard and a garage, I think one of the best designs would be a set of two to three story row houses on an east-west street with an alley in between streets for
access to a garage in the back yard. Except for end lots, all buildings would have two shared walls. All buildings would be oriented with one direct southern face to optimize solar heating, and
window canopies and appropriately-sized roof overhang could be used to block the sun in the summer. A Cartesian grid plan for streets would ensure optimal transportation paths. This sort of thing
already exists in some places (sans attached houses): https://maps.google.com/maps?hl=en&ll=41.745725,-87.714651&spn=0.007837,0.013078&t=h&z=17
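The shape argument above can be checked with a quick surface-area comparison at fixed volume. The 500 m³ interior volume and 2.5 m single-story ceiling height are assumptions, not numbers from the comment:

```python
import math

# Exterior surface area for three shapes enclosing the same volume:
# a sphere (theoretical minimum), a cube (roughly the "foursquare" idea),
# and a sprawling single-story box.
V = 500.0                                  # m^3, assumed interior volume

r = (3 * V / (4 * math.pi)) ** (1 / 3)
sphere = 4 * math.pi * r ** 2              # least surface for the volume

s = V ** (1 / 3)
cube = 6 * s ** 2                          # compact two-story-like shape

h = 2.5                                    # m, single-story ceiling (assumed)
side = math.sqrt(V / h)                    # square footprint
slab = 4 * side * h + 2 * V / h            # walls + roof + floor

print(round(sphere), round(cube), round(slab))
```

The one-story slab carries roughly 40% more envelope than the cube for the same interior volume, before any shared-wall savings from attached housing.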
□ The shape of the house is very important, and in particular how many external walls it has. On my website I have a simple tool to help you do heat loss calculations for a variety of shapes of
houses. They are typical UK shapes though – I don’t know anything about USA ones – and I didn’t put apartments in. I could do them though, quite easily. Another difference USA v. UK is we
tend to use U-values rather than R values but they are very closely related: U=1/R. The U values are more intuitive as big values mean more heat loss but the R values have the merit that you
can add them up when you have a layered structure. Anyway, my tool helps you work out how much difference you would get from insulating bits of your house or draft proofing. It is here: http:
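The U = 1/R relationship and the additivity of R-values in a layered structure can be sketched directly. The layer values below are illustrative SI figures (m²·K/W), not taken from the comment:

```python
# R-values of a layered wall add in series; the U-value is the reciprocal
# of the total. Layer values are illustrative assumptions in SI units.
layers = {
    "inner air film": 0.12,
    "plasterboard": 0.06,
    "insulation": 2.50,
    "brick": 0.16,
    "outer air film": 0.06,
}
r_total = sum(layers.values())   # m^2*K/W
u_value = 1.0 / r_total          # W/(m^2*K): bigger means more heat loss

print(round(r_total, 2), round(u_value, 2))
```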
6. Is there a formula for calculating the R-value of a house (or at least one wall) by measuring the inside and outside air temperatures and the interior and exterior wall temperatures?
□ Yes. In theory. In practice it would be very hard to carry out for reasonable walls.
You have four measurements: Tout, Text, Tint, Tin, where the intermediate temps are the exterior and interior wall surface temperatures. If you know the effective “air film” R-values (e.g.,
0.68 interior, 0.17 exterior, but condition dependent!), then you can make the following relationship in steady-state conditions: (Tin – Tint)/Rint = (Tint – Text)/Rwall = (Text – Tout)/Rext.
If you think you know Rint and Rext well enough (can check that the terms containing them are equal), then you can get Rwall.
The challenge, besides not knowing Rint and Rext very well, is that the temperature difference between inside air and wall surface is small/subtle. Same for Text and Tout. Measuring
temperatures at this level is a subtle business, having to worry about calibration, radiative coupling, convective coupling, altering the surface temp by putting a probe on it and impacting
its interface to the environment, etc. I would not expect easy success—but do try if you are so inclined, and report back!
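The steady-state relation above can be sketched in code, using the film R-values quoted in the reply; the four temperatures are invented for illustration, chosen so the interior and exterior film fluxes agree:

```python
# Steady-state through-the-wall relation:
# (Tin - Tint)/Rint = (Tint - Text)/Rwall = (Text - Tout)/Rext
# Film R-values are the (condition-dependent!) figures from the reply;
# the temperatures (degF) are assumed example measurements.
R_INT, R_EXT = 0.68, 0.17   # interior / exterior air-film R-values

def wall_r(t_in, t_int, t_ext, t_out):
    flux = (t_in - t_int) / R_INT    # heat flux through the interior film
    # Sanity check: the exterior film should carry the same flux.
    assert abs((t_ext - t_out) / R_EXT - flux) < 0.01 * flux
    return (t_int - t_ext) / flux    # R-value that passes that flux

print(round(wall_r(68.0, 66.0, 22.0, 21.5), 1))  # ~15.0
```

Note how small the film temperature drops are (2 °F inside, 0.5 °F outside) for a well-insulated wall, which is exactly why the measurement is so delicate.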
□ In theory, you can calculate the heat going through the wall by knowing the air temperature, and the surface temperature on a single side. (Pconv = hAΔT)
In practice, it is good only for a very rough guess, because there is a great uncertainty in the heat transfer constant of the convection. (see the big range for h in the article).
□ See http://czbo.blogspot.com/2012/01/calculating-r-values.html for methods and a chart.
7. Insulation is only one side of the energy balance of buildings. In case of good insulation houses the losses through the surfaces becomes so low, that we calculate with otherwise ignorable gains:
– Heat generated by the humans inside
– Heat from electricity
– Heat from hot water
– Solar gain
A triple plane window on the southern side of the house has a positive energy balance even in winters. (To avoid overheating in summers, properly dimensioned shadowing is required above them,
e.g. roof overhang.)
Solar gain is so important that it would require half a meter to one meter of insulation to make passive houses (15 kWh/m2/year) without it, while with proper placement of the building and its windows, 25-30 cm of insulation is usually enough.
8. I like the idea of thermal mass and proper layout to maximize solar gain for heat in winter. Tall trees “placed” close to the south would provide shade in summer and allow sunlight to hit the
roof in winter where a fan could force hot air into the thermal mass as well.
An east/west rectangular house may be better than a north/south layout, providing more sunlight through the windows. Here, I believe insulation underneath the slab would be excellent, as dark
colored tiles would absorb much of that excess brightness.
I often forget that each 10 square feet of window is almost as good as an electric heater (1 m² ≈ 1,000 watts at best).
Thanks for the inspiration…
9. Great post.
In practical terms, what people often want to know is how their building fares in comparison. I know my energy use rate, square footage, and I can find out climate data for my location. Is my
building efficiency substandard? Is it average? Is it already far enough above average that it can’t be easily improved any more? I’m sure there must be comparison tables that would answer those
questions but I’m not aware where to find them. Any pointers?
□ The EIA’s Residential Energy Consumption Survey probably has what you need:
The other approach you could take is take a sort of a “prescriptive approach” by comparing your home as built to what would be required by current energy codes. For example: your home has
R-13 wall insulation but code now requires R-19. If you brought your home up to the current standard, your house would be better than the average home in your area.
□ TM,
Consider contacting a HERS rater to come and provide you with an evaluation.
The HERS (Home Energy Rating System) index basically involves comparing the performance of an existing home to the performance of a theoretical “average new construction” reference home.
The HERS index uses a score of 100 as a benchmark for the reference home.
The lower the score the better, so that an existing home with a HERS score of 50 uses half the energy of the reference home.
☆ Well I would like to see the comparison tables used by HERS. It shouldn’t be necessary to pay for a rating. Of course that would undermine the HERS business model.
○ Start here for free (you’ll need utility bills and the floor area of your home):
If you’re curious about the backup data, it’s the RECS data I linked to above.
■ Thanks for the pointer!
10. Evaporation is also a way for heat to travel, although it would probably be negligible in a house.
It is interesting to note how much energy we spend to heat a whole house when all we are really trying to do is regulate heat loss in our own bodies by about plus or minus 10 watts.
11. “There are only three ways for heat to travel: conduction, convection, and radiation. No other options.”
Well, that’s technically correct but very misleading, as your own article makes clear. Heat also travels through mass flow, which is what you’re trying to figure out in your blower door test or air infiltration calculations. According to Energy Audit of Building Systems (Krarti, M. 2010), it’s better to assume four manners of heat transfer by including the mass flow rate of fluids involved (don’t forget, every time you run your cold tap, you’re bringing water of a specific temp into your house). The formula (without symbols) is:
Heat = (mass)*(specific heat)*(delta-T)
Heat Loss Rate = (mass flow rate)*(specific heat)*(delta-T)
It makes for a much more accurate model of heat transfer by including air infiltration, which in leaky houses can account for 30% or more of heat loss.
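The mass-flow relation quoted above, applied to the cold-tap example; the flow rate and temperatures are assumptions for illustration:

```python
# Heat loss rate = (mass flow rate) * (specific heat) * (delta-T).
WATER_CP = 4186.0   # J/(kg*K), specific heat of water

def mass_flow_loss(kg_per_s, specific_heat, delta_t):
    """Heat carried in or out, in watts, while the flow runs."""
    return kg_per_s * specific_heat * delta_t

# An assumed 0.1 kg/s of 10 degC tap water entering a 20 degC house:
print(round(mass_flow_loss(0.1, WATER_CP, 10.0)))  # ~4186 W while running
```

The same formula with air density and specific heat reproduces the infiltration losses discussed earlier in the post.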
From the same book, calculating the building load coefficient as another way to get a gross approximation for building thermal performance posted by Gidon above is widely used to benchmark
buildings of a specific type and use in a region. The standard way is to plot 3 years of utility data (gas vs heating degree days or average temp for heating BLC, electric vs cooling degree days
or average temp for cooling BLC – the two measures give you different values, which can usually be used to approximate the internal heat gains from equipment loads and body heat), then find the
balance points above/below which the heating/cooling system doesn’t engage. The slope of the linear best fit gives you the BLC for either function.
12. My mate is a physicist and is also doing up an old house. His comment on this blog as follows:
“he could have gone the extra mile of assessing impact of moisture transport and material permeability.
However the main element he seems to have thermally missed (like many others) is that he does not take into account thermal mass calculation and its effects (as a thermal battery/accumulation) in
a non static environment (i.e. transient thermal charges and discharges).
He should really take material diffusivity into account in his somewhat theoretical house, because at the end of it his numbers only reflect a static scenario, and in reality night and day have different temperatures and solar gain. Therefore the real energy loss and gain is probably not as bad as he thinks, and the effective R value of the house should be better.”
13. ” he does not take into account thermal mass calculation”
We can ignore heat capacity, when calculating energy requirement.
Imagine, that there are nice two weeks in autumn, when the outside and inside temperatures are the same, the preferred room temperature. During this period, all the structures of the building
also get to this temp. Later the weather goes wrong and the heating is switched on in the house.
About a half year later in spring, the outside weather again gets similar to the ideal inside temperature. Again, it stays for two weeks, so all building material will be at the same temp too.
So, the thermal energy stored in the mass of the building is now exactly the same as it was in autumn. It means, that all the energy invested into the heating, went out to the nature. The
opposite is also true: calculating the energy loss will predict the heating requirement.
14. This is a followup to the final comments (by hal jones and hdi) on the previous post on the thermal properties of Tom Murphy’s house, where comments are now closed. As hal jones states, the
thermal mass (J/degree) and heat flux (W/degree) are both relevant to understanding a house’s thermal operation. There are many houses and climates where hdi’s 2-season (heating/no heating) model
does not apply. In New Zealand, only 2-3% of houses have central heating. Most are heated, or partially heated, for part of each day and cool down at night. The transient response of the house is
important. The same thing is true in cold but sunny winter climates like Colorado.
The two pieces of data can be handily combined into the building time constant, BTC. My house initially had a thermal mass of 30 MJ/degree and a heat loss of 500 W/degree, giving a BTC = 3 x 10^7
/ 500 = 60000 s = 16.7 hours. The temperature after the heating is off at night will be
T(t) = T_ambient + (T(0) – T_ambient) exp(-t/BTC)
With T_ambient = 4 deg C, T(0)=19 deg C, the temperature in the morning, 9 hours later, will be 4 + (19-4) exp(-9/16.7) = 12.75 deg C, which was what I actually observed.
With some home improvements – better curtains and completing the underfloor insulation – I was able to get the BTC up to 27 hours. It should be possible to extract the BTC of Tom’s house both from knowledge of its construction and from the observed temperature plots, since he doesn’t heat his house.
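The commenter's overnight-cooling numbers follow directly from the BTC model; all inputs below come from the comment itself:

```python
import math

# Building time constant: thermal mass (J/degC) over heat loss (W/degC),
# then exponential relaxation toward ambient once the heating is off.
def overnight_temp(t_hours, t0, t_amb, btc_hours):
    return t_amb + (t0 - t_amb) * math.exp(-t_hours / btc_hours)

btc_hours = 30e6 / 500 / 3600      # 30 MJ/degC / 500 W/degC -> hours
print(round(btc_hours, 1))         # 16.7 hours, as stated

# 19 degC at bedtime, 4 degC outside, 9 hours later:
print(round(overnight_temp(9, 19.0, 4.0, btc_hours), 2))  # ~12.74 degC,
# close to the 12.75 the commenter observed
```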
Liquid Gases - Sepidan Gas Aria - Distributor of all kinds of industrial gases
• Mix: A mixture of the above gases.
Industrial Gases:
Industrial gases are used in a wide range of industries, including oil and gas, petrochemicals, chemicals, electricity, mining, steel making, metals, environmental protection, medicine, pharmaceuticals, biotechnology, food, water, fertilizers, nuclear energy, electronics, etc. Industrial gases are sold to other industries, and orders are typically large and come from many different customers.
• Liquid Oxygen(Usages: Automotive & Transportation Equipment, Chemicals, Energy, Glass, Healthcare, Metal Productions & … )
– Liquid oxygen is carried in portable flasks and cryogenic tanks.
– The density of liquid oxygen is 1141 kg per cubic meter, so a liter (one thousandth of a cubic meter) of liquid oxygen weighs 1.141 kg.
• Liquid Nitrogen(Usages: Freezing of food products, Cryotherapy, Shielding materials)
– Liquid nitrogen is also carried in cryogenic tanks.
– The density of liquid nitrogen is 808.4 kg/m³, and one liter is equivalent to 0.808 kg.
– Liquid nitrogen is the liquefied form of the element nitrogen that is commercially produced by fractional distillation of liquid air.
• Liquid Argon(Usages: Welding, Lighting industry, Semiconductor manufacturing)
– Liquid argon is tasteless, colorless, odorless, non-corrosive, non-flammable and extremely cold. Belonging to the family of rare gases, argon is the most plentiful, making up approximately 1% of the earth’s atmosphere.
– The density of liquid argon is 1394 kg/m³, so a liter of it weighs about 1.4 kg.
– Argon is produced at air separation plants by liquefaction of atmospheric air
and separation of the argon by continuous cryogenic distillation.
Liquid Gases:
The industrial gases in liquid form and in a very low temperature that are much easier to carry (cryogenic process).
Cylinders and Cryogenic Tanks
• Low Capacity Cryogenic Tanks
Designed for small carrying.
• High Capacity Cryogenic Tanks
For carrying and storing in large sizes.
Cryogenic Tanks:
The cryotank, or cryogenic reservoir, is a tank for the storage of liquid gases. Cryotanks and cryogenics appear in many science fiction films, but today this remains a sophisticated and growing technology. The term “cryotank” refers to the storage of ultra-cold fuels, such as liquid oxygen and liquid hydrogen.
Low pressure cylinders (under 500 psi) come in a variety of sizes. Some examples of gases supplied in low pressure cylinder are LPG and refrigerant gases.
Some examples of gases supplied in high pressure cylinders (up to 10,000 psi) include Nitrogen, Helium, Hydrogen, Oxygen and Carbon Dioxide.
Cylinders are used to carry and store gases at a pressure above the normal pressure of the atmosphere.
Vaporizers (used to evaporate liquid gases) are available in several designs:
• Ambient air heated, in both natural and forced draft designs
• Electrical heated
• Steam heated
• Water heated, in both coil-in-shell and shell-in-tube designs
• Gas – or Diesel fired heated
• Mobile units (based on above technologies)
• Trim heaters
|
{"url":"http://www.sepidangasaria.com/en/liquid-gases/","timestamp":"2024-11-08T21:55:04Z","content_type":"text/html","content_length":"61520","record_id":"<urn:uuid:db1b72d4-b137-4a7f-a3a5-58345f13676a>","cc-path":"CC-MAIN-2024-46/segments/1730477028079.98/warc/CC-MAIN-20241108200128-20241108230128-00007.warc.gz"}
|
Solve an Ordinary Differential Equation (ODE) Algebraically
Use SymPy to solve an ordinary differential equation (ODE) algebraically. For example, solving \(y''(x) + 9y(x)=0 \) yields \( y(x)=C_{1} \sin(3x)+ C_{2} \cos(3x)\).
Solve an Ordinary Differential Equation (ODE)
Here is an example of solving the above ordinary differential equation algebraically using dsolve(). You can then use checkodesol() to verify that the solution is correct.
>>> from sympy import Function, dsolve, Derivative, checkodesol
>>> from sympy.abc import x
>>> y = Function('y')
>>> # Solve the ODE
>>> result = dsolve(Derivative(y(x), x, x) + 9*y(x), y(x))
>>> result
Eq(y(x), C1*sin(3*x) + C2*cos(3*x))
>>> # Check that the solution is correct
>>> checkodesol(Derivative(y(x), x, x) + 9*y(x), result)
(True, 0)
The output of checkodesol() is a tuple where the first item, a boolean, tells whether substituting the solution into the ODE results in 0, indicating the solution is correct.
Defining Derivatives
There are many ways to express derivatives of functions. For an undefined function, both Derivative and diff() represent the undefined derivative. Thus, all of the following ypp (“y prime prime”)
represent \(y''\), the second derivative with respect to \(x\) of a function \(y(x)\):
ypp = y(x).diff(x, x)
ypp = y(x).diff(x, 2)
ypp = y(x).diff((x, 2))
ypp = diff(y(x), x, x)
ypp = diff(y(x), x, 2)
ypp = Derivative(y(x), x, x)
ypp = Derivative(y(x), x, 2)
ypp = Derivative(Derivative(y(x), x), x)
ypp = diff(diff(y(x), x), x)
yp = y(x).diff(x)
ypp = yp.diff(x)
We recommend specifying the function to be solved for as the second argument to dsolve(). Note that it must be a function rather than a variable (symbol). SymPy will give an error if you specify a variable (\(x\)) rather than a function (\(f(x)\)):
>>> dsolve(Derivative(y(x), x, x) + 9*y(x), x)
Traceback (most recent call last):
ValueError: dsolve() and classify_ode() only work with functions of one variable, not x
Similarly, you must specify the argument of the function: \(y(x)\), not just \(y\).
Options to Define an ODE
You can define the function to be solved for in two ways. The subsequent syntax for specifying initial conditions depends on your choice.
Option 1: Define a Function Without Including Its Independent Variable
You can define a function without including its independent variable:
>>> from sympy import symbols, Eq, Function, dsolve
>>> f, g = symbols("f g", cls=Function)
>>> x = symbols("x")
>>> eqs = [Eq(f(x).diff(x), g(x)), Eq(g(x).diff(x), f(x))]
>>> dsolve(eqs, [f(x), g(x)])
[Eq(f(x), -C1*exp(-x) + C2*exp(x)), Eq(g(x), C1*exp(-x) + C2*exp(x))]
Note that you supply the functions to be solved for as a list as the second argument of dsolve(), here [f(x), g(x)].
Specify Initial Conditions or Boundary Conditions
If your differential equation(s) have initial or boundary conditions, specify them with the dsolve() optional argument ics. Initial and boundary conditions are treated the same way (even though the
argument is called ics). It should be given in the form of {f(x0): y0, f(x).diff(x).subs(x, x1): y1} and so on where, for example, the value of \(f(x)\) at \(x = x_{0}\) is \(y_{0}\). For power
series solutions, if no initial conditions are specified \(f(0)\) is assumed to be \(C_{0}\) and the power series solution is calculated about \(0\).
Here is an example of setting the initial values for functions, namely \(f(0) = 1\) and \(g(2) = 3\):
>>> from sympy import symbols, Eq, Function, dsolve
>>> f, g = symbols("f g", cls=Function)
>>> x = symbols("x")
>>> eqs = [Eq(f(x).diff(x), g(x)), Eq(g(x).diff(x), f(x))]
>>> dsolve(eqs, [f(x), g(x)])
[Eq(f(x), -C1*exp(-x) + C2*exp(x)), Eq(g(x), C1*exp(-x) + C2*exp(x))]
>>> dsolve(eqs, [f(x), g(x)], ics={f(0): 1, g(2): 3})
[Eq(f(x), (1 + 3*exp(2))*exp(x)/(1 + exp(4)) - (-exp(4) + 3*exp(2))*exp(-x)/(1 + exp(4))), Eq(g(x), (1 + 3*exp(2))*exp(x)/(1 + exp(4)) + (-exp(4) + 3*exp(2))*exp(-x)/(1 + exp(4)))]
Here is an example of setting the initial value for the derivative of a function, namely \(f'(1) = 2\):
>>> eqn = Eq(f(x).diff(x), f(x))
>>> dsolve(eqn, f(x), ics={f(x).diff(x).subs(x, 1): 2})
Eq(f(x), 2*exp(-1)*exp(x))
Option 2: Define a Function of an Independent Variable
You may prefer to specify a function (for example \(y\)) of its independent variable (for example \(t\)), so that y represents y(t):
>>> from sympy import symbols, Function, dsolve
>>> t = symbols('t')
>>> y = Function('y')(t)
>>> y
y(t)
>>> yp = y.diff(t)
>>> ypp = yp.diff(t)
>>> eq = ypp + 2*yp + y
>>> eq
y(t) + 2*Derivative(y(t), t) + Derivative(y(t), (t, 2))
>>> dsolve(eq, y)
Eq(y(t), (C1 + C2*t)*exp(-t))
Using this convention, the second argument of dsolve(), y, represents y(t), so SymPy recognizes it as a valid function to solve for.
Specify Initial Conditions or Boundary Conditions
Using that syntax, you specify initial or boundary conditions by substituting in values of the independent variable using subs() because the function \(y\) already has its independent variable as an argument \(t\):
>>> dsolve(eq, y, ics={y.subs(t, 0): 0})
Eq(y(t), C2*t*exp(-t))
Beware Copying and Pasting Results
If you choose to define a function of an independent variable, note that copying a result and pasting it into subsequent code may cause an error because y is already defined as y(t), so if you paste in y(t) it is interpreted as y(t)(t):
>>> dsolve(y(t).diff(t), y)
Traceback (most recent call last):
TypeError: 'y' object is not callable
So remember to exclude the independent variable call (t):
>>> dsolve(y.diff(t), y)
Eq(y(t), C1)
Use the Solution Result
Unlike other solving functions, dsolve() returns an Equality (equation) formatted as, for example, Eq(y(x), C1*sin(3*x) + C2*cos(3*x)) which is equivalent to the mathematical notation \(y(x) = C_1 \
sin(3x) + C_2 \cos(3x)\).
Extract the Result for One Solution and Function
You can extract the result from an Equality using the right-hand side property rhs:
>>> from sympy import Function, dsolve, Derivative
>>> from sympy.abc import x
>>> y = Function('y')
>>> result = dsolve(Derivative(y(x), x, x) + 9*y(x), y(x))
>>> result
Eq(y(x), C1*sin(3*x) + C2*cos(3*x))
>>> result.rhs
C1*sin(3*x) + C2*cos(3*x)
Some ODEs Cannot Be Solved Explicitly, Only Implicitly
The above ODE can be solved explicitly, specifically \(y(x)\) can be expressed in terms of functions of \(x\). However, some ODEs cannot be solved explicitly, for example:
>>> from sympy import dsolve, exp, symbols, Function
>>> f = symbols("f", cls=Function)
>>> x = symbols("x")
>>> dsolve(f(x).diff(x) + exp(-f(x))*f(x))
Eq(Ei(f(x)), C1 - x)
This gives no direct expression for \(f(x)\). Instead, dsolve() expresses a solution as \(g(f(x))\) where \(g\) is Ei, the classical exponential integral function. Ei does not have a known
closed-form inverse, so a solution cannot be explicitly expressed as \(f(x)\) equaling a function of \(x\). Instead, dsolve returns an implicit solution.
When dsolve returns an implicit solution, extracting the right-hand side of the returned equality will not give an explicit expression for the function to be solved for, here \(f(x)\). So before extracting an expression for the function to be solved for, check that dsolve was able to solve for the function explicitly.
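One way to perform that check (a sketch, not an official SymPy API): an explicit solution has the bare function as the left-hand side of the returned Eq, so compare result.lhs to the function. The second ODE below is a made-up example chosen because it solves explicitly:

```python
from sympy import Function, dsolve, exp, symbols

f = symbols("f", cls=Function)
x = symbols("x")

# Implicit case from above: the lhs is Ei(f(x)), not f(x) itself.
implicit = dsolve(f(x).diff(x) + exp(-f(x))*f(x))
print(implicit.lhs == f(x))   # False -> do not treat .rhs as "the solution"

# Explicit case (f' = f, an assumed example): the lhs is exactly f(x),
# so extracting .rhs is safe.
explicit = dsolve(f(x).diff(x) - f(x), f(x))
print(explicit.lhs == f(x))   # True
```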
Extract the Result for Multiple Function-Solution Pairs
If you are solving a system of equations with multiple unknown functions, the form of the output of dsolve() depends on whether there is one or multiple solutions.
If There is One Solution Set
If there is only one solution set to a system of equations with multiple unknown functions, dsolve() will return a non-nested list containing an equality. You can extract the solution expression
using a single loop or comprehension:
>>> from sympy import symbols, Eq, Function, dsolve
>>> y, z = symbols("y z", cls=Function)
>>> x = symbols("x")
>>> eqs_one_soln_set = [Eq(y(x).diff(x), z(x)**2), Eq(z(x).diff(x), z(x))]
>>> solutions_one_soln_set = dsolve(eqs_one_soln_set, [y(x), z(x)])
>>> solutions_one_soln_set
[Eq(y(x), C1 + C2**2*exp(2*x)/2), Eq(z(x), C2*exp(x))]
>>> # Loop through list approach
>>> solution_one_soln_set_dict = {}
>>> for fn in solutions_one_soln_set:
... solution_one_soln_set_dict.update({fn.lhs: fn.rhs})
>>> solution_one_soln_set_dict
{y(x): C1 + C2**2*exp(2*x)/2, z(x): C2*exp(x)}
>>> # List comprehension approach
>>> solution_one_soln_set_dict = {fn.lhs:fn.rhs for fn in solutions_one_soln_set}
>>> solution_one_soln_set_dict
{y(x): C1 + C2**2*exp(2*x)/2, z(x): C2*exp(x)}
>>> # Extract expression for y(x)
>>> solution_one_soln_set_dict[y(x)]
C1 + C2**2*exp(2*x)/2
If There are Multiple Solution Sets¶
If there are multiple solution sets to a system of equations with multiple unknown functions, dsolve() will return a nested list of equalities, the outer list representing each solution and the inner
list representing each function. While you can extract results by specifying the index of each function, we recommend an approach which is robust with respect to function ordering. The following
converts each solution into a dictionary so you can easily extract the result for the desired function. It uses standard Python techniques such as loops or comprehensions, in a nested fashion.
>>> from sympy import symbols, Eq, Function, dsolve
>>> y, z = symbols("y z", cls=Function)
>>> x = symbols("x")
>>> eqs = [Eq(y(x).diff(x)**2, z(x)**2), Eq(z(x).diff(x), z(x))]
>>> solutions = dsolve(eqs, [y(x), z(x)])
>>> solutions
[[Eq(y(x), C1 - C2*exp(x)), Eq(z(x), C2*exp(x))], [Eq(y(x), C1 + C2*exp(x)), Eq(z(x), C2*exp(x))]]
>>> # Nested list approach
>>> solutions_list = []
>>> for solution in solutions:
... solution_dict = {}
... for fn in solution:
... solution_dict.update({fn.lhs: fn.rhs})
... solutions_list.append(solution_dict)
>>> solutions_list
[{y(x): C1 - C2*exp(x), z(x): C2*exp(x)}, {y(x): C1 + C2*exp(x), z(x): C2*exp(x)}]
>>> # Nested comprehension approach
>>> solutions_list = [{fn.lhs:fn.rhs for fn in solution} for solution in solutions]
>>> solutions_list
[{y(x): C1 - C2*exp(x), z(x): C2*exp(x)}, {y(x): C1 + C2*exp(x), z(x): C2*exp(x)}]
>>> # Extract expression for y(x)
>>> solutions_list[0][y(x)]
C1 - C2*exp(x)
Work With Arbitrary Constants¶
You can manipulate arbitrary constants such as C1, C2, and C3, which are generated automatically by dsolve(), by creating them as symbols. For example, if you want to assign values to arbitrary
constants, you can create them as symbols and then substitute in their values using subs():
>>> from sympy import Function, dsolve, Derivative, symbols, pi
>>> y = Function('y')
>>> x, C1, C2 = symbols("x, C1, C2")
>>> result = dsolve(Derivative(y(x), x, x) + 9*y(x), y(x)).rhs
>>> result
C1*sin(3*x) + C2*cos(3*x)
>>> result.subs({C1: 7, C2: pi})
7*sin(3*x) + pi*cos(3*x)
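A related task is determining the constants from initial conditions. The sketch below (the conditions y(0) = 2 and y'(0) = 3 are chosen arbitrarily for illustration) solves for C1 and C2 with solve():

```python
from sympy import Derivative, Function, dsolve, solve, symbols

y = Function('y')
x, C1, C2 = symbols("x C1 C2")

# General solution: C1*sin(3*x) + C2*cos(3*x)
general = dsolve(Derivative(y(x), x, x) + 9*y(x), y(x)).rhs

# Impose the (arbitrary) initial conditions y(0) = 2 and y'(0) = 3
conditions = [
    general.subs(x, 0) - 2,          # y(0)  = 2  ->  C2 = 2
    general.diff(x).subs(x, 0) - 3,  # y'(0) = 3  ->  3*C1 = 3
]
constants = solve(conditions, [C1, C2], dict=True)[0]
print(constants[C1], constants[C2])  # 1 2
```

Note that dsolve() can also apply initial conditions directly via its ics keyword argument, which avoids manipulating the constants yourself.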
Numerically Solve an ODE in SciPy¶
A common workflow which leverages SciPy’s fast numerical ODE solving is
1. set up an ODE in SymPy
2. convert it to a numerical function using lambdify()
3. solve the initial value problem by numerically integrating the ODE using SciPy’s solve_ivp.
Here is an example from the field of chemical kinetics where the nonlinear ordinary differential equations take this form:
\[\begin{split} r_f = & k_f y_0(t)^2 y_1(t) \\ r_b = & k_b y_2(t)^2 \\ \frac{d y_0(t)}{dt} = & 2(r_b - r_f) \\ \frac{d y_1(t)}{dt} = & r_b - r_f \\ \frac{d y_2(t)}{dt} = & 2(r_f - r_b) \end{split}\]
\[\begin{split}\vec{y}(t) = \begin{bmatrix} y_0(t) \\ y_1(t) \\ y_2(t) \end{bmatrix} \end{split}\]
>>> from sympy import symbols, lambdify
>>> import numpy as np
>>> import scipy.integrate
>>> import matplotlib.pyplot as plt
>>> # Create symbols y0, y1, and y2
>>> y = symbols('y:3')
>>> kf, kb = symbols('kf kb')
>>> rf = kf * y[0]**2 * y[1]
>>> rb = kb * y[2]**2
>>> # Derivative of the function y(t); values for the three chemical species
>>> # for input values y, kf, and kb
>>> ydot = [2*(rb - rf), rb - rf, 2*(rf - rb)]
>>> ydot
[2*kb*y2**2 - 2*kf*y0**2*y1, kb*y2**2 - kf*y0**2*y1, -2*kb*y2**2 + 2*kf*y0**2*y1]
>>> t = symbols('t') # not used in this case
>>> # Convert the SymPy symbolic expression for ydot into a form that
>>> # SciPy can evaluate numerically, f
>>> f = lambdify((t, y, kf, kb), ydot)
>>> k_vals = np.array([0.42, 0.17]) # arbitrary in this case
>>> y0 = [1, 1, 0] # initial condition (initial values)
>>> t_eval = np.linspace(0, 10, 50) # evaluate integral from t = 0-10 for 50 points
>>> # Call SciPy's ODE initial value problem solver solve_ivp by passing it
>>> # the function f,
>>> # the interval of integration,
>>> # the initial state, and
>>> # the arguments to pass to the function f
>>> solution = scipy.integrate.solve_ivp(f, (0, 10), y0, t_eval=t_eval, args=k_vals)
>>> # Extract the y (concentration) values from SciPy solution result
>>> y = solution.y
>>> # Plot the result graphically using matplotlib
>>> plt.plot(t_eval, y.T)
>>> # Add title, legend, and axis labels to the plot
>>> plt.title('Chemical Kinetics')
>>> plt.legend(['NO', 'Br$_2$', 'NOBr'], shadow=True)
>>> plt.xlabel('time')
>>> plt.ylabel('concentration')
>>> # Finally, display the annotated plot
>>> plt.show()
SciPy’s solve_ivp returns a result containing y (numerical function result, here, concentration) values for each of the three chemical species, corresponding to the time points t_eval.
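If SciPy is not available, the same system can be integrated with a few lines of plain Python. The sketch below uses a simple forward-Euler scheme with the same rate constants and initial state, and checks a conservation property implied by the stoichiometry (the sum of the NO and NOBr concentrations stays constant):

```python
# Forward-Euler sketch of the kinetics system above, standard library only.
# kf, kb, the step size, and the horizon mirror the SciPy example.
def ydot(y, kf=0.42, kb=0.17):
    rf = kf * y[0]**2 * y[1]
    rb = kb * y[2]**2
    return [2*(rb - rf), rb - rf, 2*(rf - rb)]

def euler(y, h=0.001, steps=10000):
    # Integrate from t = 0 to t = steps * h = 10
    for _ in range(steps):
        d = ydot(y)
        y = [yi + h*di for yi, di in zip(y, d)]
    return y

y_final = euler([1.0, 1.0, 0.0])
# Stoichiometry conserves y0 + y2 (each step adds 2*(rb-rf) and 2*(rf-rb))
print(round(y_final[0] + y_final[2], 6))  # 1.0
```

Euler with a small fixed step is far less accurate and efficient than solve_ivp's adaptive methods, but it is a useful sanity check on the model equations.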
Ordinary Differential Equation Solving Hints¶
Return Unevaluated Integrals¶
By default, dsolve() attempts to evaluate the integrals it produces to solve your ordinary differential equation. You can disable evaluation of the integrals by using Hint Functions ending with
_Integral, for example separable_Integral. This is useful because integrate() is an expensive routine. SymPy may hang (appear to never complete the operation) because of a difficult or impossible
integral, so using an _Integral hint will at least return an (unintegrated) result, which you can then consider. The simplest way to disable integration is with the all_Integral hint because you do
not need to know which hint to supply: for any hint with a corresponding _Integral hint, all_Integral only returns the _Integral hint.
Select a Specific Solver¶
You may wish to select a specific solver using a hint for a couple of reasons:
• educational purposes: for example if you are learning about a specific method to solve ODEs and want to get a result that exactly matches that method
• form of the result: sometimes an ODE can be solved by many different solvers, and they can return different results. They will be mathematically equivalent, though the arbitrary constants may not
be. dsolve() by default tries to use the “best” solvers first, which are most likely to return the most usable output, but it is not a perfect heuristic. For example, the “best” solver may
produce a result with an integral that SymPy cannot solve, but another solver may produce a different integral that SymPy can solve. So if the solution isn’t in a form you like, you can try other
hints to check whether they give a preferable result.
Not All Equations Can Be Solved¶
Equations With No Solution¶
Not all differential equations can be solved, for example:
>>> from sympy import Function, dsolve, Derivative, symbols
>>> y = Function('y')
>>> x, C1, C2 = symbols("x, C1, C2")
>>> dsolve(Derivative(y(x), x, 3) - (y(x)**2), y(x)).rhs
Traceback (most recent call last):
NotImplementedError: solve: Cannot solve -y(x)**2 + Derivative(y(x), (x, 3))
Equations With No Closed-Form Solution¶
As noted above, Some ODEs Cannot Be Solved Explicitly, Only Implicitly.
Also, some systems of differential equations have no closed-form solution because they are chaotic, for example the Lorenz system or a double pendulum described by these two differential equations
(simplified from ScienceWorld):
\[ 2 \theta_1''(t) + \theta_2''(t) \cos(\theta_1-\theta_2) + \theta_2'^2(t) \sin(\theta_1 - \theta_2) + 2g \sin(\theta_1) = 0 \]
\[ \theta_2''(t) + \theta_1''(t) \cos(\theta_1-\theta_2) - \theta_1'^2(t) \sin(\theta_1 - \theta_2) + g \sin(\theta_2) = 0 \]
>>> from sympy import symbols, Function, cos, sin, dsolve
>>> theta1, theta2 = symbols('theta1 theta2', cls=Function)
>>> g, t = symbols('g t')
>>> eq1 = 2*theta1(t).diff(t, t) + theta2(t).diff(t, t)*cos(theta1(t) - theta2(t)) + theta2(t).diff(t)**2*sin(theta1(t) - theta2(t)) + 2*g*sin(theta1(t))
>>> eq2 = theta2(t).diff(t, t) + theta1(t).diff(t, t)*cos(theta1(t) - theta2(t)) - theta1(t).diff(t)**2*sin(theta1(t) - theta2(t)) + g*sin(theta2(t))
>>> dsolve([eq1, eq2], [theta1(t), theta2(t)])
Traceback (most recent call last):
For such cases, you can solve the equations numerically as mentioned in Alternatives to Consider.
CAPRI Online Manual (update)
Table of Contents
Disaggregation of crop Areas
The principle of the distribution of crop areas is based on a few constraints only: full exhaustion of the available area of each spatial unit, vertical consistency, and primacy of land stability. Vertical consistency means that the sum of each land use type over all spatial units recovers the available land use at the higher spatial level. As the available information is not (necessarily) geo-referenced, the allocation of a given statistical item to a spatial unit is associated with uncertainty. For example, a farmer with residence in region A can have land also in regions B and C, but will declare them together, and they will be allocated to her residence (region A). Accordingly, there is some blurring, in particular at boundaries, and this is accounted for in the methodology: even at the highest disaggregation level, the land uses are principally to be interpreted as ‘land owned by a farmer with residence in this spatial unit’.
Primacy of land stability means that, absent any indication to the contrary (i.e. a new observation, a policy restricting previous land distributions, …), it is more likely that the spatial pattern remains similar to the previous (prior) pattern. Therefore, once a likely distribution of land and livestock has been determined on the basis of high-resolution FSS statistics, the model tries to stay as close as possible to this distribution. This is achieved with penalty factors that are activated as soon as the estimated land use area deviates from the prior values, assigning a higher penalty to deviations of permanent crops and forests, and very high penalties if a land use is estimated in a spatial unit where it did not exist in the prior data base.
The disaggregation model m_hpdCropSpat is described in Section Simulation model m_hpdCropSpat. Section Data sets describes the required input data, and Section Data preparation describes the preparation of the input data for their use in hpdCropSpat.
Simulation model m_hpdCropSpat
Equation 1 RULEVL_
The levels assigned to the FSUs must recover the given NUTS II area
both NUTS2 level and FSU level in 1000 ha
rur = grid cell in mode LAPM, rur = NUTS2 in all other modes
RULEVL_(rur,curact) ..
    SUM(regmap(rur,cur%spatunit%), v_levlCon(rur,cur%spatunit%,curact))
        =E= p_nutsLevl(rur,curact);
\(a_{c^*,h}\) = Area [parameter, km^2] cultivated with crop c or covered by ‘other land’ use excluding forest in spatial unit h
\(A_{c^*,h}\) = Area [parameter, km^2] cultivated with crop c or covered by ‘other land’ use excluding forest in region r
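As a toy numeric illustration of this vertical-consistency constraint (all numbers are hypothetical, in 1000 ha), the crop areas assigned to the spatial units of a region must add up exactly to the regional total:

```python
# Hypothetical wheat areas (1000 ha) assigned to three spatial units
fsu_wheat = {"fsu_a": 12.0, "fsu_b": 5.5, "fsu_c": 2.5}

# Regional (NUTS II) wheat area that the allocation must recover
nuts2_wheat = 20.0

print(sum(fsu_wheat.values()) == nuts2_wheat)  # True
```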
Equation 2 ADDUPGRID_
For several reasons it may be impossible to distribute all agricultural area under the given constraints of total available area in the spatial units (net of the area of ‘nogo’ units and forest area). In order to enable a feasible solution, an error term is introduced that allows the units to slightly shrink or grow. The reasons are:
• Statistical data are not (necessarily) geo-referenced; i.e. an area of crops (or a livestock) might be assigned to one unit/grid cell because this is where the farm is registered rather than the
physical location of crop/livestock
• Uncertainties in the data
• Inconsistencies of data sources (i.e. FSS agricultural statistics, Corine Land Cover data, CAPRI regional statistics)
1. – The FSU area must be exhausted, but the variable v_%spatunit%SizeChg
allows some flexibility if needed.
ADDUPGRID_(rur,cur%spatunit%) $ p_levlunit(rur,cur%spatunit%,"area") ..
\(a_h\) = Area [parameter, km^2] of spatial unit h
\(a_{c^*,h}\) =Area [parameter, km^2] cultivated with crop c or covered by ‘other land’ use excluding forest in spatial unit h
\(\epsilon_{a,h}\) = Error term, allowing a spatial unit to shrink or grow slightly in order to enable a feasible disaggregation of statistical data.
Equation 3 PDF_
The most likely solution is obtained with the ‘Highest Posterior Density’ method. A penalty function calculates deviations from prior information, applying uncertainties
• A random re-allocation of crops should be avoided. Therefore, a penalty is given with increasing deviation from the prior distribution to ensure stability in the time series
• In particular the ‘appearance’ of crops in spatial units where they have not been observed in the prior data should be restricted (disagg(“penalizenewcrops”))
• The error term for the area of the spatial units should be kept at a minimum (disagg(“penalizesizechange”))
PDF_ ..
# Scale density value a good couple of magnitudes for numerical reasons.
*[SUM((curact,regmap(rur,cur%spatunit%)) $ p_levlStde(cur%spatunit%,curact),
+SUM(regmap(rur,cur%spatunit%) $(v_%spatunit%SizeChg.LO(rur,cur%spatunit%) NE
# hsu area-weighted mean square of the deviation from prior mean area, scaled by its stdev
* SQR( (v_levlCon(rur,cur%spatunit%,curact)-p_levlunit(rur,cur%spatunit%,curact))
*( 1 $ p_levlunit(rur,cur%spatunit%,curact)
+ disagg("penalizenewcrops") $ (not p_levlunit(rur,cur%spatunit%,curact)))
$$ifi %MODE%==LAPM p_levlStde(cur%spatunit%,curact)
$$ifi NOT %MODE%==LAPM 1
)
)/SUM((regmap(rur,cur%spatunit%),curact)
$p_levlStde(cur%spatunit%,curact),p_levlunit(rur,cur%spatunit%,"area"))
)$sum((regmap(rur,cur%spatunit%),curact),p_levlStde(cur%spatunit%,curact))
# penalty for deviation from hsu area
+(SUM(regmap(rur,cur%spatunit%),
disagg("penalizesizechange")*p_levlunit(rur,cur%spatunit%,"area")*SQR((v_%spatunit%SizeChg(rur,cur%spatunit%)-1)))
/SUM(regmap(rur,cur%spatunit%), p_levlunit(rur,cur%spatunit%,"area"))
)$SUM(regmap(rur,cur%spatunit%),p_levlunit(rur,cur%spatunit%,"area"))
;
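The penalty logic can be paraphrased in a few lines of Python. This is a simplified sketch with hypothetical areas and standard deviations, not the actual GAMS objective: deviations from the prior are squared, scaled by their standard deviation, and inflated by a factor when a crop appears where the prior had none.

```python
def hpd_penalty(est, prior, stdev, penalize_new=2.0):
    """Squared deviation from the prior area per crop, scaled by the
    crop's standard deviation, inflated for crops absent from the prior."""
    total = 0.0
    for crop, a_est in est.items():
        a_prior = prior.get(crop, 0.0)
        dev = (a_est - a_prior) ** 2 / stdev[crop] ** 2
        if a_prior == 0.0:  # crop 'appears' where it was not observed
            dev *= penalize_new
        total += dev
    return total

prior = {"wheat": 10.0, "maize": 5.0}
stdev = {"wheat": 0.5, "maize": 0.25, "olives": 1.0}
est_close = {"wheat": 10.5, "maize": 5.0, "olives": 0.0}
est_new   = {"wheat": 10.5, "maize": 5.0, "olives": 1.0}

# An estimate that introduces a previously unobserved crop is penalized more
print(hpd_penalty(est_close, prior, stdev) < hpd_penalty(est_new, prior, stdev))  # True
```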
Model parameters
Some model parameters can be set by the user through the CAPRI GUI.
They are collected in the parameter ‘disagg’.
set disaggcontrol /
mincropshare "Minimum allowed cropshare per HSU"
relcropshare "Defines heterogeneity of crop shares for a crop per HSU"
relstdefix "Relative standard deviation, predefined if land needs 'to be fixed'"
relstdeperm "Relative standard deviation, predefined for permanent crops"
relstdeothe "Relative standard deviation, predefined for other land (large to avoid that other land pushes agriland around)"
penalizenewcrops "multiply deviations for crops predicted where they haven't been before"
penalizesizechange "multiply deviations for total HSU unit size"
* --- scalars controlling livestock disaggregation
weightRUMIfodduaar "Weighting between fodds and uaar to distribute initial RUMI numbers"
weightMONOcereuaar "Weighting between cereals and uaar to distribute initial NRUMI numbers"
minLSUdens "Minimum density for LSU allowed to not have them everywhere... "
# managing crop residues
minmcactSurs # Minimum surplus as compared to average over all crops
maxmcactSurs # Maximum surplus as compared to average over all crops
rangemcactSurs # Range of sursoi (max/average) below which the high sursoi are not reduced
• Stability of forests – disagg(“relstdefix”)
Forests cannot easily be ‘displaced’ and are likely to remain rooted as given in the land cover data sets. So far, estimations of changes of forest areas at the regional level are not included in the
disaggregation procedure.
The default value used for disagg(“relstdefix”) = 0.01
The lower the value the higher becomes the penalty if the estimates are deviating from the priors.
• Stability of permanent crops disagg(“relstdeperm”)
Permanent crops are long-term investments and require time to grow. Displacement of permanent crops is slow.
The default value used for disagg(“relstdeperm”) = 0.05.
• Coefficient of variation for ‘other land uses’ disagg(“relstdeothe”)
‘Other’ area is a lump of all non-agricultural areas. We consider this area as relatively flexible.
The default value used for disagg(“relstdeothe”) = 1.
• Penalization for new crops in spatial units disagg(“penalizenewcrops”)
New crops ‘appearing’ in spatial units, if they have not been in the priors data set, are penalized. The penalization factor is a multiplier of the squared deviation from the prior. Thus, the higher the factor, the higher the penalty.
The default value used for disagg(“penalizenewcrops”) = 2.0;
• Penalization of area changes of spatial units disagg(“penalizesizechange”)
The default value used for disagg(“penalizesizechange”) = 2.0;
• Minimum crop share allowed in the spatial unit disagg(“mincropshare”)
The minimum crop share which is allowed in the spatial unit, \(χ_{min}\), is used to calculate the lowest allowed crop share, in combination with the minimum relative crop share defining the level of spatial heterogeneity for a crop. See section 7.4.3.5.
\(χ_{min}\) can be set through the CAPRI GUI (tab CAPREG disaggregation options – “Suppression of crops if the share is very low”)
By default, \(χ_{min}\) is set to zero.
• Minimum relative crop share disagg(“relcropshare”)
The minimum relative crop share defining the level of spatial heterogeneity for a crop, \(χ_{rel}\), is used to calculate the lowest allowed crop share, in combination with the minimum crop share which is allowed in the spatial unit (see section 7.4.3.5).
\(χ_{rel}\) can be set through the CAPRI GUI (tab CAPREG disaggregation options – “Minimum relative crop share”)
By default, \(χ_{rel}\) is set to zero.
Defining bounds for the land use distribution model
Bounds for size changes of the total area of the spatial units
v_%spatunit%SizeChg.L (rur,cur%spatunit%) $p_temp3dim(rur,cur%spatunit%,"crops")= 1;
v_%spatunit%SizeChg.UP(rur,cur%spatunit%) $p_temp3dim(rur,cur%spatunit%,"crops")= 1.1;
v_%spatunit%SizeChg.LO(rur,cur%spatunit%) $p_temp3dim(rur,cur%spatunit%,"crops")= 0.9;
$(p_nutslevl(rur,"AREAcorr") and p_temp3dim(rur,cur%spatunit%,"crops"))= 2.0;
$(p_nutslevl(rur,"AREAcorr") and p_temp3dim(rur,cur%spatunit%,"crops"))= 0.5;
$$ifi %MODE%=="LAPM" v_%spatunit%SizeChg.UP(rur,cur%spatunit%) $p_temp3dim(rur,cur%spatunit%,"crops")=
$$ifi %MODE%=="LAPM" v_%spatunit%SizeChg.LO(rur,cur%spatunit%) $p_temp3dim(rur,cur%spatunit%,"crops")=
To enable the solver to find feasible solutions even in difficult situations, it is possible to expand or shrink the total area of the spatial units. This is consistent with the definition of the data compiled by the statistical offices, which link the area to the residence of the farmer rather than to the geographic location of each field.
Generally, we limit this area change to plus/minus 10% of the original size.
In cases where inconsistencies between data sets have already been identified (see 0), a higher degree of flexibility is allowed (factor 2), as it is not known in which spatial unit the inconsistency originates.
Only in the task ‘A priori land use distribution’ is the degree of flexibility calculated as a function of the correction that had to be applied to the regional area.
The bounds for the area-size change are hard-coded and cannot be changed by the user.
Setting standard deviations
Data do not come with any level of uncertainty attached, and there is no a priori information on what spatial distribution is more likely than any other.
Therefore, the uncertainty in the estimates is ‘guessed’ based on crop groups.
Other options tested (all standard deviations equal or scaling prior estimates to a plausible range) are currently not used. The standard deviations are only set at the first task. In subsequent
tasks, the standard deviations of the priors are used and ‘gap-filled’ if necessary.
$set changelapmstdev bygroups
$iftheni.std %changelapmstdev%=="scalestdevs"
$elseifi.std %changelapmstdev%=="allone"
$elseifi.std %changelapmstdev%=="bygroups"
p_levlstde(cur%spatunit%, %croptp%)
* $ p_levlstde(cur%spatunit%, %croptp%)
= 0.5;
p_levlstde(cur%spatunit%, %croptp%)
* $ p_levlstde(cur%spatunit%, %croptp%)
= 0.001 $sum(fssact2groups(%croptp%, "FORE"), 1)
+ 0.50 $sum(fssact2groups(%croptp%, "CERE"), 1) # Cereals incl those likely in rotation: Assumptions market oriented might change relatively quickly if price is correct
+ 0.25 $sum(fssact2groups(%croptp%, "FODD"), 1) # Fodder crops: roof, ofar, lgras. Assumption: link to livestock which do not shift around so quickly
+ 0.25 $sum(fssact2groups(%croptp%, "OILS"), 1) # Oil crops: rape, sunflower, soya. Assumption: relatively sticky
+ 0.15 $sum(fssact2groups(%croptp%, "VEGE"), 1) # Vegetables: flower, pulses, potatoes, sugar beet, ... but also tobacco, text etc. Assumption: require often infrastructure (greenhouse) so longer investments required
+ 0.05 $sum(fssact2groups(%croptp%, "TREE"), 1) # Permanent crops: olives, nurseries, fruit and nuts trees, vinyards. There are permanent
+ 0.05 $sum(fssact2groups(%croptp%, "PERM"), 1) # Permanent crops: olives, nurseries, fruit and nuts trees, vinyards. There are permanent
+ 0.80 $sum(fssact2groups(%croptp%, "REST"), 1) # Assumptions: can be easily pushed around
Data sets
Update pending
FSS 2010 data at nested grid levels
FSS 2010 data, gap-filled at 10km-NUTS3 overlay
Forest map
Data preparation
Re-mapping from FSS crops to CAPRI crops
# Convert to CAPRI activities (posteact)
#As grids and NUTS are not always consistent - use the grid-HSUs to fill with crops
Intersecting FSS 2010 10km x NUTS3 to FSU
The land use distribution model works at the spatial intersection of the prior and posterior spatial units. Historically (when CAPDIS was based on HSU) this intersection was an area determined by the
fraction of the HSU that lies within different FSS-admin grid cells. This fraction \(f_{hg}\in[0,1]\).
However, after the update to the FSU, each spatial unit is fully within one FSS-admin grid cell only, thus \(f_{hg}\in\{0,1\}\). The following text is therefore relevant only if CAPDIS is run with ‘old’ HSU units.
# Work on the intersection between grid and HSU (Line 585 ff)
" Work on intersection of HSU and grid for crops"' '" "'
# Distribute crops over intersected units and scale to total area (should already be consistent, but not always is...)
# Scale all areas such that the sum becomes the AREA of the unit (in case the HSU had to be split)
# - if LAPM predictions are consistent with total area
=sum(%croptp%$(not sameas(%croptp%,"FORE")),p_levlunit(%region%,cur%spatunit%,%croptp%));
$(p_temp3dim(%region%,cur%spatunit%,"allarea") and not sameas(%croptp%,"FORE"))
In order to ensure consistency of total area between the prior and the posterior data sets, all data are re-mapped into their intersection. This is achieved with the fraction of the spatial units of
one layer that is part of a unit of the second spatial layer:
$$a_{hg}=\sum_{h,g}\{a_h\cdot f_{hg} \}$$
\(a_{hg}\) = Area [parameter, km^2] of unit u intersecting spatial unit h and grid cell g.
\(a_h\) = Area [parameter, km^2] of spatial unit h
\(f_{hg}\) = Fraction of spatial unit h, which is covered by grid cell g
Cultivated crop areas are re-mapped to the intersecting units proportionally to the area fraction, assuming homogeneous distribution of each crop within each spatial unit h.
The LAPM predictions are not constrained to exhaust the total available area. However, this is a required characteristic in CAPRI. Therefore, the land use areas (crops, forest land and ‘other’ area)
are scaled so that their sum matches the total available area. As forest areas are obtained from a data set, which is assumed to be of high precision, forest areas are excluded from scaling.
$$a_{c^o,hg}=\sum_{h,g} \left\{ a_{c,h} \cdot f_{hg} \cdot \frac{a_{hg} - a_{forest,hg}}{\sum_{c^{o^\prime}} a_{c^{o^\prime}}} \right\},$$
\(c^*\) = Land use. \(c^*\in\{c,other \; land\}\)
\(a_{c^o,hg}\) = Area [parameter, km^2] cultivated with crop c or covered by ‘other land’ use excluding forest in unit u intersecting spatial unit h and grid cell g.
\(a_{c^o,h}\) = Area [parameter, km^2] cultivated with crop c or covered by ‘other land’ use excluding forest in spatial unit h
\(f_{hg}\) = Fraction of spatial unit h, which is covered by grid cell g
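Under these assumptions, the scaling step can be sketched as follows (hypothetical numbers in km²; forest is held fixed and the remaining land uses are scaled so that they exhaust the unit area):

```python
def rescale(landuse, area, forest):
    """Scale all non-forest land uses so the unit area is exactly
    exhausted; forest is taken from a high-precision data set and fixed."""
    scalable = {k: v for k, v in landuse.items() if k != "forest"}
    factor = (area - forest) / sum(scalable.values())
    out = {k: v * factor for k, v in scalable.items()}
    out["forest"] = forest
    return out

unit = rescale({"wheat": 3.0, "other": 1.0, "forest": 2.0}, area=10.0, forest=2.0)
print(round(sum(unit.values()), 6))  # 10.0
```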
Note that this re-mapping is done in each step, however it affects only the step ‘A priori land use distribution’ which is constrained by data for the intersections of the FSS-10km grid cells with
NUTS3 regions. These units are not aligned with the spatial units (HSU) for two reasons:
1. HSU are aligned with a regular grid of 0.25° x 0.25° but not to a grid of 10 km x 10 km
2. Even though HSU are aligned with a NUTS3 administrative region layer, changes in the definition of NUTS3 regions over time create shifts in the boundaries
In all other steps, the constraining data set is taken from CAPRI NUTS2 regions to which all spatial units are nested to and \(f_{hr}=1\forall h,r\).
The same holds if disaggregation is done into the FSU units, which are part of exactly one FSS grid cell.
Dealing with FSS grid cells with too much crop area
Line 613ff
p_temp3dim(%region%,"AREA","<0")$(p_nutslevl(%region%,"AREA") and (p_nutslevl(%region%,"OTHER")<0))
# Rescale total and crop area of units if the total area in the %region% had to be changed.
p_levlunit(%region%,cur%spatunit%,"OTHER")$(not p_temp3dim(%region%,"AREA","<0"))
Farm structure surveys collect data on crop areas and allocate them to the geographic location where the farmer resides. Therefore, it cannot be excluded that a spatial unit (grid cell) reports more crop area than the cell physically contains. It is not possible to ‘correct’ those allocations.
CAPRI works with an area-consistent approach, thus the total area available must be exactly matching the sum of the areas used for different purposes. As a re-allocation of ‘surplus’ crop areas is
not possible, we inflate the area of spatial units h so that all forest and crop areas can be accommodated. The area of ‘other’ land uses is adapted to ensure coherence.
$$a_{hg} \leftarrow a_{hg} \cdot \frac{a_{g,forest} + \sum_c a_{g,c}}{a_g}$$
$$a_{hg, other} = a_{hg} - a_{hg,forest} - \sum_c a_{hg,c}$$
Note that this ‘manipulation’ of the data is needed to avoid any potential infeasibilities in the land use disaggregation model maintaining the relevant information from the different data sets:
• The total crop areas data collected in the FSS
• The heterogeneity of crop areas (‘suitability’) as modelled with the LAPModel. This concerns both the spatial heterogeneity within a region across spatial units, as well as the relative abundance
of different crops in a single spatial unit.
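A minimal sketch of this inflation step (a single spatial unit with hypothetical km² values; the real procedure scales all units h within a grid cell g by the same factor):

```python
def inflate(area, forest, crop_areas):
    """Grow the unit if forest plus crops exceed its area, then let
    'other' land absorb whatever space remains."""
    needed = forest + sum(crop_areas.values())
    if needed > area:
        area = needed  # equivalent to scaling the area by needed/area
    other = area - forest - sum(crop_areas.values())
    return area, other

area, other = inflate(area=10.0, forest=2.0,
                      crop_areas={"wheat": 7.0, "maize": 3.0})
print(area, other)  # 12.0 0.0
```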
Adding previously unobserved crops
It might well be that crops occur in a grid cell or region which were not predicted or which had not been observed ‘before’ (e.g. when moving from ex-post to ex-ante simulation). In this case prior
estimates of the distribution need to be developed.
This is done on the basis of ‘similarity’ assuming that similar crops have similar preferences for natural conditions (or available infrastructure) and a similar spatial heterogeneity.
This ‘gap-filling’ is done in three hierarchical steps:
1. Average crop area of similar crops in the same spatial unit as defined in the set mactgroups:
set mactgroups(sgroups,*) "Groups with similar crops - LAPM activities" /
2. If there are no ‘similar’ crops in the region or grid cell, the average area of all available crops is used.
3. If there is still no prediction of the crop in the spatial unit, the same crop area is given to the prior estimates in all spatial units in the region/grid cell.
Checking availability of standard deviations
May become obsolete for the CAPDIS modules of the CAPRI stable release versions following STAR 2.4
The LAPModel provides not only predictions of crop shares for each HSU, but also an estimate of the standard deviation of each prediction. These standard deviations are ‘carried’ on throughout the CAPRI disaggregation steps.
The procedures described above however made evident that some crops might appear in spatial units where they have not been observed before; obviously, an estimate of the standard deviation must be
provided as well.
If the crops have been observed in other spatial units of the region or grid, the maximum relative standard deviation is used. If the crop has not been observed in the whole region or grid, the
maximum standard deviation over all observed crops is used. Still missing standard deviations are assumed to be 100%.
Standard deviations for ‘other land uses’ are also set to a default of 100%, but can be modified by the user (through the CAPRI GUI).
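The gap-filling hierarchy described above can be sketched as follows (hypothetical relative standard deviations; not the actual GAMS code):

```python
def fill_stdev(crop, stdevs_by_crop):
    """Relative standard deviation for `crop`: the maximum observed for
    that crop, else the maximum over all observed crops, else 100 %."""
    if stdevs_by_crop.get(crop):
        return max(stdevs_by_crop[crop])
    observed = [s for vals in stdevs_by_crop.values() for s in vals]
    return max(observed) if observed else 1.0

stdevs = {"wheat": [0.3, 0.5], "maize": [0.2]}
print(fill_stdev("wheat", stdevs))   # 0.5
print(fill_stdev("olives", stdevs))  # 0.5  (fallback: max over observed crops)
print(fill_stdev("olives", {}))      # 1.0  (default 100 %)
```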
Preparation for specific applications
Lines 690ff
p_temp3dim(%region%,"loshare",%croptp%)$ p_nutslevl(%region%,%croptp%)
= max(disagg("mincropshare"),p_temp3dim(%region%,"share",%croptp%)*disagg("relcropshare"));
<p_temp3dim(%region%,"loshare",%croptp%)) and not (sameas(%croptp%,"other")))
For some applications, the focus is on the analysis of dominant crops in each spatial unit, for example, when linking the result of the disaggregation with process-based crop models. This is done on
the basis of two parameters:
$$a_{hc}= 0 \leftarrow \frac {a_{h,c}} {\sum_{c^{\prime}}a_{h,c^\prime}} \lt max \left[ χ_{min},\frac{a_{r,c}}{\sum_{c^\prime}a_{r,c^\prime} } \cdot χ_{rel} \right]$$
\(χ_{min}\) = Minimum crop share allowed in the spatial unit
\(χ_{rel}\) = Minimum relative crop share – defines heterogeneity of crop shares for a crop in a spatial unit
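For given values of \(χ_{min}\) and \(χ_{rel}\) (hypothetical here, since both default to zero), the suppression rule amounts to:

```python
def suppress(crop_share, regional_share, chi_min=0.01, chi_rel=0.5):
    """Set a crop's share in a spatial unit to zero if it falls below
    max(chi_min, regional_share * chi_rel)."""
    threshold = max(chi_min, regional_share * chi_rel)
    return 0.0 if crop_share < threshold else crop_share

print(suppress(0.004, regional_share=0.02))  # 0.0  (below the cut-off)
print(suppress(0.030, regional_share=0.02))  # 0.03 (kept)
```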
disaggregation_of_crop_areas.txt · Last modified: 2022/11/07 10:23 by 127.0.0.1
Please Do not press "Submit" at the end of the exam until you are sure of your responses, as your test will be graded immediately.
1. This is a game which requires you to use four cards below to make an addition problem.
Each card can only be used once.
Here is the addition problem.
___ ___
+ ___ ___
What would be the largest answer?
2. Mrs Tan bought a T-shirt at a store. She paid the cashier $50 and received a change of $17.25. How much did the T-shirt cost?
3. Ben has 5 times as many coins as Kenneth.
They have 30 coins altogether.
How many coins does Ben have?
4. There are 118 children waiting for their football coach.
If each football coach can have only 8 children, how many coaches are needed?
5. There are 89 magazines arranged in 3 piles. The first pile has 19 more magazines than the second pile. The number of magazines in the second pile is twice the number of magazines in the third
pile. How many magazines are there in the second pile?
6. If John buys 7 pens, he would be short of $2. If he buys 6 pens, he would have $1 left. How much money does John have?
7. Mark, Samuel and Zack had 49 marbles altogether. Mark had twice as many marbles as Samuel, while Samuel had twice as many marbles as Zack. How many marbles does Mark have?
8. Joshua collected 517 ice-cream sticks.
Nicholas collected 230 more ice-cream sticks than Joshua.
a) How many ice-cream sticks did Nicholas collect?
b) How many ice-cream sticks did they collect altogether?
9. 5 classes of pupils went on an excursion.
Each class had 35 pupils and was accompanied by two teachers.
What was the total number of people who went for the excursion?
10. Jason has 249 more stamps than Mitchell. If Jason gives Mitchell 45 of his stamps, how many more stamps will Jason have than Mitchell?
11. 9 children bought a present for their friend.
They shared the cost of the present equally.
If each child paid $7, how much did the present cost?
12. Tom and Jerry spent a total of $90.50 at a shop. Tom spent $29.65. How much more did Jerry spend than Tom?
13. The printer prints 8 copies of photographs in 5 minutes.
How many copies of photographs can it print in 25 minutes?
14. Aaron bought 3 boxes of sweets. There were 6 packets of sweets in each box. If each packet contained 10 sweets, how many sweets did he have altogether?
Copyright © StarPoint Online Test Centre
A train of mass 1000 kg is moving with a speed of 24 m/s. Find the force required to stop it in 2 minutes.
Solution 1
Sure, let's solve this step by step:
Step 1: First, we need to convert the time from minutes to seconds because the standard unit of time in physics is seconds. So, 2 minutes = 2 * 60 = 120 seconds.
Step 2: Next, we need to find the acceleration of the train. Acceleration is the rate of change of velocity, so a = (0 - 24) / 120 = -0.2 m/s².
Step 3: Finally, apply Newton's second law: F = ma = 1000 * (-0.2) = -200 N, i.e. a retarding force of 200 N is required.
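The arithmetic in the worked solution above can be verified in a few lines (using F = m·a with a = Δv/Δt):

```python
# Force needed to stop a 1000 kg train moving at 24 m/s within 2 minutes.
mass = 1000.0        # kg
v_initial = 24.0     # m/s
t_stop = 2 * 60      # 2 minutes in seconds

# Acceleration is the rate of change of velocity
acceleration = (0.0 - v_initial) / t_stop   # -0.2 m/s^2
# Newton's second law
force = mass * acceleration                 # -200 N (retarding force)

print(acceleration)  # -0.2
print(force)         # -200.0
```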
3D Printed Robot Arm - Acceleration / Deceleration Planner Algorithm?
Hi All,
I’m working on a 3D Printed Robot Arm using the Dagor ESP32 board running SimpleFOC. As you can see in the video I’ve got it far enough along that I’m testing motion on each of the 6 axes.
My approach for motion on the robot arm is to let the user manually move the arm into the correct position, record it, then move to the next position, record that, etc. Then the user would play back
the recorded positions sort of like playing Gcode being sent to a 3D printer. When playing the recorded positions back, I would let them specify any velocity (slower/faster) and I would calculate the
acceleration and deceleration for each move.
Does anybody have any algorithm that already does this?
It would seem that others have likely run into this problem. In the world of stepper motors there is a popular library called AccelStepper. There’s also the open source Grbl and TinyG code for
steppers that do motion planning. Grbl uses trapezoidal planning to keep it simple. TinyG uses S-curve planning for even smoother acceleration/deceleration. Both of those also do motion planning
whereby they don’t slow all the way down to zero as they head into the next move if the angle of the move is not drastic.
At a minimum, I’d love to just do a simple trapezoidal acceleration/deceleration algorithm inside the SimpleFOC loop like the below graph:
Or it would be nice to get a bit fancier and do more of the S-curve:
This is an example of how TinyG visualizes its jerk acceleration / deceleration. I would LOVE to implement an algorithm like this.
3 Likes
Hi @jlauer very interesting project
I think what you are looking for is something like Trapezoidal Path Planner which, as far as I know, is not currently implemented at SimpleFOC level … but in my opinion it would be a very nice
feature to have
My knowledge in this field is extremely limited, but I think you can externally generate this kind of trajectory and command it to the SimpleFOC device.
PS: Just curious … do you use some kind of gear at your arm like planetary, harmonic , cycloidal… or are in direct drive?
Hi @JorgeMaker, that thread looks very promising. It appears the code added to ODrive was based on a research paper (https://hal.archives-ouvertes.fr/hal-01575057/document) which has a key chart in it.
There is a Python file they added to ODrive which seems to be the first test implementation for the algorithm. This could be super helpful. (https://github.com/odriverobotics/ODrive/blob/a8461c40fd6d97991adc9d553db3fdbce4a01dc4/tools/motion_planning/PlanTrap.py)
On your question, yes, the robot arm uses a 20:1 planetary gearbox that is backdrivable so each axis can be placed into a light torque mode so the user can position the robot arm where they want to
record the position. I’m getting about 12NM of torque out of each actuator, which as you can see in the video is enough power for the axes at the base to lift the entire arm, which is quite heavy. It
was a risk whether this would work, but I’m quite pleased with how things are coming together.
1 Like
Very interesting … I am developing a similar project, but I decided to design my own custom PCBs to run SimpleFOC and it is taking me longer than I initially thought.
For the issue of arm control, I had thought of trying both your strategy of recording positions and then playing them back, and also some kind of integration with ROS, an environment that, from what little I know, can offer trajectory-planning solutions at a higher level than that of the actuators.
Want to collaborate? This robot arm is super fun, but it is a lot of work. I’m doing it because I think the world needs an inexpensive 6 DOF robot arm that actually functions. Inexpensive BLDCs,
drivers, and projects like SimpleFOC are breakthrough territory for the maker community, which then makes this possible.
Here’s an image of the actuator and how complicated it is. It’s quite the 3D printing process to pull each one off, but once you get the hang of it, they come together nicely. The cost is about $100
per actuator. So, you’re talking an entire robot arm for maybe $700.
3 Likes
Very nice and compact design
My initial plan was to develop a complete actuator unit [PCB, sketch, motor selection and "gearbox" design] and then clone it in sufficient quantity to build a robotic arm. But due to a shortage of electronic components, and fearing running out of supplies or having to pay absurd prices, I had to modify my initial design to cut costs and advance PCB production before the full actuator design was completed.
I have a couple of questions for you …
• Did you 3D print your planetary gearbox or incorporated something like this ? If 3D printed … Were you able to get an acceptable backlash?
• Do you have any recommendations or references for the BLDC motor used? I have many doubts about which motors to choose since I do not want to fall short or spend too much on excessively powerful motors for this type of application. My heart says buy a little beast, something like this, but my wallet says look for something cheaper.
Would that motor run on 12V? The specs say 5V. I don't have any gimbal motor experience.
Not sure … the spec says you can get "Torque: 4970g @ 5V & 0.75A" but does not specify any upper limit as far as I know.
I guess there has to be an upper limit in terms of current and voltage, or any combination of both, but I couldn't find it. This brand offers very poor/undefined specifications.
The testing setup for my board has the BGM4108-130. For this motor, a little brother of the BGM8108-90, the specs say "Torque: 700g @ 5V & 0.43A" … I have tested at 11V, which is the limit of my regulated power supply, without any problems and with good results. I plan to test it at 24V once I get more clones of my board; for the moment I do not want to risk it.
This is the motor I’m using Gartt ML5010 @ $33/each in bulk. Amazon.com
All gears are 3D printed. There’s a category in @David_Gonzalez 's Discord for this robot arm. Discord
3 Likes
You have 0.24mm^2 enamelled copper wire, if I see it correctly - so there will be a max current rating for this. And the insulation will have a max voltage, but this will be much higher than 12V, I expect.
So 12V will be 1.25A or thereabouts - the motor will get hot after a while, but it should be fine to run at 12V.
I’d like to keep this thread focused on the acceleration / deceleration algorithm as I really could use the help of this community on that.
1 Like
On the original topic, I found a few links in my list about motion control for arms:
3 Likes
It has been many, many years since I used Matlab, but it seems its trajectory planning tools could be used here.
If you get a file with calculated target positions, you can use a Python script to reproduce those calculated paths following your desired profiles. … It would be great if such a thing was done in
SimpleFOC, but at the moment, as far as I know, it can’t be done by the library.
Maybe something similar can be implemented at sketch level by intercepting position commands and then interpolating toward the target with intermediate target positions that meet your desired profile in terms of velocity and acceleration … but I'm not sure.
1 Like
I am not sure it will apply to your specific need, but in the OpenPnP community there has been some development on S-curve motion control, driven mainly by Mark. Take a look at the source.
1 Like
Making progress here with that post from @JorgeMaker regarding the Trapezoidal Path Planner. This algorithm does seem to be able to generate the position the SimpleFOC device should be at for any
arbitrary time and calculates that position based on an acceleration / deceleration profile.
I’ve heavily tweaked the Python code (https://github.com/odriverobotics/ODrive/blob/a8461c40fd6d97991adc9d553db3fdbce4a01dc4/tools/motion_planning/PlanTrap.py) and this is exactly what I need for
giving SimpleFOC the positions each time through the loop (or each N times through the loop).
As an example, here’s a typical move I’d make on the robot arm. I’d go from position 0 (radians) to position 30 (radians, roughly 5 turns of motor). That is shown on the Y axis.
I would make this move very slowly if I’m moving a heavy object. So I would use velocity of 10 radians/second and an acceleration of 5 radians/second/second.
The Python code gives me a perfect acceleration / coast / deceleration profile. It calculates that my total move time is 5 secs: 2 seconds spent accelerating, 1 second in coast, and 2 seconds decelerating.
Start position (rads): 0
End position (rads): 30
Velocity for move (rads/s): 10
Acceleration (rads/s/s): 5
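These stage durations can be sanity-checked with a few lines, assuming (as in the example above) that the move starts from rest and the deceleration magnitude equals the acceleration:

```python
# Parameters of the move described above
Xi, Xf = 0.0, 30.0    # start / end position (rad)
Vmax = 10.0           # cruise velocity (rad/s)
Amax = 5.0            # accel = decel magnitude (rad/s^2)

dX = Xf - Xi
Ta = Vmax / Amax                   # time to reach cruise speed
d_accel = 0.5 * Amax * Ta**2       # distance covered while accelerating
Tv = (dX - 2 * d_accel) / Vmax     # coast time (decel mirrors accel)
Tf = Ta + Tv + Ta                  # total move time

print(Ta, Tv, Tf)  # 2.0 1.0 5.0
```

This confirms the 2 s / 1 s / 2 s breakdown the planner reports.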
Here is a test of the code that would get called each time through the loop at any arbitrary time to find what the current position should be, so that we can update the position target in SimpleFOC
as fast as possible.
t (seconds): 1.00
Acceleration stage
y (pos): 2.50
y_Accel (): 10.00
yd (vel): 5.00
ydd (acc): 5.00
t (seconds): 2.50
Coasting stage
y (pos): 15.00
y_Accel (): 10.00
yd (vel): 10.00
ydd (acc): 0.00
t (seconds): 4.00
Deceleration stage
y (pos): 27.50
y_Accel (): 10.00
yd (vel): 5.00
ydd (acc): -5.00
Here is the code that would get run each time through the loop:
Python Code calc_step_pos_at_arbitrary_time()
def calc_step_pos_at_arbitrary_time(t, Xf, Xi, Vi, Ar, Vr, Dr, Ta, Tv, Td, Tf):
    # t is the current time
    # t would be calculated based on microseconds() passed
    # since the start of the current move
    #t = 4.5

    # We only know acceleration (Ar and Dr), so we integrate to create
    # the velocity and position curves
    # we should do this once at the start of the move, not
    # each time thru this step method
    y_Accel = Xi + Vi*Ta + 0.5*Ar*Ta**2

    y = None
    yd = None
    ydd = None
    print("t (seconds): {:.2f}".format(t))
    if t < 0:  # Initial conditions
        print("Initial conditions")
        y = Xi
        yd = Vi
        ydd = 0
    elif t < Ta:  # Acceleration
        print("Acceleration stage")
        y = Xi + Vi*t + 0.5*Ar*t**2
        yd = Vi + Ar*t
        ydd = Ar
    elif t < Ta+Tv:  # Coasting
        print("Coasting stage")
        y = y_Accel + Vr*(t-Ta)
        yd = Vr
        ydd = 0
    elif t < Tf:  # Deceleration
        print("Deceleration stage")
        td = t-Tf
        y = Xf + 0*td + 0.5*Dr*td**2
        yd = 0 + Dr*td
        ydd = Dr
    elif t >= Tf:  # Final condition
        print("Final condition")
        y = Xf
        yd = 0
        ydd = 0
    else:
        raise ValueError("t = {} is outside of considered range".format(t))
    print("y (pos): {:.2f}".format(y))
    print("y_Accel (): {:.2f}".format(y_Accel))
    print("yd (vel): {:.2f}".format(yd))
    print("ydd (acc): {:.2f}".format(ydd))
    return (y, yd, ydd)
Then, just to test the algorithm further, let's decrease the acceleration to 2 radians/sec/sec, which is slow enough that we may never get a coast phase, because we reach our final destination before the maximum velocity of 10 radians/sec can be achieved.
Start position (rads): 0
End position (rads): 30
Velocity for move (rads/s): 10
Acceleration (rads/s/s): 2
This time it takes us 7.74 seconds to reach our destination. 3.87 on accelerate and 3.87 on decelerate. We never reach coast, so we have a triangle shape move rather than a trapezoidal shape.
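For this short-move case, the paper's peak-velocity formula Vr = s*sqrt((Dr*Vi² + 2*Ar*Dr*dX)/(Dr−Ar)) reduces to sqrt(Amax*dX) when Vi = 0 and Dmax = Amax, which makes the triangle numbers above easy to check:

```python
import math

# Triangle-profile check: 30 rad move, Amax = Dmax = 2 rad/s^2, from rest
Xi, Xf = 0.0, 30.0
Vmax, Amax = 10.0, 2.0

dX = Xf - Xi
# Peak velocity if we accelerate over half the distance and decelerate over the rest
Vr = math.sqrt(Amax * dX)   # sqrt(60) ~ 7.75 rad/s, below Vmax -> no coast phase
Ta = Vr / Amax              # ~3.87 s accelerating
Tf = 2 * Ta                 # ~7.75 s total (symmetric decel)

print(round(Vr, 2), round(Ta, 2), round(Tf, 2))
```

The total agrees with the 7.74 s reported above (the same number, truncated rather than rounded).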
Thus this algorithm is going to do the job for us. Here’s the main calculation code that would get run once at the start of the move:
Python Code calc_plan_trapezoidal_path_at_start()
def calc_move_parameters_at_start(Xf, Xi, Vi, Vmax, Amax, Dmax):
    dX = Xf - Xi                    # Distance to travel
    stop_dist = Vi**2 / (2*Dmax)    # Minimum stopping distance
    dXstop = np.sign(Vi)*stop_dist  # Minimum stopping displacement
    s = np.sign(dX - dXstop)        # Sign of coast velocity (if any)
    Ar = s*Amax    # Maximum Acceleration (signed)
    Dr = -s*Dmax   # Maximum Deceleration (signed)
    Vr = s*Vmax    # Maximum Velocity (signed)

    # If we start with a speed faster than cruising, then we need to decel instead of accel
    # aka "double deceleration move" in the paper
    if s*Vi > s*Vr:
        Ar = -s*Amax

    # Time to accel/decel to/from Vr (cruise speed)
    Ta = (Vr-Vi)/Ar
    Td = -Vr/Dr

    # Integral of velocity ramps over the full accel and decel times to get
    # minimum displacement required to reach cruising speed
    dXmin = Ta*(Vr+Vi)/2.0 + Td*(Vr)/2.0

    # Are we displacing enough to reach cruising speed?
    if s*dX < s*dXmin:
        print("Short Move:")
        # From paper:
        # Vr = s*math.sqrt((-(Vi**2/Ar)-2*dX)/(1/Dr-1/Ar))
        # Simplified for less divisions:
        Vr = s*math.sqrt((Dr*Vi**2 + 2*Ar*Dr*dX) / (Dr-Ar))
        Ta = max(0, (Vr - Vi)/Ar)
        Td = max(0, -Vr/Dr)
        Tv = 0
    else:
        print("Long move:")
        Tv = (dX - dXmin)/Vr  # Coasting time

    Tf = Ta+Tv+Td
    print("Xi: {:.2f}\tXf: {:.2f}\tVi: {:.2f}".format(Xi, Xf, Vi))
    print("Amax: {:.2f}\tVmax: {:.2f}\tDmax: {:.2f}".format(Amax, Vmax, Dmax))
    print("dX: {:.2f}\tdXst: {:.2f}\tdXmin: {:.2f}".format(dX, dXstop, dXmin))
    print("Ar: {:.2f}\tVr: {:.2f}\tDr: {:.2f}".format(Ar, Vr, Dr))
    print("Ta: {:.2f}\tTv: {:.2f}\tTd: {:.2f}".format(Ta, Tv, Td))
    return (Ar, Vr, Dr, Ta, Tv, Td, Tf)
Here is the final code that now needs to be transposed to C code for running with SimpleFOC. I think I’ll add a commander command like G for Gcode and have users submit something like “G30 V10 A5”
for move 30 radians at velocity of 10 (rads/s) at acceleration of 5 (rads/s/s).
# Copyright (c) 2021 John Lauer for modifications
# Original copyrights and code by
# Original URL https://github.com/odriverobotics/ODrive/blob/a8461c40fd6d97991adc9d553db3fdbce4a01dc4/tools/motion_planning/PlanTrap.py
# Copyright (c) 2018 Paul Guénette
# Copyright (c) 2018 Oskar Weigl
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# This algorithm is based on:
# FIR filter-based online jerk-constrained trajectory generation
# https://www.researchgate.net/profile/Richard_Bearee/publication/304358769_FIR_filter-based_online_jerk-controlled_trajectory_generation/links/5770ccdd08ae10de639c0ff7/FIR-filter-based-online-jerk-controlled-trajectory-generation.pdf
import math
import numpy as np

def calc_plan_trapezoidal_path_at_start(Xf, Xi, Vi, Vmax, Amax, Dmax):
    dX = Xf - Xi                    # Distance to travel
    stop_dist = Vi**2 / (2*Dmax)    # Minimum stopping distance
    dXstop = np.sign(Vi)*stop_dist  # Minimum stopping displacement
    s = np.sign(dX - dXstop)        # Sign of coast velocity (if any)
    Ar = s*Amax    # Maximum Acceleration (signed)
    Dr = -s*Dmax   # Maximum Deceleration (signed)
    Vr = s*Vmax    # Maximum Velocity (signed)

    # If we start with a speed faster than cruising, then we need to decel instead of accel
    # aka "double deceleration move" in the paper
    if s*Vi > s*Vr:
        Ar = -s*Amax

    # Time to accel/decel to/from Vr (cruise speed)
    Ta = (Vr-Vi)/Ar
    Td = -Vr/Dr

    # Integral of velocity ramps over the full accel and decel times to get
    # minimum displacement required to reach cruising speed
    dXmin = Ta*(Vr+Vi)/2.0 + Td*(Vr)/2.0

    # Are we displacing enough to reach cruising speed?
    if s*dX < s*dXmin:
        print("Short Move:")
        # From paper:
        # Vr = s*math.sqrt((-(Vi**2/Ar)-2*dX)/(1/Dr-1/Ar))
        # Simplified for less divisions:
        Vr = s*math.sqrt((Dr*Vi**2 + 2*Ar*Dr*dX) / (Dr-Ar))
        Ta = max(0, (Vr - Vi)/Ar)
        Td = max(0, -Vr/Dr)
        Tv = 0
    else:
        print("Long move:")
        Tv = (dX - dXmin)/Vr  # Coasting time

    Tf = Ta+Tv+Td

    # We only know acceleration (Ar and Dr), so we integrate to create
    # the velocity and position curves
    # we should do this once at the start of the move, not
    # each time thru this step method
    y_Accel = Xi + Vi*Ta + 0.5*Ar*Ta**2

    print("Xi: {:.2f}\tXf: {:.2f}\tVi: {:.2f}".format(Xi, Xf, Vi))
    print("Amax: {:.2f}\tVmax: {:.2f}\tDmax: {:.2f}".format(Amax, Vmax, Dmax))
    print("dX: {:.2f}\tdXst: {:.2f}\tdXmin: {:.2f}".format(dX, dXstop, dXmin))
    print("Ar: {:.2f}\tVr: {:.2f}\tDr: {:.2f}".format(Ar, Vr, Dr))
    print("Ta: {:.2f}\tTv: {:.2f}\tTd: {:.2f}".format(Ta, Tv, Td))
    print("y_Accel: {:.2f}".format(y_Accel))
    print("Tf (full move time): {:.2f}".format(Tf))
    return (Ar, Vr, Dr, Ta, Tv, Td, Tf, y_Accel)

def calc_step_pos_at_arbitrary_time(t, Xf, Xi, Vi, Ar, Vr, Dr, Ta, Tv, Td, Tf, y_Accel):
    # t is the current time
    # t would be calculated based on microseconds() passed
    # since the start of the current move
    #t = 4.5
    y = None
    yd = None
    ydd = None
    print("t (seconds): {:.2f}".format(t))
    if t < 0:  # Initial conditions
        print("Initial conditions")
        y = Xi
        yd = Vi
        ydd = 0
    elif t < Ta:  # Acceleration
        print("Acceleration stage")
        y = Xi + Vi*t + 0.5*Ar*t**2
        yd = Vi + Ar*t
        ydd = Ar
    elif t < Ta+Tv:  # Coasting
        print("Coasting stage")
        y = y_Accel + Vr*(t-Ta)
        yd = Vr
        ydd = 0
    elif t < Tf:  # Deceleration
        print("Deceleration stage")
        td = t-Tf
        y = Xf + 0*td + 0.5*Dr*td**2
        yd = 0 + Dr*td
        ydd = Dr
    elif t >= Tf:  # Final condition
        print("Final condition")
        y = Xf
        yd = 0
        ydd = 0
    else:
        raise ValueError("t = {} is outside of considered range".format(t))
    print("y (pos): {:.2f}".format(y))
    print("y_Accel (): {:.2f}".format(y_Accel))
    print("yd (vel): {:.2f}".format(yd))
    print("ydd (acc): {:.2f}".format(ydd))
    return (y, yd, ydd)

if __name__ == '__main__':
    Xf = 30      # Position we're moving to (rads)
    Xi = 0       # Initial position (rads)
    Vi = 0       # Initial velocity (rads/s)
    Vmax = 10    # Velocity max (rads/s)
    Amax = 5     # Acceleration max (rads/s/s)
    Dmax = Amax  # Deceleration max (rads/s/s)

    # Plan the path
    (Ar, Vr, Dr, Ta, Tv, Td, Tf, y_Accel) = calc_plan_trapezoidal_path_at_start(Xf, Xi, Vi, Vmax, Amax, Dmax)

    # Example calls each time through loop during move
    # Time = 1 second
    (Y, Yd, Ydd) = calc_step_pos_at_arbitrary_time(1, Xf, Xi, Vi, Ar, Vr, Dr, Ta, Tv, Td, Tf, y_Accel)
    # Time = 2.5 seconds
    (Y, Yd, Ydd) = calc_step_pos_at_arbitrary_time(2.5, Xf, Xi, Vi, Ar, Vr, Dr, Ta, Tv, Td, Tf, y_Accel)
    # Time = 4 seconds
    (Y, Yd, Ydd) = calc_step_pos_at_arbitrary_time(4, Xf, Xi, Vi, Ar, Vr, Dr, Ta, Tv, Td, Tf, y_Accel)
2 Likes
Got it working! Thanks for all the help from this thread. I’m amazed how quickly it came together. You can see in the video how well it works.
G30 to move to position 30 (rads)
GV100 to change velocity to 100 (rads/s)
GA5 to change acceleration to 5 (rads/s/s)
Here are the 2 files I added to the Dagor controller’s code base to get the G command added into the SimpleFOC commander. Keep in mind this is very messy code. Lots of globals. Not well contained
into a struct. If you issue a new Gcode move in the middle of an existing one the behavior is weird. However, this is an amazing start. Would love help from the community to refine the code.
This is the main file that’s as close to a port of the Python script I posted above as I could get it on quick order.
// Acceleration / Deceleration Planner for SimpleFOC & Dagor Controller
// Copyright (c) 2021 John Lauer for ACCELDECEL_PLANNER.ino
// This library allows you to control SimpleFOC with a plan for moves at a certain velocity as well
// as acceleration and deceleration at a certain rate. This code follows the same license as the original code
// given credit to below.
// Credit for most of this concept and much of the code used as a starting point for this library goes to
// the code inside ODrive available at:
// https://github.com/odriverobotics/ODrive/blob/a8461c40fd6d97991adc9d553db3fdbce4a01dc4/tools/motion_planning/PlanTrap.py
// # Original copyrights and code by
// # Copyright (c) 2018 Paul Guénette
// # Copyright (c) 2018 Oskar Weigl
// # Permission is hereby granted, free of charge, to any person obtaining a copy
// # of this software and associated documentation files (the "Software"), to deal
// # in the Software without restriction, including without limitation the rights
// # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// # copies of the Software, and to permit persons to whom the Software is
// # furnished to do so, subject to the following conditions:
// # The above copyright notice and this permission notice shall be included in all
// # copies or substantial portions of the Software.
// # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// # AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// # LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// # OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
// # SOFTWARE.
// # This algorithm is based on:
// # FIR filter-based online jerk-constrained trajectory generation
// # https://www.researchgate.net/profile/Richard_Bearee/publication/304358769_FIR_filter-based_online_jerk-controlled_trajectory_generation/links/5770ccdd08ae10de639c0ff7/FIR-filter-based-online-jerk-controlled-trajectory-generation.pdf
// Symbol Description
// Ta, Tv and Td Duration of the stages of the AL profile
// Xi and Vi Adapted initial conditions for the AL profile
// Xf Position set-point
// s Direction (sign) of the trajectory
// Vmax, Amax, Dmax and jmax Kinematic bounds
// Ar, Dr and Vr Reached values of acceleration and velocity
// float Xf = 30; // # Position we're moving to (rads)
// float Xi = 0;  // # Initial position (rads)
// float Vi = 0;  // # Initial velocity (rads/s)
float Vmax_ = 1000; //10; // # Velocity max (rads/s)
float Amax_ = 20; //5;    // # Acceleration max (rads/s/s)
float Dmax_ = Amax_;      // # Deceleration max (rads/s/s)

//struct Step_t {
float Y_;
float Yd_;
float Ydd_;

// Private variables
float Tf_, Xi_, Xf_, Vi_, Ar_, Dr_, Vr_, Ta_, Td_, Tv_, yAccel_;

float sign(float val) {
  if (val < 0) return -1.0f;
  if (val == 0) return 0.0f;
  //if val > 0:
  return 1.0f;
}

// A sign function where input 0 has positive sign (not 0)
float sign_hard(float val) {
  if (val < 0) return -1.0f;
  return 1.0f;
}

// This method should get called at the start of a move to generate all of the global variables
// for the duration of the move. These numbers will be used each time you call calc_step_pos_at_arbitrary_time()
// during the loop() because of course you would not want to calc these globals each time.
bool calc_plan_trapezoidal_path_at_start(float Xf, float Xi, float Vi, float Vmax, float Amax, float Dmax) {
  float dX = Xf - Xi;                          // Distance to travel
  float stop_dist = (Vi * Vi) / (2.0f * Dmax); // Minimum stopping distance
  float dXstop = sign(Vi) * stop_dist;         // Minimum stopping displacement
  float s = sign_hard(dX - dXstop);            // Sign of coast velocity (if any)
  Ar_ = s * Amax;   // Maximum Acceleration (signed)
  Dr_ = -s * Dmax;  // Maximum Deceleration (signed)
  Vr_ = s * Vmax;   // Maximum Velocity (signed)

  // If we start with a speed faster than cruising, then we need to decel instead of accel
  // aka "double deceleration move" in the paper
  if ((s * Vi) > (s * Vr_)) {
    Ar_ = -s * Amax;
  }

  // Time to accel/decel to/from Vr (cruise speed)
  Ta_ = (Vr_ - Vi) / Ar_;
  Td_ = -Vr_ / Dr_;

  // Integral of velocity ramps over the full accel and decel times to get
  // minimum displacement required to reach cruising speed
  float dXmin = 0.5f*Ta_*(Vr_ + Vi) + 0.5f*Td_*Vr_;

  // Are we displacing enough to reach cruising speed?
  if (s*dX < s*dXmin) {
    // Short move (triangle profile)
    Vr_ = s * sqrt((Dr_*sq(Vi) + 2*Ar_*Dr_*dX) / (Dr_ - Ar_));
    Ta_ = max(0.0f, (Vr_ - Vi) / Ar_);
    Td_ = max(0.0f, -Vr_ / Dr_);
    Tv_ = 0.0f;
  } else {
    // Long move (trapezoidal profile)
    Tv_ = (dX - dXmin) / Vr_;
  }

  // Fill in the rest of the values used at evaluation-time
  Tf_ = Ta_ + Tv_ + Td_;
  Xi_ = Xi;
  Xf_ = Xf;
  Vi_ = Vi;
  yAccel_ = Xi + Vi*Ta_ + 0.5f*Ar_*sq(Ta_); // pos at end of accel phase

  return true;
}

// # Pass in t as the current time
// # t would be calculated based on milliseconds() passed
// # since the start of the current move
void calc_step_pos_at_arbitrary_time(float t) {
  //Step_t trajStep;
  if (t < 0.0f) { // Initial Condition
    Y_ = Xi_;
    Yd_ = Vi_;
    Ydd_ = 0.0f;
  } else if (t < Ta_) { // Accelerating
    Y_ = Xi_ + Vi_*t + 0.5f*Ar_*sq(t);
    Yd_ = Vi_ + Ar_*t;
    Ydd_ = Ar_;
  } else if (t < Ta_ + Tv_) { // Coasting
    Y_ = yAccel_ + Vr_*(t - Ta_);
    Yd_ = Vr_;
    Ydd_ = 0.0f;
  } else if (t < Tf_) { // Deceleration
    float td = t - Tf_;
    Y_ = Xf_ + 0.5f*Dr_*sq(td);
    Yd_ = Dr_*td;
    Ydd_ = Dr_;
  } else if (t >= Tf_) { // Final Condition
    Y_ = Xf_;
    Yd_ = 0.0f;
    Ydd_ = 0.0f;
  } else {
    // TODO: report error here
  }
  return; // trajStep;
}
And then a utilities file that injects the commander command, parses it, and has the tick method called from the main loop.
// Utilities
// These are extra commands available to help
// use the Dagor controller. They register themselves into the commander
// so when you send ? to this controller in the Serial port, you can
// see these available utilities.
// This init is called from the setup step on boot of ESP32
// Add any registrations of utilities or commander commands here
void util_init() {
  // Add command to commander for torque test. Do this via
  // its init() method.
  // torque_test_init();

  // Add command to commander for restart
  command.add('r', util_restart_cb, "Restart the ESP32");
  // Add command to commander for break in
  command.add('b', util_breakin_cb, "Break in the motor");
  // Add command to commander for stats
  command.add('S', util_stats_cb, "Stats");
  // Enable toggle
  // command.add('e', util_enable_cb, "Enable/disable toggle");
  // AccelDecel Planner
  command.add('G', util_gcode_move_cb, "Gcode move Gxx, GVxx, or GAxx - Example: G30 moves to position in rads. GV10 sets velocity to 10 rads/s. GA5 sets acceleration to 5 rads/s/s.");
}

// 5 seconds later startup. Add your startup method here. It is called well after all other systems are booted.
void util_startup() {
  // For now, actually run break in so can do this without serial port attached
  // torque_test_cb("");

  // Start the Gcode utility
  util_gcode_startup();
}

// Add your tick method here for your utility. This is called at 5hz.
bool util_is_did_startup_method = false;
byte util_startup_tick_ctr = 0;
void util_tick() {
  if (util_is_did_startup_method == false) {
    // we have not run our startup routine yet. run it after 20 ticks.
    util_startup_tick_ctr++;
    if (util_startup_tick_ctr >= 20) {
      util_is_did_startup_method = true;
      util_startup_tick_ctr = 0;
      util_startup();
    } else {
      if (util_startup_tick_ctr == 1) {
        Serial.print("Waiting to start utils...");
      }
    }
  }
}
// Add your slow tick method here for your utility. This is called at 5hz,
// so use somewhat sparingly.
void util_5hz_tick() {
  // call our own tick method for util startup processing
  util_tick();
  // call your tick method here. will get called at 5hz.
  // torque_tick();
  // this is the slow tick for repeating the loop over and over for debug of arm
  util_gcode_loop_tick();
}

// Add your super fast tick method here for your utility. This is called as fast as possible,
// so use sparingly.
void util_fast_tick() {
  // util_breakin_micro_tick();
}

// Prints out the delta in a nice format of the milliseconds since start to now
// t is time in seconds = millis()/1000;
void timeToString(char* string, size_t size)
{
  unsigned long nowMillis = millis();
  unsigned long seconds = nowMillis / 1000;
  int days = seconds / 86400;
  seconds %= 86400;
  byte hours = seconds / 3600;
  seconds %= 3600;
  byte minutes = seconds / 60;
  seconds %= 60;
  snprintf(string, size, "%04d:%02d:%02d:%02d", days, hours, minutes, (int)seconds);
}
// ---------------------
// AccelDecel Planner
// Gcode Move Gxx Vxx Axx
// Example: G30 V10 A5
// ----------------------
void util_gcode_move_cb(char* cbArg) {
  //Serial.print("Gcode move. cbArg:");
  //Serial.println(cbArg);

  // Parse this string for vals
  if (cbArg[0] == 'V') {
    String str = String(cbArg);
    // Remove v so can convert to a float
    str = str.substring(1);
    // Serial.print("As string removing the v:");
    // Serial.println(str);
    // Now that we have clean string, convert it to a float and set velocity max global variable
    Vmax_ = str.toFloat();
    Serial.print("User wants velocity change. Vmax_: ");
    Serial.println(Vmax_);
    // Try calling the planner to use this new velocity value
    // We have to use the current pos, vel, and accel
    // calc_plan_trapezoidal_path_at_start(Xf_, Xi_, Vi_, Vmax_, Amax_, Dmax_);
    calc_plan_trapezoidal_path_at_start(Xf_, motor.shaft_angle, motor.shaft_velocity, Vmax_, Amax_, Dmax_);
  }
  else if (cbArg[0] == 'A') {
    String str = String(cbArg);
    // Remove a so can convert to a float
    str = str.substring(1);
    // Serial.print("As string removing the a:");
    // Serial.println(str);
    // Now that we have clean string, convert it to a float and set acceleration max global variable
    Amax_ = str.toFloat();
    Dmax_ = Amax_; // For now force decel to be same as accel. At some point add extra command to allow either/or to be set.
    Serial.print("User wants acceleration change. Amax_: ");
    Serial.println(Amax_);
    // Try calling the planner to use this new acceleration value
    // calc_plan_trapezoidal_path_at_start(Xf_, Xi_, Vi_, Vmax_, Amax_, Dmax_);
    calc_plan_trapezoidal_path_at_start(Xf_, motor.shaft_angle, motor.shaft_velocity, Vmax_, Amax_, Dmax_);
  }
  else if (cbArg[0] == 'L') {
    // Loop mode
    // This lets you just loop back and forth from one position to another
    util_gcode_loop_start();
  }
  else {
    // Presume they asked for a position move where the cbArg is just a number for the new position
    String str = String(cbArg);
    float newPosition = str.toFloat();
    Serial.print("Move to new position (rads): ");
    Serial.println(newPosition);
    // Start the whole Gcode move process
    util_gcode_moveto(newPosition);
  }
}
// This is called by the util startup method that triggers about 5 seconds after the actuator boots up
float util_gcode_loop_posA = 20;
void util_gcode_startup() {
// Serial.println("Started up AccelDecel Planner Gcode moves");
Serial.println("Entering endless loop of Gcode moves");
Vmax_ = 1000;
Amax_ = 10;
bool is_util_gcode_moving = false;
// -1 off
// 0 want to get going to position A but idle
// 1 going to position A
// 2 want to get going to position B (startPos) but idle
// 3 going to position B (startPos)
int util_gcode_loop_status = -1;
void util_gcode_loop_start() {
// tell the status variable that we want to get going to position A
util_gcode_loop_status = 0;
// now, next time into the tick() method we'll trigger the moveto
}
void util_gcode_loop_end() {
// tell the status variable that we want to do nothing
// that means each time into tick nothing happens, the move will eventually finish, all is good
util_gcode_loop_status = -1;
// now, next time into the tick() method we'll trigger nothing
}
void util_gcode_loop_tick() {
  if (util_gcode_loop_status == -1) {
    // user does not want it on, so exit
  }
  else if (util_gcode_loop_status == 0) {
    // they want to get going to posA
    util_gcode_loop_status = 1; // so next time into tick we know we're executing our move to posA
  }
  else if (util_gcode_loop_status == 1) {
    // we do nothing here while the move is going
    // if the move is done, though, we start moving back to posB (startPos)
    if (is_util_gcode_moving) {
      // they are still moving, do nothing
    }
    else {
      // they must be done moving,
      util_gcode_loop_status = 2;
    }
  }
  else if (util_gcode_loop_status == 2) {
    // they want to get going to posA
    util_gcode_loop_status = 3; // so next time into tick we know we're executing our move to posA
  }
  else if (util_gcode_loop_status == 3) {
    // we do nothing here while the move is going
    // if the move is done, though, we start moving back to posA (endPos)
    if (is_util_gcode_moving) {
      // they are still moving, do nothing
    }
    else {
      // they must be done moving,
      util_gcode_loop_status = 0;
    }
  }
}
float new_pos_ = 0;
static unsigned long util_gcode_start_move_millis = 0;
// When ready to move to new position, call this with the new position
void util_gcode_moveto(float newPos) {
// See if already moving. Exit if so.
// if (is_util_gcode_moving) {
// Serial.println("Already in a move. Cancelling request to move to new position.");
// return;
// }
// set our global of the new position
Xf_ = newPos;
// set our milliseconds passed variable so we have a t to work with
util_gcode_start_move_millis = millis();
// take the position from SimpleFOC and set it to our start position
Xi_ = motor.shaft_angle;
Serial.print("Starting at shaft angle of:");
// TODO: If we are being asked to move but are already in motion, we should go with the current velocity, position, and acceleration
// and keep moving.
// Now we need to do our calcs before we start moving
// TODO: Maybe stop passing vars and just use all globals
calc_plan_trapezoidal_path_at_start(Xf_, Xi_, Vi_, Vmax_, Amax_, Dmax_);
motor.target = Y_; // could possibly put this in the method above
// Tell user how long this move will take
Serial.print("Time to complete move (secs):");
// Velocity and Accel of move
Serial.print("Velocity for move: ");
Serial.print(", Acceleration: ");
// set our global bool so the tick method knows we have a move in motion
is_util_gcode_moving = true;
}
// motor.monitor_downsample = 250; // enable monitor every 250ms
// This should get call at 100hz
unsigned int gcode_step_ctr = 0;
static unsigned long lastTimeUpdateMillis = 0;
int util_gcode_period = 10; // 1000 / this number = Hz, i.e. 1000 / 100 = 10Hz, 1000 / 10 = 100Hz, 1000 / 5 = 200Hz, 1000 / 1 = 1000hZ
unsigned long util_gcode_time_now = 0;
void util_gcode_tick() {
  // This should get entered 100 times each second (100Hz)
  if ((unsigned long)(millis() - util_gcode_time_now) > util_gcode_period) {
    util_gcode_time_now = millis();
    // see if we are in a move or not
    if (is_util_gcode_moving) {
      // we are in a move, let's calc the next position
      float secs_since_start_of_move = (millis() - util_gcode_start_move_millis) / 1000.0f;
      motor.target = Y_;
      // Serial.print("t (secs): ");
      // Serial.print(secs_since_start_of_move);
      // Serial.print(", Y_ (rads): ");
      // Serial.println(Y_);
      // Called 4 times per second
      if (gcode_step_ctr >= 50) {
        Serial.print(", ");
        gcode_step_ctr = 0;
      }
      // see if we are done with our move
      if (secs_since_start_of_move >= Tf_) {
        // we are done with move
        // motor.monitor_downsample = 0; // disable monitor
        Serial.println("Done with move");
        is_util_gcode_moving = false;
      }
    }
    // // Called once/second
    // gcode_step_ctr++;
    // if (gcode_step_ctr >= 25) {
    //   char str[20] = "";
    //   timeToString(str, sizeof(str));
    //   Serial.print(str);
    //   gcode_step_ctr = 0;
    //   util_stats();
    // }
  }
}
// ----------------
// ENABLE TOGGLE
// ----------------
void util_enable_cb(char* cbArg) {
  Serial.print("Toggling enable. cbArg:");
  util_enable_toggle();
}
void util_enable_toggle() {
  int gate_val = digitalRead(enGate);
  if (gate_val == HIGH) {
    digitalWrite(enGate, LOW);
    Serial.println("Toggled enable to off.");
  } else {
    digitalWrite(enGate, HIGH);
    Serial.println("Toggled enable to on.");
// --------------
// RESTART
// ----------------
// The callback when user enters command at serial port. This restarts the ESP32.
void util_restart_cb(char* cbArg) {
  Serial.print("Restarting ESP32. cbArg:");
  ESP.restart();
}
// --------------
// BREAK IN MOTOR
// ----------------
bool breakin_is_running = false;
unsigned int breakin_tick_ctr = 0;
int breakin_vel_current = 0;
int breakin_vel_start = 1;
int breakin_vel_increment = 1;
int breakin_vel_end = 65; //125;
byte breakin_phase = 0; // 0:stop, 1:fwd accel to end, 2:fwd decel back to 0, 3:rev accel to -end, 4: rev decel back to 0
unsigned int util_breakin_stateT = 0;
// The callback when user enters command at serial port. This triggers the break in function.
void util_breakin_cb(char* cbArg) {
Serial.print("Toggling break in of motor. cbArg:");
  if (breakin_is_running) {
    Serial.println("Toggling OFF");
  }
  else {
    //Serial.println("Starting break in of motor.");
    Serial.println("Toggling ON");
    Serial.println("We will rotate at differing velocities and in different directions");
    Serial.println("in an unending loop. Enter 'b' again to toggle the break in off.");
    Serial.print("We will accel/decel to +/- velocity of: ");
    Serial.print("At a step increment of: ");
  }
}
void breakin_begin() {
// ok, now do move where we go to start position, pause for 2 seconds, then move back
breakin_tick_ctr = 0;
breakin_is_running = true;
//breakin_fwd_dir = true;
breakin_phase = 1; // 1:fwd accel to end
// make sure we are in velocity mode
breakin_vel_current = breakin_vel_start; // + torque_increment;
motor.target = breakin_vel_current;
//Serial.print("Setting velocity to: ");
//Serial.println("The micro tick callback will execute at 25 hertz");
Serial.println("Going to phase 1:fwd accel to end");
}
void breakin_step() {
  // increment velocity
  breakin_vel_current = breakin_vel_current + breakin_vel_increment;
  // see what phase we're in
  // 0:stop, 1:fwd accel to end, 2:fwd decel to 0, then rev accel to -end, 3:rev decel back to 0
  if (breakin_phase == 1) {
    // see if we're at end
    if (breakin_vel_current > breakin_vel_end) {
      // if we get here, we're at end of forward direction, so now we need to go reverse direction
      // meaning slow down
      breakin_vel_increment = -1 * breakin_vel_increment;
      breakin_vel_end = -1 * breakin_vel_end;
      breakin_phase = 2; // skip to 2:rev accel to -end
      Serial.println("Going to phase 2:fwd decel to 0, then rev accel to -end");
    } else {
      // i think do nothing for now
    }
  }
  else if (breakin_phase == 2) {
    // see if we're at -end
    if (breakin_vel_current < breakin_vel_end) {
      // if we get here, we're at end of reverse direction to -end, so now we need to go fwd direction
      // meaning slow down back to 0
      breakin_vel_increment = -1 * breakin_vel_increment;
      breakin_vel_end = -1 * breakin_vel_end;
      breakin_phase = 3;
      Serial.println("Going to phase 3:rev decel back to 0");
    } else {
      // i think do nothing for now
    }
  }
  else if (breakin_phase == 3) {
    if (breakin_vel_current > 0) {
      // This means we slowed all the way down to 0 so can end
      Serial.println("Going back to phase 1.");
      // Let's start all over again
      breakin_phase = 1;
    }
  }
  else {
    Serial.print("Got phase that should not get. breakin_phase:");
  }
  motor.target = breakin_vel_current;
  //Serial.println("Going to new velocity for break in of motor.");
  //Serial.print("Setting velocity to: ");
}
void breakin_finish() {
// done with breakin, so set bool that we're done
breakin_is_running = false;
breakin_vel_increment = abs(breakin_vel_increment);
breakin_vel_end = abs(breakin_vel_end);
breakin_phase = 0;
breakin_vel_current = -1;
motor.target = 0;
Serial.println("Ending break in of motor.");
//Serial.print("Moving to: ");
}
void util_breakin_micro_tick() {
// Functions inside this "if" will execute at a 50hz rate (the denominator)
if(util_breakin_stateT >= 100000/50){
    util_breakin_stateT = 0;
  }
}
// this is called at 5hz, that way we can trigger off of it
void breakin_tick() {
  if (breakin_is_running) {
    // At 2 seconds
    //if (breakin_tick_ctr == 10) {
    //  torque_tick_ctr++;
    //  torque_relax();
    //}
    // Each time in here (50Hz)
    // At 1 second
    if (breakin_tick_ctr >= 10) {
      //Serial.println("breakin_tick_ctr was 15, so doing next step...");
      breakin_tick_ctr = 0;
    }
  }
  else {
// --------------
// STATS
// ----------------
// current target value
float target;
// current motor angle
float shaft_angle;
// current motor velocity
float shaft_velocity;
// current target velocity
float shaft_velocity_sp;
// current target angle
float shaft_angle_sp;
// current voltage set to the motor (voltage.q, voltage.d)
DQVoltage_s voltage;
// current (if) measured on the motor (current.q, current.d)
DQCurrent_s current;
// phase voltages
float Ua, Ub, Uc;
void util_stats() {
Serial.print("Angle: ");
Serial.print(", Velocity: ");
Serial.print(", Voltage q: ");
Serial.print(", Temp: ");
Serial.println(util_temp(), 2);
}
// The callback when user enters command at serial port. This prints the current stats.
void util_stats_cb(char* cbArg) {
  //Serial.print("Stats. cbArg:");
  util_stats();
}
float util_temp() {
  float vOut = 0;
  for (byte ctr = 0; ctr < 10; ctr++) {
    vOut += analogRead(vTemp);
  }
  return ((((vOut / 10) * 3.3) / 4095) - 1.8577) / -0.01177;
}
// --------------
// TORQUE TEST
// ----------------
// Setup torque test
float torque_pre_start = 0;
float torque_start = 10;
float torque_end = 50;
float torque_increment = 5;
float torque_current = -1; // stores the current move to target
bool torque_is_testing = false;
int torque_tick_ctr = 0;
void torque_test_init() {
// add command to commander so we can do torque test
command.add('t', torque_test_cb, "Torque Test");
}
// The callback when user enters command at serial port. This starts test.
void torque_test_cb(char* cbArg) {
Serial.print("torque_test_cb. cbArg:");
void torque_begin() {
// ok, now do move where we go to start position, pause for 2 seconds, then move back
torque_tick_ctr = 0;
torque_is_testing = true;
// jerk control using voltage voltage ramp
// default value is 300 volts per sec ~ 0.3V per millisecond
//motor.PID_velocity.output_ramp = 1;
// setting the limits
// maximal velocity of the position control
motor.velocity_limit = 0.001; // rad/s - default 20
// acceleration control using output ramp
// this variable is in rad/s^2 and sets the limit of acceleration
motor.P_angle.output_ramp = 10; // default 1e6 rad/s^2
// angle P controller - default P=20
motor.P_angle.P = 0.5;
torque_current = torque_start; // + torque_increment;
motor.target = torque_current;
Serial.println("Starting torque test.");
Serial.print("Moving to: ");
}
// in this step we go back to the start number quickly, like after 2 ticks
// so that we don't put too much pressure on the mosfets or current draw
void torque_relax() {
motor.target = torque_pre_start;
Serial.println("Back to relax position.");
Serial.print("Moving to: ");
}
void torque_step() {
torque_current = torque_current + torque_increment;
// see if we're at end
if (torque_current > torque_end) {
motor.target = torque_current;
Serial.println("Next step on torque test.");
Serial.print("Moving to: ");
void torque_finish() {
// done testing at 2 seconds, so go back to start position
torque_is_testing = false;
torque_current = -1;
motor.target = torque_pre_start;
Serial.println("Ending torque test.");
Serial.print("Moving to: ");
}
// jerk control using voltage voltage ramp
// default value is 300 volts per sec ~ 0.3V per millisecond
//motor.PID_velocity.output_ramp = 1000;
// setting the limits
// maximal velocity of the position control
//motor.velocity_limit = 20; // rad/s - default 20
// this is called at 5hz, that way we can trigger off of it
void torque_tick() {
if (torque_is_testing) {
if (torque_tick_ctr == 50) {
else if (torque_tick_ctr == 100) {
Serial.println("torque_tick_ctr was 15, so doing next step...");
torque_tick_ctr = 0;
else {
Awesome project! I just gotta say, this inspired me to see if I can ditch the steppers on my pnp and do some sort of 3D printed gearing for a sensored gimbal. How fast can those motors spin @24v?
Amazing … I will try it very soon
Gimbal motors have KVs in the range 50-200, I’d say, so 1000-4000 RPM @24V
But I personally wouldn’t aim to drive them near their max. Their high Ohm windings typically have quite thin wires and in my experience they get very hot if you give them a lot of current for any
length of time…
Ok, thanks. That is ideal for pnp. I'm thinking sensored BLDC for an eboard with a reduction gearing. Even 1000-1500 rpm with a reduction gearing would be much faster than a NEMA 23 stepper. I would drive it way below rated watts, maybe 250 W. I'm contemplating whether it is possible to drive the BLDC using the built-in Halls and use the analog Hall encoder I'm working on, mounted on the reduction gear next to the BLDC… I see it's possible to get eboard motor/pulley belt mounts.
TI-BASIC:E Value
From Learn @ Cemetech
e is a constant on the TI-83 series calculators that holds the approximate value of Euler's number (see http://mathworld.wolfram.com/e.html), a number fairly important in calculus and other higher-level mathematics.
The approximate value, to as many digits as stored in the calculator, is 2.718281828459...
The main use of e is as the base of the exponential function e^( (which is also a separate function on the calculator), and of its inverse, the natural logarithm ln(. From these functions, others such as the trigonometric functions (e.g. sin() and the hyperbolic functions (e.g. sinh() can be defined. In re^θi mode, e is used in an alternate form of expressing complex numbers.
Important as the number e is, nine times out of ten you won't need the constant itself when using your calculator, but rather the e^( and ln( functions.
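The two standard characterizations of e behind the e^( and ln( functions are:

```latex
e \;=\; \lim_{n \to \infty}\left(1 + \frac{1}{n}\right)^{n}
  \;=\; \sum_{k=0}^{\infty} \frac{1}{k!}
  \;\approx\; 2.718281828459\ldots
```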
Related Commands
Search result: Catalogue data in Spring Semester 2023
Computer Science Master
Major in Theoretical Computer Science
Elective Courses
Number Title Type ECTS Hours Lecturers
252-0408-00L Cryptographic Protocols — W, 6 credits, 2V + 2U, M. Hirt
Abstract: In a cryptographic protocol, a set of parties wants to achieve some common goal, while some of the parties are dishonest. The most prominent example of a cryptographic protocol is multi-party computation, where the parties compute an arbitrary (but fixed) function of their inputs, while maintaining the secrecy of the inputs and the correctness of the outputs even if some of the parties try to cheat.
Learning objective: To know and understand a selection of cryptographic protocols and to be able to analyze and prove their security and efficiency.
Content: The selection of considered protocols varies. Currently, we consider multi-party computation, secret-sharing, broadcast and Byzantine agreement. We look at both the synchronous and the asynchronous communication model, and focus on simple protocols as well as on highly-efficient protocols.
Lecture notes: We provide handouts of the slides. For some of the topics, we also provide papers and/or lecture notes.
Prerequisites / Notice: A basic understanding of fundamental cryptographic concepts (as taught for example in the course Information Security) is useful, but not required.
Competencies: Concepts and Theories (assessed); Techniques and Technologies (assessed); Analytical Competencies (assessed); Decision-making (fostered); Creative Thinking (fostered); Critical Thinking (fostered).
252-1424-00L Models of Computation — W, 6 credits, 2V + 2U, M. Cook
Abstract: This course surveys many different models of computation: Turing Machines, Cellular Automata, Finite State Machines, Graph Automata, Circuits, Tilings, Lambda Calculus, Fractran, Chemical Reaction Networks, Hopfield Networks, String Rewriting Systems, Tag Systems, Diophantine Equations, Register Machines, Primitive Recursive Functions, and more.
Learning objective: The goal of this course is to become acquainted with a wide variety of models of computation, to understand how models help us to understand the modeled systems, and to be able to develop and analyze models appropriate for new systems.
Content: This course surveys many different models of computation: Turing Machines, Cellular Automata, Finite State Machines, Graph Automata, Circuits, Tilings, Lambda Calculus, Fractran, Chemical Reaction Networks, Hopfield Networks, String Rewriting Systems, Tag Systems, Diophantine Equations, Register Machines, Primitive Recursive Functions, and more.
263-4509-00L Complex Network Models — W, 5 credits, 2V + 2A, J. Lengler
Abstract: Complex network models are random graphs that feature one or several properties observed in real-world networks (e.g., social networks, internet graph, www). Depending on the application, different properties are relevant, and different complex network models are useful. This course gives an overview over some relevant models and the properties they do and do not cover.
Learning objective: The students get familiar with a portfolio of network models, and they know their features and shortcomings. For a given application, they can identify relevant properties for this application and can select an appropriate network model.
Content: Network models: Erdös-Renyi random graphs, Chung-Lu graphs, configuration model, Kleinberg model, geometric inhomogeneous random graphs. Properties: degree distribution, structure of giant and smaller components, clustering coefficient, small-world properties, community structures, weak ties.
Lecture notes: The script is available in moodle or at https://as.inf.ethz.ch/people/members/lenglerj/CompNetScript.pdf. It will be updated during the semester.
Literature: Latora, Nicosia, Russo: "Complex Networks: Principles, Methods and Applications"; van der Hofstad: "Random Graphs and Complex Networks. Volume 1".
Prerequisites / Notice: The students must be familiar with the basics of graph theory and of probability theory (e.g. linearity of expectation, inequalities of Markov, Chebyshev, Chernoff). The course "Randomized Algorithms and Probabilistic Methods" is helpful, but not required.
Competencies: Concepts and Theories (assessed); Techniques and Technologies (assessed); Analytical Competencies (assessed).
263-4510-00L Introduction to Topological Data Analysis — W, 8 credits, 3V + 2U, P. Schnider
Abstract: Topological Data Analysis (TDA) is a relatively new subfield of computer science which uses techniques from algebraic topology and computational geometry and topology to analyze and quantify the shape of data. This course will introduce the theoretical foundations of TDA.
Learning objective: The goal is to make students familiar with the fundamental concepts, techniques and results in TDA. At the end of the course, students should be able to read and understand current research papers and have the necessary background knowledge to apply methods from TDA to other projects.
Content: Mathematical background (Topology, Simplicial complexes, Homology), Persistent Homology, Complexes on point clouds (Čech complexes, Vietoris-Rips complexes, Delaunay complexes, Witness complexes), the TDA pipeline, Reeb Graphs, Mapper.
Literature: Main reference: Tamal K. Dey, Yusu Wang: Computational Topology for Data Analysis, 2021. Other references: Herbert Edelsbrunner, John Harer: Computational Topology: An Introduction, American Mathematical Society, 2010 (https://bookstore.ams.org/mbk-69); Gunnar Carlsson, Mikael Vejdemo-Johansson: Topological Data Analysis with Applications, Cambridge University Press, 2021; Robert Ghrist: Elementary Applied Topology, 2014; Allen Hatcher: Algebraic Topology, Cambridge University Press, 2002.
Prerequisites / Notice: The course assumes knowledge of discrete mathematics, algorithms and data structures and linear algebra, as supplied in the first semesters of Bachelor Studies at ETH.
Competencies: Concepts and Theories (assessed); Techniques and Technologies (assessed); Analytical Competencies (assessed); Problem-solving (assessed); Project Management (fostered); Communication (assessed); Cooperation and Teamwork (fostered); Self-presentation and Social Influence (fostered); Creative Thinking (fostered).
263-4656-00L Digital Signatures — W, 5 credits, 2V + 2A, D. Hofheinz
Abstract: Digital signatures as one central cryptographic building block. Different security goals and security definitions for digital signatures, followed by a variety of popular and fundamental signature schemes with their security analyses.
Learning objective: The student knows a variety of techniques to construct and analyze the security of digital signature schemes. This includes modularity as a central tool of constructing secure schemes, and reductions as a central tool to proving the security of schemes.
Content: We will start with several definitions of security for signature schemes, and investigate the relations among them. We will proceed to generic (but inefficient) constructions of secure signatures, and then move on to a number of efficient schemes based on concrete computational hardness assumptions. On the way, we will get to know paradigms such as hash-then-sign, one-time signatures, and chameleon hashing as central tools to construct secure signatures.
Literature: Jonathan Katz, "Digital Signatures."
Prerequisites / Notice: Ideally, students will have taken the D-INFK Bachelors course "Information Security" or an equivalent course at Bachelors level.
272-0300-00L Algorithmics for Hard Problems — W, 5 credits, 2V + 1U + 1A, H.-J. Böckenhauer, D. Komm
This course does not include the Mentored Work Specialised Courses with an Educational Focus in Computer Science A.
Abstract: This course unit looks into algorithmic approaches to the solving of hard problems, particularly with moderately exponential-time algorithms and parameterized algorithms. The seminar is accompanied by a comprehensive reflection upon the significance of the approaches presented for computer science tuition at high schools.
Learning objective: To systematically acquire an overview of the methods for solving hard problems. To get deeper knowledge of exact and parameterized algorithms.
Content: First, the concept of hardness of computation is introduced (repeated for the computer science students). Then some methods for solving hard problems are treated in a systematic way. For each algorithm design method, it is discussed what guarantees it can give and how we pay for the improved efficiency. A special focus lies on moderately exponential-time algorithms and parameterized algorithms.
Lecture notes: Slides and other materials will be provided.
Literature: J. Hromkovic: Algorithmics for Hard Problems, Springer 2004. R. Niedermeier: Invitation to Fixed-Parameter Algorithms, 2006. M. Cygan et al.: Parameterized Algorithms, 2015. F. Fomin et al.: Kernelization, 2019. F. Fomin, D. Kratsch: Exact Exponential Algorithms, 2010.
Competencies: Concepts and Theories (assessed); Analytical Competencies (assessed); Problem-solving (assessed); Communication (fostered); Cooperation and Teamwork (fostered); Self-presentation and Social Influence (fostered); Creative Thinking (assessed); Critical Thinking (assessed); Self-awareness and Self-reflection (fostered); Self-direction and Self-management (fostered).
272-0302-00L Approximation and Online Algorithms — W, 5 credits, 2V + 1U + 1A, D. Komm
Does not take place this semester.
Abstract: This lecture deals with approximative algorithms for hard optimization problems and algorithmic approaches for solving online problems as well as the limits of these approaches.
Learning objective: Get a systematic overview of different methods for designing approximative algorithms for hard optimization problems and online problems. Get to know methods for showing the limitations of these approaches.
Content: Approximation algorithms are one of the most successful techniques to attack hard optimization problems. Here, we study the so-called approximation ratio, i.e., the ratio of the cost of the computed approximating solution and an optimal one (which is not computable efficiently).
For an online problem, the whole instance is not known in advance, but it arrives piecewise, and for every such piece a corresponding part of the definite output must be given. The quality of an algorithm for such an online problem is measured by the competitive ratio, i.e., the ratio of the cost of the computed solution and the cost of an optimal solution that could be given if the whole input was known in advance.
The contents of this lecture are:
- the classification of optimization problems by the reachable approximation ratio,
- systematic methods to design approximation algorithms (e.g., greedy strategies, dynamic programming, linear programming relaxation),
- methods to show non-approximability,
- classic online problems like paging or scheduling problems and corresponding algorithms,
- randomized online algorithms,
- the design and analysis principles for online algorithms, and
- limits of the competitive ratio and the advice complexity as a way to do a deeper analysis of the complexity of online problems.
Literature: The lecture is based on the following books: J. Hromkovic: Algorithmics for Hard Problems, Springer, 2004. D. Komm: An Introduction to Online Computation: Determinism, Randomization, Advice, Springer, 2016. Additional literature: A. Borodin, R. El-Yaniv: Online Computation and Competitive Analysis, Cambridge University Press, 1998.
Competencies: Concepts and Theories (assessed); Analytical Competencies (assessed); Problem-solving (assessed); Communication (fostered); Cooperation and Teamwork (fostered); Self-presentation and Social Influence (fostered); Creative Thinking (assessed); Critical Thinking (assessed); Self-awareness and Self-reflection (fostered); Self-direction and Self-management (fostered).
401-3052-10L Graph Theory — W, 10 credits, 4V + 1U, B. Sudakov
Abstract: Basics, trees, Cayley's formula, matrix tree theorem, connectivity, theorems of Mader and Menger, Eulerian graphs, Hamilton cycles, theorems of Dirac, Ore, Erdös-Chvatal, matchings, theorems of Hall, König, Tutte, planar graphs, Euler's formula, Kuratowski's theorem, graph colorings, Brooks' theorem, 5-colorings of planar graphs, list colorings, Vizing's theorem, Ramsey theory, Turán's theorem.
Learning objective: The students will get an overview over the most fundamental questions concerning graph theory. We expect them to understand the proof techniques and to use them autonomously on related problems.
Lecture notes: Lecture will be only at the blackboard.
Literature: West, D.: "Introduction to Graph Theory"; Diestel, R.: "Graph Theory". Further literature links will be provided in the lecture.
Prerequisites / Notice: Students are expected to have a mathematical background and should be able to write rigorous proofs.
401-3902-21L Network & Integer Optimization: From Theory to Application — W, 6 credits, 3G, R. Zenklusen
Abstract: This course covers various topics in Network and (Mixed-)Integer Optimization. It starts with a rigorous study of algorithmic techniques for some network optimization problems (with a focus on matching problems) and moves to key aspects of how to attack various optimization settings through well-designed (Mixed-)Integer Programming formulations.
Learning objective: Our goal is for students to both get a good foundational understanding of some key network algorithms and also to learn how to effectively employ (Mixed-)Integer Programming formulations, techniques, and solvers to tackle a wide range of discrete optimization problems.
Content: Key topics include:
- Matching problems;
- Integer Programming techniques and models;
- Extended formulations and strong problem formulations;
- Solver techniques for (Mixed-)Integer Programs;
- Decomposition approaches.
Literature: Bernhard Korte, Jens Vygen: Combinatorial Optimization. 6th edition, Springer, 2018. Alexander Schrijver: Combinatorial Optimization: Polyhedra and Efficiency. Springer, 2003 (3 volumes). Vanderbeck François, Wolsey Laurence: Reformulations and Decomposition of Integer Programs. Chapter 13 in: 50 Years of Integer Programming 1958-2008. Springer, 2010. Alexander Schrijver: Theory of Linear and Integer Programming. John Wiley, 1986.
Prerequisites / Notice: Solid background in linear algebra. Preliminary knowledge of Linear Programming is ideal but not a strict requirement. Prior attendance of the course Linear & Combinatorial Optimization is a plus.
Competencies: Concepts and Theories (assessed); Analytical Competencies (assessed); Decision-making (assessed); Problem-solving (assessed); Communication (assessed); Creative Thinking (assessed).
402-0448-01L Quantum Information Processing I: Concepts — W, 5 credits, 2V + 1U, J. Home
This theory part QIP I together with the experimental part 402-0448-02L QIP II (both offered in the Spring Semester) combine to the core course in experimental physics "Quantum Information Processing" (totally 10 ECTS credits). This applies to the Master's degree programme in Physics.
Abstract: The course covers the key concepts of quantum information processing, including quantum algorithms which give the quantum computer the power to compute problems outside the reach of any classical supercomputer. Key concepts such as quantum error correction are discussed in detail. They provide fundamental insights into the nature of quantum states and measurements.
Learning objective: By the end of the course students are able to explain the basic mathematical formalism of quantum mechanics and apply it to quantum information processing problems. They are able to adapt and apply these concepts and methods to analyse and discuss quantum algorithms and other quantum information-processing protocols.
Content: The topics covered in the course will include quantum circuits, gate decomposition and universal sets of gates, efficiency of quantum circuits, quantum algorithms (Shor, Grover, Deutsch-Jozsa, ...), quantum error correction, fault-tolerant designs, and quantum simulation.
Lecture notes: Will be provided.
Literature: Michael Nielsen and Isaac Chuang: Quantum Computation and Quantum Information, Cambridge University Press.
Prerequisites / Notice: A good understanding of finite dimensional linear algebra is recommended.
Competencies: Concepts and Theories (assessed); Techniques and Technologies (assessed); Analytical Competencies (assessed).
Stats Homework
A Statistical Analysis Package for Students
Stats Homework is a statistical analysis system that was designed and written for students who are enrolled in introductory statistics courses. It is not designed to compete with professional packages such as SPSS®; it was designed to introduce students to statistical computing and to facilitate students' engagement and learning in their introductory statistics courses.
Stats Homework is very easy to use. It utilizes a simple approach to data management and a simple series of menus to specify basic statistical analyses. Students can conduct statistical analyses with very little time spent learning how to use this software. In addition, when students conduct analyses with this software, they are provided with rich outputs that include far more basic statistics than are typically provided by the major professional packages.
Stats Homework offers a broad variety of procedures that might be useful to students and teachers working together in basic statistics courses. It includes traditional statistical analyses such as correlation/regression, t tests, ANOVA, and chi-squared tests, as well as alternative approaches such as non-parametric statistics and permutation tests. It includes many procedures for producing graphs and charts such as box plots, stem-and-leaf plots, dot plots, bar charts, histograms, and scatter plots. Finally, it includes powerful tools for simulating data that can be used to demonstrate a variety of statistical principles covered in basic statistics courses.
Click the link below to see the download page for Stats Homework. Documentation can be accessed by pulling down the help menu in Stats Homework, or by clicking the link below. If you have any
trouble downloading or running this software, please let me know.
Special Note: I would like to acknowledge the support and guidance of two colleagues who have been invaluable in improving Stats Homework, Dr. Chris Olsen and the late Dr. Robert Hayden. Their
feedback has guided me to significantly improve the input and output of many of the procedures that instructors and students will use in their courses. These changes have greatly enhanced the
usability of this software, and I’m grateful for their help.
If you are interested in using this software, I could really use your help. In particular, I would very much like instructors and their students to use this software in the coming semesters and to
provide me with feedback on how to improve the software and its documentation.
If you are interested, please contact me:
Victor Bissonnette, Ph.D.
Berry College
Box 5019
Mount Berry, GA 30149
Stats Homework is free to students and instructors. This software may never be sold for profit or provided as part of a larger package. This software is distributed “as is” without any warranty of
any kind. In no event will Victor L. Bissonnette or Berry College be liable for any damages caused by the use or misuse of this software package.
Reconstructing approximate phylogenetic trees from quartet samples
The reconstruction of evolutionary trees (also known as phylogenies) is central to many problems in Biology. Accurate phylogenetic reconstruction methods are currently limited to a maximum of a few dozen species. Therefore, in order to construct a tree over larger sets of species, a method capable of accurately inferring trees over small, overlapping sets, and subsequently merging these
sets into a tree over the complete set, is required. A quartet tree is the smallest informative piece of information, and quartet-based methods work by combining quartet trees into a big tree. However, this problem is NP-hard even when the set of quartet trees is compatible (i.e., all quartets agree with a single tree). The general problem of approximating quartets, or maximum quartet consistency (MQC), even for compatible inputs, has been open for nearly twenty years. Despite its importance, the only rigorous results for approximating quartets are the naive 1/3 approximation that applies to the general case and a PTAS when the input is the complete set of all (n choose 4) possible quartets. Even when it is possible to determine the correct quartet induced by every four taxa, the time needed to generate the complete set of all quartets may be impractical. A faster approach is to sample at random just m ≪ (n choose 4) quartets and provide this sample as the input. In this work we present the first approximation algorithm whose guaranteed approximation ratio is strictly better than 1/3 when the input is any random sample of m compatible quartets. The approximation ratio we obtain is 0.425 for general m, and 0.468 when m = ω(n^2). An important ingredient in our algorithm involves solving a weighted Max-Cut problem in a certain graph induced by the set of input quartets. We also show an extension of the PTAS algorithm to handle dense, rather than complete, inputs.
Publication series
Name Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms
Conference 21st Annual ACM-SIAM Symposium on Discrete Algorithms
Country/Territory United States
City Austin, TX
Period 17/01/10 → 19/01/10
ASJC Scopus subject areas
• Software
• General Mathematics
10.1. itertools — Functions creating iterators for efficient looping¶
This module implements a number of iterator building blocks inspired by constructs from APL, Haskell, and SML. Each has been recast in a form suitable for Python.
The module standardizes a core set of fast, memory efficient tools that are useful by themselves or in combination. Together, they form an “iterator algebra” making it possible to construct
specialized tools succinctly and efficiently in pure Python.
For instance, SML provides a tabulation tool: tabulate(f) which produces a sequence f(0), f(1), .... The same effect can be achieved in Python by combining map() and count() to form map(f, count()).
These tools and their built-in counterparts also work well with the high-speed functions in the operator module. For example, the multiplication operator can be mapped across two vectors to form an
efficient dot-product: sum(map(operator.mul, vector1, vector2)).
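Both patterns can be exercised directly (an illustrative snippet added here, not part of the original reference):

```python
import operator
from itertools import count, islice

# tabulate(f) as map(f, count()): yields f(0), f(1), f(2), ...
squares = map(lambda x: x * x, count())
assert list(islice(squares, 5)) == [0, 1, 4, 9, 16]

# dot product of two vectors via operator.mul: 1*4 + 2*5 + 3*6
v1, v2 = [1, 2, 3], [4, 5, 6]
assert sum(map(operator.mul, v1, v2)) == 32
```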
Infinite Iterators:
│Iterator│ Arguments │ Results │ Example │
│count() │start, [step]│start, start+step, start+2*step, ... │count(10) --> 10 11 12 13 14 ... │
│cycle() │p │p0, p1, ... plast, p0, p1, ... │cycle('ABCD') --> A B C D A B C D ...│
│repeat()│elem [,n] │elem, elem, elem, ... endlessly or up to n times │repeat(10, 3) --> 10 10 10 │
Iterators terminating on the shortest input sequence:
│ Iterator │ Arguments │ Results │ Example │
│accumulate() │p [,func] │p0, p0+p1, p0+p1+p2, ... │accumulate([1,2,3,4,5]) --> 1 3 6 10 15 │
│chain() │p, q, ... │p0, p1, ... plast, q0, q1, ... │chain('ABC', 'DEF') --> A B C D E F │
│chain.from_iterable()│iterable │p0, p1, ... plast, q0, q1, ... │chain.from_iterable(['ABC', 'DEF']) --> A B C D E F │
│compress() │data, selectors │(d[0] if s[0]), (d[1] if s[1]), ... │compress('ABCDEF', [1,0,1,0,1,1]) --> A C E F │
│dropwhile() │pred, seq │seq[n], seq[n+1], starting when pred fails │dropwhile(lambda x: x<5, [1,4,6,4,1]) --> 6 4 1 │
│filterfalse() │pred, seq │elements of seq where pred(elem) is false │filterfalse(lambda x: x%2, range(10)) --> 0 2 4 6 8 │
│groupby() │iterable[, key] │sub-iterators grouped by value of key(v) │ │
│islice() │seq, [start,] stop [, step]│elements from seq[start:stop:step] │islice('ABCDEFG', 2, None) --> C D E F G │
│starmap() │func, seq │func(*seq[0]), func(*seq[1]), ... │starmap(pow, [(2,5), (3,2), (10,3)]) --> 32 9 1000 │
│takewhile() │pred, seq │seq[0], seq[1], until pred fails │takewhile(lambda x: x<5, [1,4,6,4,1]) --> 1 4 │
│tee() │it, n │it1, it2, ... itn splits one iterator into n│ │
│zip_longest() │p, q, ... │(p[0], q[0]), (p[1], q[1]), ... │zip_longest('ABCD', 'xy', fillvalue='-') --> Ax By C- D-│
Combinatoric generators:
│ Iterator │ Arguments │ Results │
│product() │p, q, ... [repeat=1]│cartesian product, equivalent to a nested for-loop │
│permutations() │p[, r] │r-length tuples, all possible orderings, no repeated elements │
│combinations() │p, r │r-length tuples, in sorted order, no repeated elements │
│combinations_with_replacement() │p, r │r-length tuples, in sorted order, with repeated elements │
│product('ABCD', repeat=2) │ │AA AB AC AD BA BB BC BD CA CB CC CD DA DB DC DD │
│permutations('ABCD', 2) │ │AB AC AD BA BC BD CA CB CD DA DB DC │
│combinations('ABCD', 2) │ │AB AC AD BC BD CD │
│combinations_with_replacement('ABCD', 2)│ │AA AB AC AD BB BC BD CC CD DD │
10.1.1. Itertool functions¶
The following module functions all construct and return iterators. Some provide streams of infinite length, so they should only be accessed by functions or loops that truncate the stream.
itertools.accumulate(iterable[, func])¶
Make an iterator that returns accumulated sums, or accumulated results of other binary functions (specified via the optional func argument). If func is supplied, it should be a function of two
arguments. Elements of the input iterable may be any type that can be accepted as arguments to func. (For example, with the default operation of addition, elements may be any addable type
including Decimal or Fraction.) If the input iterable is empty, the output iterable will also be empty.
Roughly equivalent to:
def accumulate(iterable, func=operator.add):
    'Return running totals'
    # accumulate([1,2,3,4,5]) --> 1 3 6 10 15
    # accumulate([1,2,3,4,5], operator.mul) --> 1 2 6 24 120
    it = iter(iterable)
    try:
        total = next(it)
    except StopIteration:
        return
    yield total
    for element in it:
        total = func(total, element)
        yield total
There are a number of uses for the func argument. It can be set to min() for a running minimum, max() for a running maximum, or operator.mul() for a running product. Amortization tables can be built by accumulating interest and applying payments. First-order recurrence relations can be modeled by supplying the initial value in the iterable and using only the accumulated total in the func argument:
>>> data = [3, 4, 6, 2, 1, 9, 0, 7, 5, 8]
>>> list(accumulate(data, operator.mul)) # running product
[3, 12, 72, 144, 144, 1296, 0, 0, 0, 0]
>>> list(accumulate(data, max)) # running maximum
[3, 4, 6, 6, 6, 9, 9, 9, 9, 9]
# Amortize a 5% loan of 1000 with 4 annual payments of 90
>>> cashflows = [1000, -90, -90, -90, -90]
>>> list(accumulate(cashflows, lambda bal, pmt: bal*1.05 + pmt))
[1000, 960.0, 918.0, 873.9000000000001, 827.5950000000001]
# Chaotic recurrence relation https://en.wikipedia.org/wiki/Logistic_map
>>> logistic_map = lambda x, _: r * x * (1 - x)
>>> r = 3.8
>>> x0 = 0.4
>>> inputs = repeat(x0, 36) # only the initial value is used
>>> [format(x, '.2f') for x in accumulate(inputs, logistic_map)]
['0.40', '0.91', '0.30', '0.81', '0.60', '0.92', '0.29', '0.79', '0.63',
'0.88', '0.39', '0.90', '0.33', '0.84', '0.52', '0.95', '0.18', '0.57',
'0.93', '0.25', '0.71', '0.79', '0.63', '0.88', '0.39', '0.91', '0.32',
'0.83', '0.54', '0.95', '0.20', '0.60', '0.91', '0.30', '0.80', '0.60']
See functools.reduce() for a similar function that returns only the final accumulated value.
Changed in version 3.3: Added the optional func parameter.
itertools.chain(*iterables)¶
Make an iterator that returns elements from the first iterable until it is exhausted, then proceeds to the next iterable, until all of the iterables are exhausted. Used for treating consecutive sequences as a single sequence. Roughly equivalent to:
def chain(*iterables):
    # chain('ABC', 'DEF') --> A B C D E F
    for it in iterables:
        for element in it:
            yield element
classmethod chain.from_iterable(iterable)¶
Alternate constructor for chain(). Gets chained inputs from a single iterable argument that is evaluated lazily. Roughly equivalent to:
def from_iterable(iterables):
# chain.from_iterable(['ABC', 'DEF']) --> A B C D E F
for it in iterables:
for element in it:
yield element
itertools.combinations(iterable, r)¶
Return r length subsequences of elements from the input iterable.
Combinations are emitted in lexicographic sort order. So, if the input iterable is sorted, the combination tuples will be produced in sorted order.
Elements are treated as unique based on their position, not on their value. So if the input elements are unique, there will be no repeat values in each combination.
Roughly equivalent to:
def combinations(iterable, r):
    # combinations('ABCD', 2) --> AB AC AD BC BD CD
    # combinations(range(4), 3) --> 012 013 023 123
    pool = tuple(iterable)
    n = len(pool)
    if r > n:
        return
    indices = list(range(r))
    yield tuple(pool[i] for i in indices)
    while True:
        for i in reversed(range(r)):
            if indices[i] != i + n - r:
                break
        else:
            return
        indices[i] += 1
        for j in range(i+1, r):
            indices[j] = indices[j-1] + 1
        yield tuple(pool[i] for i in indices)
The code for combinations() can be also expressed as a subsequence of permutations() after filtering entries where the elements are not in sorted order (according to their position in the input pool):
def combinations(iterable, r):
    pool = tuple(iterable)
    n = len(pool)
    for indices in permutations(range(n), r):
        if sorted(indices) == list(indices):
            yield tuple(pool[i] for i in indices)
The number of items returned is n! / r! / (n-r)! when 0 <= r <= n or zero when r > n.
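As a quick sanity check of this count (an illustrative addition):

```python
from itertools import combinations
from math import factorial

# n! / r! / (n-r)! distinct r-combinations of n items
n, r = 5, 2
expected = factorial(n) // factorial(r) // factorial(n - r)
assert sum(1 for _ in combinations(range(n), r)) == expected == 10

# zero items when r > n
assert list(combinations(range(3), 5)) == []
```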
itertools.combinations_with_replacement(iterable, r)¶
Return r length subsequences of elements from the input iterable allowing individual elements to be repeated more than once.
Combinations are emitted in lexicographic sort order. So, if the input iterable is sorted, the combination tuples will be produced in sorted order.
Elements are treated as unique based on their position, not on their value. So if the input elements are unique, the generated combinations will also be unique.
Roughly equivalent to:
def combinations_with_replacement(iterable, r):
    # combinations_with_replacement('ABC', 2) --> AA AB AC BB BC CC
    pool = tuple(iterable)
    n = len(pool)
    if not n and r:
        return
    indices = [0] * r
    yield tuple(pool[i] for i in indices)
    while True:
        for i in reversed(range(r)):
            if indices[i] != n - 1:
                break
        else:
            return
        indices[i:] = [indices[i] + 1] * (r - i)
        yield tuple(pool[i] for i in indices)
The code for combinations_with_replacement() can be also expressed as a subsequence of product() after filtering entries where the elements are not in sorted order (according to their position in
the input pool):
def combinations_with_replacement(iterable, r):
pool = tuple(iterable)
n = len(pool)
for indices in product(range(n), repeat=r):
if sorted(indices) == list(indices):
yield tuple(pool[i] for i in indices)
The number of items returned is (n+r-1)! / r! / (n-1)! when n > 0.
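This count can be verified directly (an illustrative addition):

```python
from itertools import combinations_with_replacement
from math import factorial

# (n+r-1)! / r! / (n-1)! multisets of size r drawn from n items
n, r = 3, 2
expected = factorial(n + r - 1) // factorial(r) // factorial(n - 1)
assert sum(1 for _ in combinations_with_replacement('ABC', r)) == expected == 6
```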
itertools.compress(data, selectors)¶
Make an iterator that filters elements from data returning only those that have a corresponding element in selectors that evaluates to True. Stops when either the data or selectors iterables has
been exhausted. Roughly equivalent to:
def compress(data, selectors):
# compress('ABCDEF', [1,0,1,0,1,1]) --> A C E F
return (d for d, s in zip(data, selectors) if s)
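A short usage example (added here for illustration):

```python
from itertools import compress

# Truthy selectors pick out the matching data items
assert list(compress('ABCDEF', [1, 0, 1, 0, 1, 1])) == ['A', 'C', 'E', 'F']

# Selectors are often computed from the data itself
scores = [88, 42, 95, 70]
passed = [s >= 80 for s in scores]
assert list(compress(scores, passed)) == [88, 95]
```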
itertools.count(start=0, step=1)¶
Make an iterator that returns evenly spaced values starting with number start. Often used as an argument to map() to generate consecutive data points. Also, used with zip() to add sequence
numbers. Roughly equivalent to:
def count(start=0, step=1):
# count(10) --> 10 11 12 13 14 ...
# count(2.5, 0.5) -> 2.5 3.0 3.5 ...
n = start
while True:
yield n
n += step
When counting with floating point numbers, better accuracy can sometimes be achieved by substituting multiplicative code such as: (start + step * i for i in count()).
Changed in version 3.1: Added step argument and allowed non-integer arguments.
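The accuracy difference is easy to demonstrate (an illustrative snippet, not from the reference text):

```python
from itertools import count, islice

# Repeated addition of 0.1 accumulates floating-point error ...
additive = list(islice(count(0, 0.1), 11))
# ... while the multiplicative form recomputes each value from the start
multiplicative = [0 + 0.1 * i for i in islice(count(), 11)]

assert multiplicative[10] == 1.0
assert additive[10] != 1.0   # drifts slightly under binary floating point
```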
itertools.cycle(iterable)¶
Make an iterator returning elements from the iterable and saving a copy of each. When the iterable is exhausted, return elements from the saved copy. Repeats indefinitely. Roughly equivalent to:
def cycle(iterable):
    # cycle('ABCD') --> A B C D A B C D A B C D ...
    saved = []
    for element in iterable:
        yield element
        saved.append(element)
    while saved:
        for element in saved:
            yield element
Note, this member of the toolkit may require significant auxiliary storage (depending on the length of the iterable).
itertools.dropwhile(predicate, iterable)¶
Make an iterator that drops elements from the iterable as long as the predicate is true; afterwards, returns every element. Note, the iterator does not produce any output until the predicate
first becomes false, so it may have a lengthy start-up time. Roughly equivalent to:
def dropwhile(predicate, iterable):
    # dropwhile(lambda x: x<5, [1,4,6,4,1]) --> 6 4 1
    iterable = iter(iterable)
    for x in iterable:
        if not predicate(x):
            yield x
            break
    for x in iterable:
        yield x
itertools.filterfalse(predicate, iterable)¶
Make an iterator that filters elements from iterable returning only those for which the predicate is False. If predicate is None, return the items that are false. Roughly equivalent to:
def filterfalse(predicate, iterable):
# filterfalse(lambda x: x%2, range(10)) --> 0 2 4 6 8
if predicate is None:
predicate = bool
for x in iterable:
if not predicate(x):
yield x
itertools.groupby(iterable, key=None)¶
Make an iterator that returns consecutive keys and groups from the iterable. The key is a function computing a key value for each element. If not specified or is None, key defaults to an identity
function and returns the element unchanged. Generally, the iterable needs to already be sorted on the same key function.
The operation of groupby() is similar to the uniq filter in Unix. It generates a break or new group every time the value of the key function changes (which is why it is usually necessary to have
sorted the data using the same key function). That behavior differs from SQL’s GROUP BY which aggregates common elements regardless of their input order.
The returned group is itself an iterator that shares the underlying iterable with groupby(). Because the source is shared, when the groupby() object is advanced, the previous group is no longer
visible. So, if that data is needed later, it should be stored as a list:
groups = []
uniquekeys = []
data = sorted(data, key=keyfunc)
for k, g in groupby(data, keyfunc):
    groups.append(list(g))      # Store group iterator as a list
    uniquekeys.append(k)
groupby() is roughly equivalent to:
class groupby:
    # [k for k, g in groupby('AAAABBBCCDAABBB')] --> A B C D A B
    # [list(g) for k, g in groupby('AAAABBBCCD')] --> AAAA BBB CC D
    def __init__(self, iterable, key=None):
        if key is None:
            key = lambda x: x
        self.keyfunc = key
        self.it = iter(iterable)
        self.tgtkey = self.currkey = self.currvalue = object()
    def __iter__(self):
        return self
    def __next__(self):
        self.id = object()
        while self.currkey == self.tgtkey:
            self.currvalue = next(self.it)    # Exit on StopIteration
            self.currkey = self.keyfunc(self.currvalue)
        self.tgtkey = self.currkey
        return (self.currkey, self._grouper(self.tgtkey, self.id))
    def _grouper(self, tgtkey, id):
        while self.id is id and self.currkey == tgtkey:
            yield self.currvalue
            try:
                self.currvalue = next(self.it)
            except StopIteration:
                return
            self.currkey = self.keyfunc(self.currvalue)
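A short usage example showing why the input should be pre-sorted (added for illustration):

```python
from itertools import groupby

# Sort on the key first so equal keys are adjacent
words = ['apple', 'ant', 'bee', 'bear', 'cat']
data = sorted(words, key=lambda w: w[0])
grouped = {k: list(g) for k, g in groupby(data, key=lambda w: w[0])}
assert grouped == {'a': ['apple', 'ant'], 'b': ['bee', 'bear'], 'c': ['cat']}

# Without sorting, non-adjacent equal keys produce separate groups
assert [k for k, g in groupby('AABAA')] == ['A', 'B', 'A']
```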
itertools.islice(iterable, stop)¶
itertools.islice(iterable, start, stop[, step])
Make an iterator that returns selected elements from the iterable. If start is non-zero, then elements from the iterable are skipped until start is reached. Afterward, elements are returned
consecutively unless step is set higher than one which results in items being skipped. If stop is None, then iteration continues until the iterator is exhausted, if at all; otherwise, it stops at
the specified position. Unlike regular slicing, islice() does not support negative values for start, stop, or step. Can be used to extract related fields from data where the internal structure
has been flattened (for example, a multi-line report may list a name field on every third line). Roughly equivalent to:
def islice(iterable, *args):
    # islice('ABCDEFG', 2) --> A B
    # islice('ABCDEFG', 2, 4) --> C D
    # islice('ABCDEFG', 2, None) --> C D E F G
    # islice('ABCDEFG', 0, None, 2) --> A C E G
    s = slice(*args)
    it = iter(range(s.start or 0, s.stop or sys.maxsize, s.step or 1))
    try:
        nexti = next(it)
    except StopIteration:
        return
    for i, element in enumerate(iterable):
        if i == nexti:
            yield element
            nexti = next(it)
If start is None, then iteration starts at zero. If step is None, then the step defaults to one.
itertools.permutations(iterable, r=None)¶
Return successive r length permutations of elements in the iterable.
If r is not specified or is None, then r defaults to the length of the iterable and all possible full-length permutations are generated.
Permutations are emitted in lexicographic sort order. So, if the input iterable is sorted, the permutation tuples will be produced in sorted order.
Elements are treated as unique based on their position, not on their value. So if the input elements are unique, there will be no repeat values in each permutation.
Roughly equivalent to:
def permutations(iterable, r=None):
    # permutations('ABCD', 2) --> AB AC AD BA BC BD CA CB CD DA DB DC
    # permutations(range(3)) --> 012 021 102 120 201 210
    pool = tuple(iterable)
    n = len(pool)
    r = n if r is None else r
    if r > n:
        return
    indices = list(range(n))
    cycles = list(range(n, n-r, -1))
    yield tuple(pool[i] for i in indices[:r])
    while n:
        for i in reversed(range(r)):
            cycles[i] -= 1
            if cycles[i] == 0:
                indices[i:] = indices[i+1:] + indices[i:i+1]
                cycles[i] = n - i
            else:
                j = cycles[i]
                indices[i], indices[-j] = indices[-j], indices[i]
                yield tuple(pool[i] for i in indices[:r])
                break
        else:
            return
The code for permutations() can be also expressed as a subsequence of product(), filtered to exclude entries with repeated elements (those from the same position in the input pool):
def permutations(iterable, r=None):
pool = tuple(iterable)
n = len(pool)
r = n if r is None else r
for indices in product(range(n), repeat=r):
if len(set(indices)) == r:
yield tuple(pool[i] for i in indices)
The number of items returned is n! / (n-r)! when 0 <= r <= n or zero when r > n.
itertools.product(*iterables, repeat=1)¶
Cartesian product of input iterables.
Roughly equivalent to nested for-loops in a generator expression. For example, product(A, B) returns the same as ((x,y) for x in A for y in B).
The nested loops cycle like an odometer with the rightmost element advancing on every iteration. This pattern creates a lexicographic ordering so that if the input’s iterables are sorted, the
product tuples are emitted in sorted order.
To compute the product of an iterable with itself, specify the number of repetitions with the optional repeat keyword argument. For example, product(A, repeat=4) means the same as product(A, A,
A, A).
This function is roughly equivalent to the following code, except that the actual implementation does not build up intermediate results in memory:
def product(*args, repeat=1):
# product('ABCD', 'xy') --> Ax Ay Bx By Cx Cy Dx Dy
# product(range(2), repeat=3) --> 000 001 010 011 100 101 110 111
pools = [tuple(pool) for pool in args] * repeat
result = [[]]
for pool in pools:
result = [x+[y] for x in result for y in pool]
for prod in result:
yield tuple(prod)
itertools.repeat(object[, times])¶
Make an iterator that returns object over and over again. Runs indefinitely unless the times argument is specified. Used as argument to map() for invariant parameters to the called function. Also
used with zip() to create an invariant part of a tuple record.
Roughly equivalent to:
def repeat(object, times=None):
    # repeat(10, 3) --> 10 10 10
    if times is None:
        while True:
            yield object
    else:
        for i in range(times):
            yield object
A common use for repeat is to supply a stream of constant values to map or zip:
>>> list(map(pow, range(10), repeat(2)))
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
itertools.starmap(function, iterable)¶
Make an iterator that computes the function using arguments obtained from the iterable. Used instead of map() when argument parameters are already grouped in tuples from a single iterable (the
data has been “pre-zipped”). The difference between map() and starmap() parallels the distinction between function(a,b) and function(*c). Roughly equivalent to:
def starmap(function, iterable):
# starmap(pow, [(2,5), (3,2), (10,3)]) --> 32 9 1000
for args in iterable:
yield function(*args)
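The map()/starmap() distinction in one example (added for illustration):

```python
from itertools import starmap

# starmap() unpacks each tuple into the call: pow(2, 5), pow(3, 2), pow(10, 3)
assert list(starmap(pow, [(2, 5), (3, 2), (10, 3)])) == [32, 9, 1000]

# map() instead takes one iterable per positional argument
assert list(map(pow, [2, 3, 10], [5, 2, 3])) == [32, 9, 1000]
```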
itertools.takewhile(predicate, iterable)¶
Make an iterator that returns elements from the iterable as long as the predicate is true. Roughly equivalent to:
def takewhile(predicate, iterable):
    # takewhile(lambda x: x<5, [1,4,6,4,1]) --> 1 4
    for x in iterable:
        if predicate(x):
            yield x
        else:
            break
itertools.tee(iterable, n=2)¶
Return n independent iterators from a single iterable.
The following Python code helps explain what tee does (although the actual implementation is more complex and uses only a single underlying FIFO queue).
Roughly equivalent to:
def tee(iterable, n=2):
    it = iter(iterable)
    deques = [collections.deque() for i in range(n)]
    def gen(mydeque):
        while True:
            if not mydeque:             # when the local deque is empty
                try:
                    newval = next(it)   # fetch a new value and
                except StopIteration:
                    return
                for d in deques:        # load it to all the deques
                    d.append(newval)
            yield mydeque.popleft()
    return tuple(gen(d) for d in deques)
Once tee() has made a split, the original iterable should not be used anywhere else; otherwise, the iterable could get advanced without the tee objects being informed.
This itertool may require significant auxiliary storage (depending on how much temporary data needs to be stored). In general, if one iterator uses most or all of the data before another iterator
starts, it is faster to use list() instead of tee().
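A minimal usage example (added for illustration):

```python
from itertools import tee

# tee() buffers values internally, so each branch sees the full stream
a, b = tee(iter('ABC'))
assert list(a) == ['A', 'B', 'C']
assert list(b) == ['A', 'B', 'C']   # b is unaffected by exhausting a
```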
itertools.zip_longest(*iterables, fillvalue=None)¶
Make an iterator that aggregates elements from each of the iterables. If the iterables are of uneven length, missing values are filled-in with fillvalue. Iteration continues until the longest
iterable is exhausted. Roughly equivalent to:
def zip_longest(*args, fillvalue=None):
    # zip_longest('ABCD', 'xy', fillvalue='-') --> Ax By C- D-
    iterators = [iter(it) for it in args]
    num_active = len(iterators)
    if not num_active:
        return
    while True:
        values = []
        for i, it in enumerate(iterators):
            try:
                value = next(it)
            except StopIteration:
                num_active -= 1
                if not num_active:
                    return
                iterators[i] = repeat(fillvalue)
                value = fillvalue
            values.append(value)
        yield tuple(values)
If one of the iterables is potentially infinite, then the zip_longest() function should be wrapped with something that limits the number of calls (for example islice() or takewhile()). If not
specified, fillvalue defaults to None.
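Both the basic behaviour and the islice() wrapping for infinite inputs can be shown briefly (added for illustration):

```python
from itertools import zip_longest, islice, count

assert list(zip_longest('ABCD', 'xy', fillvalue='-')) == [
    ('A', 'x'), ('B', 'y'), ('C', '-'), ('D', '-')]

# Bound an infinite input with islice() before zipping
pairs = list(islice(zip_longest('AB', count(), fillvalue='?'), 4))
assert pairs == [('A', 0), ('B', 1), ('?', 2), ('?', 3)]
```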
10.1.2. Itertools Recipes¶
This section shows recipes for creating an extended toolset using the existing itertools as building blocks.
The extended tools offer the same high performance as the underlying toolset. The superior memory performance is kept by processing elements one at a time rather than bringing the whole iterable into
memory all at once. Code volume is kept small by linking the tools together in a functional style which helps eliminate temporary variables. High speed is retained by preferring “vectorized” building
blocks over the use of for-loops and generators which incur interpreter overhead.
def take(n, iterable):
"Return first n items of the iterable as a list"
return list(islice(iterable, n))
def tabulate(function, start=0):
"Return function(0), function(1), ..."
return map(function, count(start))
def tail(n, iterable):
"Return an iterator over the last n items"
# tail(3, 'ABCDEFG') --> E F G
return iter(collections.deque(iterable, maxlen=n))
def consume(iterator, n):
    "Advance the iterator n-steps ahead. If n is None, consume entirely."
    # Use functions that consume iterators at C speed.
    if n is None:
        # feed the entire iterator into a zero-length deque
        collections.deque(iterator, maxlen=0)
    else:
        # advance to the empty slice starting at position n
        next(islice(iterator, n, n), None)
def nth(iterable, n, default=None):
"Returns the nth item or a default value"
return next(islice(iterable, n, None), default)
def all_equal(iterable):
"Returns True if all the elements are equal to each other"
g = groupby(iterable)
return next(g, True) and not next(g, False)
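The recipe above relies on groupby() collapsing equal runs; a self-contained check (repeated here so the snippet runs on its own):

```python
from itertools import groupby

def all_equal(iterable):
    "True when groupby() collapses the input to at most one group"
    g = groupby(iterable)
    return next(g, True) and not next(g, False)

assert all_equal('aaaa')
assert not all_equal('aab')
assert all_equal([])   # vacuously true for an empty iterable
```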
def quantify(iterable, pred=bool):
"Count how many times the predicate is true"
return sum(map(pred, iterable))
def padnone(iterable):
    """Returns the sequence elements and then returns None indefinitely.

    Useful for emulating the behavior of the built-in map() function.
    """
    return chain(iterable, repeat(None))
def ncycles(iterable, n):
"Returns the sequence elements n times"
return chain.from_iterable(repeat(tuple(iterable), n))
def dotproduct(vec1, vec2):
return sum(map(operator.mul, vec1, vec2))
def flatten(listOfLists):
"Flatten one level of nesting"
return chain.from_iterable(listOfLists)
def repeatfunc(func, times=None, *args):
    """Repeat calls to func with specified arguments.

    Example:  repeatfunc(random.random)
    """
    if times is None:
        return starmap(func, repeat(args))
    return starmap(func, repeat(args, times))
def pairwise(iterable):
"s -> (s0,s1), (s1,s2), (s2, s3), ..."
a, b = tee(iterable)
next(b, None)
return zip(a, b)
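A self-contained check of the pairwise recipe (repeated here so the snippet runs on its own):

```python
from itertools import tee

def pairwise(iterable):
    "s -> (s0,s1), (s1,s2), (s2,s3), ..."
    a, b = tee(iterable)
    next(b, None)
    return zip(a, b)

assert list(pairwise('ABCD')) == [('A', 'B'), ('B', 'C'), ('C', 'D')]
# Handy for successive differences:
assert [y - x for x, y in pairwise([20, 23, 21, 26])] == [3, -2, 5]
```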
def grouper(iterable, n, fillvalue=None):
"Collect data into fixed-length chunks or blocks"
# grouper('ABCDEFG', 3, 'x') --> ABC DEF Gxx
args = [iter(iterable)] * n
return zip_longest(*args, fillvalue=fillvalue)
def roundrobin(*iterables):
    "roundrobin('ABC', 'D', 'EF') --> A D E B F C"
    # Recipe credited to George Sakkis
    pending = len(iterables)
    nexts = cycle(iter(it).__next__ for it in iterables)
    while pending:
        try:
            for next in nexts:
                yield next()
        except StopIteration:
            pending -= 1
            nexts = cycle(islice(nexts, pending))
def partition(pred, iterable):
'Use a predicate to partition entries into false entries and true entries'
# partition(is_odd, range(10)) --> 0 2 4 6 8 and 1 3 5 7 9
t1, t2 = tee(iterable)
return filterfalse(pred, t1), filter(pred, t2)
def powerset(iterable):
"powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)"
s = list(iterable)
return chain.from_iterable(combinations(s, r) for r in range(len(s)+1))
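A self-contained check of the powerset recipe (repeated here so the snippet runs on its own):

```python
from itertools import chain, combinations

def powerset(iterable):
    "powerset([1,2,3]) --> () (1,) (2,) (3,) (1,2) (1,3) (2,3) (1,2,3)"
    s = list(iterable)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

subsets = list(powerset([1, 2, 3]))
assert len(subsets) == 2 ** 3                     # every subset appears once
assert subsets[0] == () and subsets[-1] == (1, 2, 3)
```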
def unique_everseen(iterable, key=None):
    "List unique elements, preserving order. Remember all elements ever seen."
    # unique_everseen('AAAABBBCCDAABBB') --> A B C D
    # unique_everseen('ABBCcAD', str.lower) --> A B C D
    seen = set()
    seen_add = seen.add
    if key is None:
        for element in filterfalse(seen.__contains__, iterable):
            seen_add(element)
            yield element
    else:
        for element in iterable:
            k = key(element)
            if k not in seen:
                seen_add(k)
                yield element
def unique_justseen(iterable, key=None):
"List unique elements, preserving order. Remember only the element just seen."
# unique_justseen('AAAABBBCCDAABBB') --> A B C D A B
# unique_justseen('ABBCcAD', str.lower) --> A B C A D
return map(next, map(itemgetter(1), groupby(iterable, key)))
def iter_except(func, exception, first=None):
    """ Call a function repeatedly until an exception is raised.

    Converts a call-until-exception interface to an iterator interface.
    Like builtins.iter(func, sentinel) but uses an exception instead
    of a sentinel to end the loop.

    Examples:
        iter_except(functools.partial(heappop, h), IndexError)   # priority queue iterator
        iter_except(d.popitem, KeyError)                         # non-blocking dict iterator
        iter_except(d.popleft, IndexError)                       # non-blocking deque iterator
        iter_except(q.get_nowait, Queue.Empty)                   # loop over a producer Queue
        iter_except(s.pop, KeyError)                             # non-blocking set iterator
    """
    try:
        if first is not None:
            yield first()            # For database APIs needing an initial cast to db.first()
        while True:
            yield func()
    except exception:
        pass
def first_true(iterable, default=False, pred=None):
"""Returns the first true value in the iterable.
If no true value is found, returns *default*
If *pred* is not None, returns the first item
for which pred(item) is true.
# first_true([a,b,c], x) --> a or b or c or x
# first_true([a,b], x, f) --> a if f(a) else b if f(b) else x
return next(filter(pred, iterable), default)
def random_product(*args, repeat=1):
"Random selection from itertools.product(*args, **kwds)"
pools = [tuple(pool) for pool in args] * repeat
return tuple(random.choice(pool) for pool in pools)
def random_permutation(iterable, r=None):
"Random selection from itertools.permutations(iterable, r)"
pool = tuple(iterable)
r = len(pool) if r is None else r
return tuple(random.sample(pool, r))
def random_combination(iterable, r):
"Random selection from itertools.combinations(iterable, r)"
pool = tuple(iterable)
n = len(pool)
indices = sorted(random.sample(range(n), r))
return tuple(pool[i] for i in indices)
def random_combination_with_replacement(iterable, r):
"Random selection from itertools.combinations_with_replacement(iterable, r)"
pool = tuple(iterable)
n = len(pool)
indices = sorted(random.randrange(n) for i in range(r))
return tuple(pool[i] for i in indices)
Note, many of the above recipes can be optimized by replacing global lookups with local variables defined as default values. For example, the dotproduct recipe can be written as:
def dotproduct(vec1, vec2, sum=sum, map=map, mul=operator.mul):
    return sum(map(mul, vec1, vec2))
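The speedup comes from Python resolving default-argument bindings as fast locals rather than looking the names up in the global and builtin namespaces on every call. A small self-check of the two variants (the vector contents are arbitrary test data):

```python
import operator

def dotproduct(vec1, vec2):
    # baseline: sum, map and operator.mul are looked up globally on every call
    return sum(map(operator.mul, vec1, vec2))

def dotproduct_fast(vec1, vec2, sum=sum, map=map, mul=operator.mul):
    # the same names, bound once at definition time and then read as locals
    return sum(map(mul, vec1, vec2))

v = list(range(1000))
assert dotproduct(v, v) == dotproduct_fast(v, v)  # identical results
```

The two functions compute the same value; only the name-lookup cost differs, which matters when the function is called in a tight loop.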
Stock price required rate of return
In a dividend-based valuation, "P" stands for the stock's price based on its dividends – the theoretical valuation you're calculating – and "r" stands for the required rate of return.

The required rate of return (RRR) is the minimum amount of profit (return) an investor will accept for assuming the risk of investing in a stock or another type of security.

When you buy stock, you're buying a small piece of ownership in a company. Shares of stock have prices that rise and fall in a marketplace. People invest in the company by buying stocks and measure the rate of return by the percentage increase or decrease in the stock's price; returns are measured in percentages so that investments of different sizes can be compared.

Expected return on a stock will move the price in that direction. When interest rates are high, investors move out of stocks into safer assets.

A stock price calculator based on the current dividend and the historical growth percentage can tell you the most you could pay for a stock and still earn your required rate of return. The Rate of Return (ROR) is the gain or loss of an investment over a period of time compared to the initial cost of the investment, expressed as a percentage; common variants include total return, annualized return, ROI, ROA, ROE, and IRR.

Annual Return = (Simple Return + 1) ^ (1 / Years Held) − 1

Total return differs from stock price growth because of dividends. While fair prices may not depend on a certain level of trading, over $400 billion of stocks trade on average each day in the world equity markets. A change in the required rate of return adjusts the price an investor is willing to pay for a stock; the dividend discount model makes this relationship explicit. A stock with higher market risk has a greater required return than a stock with lower market risk, because investors demand to be compensated with higher returns for assuming more risk. The capital asset pricing model measures a stock's required rate of return.
For a stock paying a dividend, the required rate of return (RRR) can be calculated using the following steps: Step 1: determine the dividend to be paid during the next period. Step 2: gather the current price of the equity. For example, an investor who can earn 10 per cent every year by investing in US Bonds might set a required rate of return of 12 per cent for a riskier investment before considering it.

Formula for Required Rate of Return: Required Rate of Return = Risk-Free Rate + Risk Coefficient × (Expected Return − Risk-Free Return)

Before valuing a particular stock, we need to estimate the required rate of return for that stock, which differs from the required rate of return of the general US stock market (estimated here to be 9.3%).

Required Rate of Return Example: Joey works for himself as a professional stock investor. Because he is highly analytical, this work fits him perfectly. Joey prides himself on his ability to evaluate where the market is and where it will be. In his case, the required rate of return is 5%.

The required rate of return for equity of a dividend-paying stock equals (next year's estimated dividends per share / current share price) + dividend growth rate. For example, suppose a company is expected to pay an annual dividend of $2 next year and its stock is currently trading at $100 a share. A Gordon model calculator computes the required rate of return (k) from the current price, the current annual dividend, and the constant growth rate (g).
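The formulas above can be sketched in Python. The numbers reuse the article's examples; the 3% dividend growth rate is an assumed figure for illustration, since the original example does not state one.

```python
def rrr_capm(risk_free, beta, expected_market_return):
    """Required Rate of Return = Risk-Free Rate + Beta * (Expected Return - Risk-Free Rate)."""
    return risk_free + beta * (expected_market_return - risk_free)

def rrr_dividend(next_dividend, price, growth):
    """Required return for a dividend-paying stock (Gordon model): D1 / P0 + g."""
    return next_dividend / price + growth

def annual_return(simple_return, years_held):
    """Annual Return = (Simple Return + 1) ^ (1 / Years Held) - 1."""
    return (simple_return + 1) ** (1 / years_held) - 1

# $2 expected dividend on a $100 stock, with an assumed 3% dividend growth rate:
# 2% dividend yield + 3% growth = 5% required return
print(rrr_dividend(2.0, 100.0, 0.03))

# a 21% simple return held for 2 years annualizes to 10% per year
print(annual_return(0.21, 2))
```

These are sketches of the quoted formulas, not investment advice; real inputs (beta, market return, growth rate) must come from market data.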
The capital asset pricing model method looks at the risk of a stock relative to the risk of the market to determine the required rate of return. In the dividend discount model, k is the investor's discount rate, or required rate of return, which also implies that the stock price grows at the same rate as dividends. Sharpe (2002) estimates that a one-percentage-point increase in expected inflation raises required real stock returns by about one percentage point. A company's stock price is the clearest measure of market expectations about its prospects, and it must be low enough for shareholders to earn their required rate of return on company shares. Example problem: the required rate of return is rs = 10.5% and the expected constant growth rate is g = 6.4%; what is the stock's current price?
Quantum Field Theory
PHY 610 & 611, Quantum Field Theory I & II
Fall 2012 (N4006 Melville) - Spring 2013 (P 122), TuTh 8:30-9:50
office consultation available on request
Different instructors cover different material in this course. (Nobody can teach/learn it all in one year.) Even the basic topics may be taught differently. So it might be worth taking this again if
you took it elsewhere, or sitting in if you took it here (or plan on taking it again later).
Physics: standard 1st-semester graduate --
1. PHY 501 (mechanics): Hamiltonians and Lagrangians, symmetries, relativity
2. PHY 505 (E&M): Lagrangian, gauges, relativity, Green functions
3. PHY 511 (QM): more Hamiltonians, spin, statistics, Hilbert space

Mathematics: undergraduate --
• Gaussian integrals
• multiplication of 2x2 matrices
• no level of rigor
Have you already had field theory?
If you learned only QED (or worse yet, just scalar field theory), or just Feynman diagrams, or you used only canonical quantization (which is never used for field theory in research papers anymore), you took a 1960's field theory course. The present course includes required modern topics:
• Yang-Mills ❤
• Higgs
• Standard Model
• path integrals ❤
• background fields ❤
• Faddeev-Popov ghosts ❤
• anomalies ❤

Field theory's useful for other subjects
This course is oriented toward particle physics, but field theory is often applicable to other areas of physics, like:
• astrophysics/cosmology
• nuclear
• condensed matter
In particular, a currently popular new approach to the latter 2 topics is the anti-de Sitter/conformal field theory correspondence, which makes use of both field theory (especially supersymmetry & conformal symmetry) and string theory.

Have you really had field theory?
Even all the above guarantees only a 1970's course. (Welcome to the 21st century!) You get more in this course than elsewhere:
• conformal symmetry ❤
• supersymmetry ❤
• relativistic mechanics ❤: classical antiparticles, 1st-quantization
• BRST ❤
• explicit Yang-Mills amplitudes: spinor helicity
• renormalons
• 1/N expansion ❤
• finite theories

Are you interested in string theory?
This course is not research level, but intended to include all the material prerequisite to more-specialized courses in theoretical high-energy physics. E.g., this is not a string theory course. But if you plan on doing strings, and think you don't need to learn particles, think again: String theory requires understanding standard field theory topics like all the stuff listed above with a "❤". Some of these will be briefly reviewed in our string theory course, but string theory is hard enough if you know field theory, so why make it harder?
Textbook: FIELDS, most of --
First semester ("Part One"): symmetry
I. Global: Lie algebra, CPT, conformal, Young tableaux, color/flavor
II. Spin: spinor notation, field eqs., twistors, helicity, supersymmetry
III. Local: classical pair creation, Yang-Mills
IV. Mixed: chiral, Higgs, Standard Model, GUTs, super models

Second semester* ("Part Two"): quanta
V. Quantization: path integrals, Wick rotation, S-matrix, F. rules
VI. Quantum gauge theory: BRST, gauges, amplitudes, supergraphs
VII. Loops: Dimensional renormalization, renormalons, 1/N expansion
VIII. Gauge loops: asymptotic freedom, finite theories, anomalies
*Some of this will probably fit into the end of the first semester.
See also an overview of the course.
• QFT is a big subject; we'll cover a lot of material to give you @ least some exposure to the main ideas.
• A theory course, not "theory for ..."; for a more phenomenological alternative/complement, try Elementary Particle Physics (557).
• A one-year course: If you plan to take only the first semester as a light meal, you will miss the "main course".
• Difficult homework problems, discussed in gory detail in class. (This isn’t a seminar.)
• Study the material of each lecture before it’s given, to prepare questions.
Grading will be based entirely on homework. Problems will be taken from those in Fields (including the additions on my web page). You may discuss problems with classmates, but the write-up must be
your own. Homework is due one week after assignment, at the beginning of class. (Put it on my desk when you enter.) No late homework is accepted; it may be handed in early, but only to me in person
(or by email).
Auditors are encouraged to try the homework.
University-required statements: These statements are required in all University syllabi. (They are the same in all course syllabi, so just read it once.)
Nearest Neighbor Analysis
In the Nearest Neighbor Analysis the boundaries of the area in which the analysed points are enclosed have the crucial influence on the result. The example below illustrates regularly distributed
points and their clustered distribution when bounded by a large rectangle.
Depending on the needs, the bounding can be defined with the help of: a convex hull, the smallest rectangle, a rectangle from layer bounding, or the smallest circle. The studied area can also be
defined only with the use of the size of its area.
The distance between the points is measured with the Euclidean metric.
The first stage of the nearest neighbor analysis is calculating the distances among all points. Next, for each point we search for the nearest point, i.e. for its nearest neighbor.
The distances between all points are defined by a spatial weight matrix. In the analysis window we can choose a matrix generated previously via the menu Spatial analysis → Tools → Spatial weights matrix, or indicate the contiguity-based neighbor matrix (Queen, row standardized) that is proposed by the program.
The basic statistics for the analysis of the nearest neighbors are:
The Nearest Neighbor Index (NNI) is based on a method described by the botanists Clark and Evans (1954).
The test for checking the significance of the Nearest Neighbor Index works as follows: the p-value, determined on the basis of the test statistic, is compared with the significance level α.
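As a sketch of the computation, the following assumes a brute-force nearest neighbor search and the standard Clark–Evans formulas under complete spatial randomness (the expected mean distance is 0.5/√density, and 0.26136 is the Clark–Evans standard-error coefficient):

```python
import math

def nearest_neighbor_index(points, area):
    """Clark-Evans NNI and its z test statistic for points in a region of given area."""
    n = len(points)
    density = n / area
    # observed mean Euclidean distance from each point to its nearest neighbor
    d_obs = sum(
        min(math.dist(p, q) for j, q in enumerate(points) if j != i)
        for i, p in enumerate(points)
    ) / n
    d_exp = 0.5 / math.sqrt(density)        # expected mean distance under randomness
    se = 0.26136 / math.sqrt(n * density)   # standard error of the mean distance
    nni = d_obs / d_exp                     # > 1 uniform, ~1 random, < 1 clustered
    z = (d_obs - d_exp) / se                # test statistic for significance
    return nni, z

# a perfectly regular 5x5 unit grid in a 5x5 area is strongly uniform: NNI = 2
grid = [(x, y) for x in range(5) for y in range(5)]
nni, z = nearest_neighbor_index(grid, area=25.0)
print(nni)  # -> 2.0
```

Note how the choice of `area` directly scales the expected distance, which is exactly why the boundary definition has the crucial influence on the result described above.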
To analyse subsequent nearest neighbors, one takes into account the distance to the second nearest neighbor, the third nearest neighbor, and so on, up to the k-th nearest neighbor.
The results of the point density analysis conducted for subsequent neighbors can be presented on a graph so as to illustrate the placement of points across the studied area.
Objects placed near the bounding show a tendency to be further away from their nearest neighbors than other objects within the analysed area, for the simple reason that the nearest neighbors of objects near the border can lie outside the studied area. In such a situation we can conduct an analysis with an adjustment for the edge effect, in which the distance of a point from its nearest neighbor is corrected for the boundary.
The window with settings for Nearest Neighbor Analysis is accessed via the menu Spacial analysis → Spatial Statistics → Nearest Neighbor Analysis.
The administrative division of Poland into powiats should, by definition, be uniform. With the use of NNI we will check if that is the case.
The districts map contains information about locations of polygons (Polish powiats).
The nearest neighbor analysis will be based on centroids representing powiats. We can draw them (add the centroid layer to the map of powiats) with the use of the Map manager.
The nearest neighbor analysis will be made with the use of information about the size of the area of Poland.
After entering the size of the area in the analysis window, a nearest neighbor index amounting to 1.37127 was received, and it was statistically significant.
We add the boundaries defined by the convex hull by pressing the corresponding button.
The correction for the edge effect defined in this way lowers the value of the index.
In each of the analyses described above the subsequent neighbor indices are greater than 1 and, although they initially approximate 1, from order 5 they stabilize at the level of about 1.1. The result, then, confirms the uniform distribution of Polish powiats.
Competition among species has an influence on the changes in the distribution of particular species of plants and on their density. Competition within a species is usually stronger than that among
different species as members of the same species have almost identical demands and compete for the same resources. The intensity of competition within a species increases with the growth of the
population. To check the influence of the competition on a certain species of balsamic poplar, a wooded area not regulated by man was studied. Locations of young trees and of old ones were studied.
On the map young poplars were marked in red and old poplars were marked in blue.
On the basis of the nearest neighbor indices, the structure of poplar density was compared in the area defined by a rectangle of layer bounding.
Young poplars have greater density than old ones, so their mean nearest neighbor distance is smaller than that of the old trees.
Clark P.J., Evans F.C. (1954), Distance to nearest neighbour as a measure of spatial relationships in populations. Ecology 35, 445-453
|
{"url":"http://manuals.pqstat.pl/en:przestrzenpl:losdotpl:nnipl","timestamp":"2024-11-01T23:10:20Z","content_type":"text/html","content_length":"66928","record_id":"<urn:uuid:a46971f8-2837-41c6-95f0-630d6705ee96>","cc-path":"CC-MAIN-2024-46/segments/1730477027599.25/warc/CC-MAIN-20241101215119-20241102005119-00166.warc.gz"}
|
How do you write #6.92*10^-8# in standard notation?
$6.92 \cdot {10}^{- 8} = 0.0000000692$
The power of ten indicates the position of the decimal point. If we are saying that a number in scientific notation is $6.92 \cdot {10}^{- 8}$ that means that the decimal point must be placed $8$
positions below the initial position, which is always the one corresponding to the number of units.
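The same shift can be checked in Python; `Decimal` is used here so the printed digits are exact rather than subject to binary floating-point rounding:

```python
from decimal import Decimal

# 6.92 * 10^-8: the negative exponent of 8 moves the decimal point 8 places left
x = Decimal("6.92") * Decimal("1e-8")
print(format(x, "f"))  # -> 0.0000000692
```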
"Jim Simons: The Mathematician Who Cracked Wall Street" - A Rare Interview with the Mathematician Who Cracked Wall Street
You were something of a mathematical phenom. You had already taught at Harvard and MIT at a young age. And then the NSA came calling. What was that about?
Well, the NSA—that's the National Security Agency—they didn't exactly come calling. They had an operation at Princeton, where they hired mathematicians to attack secret codes and stuff like that. And
I knew that existed. And they had a very good policy, because you could do half your time at your own mathematics, and at least half your time working on their stuff. And they paid a lot, so that was
an irresistible pull. So, I went there.
So you were a code-cracker?
I was.
Until you got fired?
Well, I did get fired. Yes.
How come?
Well, how come? I got fired because...well, the Vietnam War was on, and the boss of bosses in my organization was a big fan of the war and wrote a New York Times article, a magazine section cover
story about how we would win in Vietnam. And I didn't like that war. I thought it was stupid, and I wrote a letter to the Times, which they published, saying not everyone who works for Maxwell
Taylor, if anyone remembers that name, agrees with his views. And I gave my own views...
Oh, OK. I can see that would—
...which were different from General Taylor's. But in the end, nobody said anything. But then, I was 29 years old at this time, and some kid came around and said he was a stringer from Newsweek
magazine and he wanted to interview me and ask what I was doing about my views. And I told him, "I'm doing mostly mathematics now, and when the war is over, then I'll do mostly their stuff." Then I
did the only intelligent thing I'd done that day—I told my local boss that I gave that interview. And he said, "What'd you say?" And I told him what I said. And then he said, "I've got to call
Taylor." He called Taylor. That took 10 minutes; I was fired five minutes after that.
But it wasn't bad.
It wasn't bad, because you went on to Stony Brook and stepped up your mathematical career. You started working with this man here. Who is this?
Oh, Chern! Yeah, Chern was one of the great mathematicians of the century. I had known him when I was a graduate student, actually, at Berkeley. And I had some ideas, and I brought them to him and he
liked them. And together, we did this work which you can easily see up there. There it is. And...
It led to you publishing a famous paper together. Can you explain at all what that work was?
No. I mean, I could explain it to somebody.
How about explaining this?
But not many. Not many people.
I think you told me it had something to do with spheres, so let's start here.
Well, it did, but I'll say about that work—it did have something to do with that, but before we get to that—that work was good mathematics. I was very happy with it; so was Chern. It even started a
little sub-field that's now flourishing. But, more interestingly, it happened to apply to physics, something we knew nothing about—at least I knew nothing about physics, and I don't think Chern knew
a heck of a lot. And about 10 years after the paper came out, a guy named Ed Witten in Princeton started applying it to string theory and people in Russia started applying it to what's called
"condensed matter." And today, those things in there called Chern-Simons invariants have spread through a lot of physics. And it's amazing. We didn't know any physics. It never occurred to me that it
would be applied to physics. But that's the thing about mathematics—you never know where it's going to go.
This is so incredible. So, we've been talking about how evolution shapes human minds that may or may not perceive the truth. Somehow, you come up with a mathematical theory, not knowing any physics,
discover two decades later that it's being applied to profoundly describe the actual physical world. How can that happen?
God knows.
But there's a famous physicist named Wigner, and he wrote an essay on the unreasonable effectiveness of mathematics. So somehow, this mathematics, which is rooted in the real world in some sense—we
learn to count, measure, everyone would do that—and then it flourishes on its own. But so often it comes back to save the day. General relativity is an example. Minkowski had this geometry, and
Einstein realized, "Hey! It's the very thing in which I can cast general relativity." So, you never know. And it is a mystery. It is a mystery.
So, here's a mathematical piece of ingenuity here. Come tell us about this.
Well, that's a ball—it's a sphere, and it has a lattice around it—you know, those squares sort of things. And what I'm going to show here was originally observed by Euler, the great mathematician, in
the 1700s. And it gradually grew to be a very important field in mathematics: algebraic topology, geometry. And that paper up there had its roots in this. So, here's this thing: it has eight vertices
and 12 edges and six faces. And if you look at the difference—vertices minus edges plus faces—you get two. OK, well, two, that's a good number. Here's a different way of doing it—these are triangles
covering—this has 12 vertices and 30 edges and 20 faces, 20 tiles. And vertices minus edges plus faces still equals two. And in fact, you could do this any which way—cover this thing with all kinds
of polygons and triangles and mix them up. And you take vertices minus edges plus faces—you'll get two. Now, here's a different shape. This is a torus, or the surface of a doughnut: 16 vertices
covered by these rectangles, 32 edges, 16 faces. This comes out zero—vertices minus edges. It'll always come out zero. Every time you cover a torus with squares or triangles or anything like that,
you're going to get zero when you take that thing. So, this is called the Euler characteristic. And it's what's called a topological invariant. That's pretty amazing. No matter how you do it, you'll always get the same answer. So that was the first sort of thrust, from the mid-1700s, into a subject which is now called algebraic topology.
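The counts Simons quotes can be checked directly; χ = V − E + F is the Euler characteristic, and the figures below are exactly the ones from the transcript:

```python
def euler_characteristic(vertices, edges, faces):
    # chi = V - E + F, the same no matter how the surface is tiled
    return vertices - edges + faces

print(euler_characteristic(8, 12, 6))    # cube tiling of the sphere -> 2
print(euler_characteristic(12, 30, 20))  # triangle tiling of the sphere -> 2
print(euler_characteristic(16, 32, 16))  # square tiling of the torus -> 0
```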
And your own work took an idea like this and moved it into higher-dimensional theory, higher-dimensional objects, and found new invariances?
Yes. Well, there were already higher-dimensional invariants: Pontryagin classes—actually, there were Chern classes. There were a bunch of these types of invariants. I was struggling to work on one of
them and model it sort of combinatorially, instead of the way it was typically done, and that led to this work and we uncovered some new things. But if it wasn't for Mr. Euler—who wrote almost 70
volumes of mathematics and had 13 children, who he apparently would dandle on his knee while he was writing—if it wasn't for Mr. Euler, there wouldn't perhaps be these invariants.
OK, so that's at least given us a flavor of that amazing mind in there. Let's talk about Renaissance. Because you took that amazing mind and having been a code-cracker at the NSA, you started to
become a code-cracker in the financial industry. I think you probably didn't buy efficient market theory. And somehow you found a way of creating these astonishing returns over two decades. I think,
the way it's been explained to me, what's remarkable about what you did—it wasn't just the size of the returns, it was that you took them with surprisingly low volatility and risk, compared with
other hedge funds. So how on earth did you do this, Jim?
I did it by assembling a wonderful group of people. When I started doing trading, I had gotten a little tired of mathematics. I was in my late 30s. I had a little money; I started trading and it went
very well. I made quite a lot of money with pure luck. I mean, I think it was pure luck. It certainly wasn't mathematical modeling. But in looking at the data, after a while I realized, Hey, it looks
like there's some structure here. And I hired a few mathematicians, and we started making some models—just the kind of thing we did back at IDA. You design an algorithm, you test it out on a
computer. Does it work? Doesn't it work? And so on.
Can we take a look at this? Because here's a typical graph of some commodity or whatever. I mean, I look at that and I say, "That's just a random, up-and-down walk—maybe a slight upward trend over
that whole period of time." How on earth could you trade looking at that, and see something that wasn't just random?
It turns out in the old days—this is kind of a graph from the old days, commodities or currencies had a tendency to trend. Not necessarily the very light trend you see here, but trending in periods.
And if you decided, OK, I'm going to predict today, by the average move in the past 20 days—maybe that would be a good prediction, and I'd make some money. And in fact, years ago, such a system would
work—not beautifully, but it would work. So you'd make money, you'd lose money, you'd make money. But this is a year's worth of days, and you'd make a little money during that period. It's a very
vestigial system.
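The rule Simons describes - predict today by the average move of the past 20 days - can be sketched as a toy signal. This is an illustration of the idea only, not Renaissance's model; the window length and price series are arbitrary assumptions.

```python
def trend_signal(prices, window=20):
    # +1 = go long, -1 = go short, 0 = no position / not enough history
    if len(prices) < window + 1:
        return 0
    recent = prices[-(window + 1):]
    # average daily move over the window (equals the mean of the daily differences)
    avg_move = (recent[-1] - recent[0]) / window
    if avg_move > 0:
        return 1
    if avg_move < 0:
        return -1
    return 0

print(trend_signal(list(range(30))))         # steadily rising prices -> 1
print(trend_signal(list(range(30, 0, -1))))  # steadily falling prices -> -1
```

As the interview goes on to say, such a rule made money in the 1960s precisely because trends persisted; once everyone could see them, the edge faded.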
So you would test a bunch of lengths of trends in time and see whether, for example, a 10-day trend or a 15-day trend was predictive of what happened next.
Sure, you would try all those things and see what worked best. But the trend-following would have been great in the '60s, and it was sort of OK in the '70s. By the '80s, it wasn't.
Because everyone could see that. So, how did you stay ahead of the pack?
We stayed ahead of the pack by finding other approaches—shorter-term approaches to some extent. But the real thing was to gather a tremendous amount of data—and we had to get it by hand in the early
days. We went down to the Federal Reserve and copied interest rate histories and stuff like that, because it didn't exist on computers. We got a lot of data. And very smart people—that was the key. I
didn't really know how to hire people to do fundamental trading. I had hired a few—some made money, some didn't make money. I couldn't make a business out of that. But I did know how to hire
scientists, because I have some taste in that department. And...so, that's what we did. And gradually these models got better and better, and better and better.
You're credited with doing something remarkable at Renaissance, which is building this culture, this group of people, who weren't just hired guns who could be lured away by money. Their motivation
was doing exciting mathematics and science.
Well, I'd hoped that might be true. But some of it was money.
They made a lot of money.
I can't say that no one came because of the money. I think a lot of them came because of the money. But they also came because it would be fun.
What role did machine learning play in all this?
Well, in a certain sense, what we did was machine learning. You look at a lot of data, and you try to simulate different predictive schemes until you get better and better at it. It doesn't
necessarily feed back on itself the way we did things. But it worked.
So these different predictive schemes can be really quite wild and unexpected. I mean, you looked at everything, right? You looked at the weather, length of dresses, political opinion.
Yes, length of dresses we didn't try.
What sort of things?
Well, everything. Everything is grist for the mill—except hem lengths. Weather, annual reports, quarterly reports, historic data itself, volumes, you name it. Whatever there is. We take in terabytes
of data a day, and store it away and massage it and get it ready for analysis. You're looking for anomalies. You're looking for—like you said, the efficient market hypothesis is not correct.
But any one anomaly might be just a random thing. So, is the secret here to just look at multiple strange anomalies, and see when they align?
Well, any one anomaly might be a random thing; however, if you have enough data, you can tell that it's not. So you can see an anomaly that's persistent for a sufficiently long time—the probability
of it being random is not high. But these things fade after a while; anomalies can get washed out. So you have to keep on top of the business.
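The claim here — that with enough data a persistent anomaly can be distinguished from chance — is easy to make concrete. A purely illustrative sketch (nothing to do with Renaissance's actual models): treat a directionless signal as a fair coin flip and ask how likely a lopsided streak is.

```python
from math import comb

def prob_at_least(k, n, p=0.5):
    """Probability of at least k successes in n independent trials --
    here, the chance a pure coin-flip 'signal' is right k or more times out of n."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Being right 60 days out of 100 could easily be luck (roughly a 3% chance)...
print(prob_at_least(60, 100))
# ...but the same 60% hit rate sustained over 1000 days almost certainly is not.
print(prob_at_least(600, 1000))
```

The same hit rate that is unremarkable over 100 observations becomes astronomically unlikely over 1000 — which is why persistence over a sufficiently long time is evidence against randomness.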
A lot of people look at the hedge fund industry now and are sort of...shocked by it, by how much wealth is created there, and how much talent is going into it. Do you have any worries about that
industry, and perhaps the financial industry in general? Kind of being on a runaway train that's—I don't know—helping increase inequality? How would you champion what's happening in the hedge fund industry?
Well, actually I think in the last three or four years, hedge funds have not done especially well. We've done dandy, but the hedge fund industry as a whole has not done so wonderfully. The stock
market has been on a roll, going up as everybody knows, and price-earnings ratios have grown. So an awful lot of the wealth that's been created in the last—let's say, five or six years—has not been
created by hedge funds. So people would ask me, "What's a hedge fund?" And I'd say, "One and 20." Which means—now it's two and 20—it's two percent fixed fee and 20 percent of profits. Hedge funds are
all different kinds of creatures.
Rumor has it you charge slightly higher fees than that.
We had charged the highest fees in the world at one time. Five and 44, that's what we charge.
Five and 44. So five percent flat, 44 percent of upside. You still made your investors spectacular amounts of money.
We made good returns, yes. People got very mad: "How can you charge such high fees?" I said, "OK, you can withdraw." But "How can I get more?" was what people were... But at a certain point, as I
think I told you, we bought out all the investors because there's a capacity to the fund.
But should we worry about the hedge fund industry attracting too much of the world's great mathematical and other talent to work on that, as opposed to the many other problems in the world?
Well, it's not just mathematical. We hire astronomers and physicists and things like that. I don't think we should worry about it too much. It's still a pretty small industry. And in fact, bringing
science into the investing world has improved that world. It's reduced volatility. It's increased liquidity. Spreads are narrower because people are trading that kind of stuff. So I'm not too worried
about Einstein going off and starting a hedge fund.
Now, you're at a phase in your life now where you're actually investing, though, at the other end of the supply chain—you're actually boosting mathematics across America. This is your wife, Marilyn.
And you're working on philanthropic issues together. Tell me about that.
Well, Marilyn started—there she is up there, my beautiful wife—she started the foundation about 20 years ago. I think '94. I claim it was '93, she says it was '94, but it was one of those two years.
We started the foundation, just as a convenient way to give charity. She kept the books, and so on. We did not have a vision at that time, but gradually a vision emerged—which was to focus on math
and science, to focus on basic research. And that's what we've done. And six years ago or so, I left Renaissance and went to work at the foundation. So that's what we do.
And so Math for America here is basically investing in math teachers around the country, giving them some extra income, giving them support and coaching, and really trying to make that more effective
and make that a calling to which teachers can aspire.
Yeah—instead of beating up the bad teachers, which has created morale problems all through the educational community, in particular in math and science, we focus on celebrating the good ones and
giving them status. Yeah, we give them extra money, 15,000 dollars a year. We have 800 math and science teachers in New York City in public schools today, as part of a corps. There's a great morale
among them. They're staying in the field. Next year, it'll be 1,000 and that'll be 10 percent of the math and science teachers in New York public schools.
Jim, here's another project that you've supported philanthropically: Research into origins of life, I guess. What are we looking at here?
Well, I'll save that for a second. And then I'll tell you what you're looking at. So, origins of life is a fascinating question. How did we get here? Well, there are two questions: One is, what is
the route from geology to biology—how did we get here? And the other question is, what did we start with? What material, if any, did we have to work with on this route? Those are two very, very
interesting questions. The first question is a tortuous path from geology up to RNA or something like that—how did that all work? And the other, what do we have to work with? Well, more than we
think. So what's pictured there is a star in formation. Now, every year in our Milky Way, which has 100 billion stars, about two new stars are created. Don't ask me how, but they're created. And it
takes them about a million years to settle out. So, in steady state, there are about two million stars in formation at any time. That one is somewhere along this settling-down period. And there's all
this crap sort of circling around it, dust and stuff. And it'll form probably a solar system, or whatever it forms. But here's the thing—in this dust that surrounds a forming star have been found,
now, significant organic molecules. Molecules not just like methane, but formaldehyde and cyanide—things that are the building blocks—the seeds, if you will—of life. So, that may be typical. And it
may be typical that planets around the universe start off with some of these basic building blocks. Now does that mean there's going to be life all around? Maybe. But it's a question of how tortuous
this path is from those frail beginnings, those seeds, all the way to life. And most of those seeds will fall on fallow planets.
So for you, personally, finding an answer to this question of where we came from, of how did this thing happen, that is something you would love to see.
I would love to see, and like to know—if that path is tortuous enough, and so improbable, that no matter what you start with, we could be a singularity. But on the other hand, given all this organic
dust that's floating around, we could have lots of friends out there. It'd be great to know.
Jim, a couple of years ago, I got the chance to speak with Elon Musk, and I asked him the secret of his success, and he said taking physics seriously was it. Listening to you, what I hear you saying
is taking math seriously, that has infused your whole life. It's made you an absolute fortune, and now it's allowing you to invest in the futures of thousands and thousands of kids across America and
elsewhere. Could it be that science actually works? That math actually works?
Well, math certainly works. Math certainly works. But this has been fun. Working with Marilyn and giving it away has been very enjoyable.
I just find it—it's an inspirational thought to me that by taking knowledge seriously, so much more can come from it. So thank you for your amazing life, and for coming here to TED. Thank you.
|
{"url":"https://www.hopenglish.com/a-rare-interview-with-the-mathematician-who-cracked-wall-street","timestamp":"2024-11-11T20:15:35Z","content_type":"text/html","content_length":"138433","record_id":"<urn:uuid:8f509e60-16b4-4f56-97d3-36bac80ff8e7>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00653.warc.gz"}
|
Searches for Gravitational Waves from Known Pulsars at Two Harmonics in the Second and Third LIGO-Virgo Observing Runs
We present a targeted search for continuous gravitational waves (GWs) from 236 pulsars using data from the third observing run of LIGO and Virgo (O3) combined with data from the second observing run
(O2). Searches were for emission from the l = m = 2 mass quadrupole mode with a frequency at only twice the pulsar rotation frequency (single harmonic) and the l = 2, m = 1, 2 modes with a frequency
of both once and twice the rotation frequency (dual harmonic). No evidence of GWs was found, so we present 95% credible upper limits on the strain amplitudes h_0 for the single-harmonic search along with limits on the pulsars' mass quadrupole moments Q_22 and ellipticities ε. Of the pulsars studied, 23 have strain amplitudes that are lower than the limits calculated from their electromagnetically measured spin-down rates. These pulsars include the millisecond pulsars J0437-4715 and J0711-6830, which have spin-down ratios of 0.87 and 0.57, respectively. For nine pulsars, their spin-down limits have been surpassed for the first time. For the Crab and Vela pulsars, our limits are factors of ∼100 and ∼20 more constraining than their spin-down limits, respectively. For the dual-harmonic searches, new limits are placed on the strain amplitudes C_21 and C_22. For 23 pulsars, we also present limits on the emission amplitude assuming dipole radiation as predicted by Brans-Dicke theory.
• Gravitational wave sources
• Gravitational waves
• Neutron stars
• Pulsars
ASJC Scopus subject areas
• Astronomy and Astrophysics
• Space and Planetary Science
Dive into the research topics of 'Searches for Gravitational Waves from Known Pulsars at Two Harmonics in the Second and Third LIGO-Virgo Observing Runs'. Together they form a unique fingerprint.
|
{"url":"https://experts.syr.edu/en/publications/searches-for-gravitational-waves-from-known-pulsars-at-two-harmon-3","timestamp":"2024-11-11T07:29:41Z","content_type":"text/html","content_length":"52562","record_id":"<urn:uuid:a872f4ff-6df4-42d9-b522-8c8fb1209413>","cc-path":"CC-MAIN-2024-46/segments/1730477028220.42/warc/CC-MAIN-20241111060327-20241111090327-00792.warc.gz"}
|
Neutral Drifts
Monday, March 21, 2011
I lectured a tiny bit on the Riemann zeta function for the first time in my complex analysis course, which inspired me to make the following plot. Colors are the argument of the Riemann zeta function, brightness is proportional to magnitude. The default brightness map gives very low contrast, so I modified the magnitudes. To help with seeing the magnitudes, a contour map with exponentially spaced contours is overlaid.
def xy_to_zeta_size(x, y):
    return abs(zeta(N(x + I*y)))

# exponentially spaced contour levels for the magnitude overlay
cvals = [e^i for i in srange(-7, 1, .25)]
cp = contour_plot(xy_to_zeta_size, (-6, 3), (-3, 3), contours=cvals, fill=False, plot_points=201)

# norm(z) is |z|^2 in Sage, so this divides by |zeta|^(1/2), compressing the brightness range
rzeta(z) = zeta(z)/norm(zeta(z))^(.25)
rzf = fast_callable(rzeta, domain=CDF)
cparg = complex_plot(rzf, (-6, 3), (-3, 3))
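For readers without Sage, the magnitude helper above can be approximated in plain Python via the alternating (Dirichlet eta) series, which converges — slowly — for Re(s) > 0, s ≠ 1. This is a dependency-free sketch, not a replacement for Sage's `zeta`:

```python
def zeta_eta(s, terms=20000):
    """Riemann zeta via the alternating (Dirichlet eta) series:
    eta(s) = sum_{n>=1} (-1)^(n-1) / n^s,   zeta(s) = eta(s) / (1 - 2^(1-s)).
    Converges (slowly) for Re(s) > 0, s != 1."""
    eta = sum((-1) ** (n - 1) / n ** s for n in range(1, terms + 1))
    return eta / (1 - 2 ** (1 - s))

def xy_to_zeta_size(x, y):
    # magnitude of zeta at x + i*y, mirroring the Sage helper above
    return abs(zeta_eta(complex(x, y)))
```

As a sanity check, `zeta_eta(complex(2, 0))` should come out near π²/6 ≈ 1.6449.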
|
{"url":"https://neutraldrifts.blogspot.com/2011/03/","timestamp":"2024-11-15T00:34:20Z","content_type":"application/xhtml+xml","content_length":"46081","record_id":"<urn:uuid:41590822-907b-4273-9a47-285b710e6782>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00838.warc.gz"}
|
Precalculus - Online Tutor, Practice Problems & Exam Prep
Hey, everyone. Welcome back. So we just finished talking a lot about arithmetic sequences, like, for example, 3, 6, 9, 12, where the difference between each number is always the same number. But let's take a look at this sequence over here: 3, 9, 27, 81. Clearly, we can see that the difference between numbers is never the same. It's constantly getting bigger. But there's still actually a pattern going on with this sequence. What I'm going to show you in this video is that this is a special type of sequence called a geometric sequence, and what we're going to see is that there's a lot of similarities between how we use the information and the pattern across the numbers to set up a recursive formula for these types of sequences. So I want to show you how to do that and the basic difference between these two types, and we'll do some examples. Let's get started.

So remember that arithmetic sequences are special types where the difference between terms was always the same. For example, the common difference for that sequence, 3, 6, 9, 12, was 3. A geometric sequence is a special type where the ratio between terms is always the same number. So, for example, from 3 to 9, you have to multiply by 3. From 9 to 27, you also multiply by 3. From 27 to 81, you multiply by 3. So instead of adding 3 to each number to get the next one, you have to multiply by 3 to get the next number. Now this ratio over here is called the common ratio, and the letter we use for this is little r. So little r in this case is equal to 3, kind of like how in the arithmetic case little d was equal to 3. Alright?

Now we can use this common ratio to find additional terms by setting up a recursive formula. Remember, recursive formulas are just formulas that tell you the next term based on the previous term. So in the arithmetic sequence, we just took the previous term and added 3. Well, in this geometric sequence, we're going to take the previous term, and instead we have to multiply by 3. That's really all there is to it. The way that you use these formulas to find the next terms is exactly the same. Alright? So, in fact, the general structure that you'll see for these recursive formulas for geometric sequences is they'll always look like this: \( A_n \), the new term, is going to be the previous term times \( r \), whatever that common ratio is. Alright? So, clearly, we can see here that the only difference between these two is the operation that's involved. For arithmetic, you always add numbers to get to the next term, whereas in geometric sequences, you multiply numbers to get to the next term. And what we're going to see here also is that, generally, addition grows a little bit slower than multiplication. So in arithmetic sequences, the numbers grow a little bit slower, whereas geometric sequences tend to grow very fast because they're exponential. Right? They're going 3, 9, 27, 81, whereas the arithmetic one is only 3, 6, 9, 12. So these tend to grow much faster than arithmetic sequences. Alright?

So let's go ahead and take a look at our example here, because sometimes you may be asked to write recursive formulas for geometric sequences. And, in fact, there's a lot of similarities between how we did this for arithmetic sequences. All you have to do is first find the common ratio, and then we can set up the formula using this equation over here. Let's get started with this example. So we have the numbers 5, 20, 80, and 320. Notice how the difference is not the same between the numbers, but there is a pattern that's going on here. So what do I have to do to 5 to get to 20? Well, the first thing you're going to do here is you're going to find \( r \) by dividing any 2 consecutive terms. So what we're going to have to do is take a look at the pattern between the two numbers. And what we can see here is that from 5 to 20, you have to multiply by 4. From 20 to 80, you also multiply by 4. And from 80 to 320, you also multiply by 4. So, clearly, in this case, our \( r \), our common ratio, is equal to 4. Alright?

So now we just use this. We move on to the second step, which is we're going to write a recursive formula. A recursive formula is going to be something that looks like \( A_n = A_{n-1} \times r \). In other words, the previous term times \( r \), the common ratio. So this is just going to be \( A_{n-1} \times 4 \). Alright? Now remember, just like for arithmetic sequences, having this formula by itself isn't useful, because you have to know what the first term is. So you always have to write what the first term in the sequence is. In this case, \( A_1 = 5 \). Alright? So that's how to write recursive formulas. And to find the next term, you would just take the previous term and multiply by 4. Alright? So that's it for this one, folks. Let me know if you have any questions. Thanks for watching.
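The recursive recipe from the video — keep the first term, then multiply by the common ratio — is only a few lines of code. A sketch in Python:

```python
def geometric_terms(a1, r, n):
    """First n terms of a geometric sequence via the recursive rule
    a_k = a_{k-1} * r: each term is the previous term times the common ratio."""
    terms = [a1]
    for _ in range(n - 1):
        terms.append(terms[-1] * r)
    return terms

# The example from the video: A_1 = 5, r = 4
print(geometric_terms(5, 4, 4))  # -> [5, 20, 80, 320]
```

Dividing any two consecutive terms recovers the common ratio, e.g. 20 / 5 = 4.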
|
{"url":"https://www.pearson.com/channels/precalculus/learn/patrick/20-sequences-series-and-induction/geometric-sequences?chapterId=24afea94","timestamp":"2024-11-10T15:52:02Z","content_type":"text/html","content_length":"408495","record_id":"<urn:uuid:973f0770-adbd-43ba-8749-0e0ff66d2a81>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00070.warc.gz"}
|
Chapter 20: Differential Equations Flashcards
First and second order equations, all the harmonic motion, and coupled equations... I wonder why this one isn't in the mechanics section of the book...
How do you find the integrating factor?
Define a general and particular solution for a 1st order equation.
For an auxiliary equation of a 2nd order differential in the following form (three cards, one per form of the roots; the equations were shown as images), what is the general equation?
What is the complementary function?
For a nonhomogeneous equation (≠0), solving the auxiliary equation gives a complementary function.
How do you find the general solution for a 2nd order differential equation?
When the right-hand side of a 2nd order differential equation is in the following form (six cards, one per standard right-hand-side form; the equations were shown as images), what is the form of the particular integral?
What is it called, for a 2nd order differential equation, when you find the values of the constants of the general equation?
What is the equation that satisfies the conditions for a simple harmonic motion equation?
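As a study aid for the auxiliary-equation cards above, here is a sketch that solves the auxiliary equation m² + am + b = 0 for y″ + ay′ + by = 0 and names the form of the complementary function in each of the three root cases (the function name and the string forms are my own labels, not from the flashcards):

```python
import cmath

def classify_auxiliary(a, b):
    """For y'' + a*y' + b*y = 0, solve the auxiliary equation
    m^2 + a*m + b = 0 and name the form of the complementary function."""
    disc = a * a - 4 * b
    m1 = (-a + cmath.sqrt(disc)) / 2
    m2 = (-a - cmath.sqrt(disc)) / 2
    if disc > 0:
        form = "y = A*exp(m1*x) + B*exp(m2*x)"            # two distinct real roots
    elif disc == 0:
        form = "y = (A + B*x)*exp(m1*x)"                  # one repeated real root
    else:
        form = "y = exp(p*x)*(A*cos(q*x) + B*sin(q*x))"   # complex roots p +/- q*i
    return m1, m2, form

# Simple harmonic motion y'' + w^2*y = 0 gives purely imaginary roots,
# hence the familiar sinusoidal solution:
print(classify_auxiliary(0, 4))
```

The last line covers the simple-harmonic-motion card: with a = 0 and b = ω² the roots are ±ωi and the solution is pure sine and cosine.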
|
{"url":"https://www.brainscape.com/flashcards/chapter-20-differential-equations-10992502/packs/18217956","timestamp":"2024-11-10T00:12:39Z","content_type":"text/html","content_length":"105851","record_id":"<urn:uuid:cfe6dae6-7a86-49e8-acf7-bf546450e0ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028164.10/warc/CC-MAIN-20241109214337-20241110004337-00109.warc.gz"}
|
Solve using Excel and Minitab: the probability that of 6 randomly selected patients, 4 will recover
Solve using Excel & Minitab (Do not use formula)
1. A die is tossed 3 times. What is the probability of
(a) No fives turning up?
(b) 1 five?
(c) 3 fives?
2. Hospital records show that of patients suffering from a certain disease, 75% die of it. What is the probability that of 6 randomly selected patients, 4 will recover?
3. The ratio of boys to girls at birth in Singapore is quite high at 1.09:1.
What proportion of Singapore families with exactly 6 children will have at least 3 boys? (Ignore the probability of multiple births.)
4. A manufacturer of metal pistons finds that on the average, 12% of his pistons are rejected because they are either oversize or undersize. What is the probability that a batch of 10 pistons will contain (a) no more than 2 rejects? (b) at least 2 rejects?
5. A die is rolled 240 times. Find the mean, variance and standard deviation for the number of 3s that will be rolled?
6. If there are 200 typographical errors randomly distributed in a 500 page manuscript, find the probability that a given page contains exactly 3 errors.
7. A sales firm receives on average 3 calls per hour on its toll-free number. For any given hour, find the probability that it will receive a. At most 3 calls; b. At least 3 calls; and c. Five or more calls.
8. A life insurance salesman sells on the average 3 life insurance policies per week. Calculate the probability that in a given week he will sell
1. Some policies
2. 2 or more policies but less than 5 policies.
3. Assuming that there are 5 working days per week, what is the probability that in a given day he will sell one policy?
9. Twenty sheets of aluminum alloy were examined for surface flaws. The frequency of the number of sheets with a given number of flaws per sheet was as follows:
Number of flaws Frequency
What is the probability of finding a sheet chosen at random which contains 3 or more surface flaws?
10. Find the area right of z=1.11
11. Find the area left of z = -1.93
12. Find the area between ±1, ±2, ±3, ±4, ±5, and ±6 standard deviations.
13. Find the z value such that the area under the normal distribution curve between 0 and the z value is 0.2123
14. A study on recycling shows that in a certain city, each household accumulates an average of 14 pounds of newspaper each month to be recycled. The standard deviation is 2 pounds. If a household is
selected at random, find the probability it will accumulate the following:
a. Between 13 and 17 pounds of newspaper for a month.
b. More than 16.2 pounds of newspaper for one month.
15. A standardized achievement test has a mean of 50 and a standard deviation of 10. The scores are normally distributed. If the test is administered to 800 selected people, approximately how many
will score between 48 and 62?
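Although the post asks for Excel and Minitab, the title problem (number 2) is a one-line binomial calculation: if 75% die, then p(recover) = 0.25, and we want exactly 4 recoveries out of n = 6. A quick Python check:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(exactly k successes in n independent trials with success probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Problem 2: p(recover) = 1 - 0.75 = 0.25; exactly 4 of 6 patients recover
print(binom_pmf(4, 6, 0.25))  # -> about 0.033
```

So there is roughly a 3.3% chance that exactly 4 of the 6 patients recover — the same value Excel's BINOM.DIST or Minitab's binomial calculator should report.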
|
{"url":"https://www.statisticsanswered.com/questions/1658/solve-using-excel-and-minitab-the-probability-that-of-6-randomly-selected-patients-4-will-recover","timestamp":"2024-11-11T12:51:42Z","content_type":"text/html","content_length":"93152","record_id":"<urn:uuid:8ae994a7-aae7-41d3-ab52-2213993846f4>","cc-path":"CC-MAIN-2024-46/segments/1730477028230.68/warc/CC-MAIN-20241111123424-20241111153424-00451.warc.gz"}
|
Bitcoin mining on a 55 year old IBM 1401 mainframe: 80 seconds per hash
Could an IBM mainframe from the 1960s mine Bitcoin? The idea seemed crazy, so I decided to find out. I implemented the Bitcoin hash algorithm in assembly code for the IBM 1401 and tested it on a
working vintage mainframe. It turns out that this computer could mine, but so slowly it would take more than the lifetime of the universe to successfully mine a block. While modern hardware can
compute billions of hashes per second, the 1401 takes 80 seconds to compute a single hash. This illustrates the improvement of computer performance in the past decades, most famously described by
Moore's Law.
The photo below shows the card deck I used, along with the output of my SHA-256 hash program as printed by the line printer. (The card on the front of the deck is just for decoration; it was a huge
pain to punch.) Note that the second line of output ends with a bunch of zeros; this indicates a successful hash.
Card deck used to compute SHA-256 hashes on IBM 1401 mainframe. Behind the card deck is the line printer output showing the input to the algorithm and the resulting hash.
How Bitcoin mining works
Bitcoin, a digital currency that can be transmitted across the Internet, has attracted a lot of attention lately. If you're not familiar with how it works, the Bitcoin system can be thought of as a
ledger that keeps track of who owns which bitcoins, and allows them to be transferred from one person to another. The interesting thing about Bitcoin is there's no central machine or authority
keeping track of things. Instead, the records are spread across thousands of machines on the Internet.
The difficult problem with a distributed system like this is how to ensure everyone agrees on the records, so everyone agrees if a transaction is valid, even in the presence of malicious users and
slow networks. The solution in Bitcoin is a process called mining—about every 10 minutes a block of outstanding transactions is mined, which makes the block official.
To prevent anyone from controlling which transactions are mined, the mining process is very difficult and competitive. In particular a key idea of Bitcoin is that mining is made very, very difficult,
a technique called proof-of-work. It takes an insanely huge amount of computational effort to mine a block, but once a block has been mined, it is easy for peers on the network to verify that a block
has been successfully mined. The difficulty of mining keeps anyone from maliciously taking over Bitcoin, and the ease of checking that a block has been mined lets users know which transactions are official.
As a side-effect, mining adds new bitcoins to the system. For each block mined, miners currently get 25 new bitcoins (currently worth about $6,000), which encourages miners to do the hard work of
mining blocks. With the possibility of receiving $6,000 every 10 minutes, there is a lot of money in mining and people invest huge sums in mining hardware.
Line printer and IBM 1401 mainframe at the Computer History Museum. This is the computer I used to run my program. The console is in the upper left. Each of the dark rectangular panels on the
computer is a "gate" that can be folded out for maintenance.
Mining requires a task that is very difficult to perform, but easy to verify. Bitcoin mining uses cryptography, with a hash function called double SHA-256. A hash takes a chunk of data as input and
shrinks it down into a smaller hash value (in this case 256 bits). With a cryptographic hash, there's no way to get a hash value you want without trying a whole lot of inputs. But once you find an
input that gives the value you want, it's easy for anyone to verify the hash. Thus, cryptographic hashing becomes a good way to implement the Bitcoin "proof-of-work".
In more detail, to mine a block, you first collect the new transactions into a block. Then you hash the block to form an (effectively random) block hash value. If the hash starts with 16 zeros, the
block is successfully mined and is sent into the Bitcoin network. Most of the time the hash isn't successful, so you modify the block slightly and try again, over and over billions of times. About
every 10 minutes someone will successfully mine a block, and the process starts over. It's kind of like a lottery, where miners keep trying until someone "wins". It's hard to visualize just how
difficult the hashing process is: finding a valid hash is less likely than finding a single grain of sand out of all the sand on Earth. To find these hashes, miners have datacenters full of
specialized hardware to do this mining.
I've simplified a lot of details. For in-depth information on Bitcoin and mining, see my articles Bitcoins the hard way and Bitcoin mining the hard way.
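The hash-and-check-for-zeros loop described above can be sketched with Python's hashlib. This is a toy: the real 80-byte block header layout and the 256-bit difficulty target are more involved, and `b"toy block header"` simply stands in for a serialized header.

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin's hash function: SHA-256 applied twice."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, hex_zeros: int, max_tries: int = 1_000_000):
    """Toy miner: append a 4-byte nonce and rehash until the digest starts
    with the required number of hex zeros. Real mining needs 16 or more
    zeros; a handful of zeros finishes in a fraction of a second."""
    target = "0" * hex_zeros
    for nonce in range(max_tries):
        digest = double_sha256(header + nonce.to_bytes(4, "little")).hex()
        if digest.startswith(target):
            return nonce, digest
    return None  # gave up -- raise the try limit or lower the difficulty

print(mine(b"toy block header", 4))
```

Each extra hex zero multiplies the expected work by 16, which is why going from this toy's 4 zeros to the 16+ of real mining pushes the job from milliseconds to datacenters.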
The SHA-256 hash algorithm used by Bitcoin
Next, I'll discuss the hash function used in Bitcoin, which is based on a standard cryptographic hash function called SHA-256. Bitcoin uses "double SHA-256" which simply applies the SHA-256 function
twice. The SHA-256 algorithm is so simple you can literally
do it by hand
, but it manages to scramble the data entirely unpredictably. The algorithm takes input blocks of 64 bytes, combines the data cryptographically, and generates a 256-bit (32 byte) output. The
algorithm uses a simple round and repeats it 64 times. The diagram below shows one round, which takes eight 4-byte inputs, A through H, performs a few operations, and generates new values for A
through H.
The dark blue boxes mix up the values in non-linear ways that are hard to analyze cryptographically. (If you could figure out a mathematical shortcut to generate successful hashes, you could take
over Bitcoin mining.) The Ch "choose" box chooses bits from F or G, based on the value of input E. The Σ "sum" boxes rotate the bits of A (or E) to form three rotated versions, and then sums them
together modulo 2. The Ma "majority" box looks at the bits in each position of A, B, and C, and selects 0 or 1, whichever value is in the majority. The red boxes perform 32-bit addition, generating
new values for A and E. The input W[t] is based on the input data, slightly processed. (This is where the input block gets fed into the algorithm.) The input K[t] is a constant defined for each round.
As can be seen from the diagram above, only A and E are changed in a round. The other values pass through unchanged, with the old A value becoming the new B value, the old B value becoming the new C
value and so forth. Although each round of SHA-256 doesn't change the data much, after 64 rounds the input data will be completely scrambled, generating the unpredictable hash output.
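The boxes in the round diagram translate directly into short bit operations. A Python sketch of one round follows — 32-bit words are emulated by masking, and the rotation amounts (2, 13, 22 for Σ0 and 6, 11, 25 for Σ1) are the standard SHA-256 ones; K[t] and W[t] are passed in rather than derived here:

```python
MASK = 0xFFFFFFFF  # emulate 32-bit words by masking after each operation

def rotr(x, n):
    """Rotate a 32-bit word right by n bits."""
    return ((x >> n) | (x << (32 - n))) & MASK

def ch(e, f, g):
    # "choose": each bit of e selects the corresponding bit from f or g
    return (e & f) ^ (~e & g & MASK)

def maj(a, b, c):
    # "majority": each output bit is the majority vote of a, b, c at that position
    return (a & b) ^ (a & c) ^ (b & c)

def big_sigma0(a):
    # Sigma-0: xor of three rotations of A
    return rotr(a, 2) ^ rotr(a, 13) ^ rotr(a, 22)

def big_sigma1(e):
    # Sigma-1: xor of three rotations of E
    return rotr(e, 6) ^ rotr(e, 11) ^ rotr(e, 25)

def round_step(a, b, c, d, e, f, g, h, k_t, w_t):
    """One SHA-256 round: only the new A and E are freshly computed;
    every other value just shifts down one slot (old A becomes B, etc.)."""
    t1 = (h + big_sigma1(e) + ch(e, f, g) + k_t + w_t) & MASK
    t2 = (big_sigma0(a) + maj(a, b, c)) & MASK
    return ((t1 + t2) & MASK, a, b, c, (d + t1) & MASK, e, f, g)
```

Running this 64 times per block, with the proper constants and message schedule, is exactly the work the 1401 program below does one character at a time.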
The IBM 1401
I decided to implement this algorithm on the IBM 1401 mainframe. This computer was announced in 1959, and went on to become the best-selling computer of the mid-1960s, with more than 10,000 systems
in use. The 1401 wasn't a very powerful computer even for 1960, but since it leased for the low price of $2500 a month, it made computing possible for mid-sized businesses that previously couldn't
have afforded a computer.
The IBM 1401 didn't use silicon chips. In fact it didn't even use silicon. Its transistors were built out of a semiconductor called germanium, which was used before silicon took over. The transistors
and other components were mounted on boards the size of playing cards called SMS cards. The computer used thousands of these cards, which were installed in racks called "gates". The IBM 1401 had a
couple dozen of these gates, which folded out of the computer for maintenance. Below, one of the gates is opened up showing the circuit boards and cabling.
This shows a rack (called a "gate") folded out of the IBM 1401 mainframe. The photo shows the SMS cards used to implement the circuits. This specific rack controls the tape drives.
Internally, the computer was very different from modern computers. It didn't use 8-bit bytes, but 6-bit characters based on binary coded decimal (BCD). Since it was a business machine, the computer
used decimal arithmetic instead of binary arithmetic and each character of storage held a digit, 0 through 9. The computer came with 4000 characters of storage in magnetic core memory; a
dishwasher-sized memory expansion box provided 12,000 more characters of storage. The computer was designed to use punched cards as input, with a card reader that read the program and data. Output
was printed on a fast line printer or could be punched on more cards.
The Computer History Museum in Mountain View has two working IBM 1401 mainframes. I used one of them to run the SHA-256 hash code. For more information on the IBM 1401, see my article Fractals on the
IBM 1401.
Implementing SHA-256 on the IBM 1401
The IBM 1401 is almost the worst machine you could pick to implement the SHA-256 hash algorithm. The algorithm is designed to be implemented efficiently on machines that can do bit operations on
32-bit words. Unfortunately, the IBM 1401 doesn't have 32-bit words or even bytes. It uses 6-bit characters and doesn't provide bit operations. It doesn't even handle binary arithmetic, using decimal
arithmetic instead. Thus, implementing the algorithm on the 1401 is slow and inconvenient.
I ended up using one character per bit. A 32-bit value is stored as 32 characters, either "0" or "1". My code has to perform the bit operations and additions character-by-character, basically
checking each character and deciding what to do with it. As you might expect, the resulting code is very slow.
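As a rough model, the bit-per-character representation and the fix-ups used to turn digit sums into XOR and 32-bit addition can be sketched in modern Python. This is an illustrative sketch of the technique, not a translation of the 1401 assembly:

```python
# Model of the character-per-bit representation: a 32-bit word is a list of
# 32 decimal digits, each 0 or 1 (bit 0 on the left, as in the 1401 code).

def to_bits(value):
    """Represent a 32-bit value as 32 single-digit 'characters'."""
    return [(value >> (31 - i)) & 1 for i in range(32)]

def from_bits(bits):
    result = 0
    for b in bits:
        result = (result << 1) | b
    return result

def xor_words(*words):
    """XOR via decimal addition: add the digit columns, then reduce each
    digit mod 2 -- the same fix-up the 'xor' routine performs."""
    return [sum(col) % 2 for col in zip(*words)]

def add_words(a, b):
    """32-bit addition via digit sums with manual carry propagation,
    mirroring the 'sum' routine (the carry off the top bit is dropped)."""
    digits = [x + y for x, y in zip(a, b)]
    for i in range(31, 0, -1):          # propagate carries right to left
        digits[i - 1] += digits[i] // 2
        digits[i] %= 2
    digits[0] %= 2                      # discard the final carry (mod 2^32)
    return digits

a, b = 0x6a09e667, 0xbb67ae85
assert from_bits(xor_words(to_bits(a), to_bits(b))) == a ^ b
assert from_bits(add_words(to_bits(a), to_bits(b))) == (a + b) & 0xffffffff
```

The key point is that both XOR and addition start from the same digit-wise sum; only the clean-up pass differs, which is why the assembly can share so much structure between the `xor` and `sum` routines.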
The assembly code I wrote is below. The comments should give you a rough idea of how the code works. Near the end of the code, you can see the table of constants required by the SHA-256 algorithm,
specified in hex. Since the 1401 doesn't support hex, I had to write my own routines to convert between hex and binary. I won't try to explain IBM 1401 assembly code here, except to point out that it
is very different from modern computers. It doesn't even have subroutine calls and returns. Operations happen on memory, as there aren't any general-purpose registers.
job bitcoin
* SHA-256 hash
* Ken Shirriff //righto.com
ctl 6641
org 087
X1 dcw @000@
org 092
X2 dcw @000@
org 097
X3 dcw @000@
org 333
start cs 299
sw 001
lca 064, input0
mcw 064, 264
* Initialize word marks on storage
mcw +s0, x3
wmloop sw 0&x3
ma @032@, x3
c +h7+32, x3
bu wmloop
mcw +input-127, x3 * Put input into warr[0] to warr[15]
mcw +warr, x1
mcw @128@, tobinc
b tobin
* Compute message schedule array w[0..63]
mcw @16@, i
* i is word index 16-63
* x1 is start of warr[i-16], i.e. bit 0 (bit 0 on left, bit 31 on right)
mcw +warr, x1
wloop c @64@, i
be wloopd
* Compute s0
mcw +s0, x2
za +0, 31&x2 * Zero s0
* Add w[i-15] rightrotate 7
sw 7&x2 * Wordmark at bit 7 (from left) of s0
a 56&x1, 31&x2 * Right shifted: 32+31-7 = bit 24 of w[i-15], 31 = end of s0
a 63&x1, 6&x2 * Wrapped: 32+31 = end of w[i-15], 7-1 = bit 6 of s0
cw 7&x2 * Clear wordmark
* Add w[i-15] rightrotate 18
sw 18&x2 * Wordmark at bit 18 (from left) of s0
a 45&x1, 31&x2 * Right shifted: 32+31-18 = bit 13 of w[i-15], 31 = end of s0
a 63&x1, 17&x2 * Wrapped: 32+31 = end of w[i-15], 18-1 = bit 17 of s0
cw 18&x2 * Clear wordmark
* Add w[i-15] rightshift 3
sw 3&x2 * Wordmark at bit 3 (from left) of s0
a 60&x1, 31&x2 * Right shifted: 32+31-3 = bit 28 of w[i-15], 31 = end of s0
cw 3&x2 * Clear wordmark
* Convert sum to xor
mcw x1, x1tmp
mcw +s0+31, x1 * x1 = right end of s0
mcw @032@, x2 * Process 32 bits
b xor
sw s0 * Restore wordmark cleared by xor
mcw x1tmp, x1
* Compute s1
mcw +s1, x2
za +0, 31&x2 * Zero s1
* Add w[i-2] rightrotate 17
sw 17&x2 * Wordmark at bit 17 (from left) of s1
a 462&x1, 31&x2 * Right shifted: 14*32+31-17 = bit 14 of w[i-2], 31 = end of s1
a 479&x1, 16&x2 * Wrapped: 14*32+31 = end of w[i-2], 17-1 = bit 16 of s1
cw 17&x2 * Clear wordmark
* Add w[i-2] rightrotate 19
sw 19&x2 * Wordmark at bit 19 (from left) of s1
a 460&x1, 31&x2 * Right shifted: 14*32+31-19 = bit 12 of w[i-2], 31 = end of s1
a 479&x1, 18&x2 * Wrapped: 14*32+31 = end of w[i-2], 19-1 = bit 18 of s1
cw 19&x2 * Clear wordmark
* Add w[i-2] rightshift 10
sw 10&x2 * Wordmark at bit 10 (from left) of s1
a 469&x1, 31&x2 * Right shifted: 14*32+31-10 = bit 21 of w[i-2], 31 = end of s1
cw 10&x2 * Clear wordmark
* Convert sum to xor
mcw +s1+31, x1 * x1 = right end of s1
mcw @032@, x2 * Process 32 bits
b xor
sw s1 * Restore wordmark cleared by xor
* Compute w[i] := w[i-16] + s0 + w[i-7] + s1
mcw x1tmp, x1
a s1+31, s0+31 * Add s1 to s0
a 31&x1, s0+31 * Add w[i-16] to s0
a 319&x1, s0+31 * Add 9*32+31 = w[i-7] to s0
* Convert bit sum to 32-bit sum
mcw +s0+31, x1 * x1 = right end of s0
mcw @032@, x2 * Process 32 bits
b sum
sw s0 * Restore wordmark cleared by sum
mcw x1tmp, x1
mcw s0+31, 543&x1 * Move s0 to w[i]
ma @032@, x1
a +1, i
mz @0@, i
b wloop
x1tmp dcw #5
* Initialize: Copy hex h0init-h7init into binary h0-h7
wloopd mcw +h0init-7, x3
mcw +h0, x1
mcw @064@, tobinc * 8*8 hex digits
b tobin
* Initialize a-h from h0-h7
mcw @000@, x1
ilp mcw h0+31&x1, a+31&x1
ma @032@, x1
c x1, @256@
bu ilp
mcw @000@, bitidx * bitidx = i*32 = bit index
mcw @000@, kidx * kidx = i*8 = key index
* Compute s1 from e
mainlp mcw +e, x1
mcw +s1, x2
za +0, 31&x2 * Zero s1
* Add e rightrotate 6
sw 6&x2 * Wordmark at bit 6 (from left) of s1
a 25&x1, 31&x2 * Right shifted: 31-6 = bit 25 of e, 31 = end of s1
a 31&x1, 5&x2 * Wrapped: 31 = end of e, 6-1 = bit 5 of s1
cw 6&x2 * Clear wordmark
* Add e rightrotate 11
sw 11&x2 * Wordmark at bit 11 (from left) of s1
a 20&x1, 31&x2 * Right shifted: 31-11 = bit 20 of e, 31 = end of s1
a 31&x1, 10&x2 * Wrapped: 31 = end of e, 11-1 = bit 10 of s1
cw 11&x2 * Clear wordmark
* Add e rightrotate 25
sw 25&x2 * Wordmark at bit 25 (from left) of s1
a 6&x1, 31&x2 * Right shifted: 31-25 = bit 6 of e, 31 = end of s1
a 31&x1, 24&x2 * Wrapped: 31 = end of e, 25-1 = bit 24 of s1
cw 25&x2 * Clear wordmark
* Convert sum to xor
mcw +s1+31, x1 * x1 = right end of s1
mcw @032@, x2 * Process 32 bits
b xor
sw s1 * Restore wordmark cleared by xor
* Compute ch: choose function
mcw @000@, x1 * x1 is index from 0 to 31
chl c e&x1, @0@
be chzero
mn f&x1, ch&x1 * for 1, select f bit
b chincr
chzero mn g&x1, ch&x1 * for 0, select g bit
chincr a +1, x1
mz @0@, x1
c @032@, x1
bu chl
* Compute temp1: k[i] + h + S1 + ch + w[i]
cs 299
mcw +k-7, x3 * Convert k[i] to binary in temp1
ma kidx, x3
mcw +temp1, x1
mcw @008@, tobinc * 8 hex digits
b tobin
mcw @237@, x3
mcw +temp1, x1
mcw @008@, tobinc
b tohex
a h+31, temp1+31 * +h
a s1+31, temp1+31 * +s1
a ch+31, temp1+31 * +ch
mcw bitidx, x1
a warr+31&x1, temp1+31 * + w[i]
* Convert bit sum to 32-bit sum
mcw +temp1+31, x1 * x1 = right end of temp1
b sum
* Compute s0 from a
mcw +a, x1
mcw +s0, x2
za +0, 31&x2 * Zero s0
* Add a rightrotate 2
sw 2&x2 * Wordmark at bit 2 (from left) of s0
a 29&x1, 31&x2 * Right shifted: 31-2 = bit 29 of a, 31 = end of s0
a 31&x1, 1&x2 * Wrapped: 31 = end of a, 2-1 = bit 1 of s0
cw 2&x2 * Clear wordmark
* Add a rightrotate 13
sw 13&x2 * Wordmark at bit 13 (from left) of s0
a 18&x1, 31&x2 * Right shifted: 31-13 = bit 18 of a, 31 = end of s0
a 31&x1, 12&x2 * Wrapped: 31 = end of a, 13-1 = bit 12 of s0
cw 13&x2 * Clear wordmark
* Add a rightrotate 22
sw 22&x2 * Wordmark at bit 22 (from left) of s0
a 9&x1, 31&x2 * Right shifted: 31-22 = bit 9 of a, 31 = end of s0
a 31&x1, 21&x2 * Wrapped: 31 = end of a, 22-1 = bit 21 of s0
cw 22&x2 * Clear wordmark
* Convert sum to xor
mcw +s0+31, x1 * x1 = right end of s0
mcw @032@, x2 * Process 32 bits
b xor
sw s0 * Restore wordmark cleared by xor
* Compute maj(a, b, c): majority function
za +0, maj+31
a a+31, maj+31
a b+31, maj+31
a c+31, maj+31
mz @0@, maj+31
mcw @000@, x1 * x1 is index from 0 to 31
mjl c maj&x1, @2@
bh mjzero
mn @1@, maj&x1 * majority of the 3 bits is 1
b mjincr
mjzero mn @0@, maj&x1 * majority of the 3 bits is 0
mjincr a +1, x1
mz @0@, x1
c @032@, x1
bu mjl
* Compute temp2: S0 + maj
za +0, temp2+31
a s0+31, temp2+31
a maj+31, temp2+31
* Convert bit sum to 32-bit sum
mcw +temp2+31, x1 * x1 = right end of temp1
b sum
mcw g+31, h+31 * h := g
mcw f+31, g+31 * g := f
mcw e+31, f+31 * f := e
za +0, e+31 * e := d + temp1
a d+31, e+31
a temp1+31, e+31
mcw +e+31, x1 * Convert sum to 32-bit sum
b sum
mcw c+31, d+31 * d := c
mcw b+31, c+31 * c := b
mcw a+31, b+31 * b := a
za +0, a+31 * a := temp1 + temp2
a temp1+31, a+31
a temp2+31, a+31
mcw +a+31, x1 * Convert sum to 32-bit sum
b sum
a @8@, kidx * Increment kidx by 8 chars
mz @0@, kidx
ma @032@, bitidx * Increment bitidx by 32 bits
c @!48@, bitidx * Compare to 2048
bu mainlp
* Add a-h to h0-h7
cs 299
mcw @00000@, x1tmp
add1 mcw x1tmp, x1
a a+31&x1, h0+31&x1
ma +h0+31, x1 * Convert sum to 32-bit sum
b sum
ma @032@, x1tmp
c @00256@, x1tmp
bu add1
mcw @201@, x3
mcw +h0, x1
mcw @064@, tobinc
b tohex
mcw 280, 180
finis h
b finis
* Converts sum of bits to xor
* X1 is right end of word
* X2 is bit count
* Note: clears word marks
xor sbr xorx&3
xorl c @000@, x2
be xorx
xorfix mz @0@, 0&x1 * Clear zone
c 0&x1, @2@
bh xorok
sw 0&x1 * Subtract 2 and loop
s +2, 0&x1
cw 0&x1
b xorfix
xorok ma @I9I@, x1 * x1 -= 1
s +1, x2 * x2 -= 1
mz @0@, x2
b xorl * loop
xorx b @000@
* Converts sum of bits to sum (i.e. propagate carries if digit > 1)
* X1 is right end of word
* Ends at word mark
sum sbr sumx&3
suml mz @0@, 0&x1 * Clear zone
c 0&x1, @2@ * If digit is <2, then ok
bh sumok
s +2, 0&x1 * Subtract 2 from digit
bwz suml, 0&x1, 1 * Skip carry if at wordmark
a @1@, 15999&x1 * Add 1 to previous position
b suml * Loop
sumok bwz sumx,0&x1,1 * Quit if at wordmark
ma @I9I@, x1 * x1 -= 1
b suml * loop
sumx b @000@ * return
* Converts binary to string of hex digits
* X1 points to start (left) of binary
* X3 points to start (left) of hex buffer
* X1, X2, X3 destroyed
* tobinc holds count (# of hex digits)
tohex sbr tohexx&3
tohexl c @000@, tobinc * check counter
be tohexx
s @1@, tobinc * decrement counter
mz @0@, tobinc
b tohex4
mcw hexchr, 0&x3
ma @004@, X1
ma @001@, X3
b tohexl * loop
tohexx b @000@
* X1 points to 4 bits
* Convert to hex char and write into hexchr
* X2 destroyed
tohex4 sbr tohx4x&3
mcw @000@, x2
c 3&X1, @1@
bu tohx1
a +1, x2
tohx1 c 2&X1, @1@
bu tohx2
a +2, x2
tohx2 c 1&x1, @1@
bu tohx4
a +4, x2
tohx4 c 0&x1, @1@
bu tohx8
a +8, x2
tohx8 mz @0@, x2
mcw hextab-15&x2, hexchr
tohx4x b @000@
* Converts string of hex digits to binary
* X3 points to start (left) of hex digits
* X1 points to start (left) of binary digits
* tobinc holds count (# of hex digits)
* X1, X3 destroyed
tobin sbr tobinx&3
tobinl c @000@, tobinc * check counter
be tobinx
s @1@, tobinc * decrement counter
mz @0@, tobinc
mcw 0&X3, hexchr
b tobin4 * convert 1 char
ma @004@, X1
ma @001@, X3
b tobinl * loop
tobinx b @000@
tobinc dcw @000@
* Convert hex digit to binary
* Digit in hexchr (destroyed)
* Bits written to x1, ..., x1+3
tobin4 sbr tobn4x&3
mcw @0000@, 3+x1 * Start with zero bits
bwz norm,hexchr,2 * Branch if no zone
mcw @1@, 0&X1
a @1@, hexchr * Convert letter to value: A (1) -> 2, F (6) -> 7
mz @0@, hexchr
b tob4
norm c @8@, hexchr
bl tob4
mcw @1@, 0&X1
s @8@, hexchr
mz @0@, hexchr
tob4 c @4@, hexchr
bl tob2
mcw @1@, 1&X1
s @4@, hexchr
mz @0@, hexchr
tob2 c @2@, hexchr
bl tob1
mcw @1@, 2&X1
s @2@, hexchr
mz @0@, hexchr
tob1 c @1@, hexchr
bl tobn4x
mcw @1@, 3&X1
tobn4x b @000@
* Message schedule array is 64 entries of 32 bits = 2048 bits.
org 3000
warr equ 3000
s0 equ warr+2047 *32 bits
s1 equ s0+32
ch equ s1+32 *32 bits
temp1 equ ch+32 *32 bits
temp2 equ temp1+32 *32 bits
maj equ temp2+32 *32 bits
a equ maj+32
b equ a+32
c equ b+32
d equ c+32
e equ d+32
f equ e+32
g equ f+32
h equ g+32
h0 equ h+32
h1 equ h0+32
h2 equ h1+32
h3 equ h2+32
h4 equ h3+32
h5 equ h4+32
h6 equ h5+32
h7 equ h6+32
org h7+32
hexchr dcw @0@
hextab dcw @0123456789abcdef@
i dcw @00@ * Loop counter for w computation
bitidx dcw #3
kidx dcw #3
* 64 round constants for SHA-256
k dcw @428a2f98@
dcw @71374491@
dcw @b5c0fbcf@
dcw @e9b5dba5@
dcw @3956c25b@
dcw @59f111f1@
dcw @923f82a4@
dcw @ab1c5ed5@
dcw @d807aa98@
dcw @12835b01@
dcw @243185be@
dcw @550c7dc3@
dcw @72be5d74@
dcw @80deb1fe@
dcw @9bdc06a7@
dcw @c19bf174@
dcw @e49b69c1@
dcw @efbe4786@
dcw @0fc19dc6@
dcw @240ca1cc@
dcw @2de92c6f@
dcw @4a7484aa@
dcw @5cb0a9dc@
dcw @76f988da@
dcw @983e5152@
dcw @a831c66d@
dcw @b00327c8@
dcw @bf597fc7@
dcw @c6e00bf3@
dcw @d5a79147@
dcw @06ca6351@
dcw @14292967@
dcw @27b70a85@
dcw @2e1b2138@
dcw @4d2c6dfc@
dcw @53380d13@
dcw @650a7354@
dcw @766a0abb@
dcw @81c2c92e@
dcw @92722c85@
dcw @a2bfe8a1@
dcw @a81a664b@
dcw @c24b8b70@
dcw @c76c51a3@
dcw @d192e819@
dcw @d6990624@
dcw @f40e3585@
dcw @106aa070@
dcw @19a4c116@
dcw @1e376c08@
dcw @2748774c@
dcw @34b0bcb5@
dcw @391c0cb3@
dcw @4ed8aa4a@
dcw @5b9cca4f@
dcw @682e6ff3@
dcw @748f82ee@
dcw @78a5636f@
dcw @84c87814@
dcw @8cc70208@
dcw @90befffa@
dcw @a4506ceb@
dcw @bef9a3f7@
dcw @c67178f2@
* 8 initial hash values for SHA-256
h0init dcw @6a09e667@
h1init dcw @bb67ae85@
h2init dcw @3c6ef372@
h3init dcw @a54ff53a@
h4init dcw @510e527f@
h5init dcw @9b05688c@
h6init dcw @1f83d9ab@
h7init dcw @5be0cd19@
input0 equ h7init+64
org h7init+65
dc @80000000000000000000000000000000@
input dc @00000000000000000000000000000100@ * 512 bits with the mostly-zero padding
end start
I punched the executable onto a deck of about 85 cards, which you can see at the beginning of the article. I also punched a card with the input to the hash algorithm. To run the program, I loaded the
card deck into the card reader and hit the "Load" button. The cards flew through the reader at 800 cards per minute, so it took just a few seconds to load the program. The computer's console (below)
flashed frantically for 40 seconds while the program ran. Finally, the printer printed out the resulting hash (as you can see at the top of the article) and the results were punched onto a new card.
Since Bitcoin mining uses double SHA-256 hashing, hashing for mining would take twice as long (80 seconds).
The console of the IBM 1401 shows a lot of activity while computing a SHA-256 hash.
Performance comparison
The IBM 1401 can compute a double SHA-256 hash in 80 seconds. It requires about 3000 Watts of power, roughly the same as an oven or clothes dryer. A basic IBM 1401 system sold for $125,600, which is
about a million dollars in 2015 dollars. On the other hand, today you can spend $50 and get
a USB stick miner
with a custom ASIC integrated circuit. This USB miner performs 3.6 billion hashes per second and uses about 4 watts. The enormous difference in performance is due to several factors: the huge
increase in computer speed in the last 50 years demonstrated by Moore's law, the performance lost by using a decimal business computer for a binary-based hash, and the giant speed gain from custom
Bitcoin mining hardware.
To summarize, to mine a block at current difficulty, the IBM 1401 would take about 5x10^14 years (about 40,000 times the current age of the universe). The electricity would cost about 10^18 dollars.
And you'd get 25 bitcoins worth about $6000. Obviously, mining Bitcoin on an IBM 1401 mainframe is not a profitable venture. The photos below compare the computer circuits of the 1960s with the
circuits of today, making it clear how much technology has advanced.
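The arithmetic behind that estimate can be sanity-checked in a few lines of Python. The difficulty figure below is an assumed round number for the 2015 network difficulty, not a value stated above:

```python
# Back-of-envelope check of the mining-time estimate.
difficulty = 47e9                        # assumed ~2015 Bitcoin difficulty
hashes_per_block = difficulty * 2**32    # expected double-hashes per block
seconds_per_hash = 80                    # one double SHA-256 on the 1401
seconds_per_year = 365.25 * 24 * 3600

years = hashes_per_block * seconds_per_hash / seconds_per_year
print(f"{years:.1e} years")              # on the order of 5e14 years
```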
On the left, SMS cards inside the IBM 1401. Each card has a handful of components and implements a circuit such as a gate. The computer contains more than a thousand of these cards. On the right, the
Bitfury ASIC chip for mining Bitcoins does 2-3 Ghash/second. Image from
(CC BY 3.0 license)
You might think that Bitcoin would be impossible with 1960s technology due to the lack of networking. Would one need to mail punch cards with the blockchain to the other computers? While you might
think of networked computers as a modern thing, IBM supported what they called teleprocessing
as early as 1941. In the 1960s, the IBM 1401 could be hooked up to the
IBM 1009 Data Transmission Unit
, a modem the size of a dishwasher that could transfer up to 300 characters per second over a phone line to another computer. So it would be possible to build a Bitcoin network with 1960s-era
technology. Unfortunately I didn't have teleprocessing hardware available to test this out.
IBM 1009 Data Transmission Unit. This dishwasher-sized modem was introduced in 1960 and can transmit up to 300 characters per second over phone lines. Photo from
Introduction to IBM Data Processing Systems
Implementing SHA-256 in assembly language for an obsolete mainframe was a challenging but interesting project. Performance was worse than I expected (even compared to my
12 minute Mandelbrot
). The decimal arithmetic of a business computer is a very poor match for a binary-optimized algorithm like SHA-256. But even a computer that predates integrated circuits can implement the Bitcoin
mining algorithm. And, if I ever find myself back in 1960 due to some strange time warp, now I know how to set up a Bitcoin network.
The Computer History Museum in Mountain View runs demonstrations of the IBM 1401 on Wednesdays and Saturdays so if you're in the area you should definitely check it out (schedule). Tell the guys
running the demo that you heard about it from me and maybe they'll run my Pi program for you. Thanks to the Computer History Museum and the members of the 1401 restoration team, Robert Garner, Ed
Thelen, Van Snyder, and especially Stan Paddock. The 1401 team's website (ibm-1401.info) has a ton of interesting information about the 1401 and its restoration.
I would like to be clear that I am not actually mining real Bitcoin on the IBM 1401—the Computer History Museum would probably disapprove of that. As I showed above, there's no way you could make
money off mining on the IBM 1401. I did, however, really implement and run the SHA-256 algorithm on the IBM 1401, showing that mining is possible in theory. And if you're wondering how I found a
successful hash, I simply used a block that had already been mined:
block #286819
36 comments:
I learned Fortran in 1977 using punch cards, I feel your pain.
How many cards did you have to throw away before you had the complete program?
Hi Dogzilla! I developed the program on the ROPE 1401 simulator, so I only needed to punch it once. The card with the Bitcoin logo, on the other hand: that took about 20 tries
Lucky you!
MY first computer experience was programming a Honeywell computer using cards punched on an IBM punch card machine. I had never used a keyboard before, so errors were very common, plus, the
character sets weren't 100% matches (I think the IBM punch card machine was EBCDIC). I will live forever knowing that I had to punch an '&' character for the compiler to see a '(' character. I
had to type '&x+3)' if I wanted '(x+3)'.
Add to this, we were supposed to write and debug the program on paper, computer time was expensive, so we lost 5 points credit for every time we ran the deck of cards after the first.
I really enjoy your writing, especially on the Z80, since I worked on the Z80 and the 68000 while at Mostek in the early 80s.
I just finished a great book you might like. Its called "Digital Apollo - Human and Machine in Spaceflight", by D. Mindell. It covers a lot of ground about the Apollo computer and control
systems, at a technical level, but you don't need to understand Control Theory to follow it. I'd call it management level.
Very cool!
Small note: In the second to last sentence of the first paragraph:
"[...]the 1401 takes 80 seconds for a single block"
I think you meant "for a single HASH"
i loved the 1401, and wished you had described it in more depth, variable length instrs, A, B, and M bits, combined op-codes (read, write, punch, and branch!), ... but i guess that is not the
point of your story. good hack!
Dogzilla: thanks for the book recommendation; I plan to look at the Apollo Guidance Computer in more detail at some point.
Kaos: I've updated the text to make it clearer.
Randy: my previous article on the 1401 went into much more technical detail, so I didn't repeat the details here.
12 minute mandelbrot? You and your high-end hardware. With a Casio programmable calculator from around 2010, it took almost an hour to fill the screen (written in pseudo-BASIC, ran at about 3
instructions per second).
I'll let you know if I manage to get it doing SHA-256 hashes.
Why are you attributing the increase in computational power over the last few decades to Moore's law? Moore's law was an observation and prediction. Nothing else. It was the sustained effort of
many engineers over that period. It would have happened with or without Moore's prediction.
Geoff: there's a strong argument that the existence of Moore's law in fact has driven semiconductor planning and roadmaps to stay on the curve. In other words, while Moore's law was originally an
observation, now it is actually driving progress.
In any case, I've changed the wording to avoid confusion.
My first permanent full time job was programming a 1401. This does bring back memories :-)
Very interesting. But for those of us of a certain age, the 1410/7010 system was much easier to understand and much more powerful. Do any working 1410s exist anywhere?
Do you think that if you were implementing this in the sixties, you could mine an entire block, giving that difficulty at that time should be very low, or it actually depends on different
I started on 360's but the 1403 printer lived on right until the 380's when lasers took over. And punch cards lived on too with the 2540. IIRC the 1403 shared the controller with the 2540 and
that still used SMS cards.
And, of course, the 360's have 1401 emulation mode which was used to encourage upgrading to 360.
A 1960s hashing function would have been implemented in 1960s technology. SHA1/256 is not a 1960s hash function.
Cryptographic hash functions date from the 1970s. So what we'd have in the 1960s would be a simple CRC protection and then a bit more maybe.
A 1960s blockchain would have set the computability bar at the level of technology within grasp. Maybe a 1970s hash function would run faster? some kind of HMAC from the early days?
1401 assembly looks like SAMOS that I had to use in Computer Science 101 in the '60's
I have very fond memories of this computer (in the guise of a 1720 that had all sorts of D/A stuff connected to the basic 1401). I remember picking up a text book on it for a class, and I had read
10+ chapters before the first class. Was a bit "greek" to me, but by the second lecture, I was hooked and WAY AHEAD of the rest of the class. This and the Physics Dept. pdp-8 became my "friends",
and I have been an avid computer "hobbyist" for nearly 40 years. Really glad they have two working examples of such a milestone computer!
Ken, great blog post which is rightly getting a lot of attention. I work for IBM in the mainframe team and would love to discuss this with you further. Please Tweet me @StevenDickens3 and we can
share contact details and hopefully have a chat about how we can potentially collaborate on this interesting topic.
You should have tried this on a Model I 1620 (CADET). Does all of its arithmetic by table lookup in memory.
Fantastic achievement, I have had a similar idea simmering for a long time that I'd attempt to mine a bitcoin on my PDP-11/05. But that's not worth doing now, as yours trumps everything! :)
By the way are there really thousands of SMS cards in a 1401? I would have thought hundreds?
I remember hearing about the CADET years ago. My recollection is that CADET was actually an acronym as well as a code name:
I don't know if the history is right, or just a later embellishment of the CADET name. I wrote 2 simple programs for the 1401 back in 1970-71 when I befriended one of the programmers at the High
School district office.
So when can we expect the SMS card based hardwired Bitcoin computer?
I love your posts! Fantastic
This question is about the pool extranonce, and proof of work related to the pool.
Does the pool give each of the pool's miners a set of the possible values of the nonce to run through?
If we suppose a pool contains only 2 miners, miner1 and miner2,
and V1, V2, ..., Vn, Vn+1, ..., Vmax are the possible values of the nonce,
does miner1 work with nonce values in V1, ..., Vn
and miner2 work with values in Vn+1, ..., Vmax?
(I think the pool is like a single node of the bitcoin peer-to-peer network, and the bitcoin-core code doesn't include a subroutine or class about pools?)
Hi Ken: Just came across your post while reading your newer one about mining with the Alto. Do you remember that Selectric printer I had back in first year at UW (the one we used to type out our
entry in the shortest APL program contest)? The interface between that and my TRS-80 used a handful of SMS cards (solenoid driver cards), interfaced to some homebrew TTL logic.
Hello Ken,
You have a really great informative blog with technology related topics, especially regarding vintage computing. Your articles are very descriptive and explain things in details, so I am able to
understand them and get some insight to learn about new things. I used to collect much of the older hardware many years ago from Sparc (Sparcstations 4,5, SUN E450), PaRISC (HP 9000 712), VAX
(9000 series), SGI (Indigo 2, Indy) workstations and servers, but never really managed to get good in restoring those. So I still have some of these hardware waiting for better days. In my
country there are no clubs or computer museums that would connect such people with similar interests, which is a pity, because I couldn't get anyone to assist me. I also like your Bitcoin related
articles, the thing about mining bitcoins on a 55-year old mainframe is just amazing, and mining them with a pencil and a paper is probably the best "magic trick" to learn after the Rubik's Cube
assembly procedure to impress people, very interesting. :)
Kind regards,
Hola! I've been reading your blog for a while now
and finally got the bravery to go ahead and give you a
shout out from Austin Texas! Just wanted to say keep up the great job!
Hi Ken, it was actually the case that the 1401 had subroutines. Sure, not a stack like today for return addresses, but one did it like this:
SUBROUTINE SBR RETURN+3 Store the "B" Register
... code
RETURN B 0 The 0 would be overwritten by the return address.
The 'B-Reg' held the next instruction; by storing it immediately after taking the branch that 'calls' the subroutine, as shown here, the branch at the end of the subroutine is effectively
converted into a Return-From-Subroutine instruction.
Just when I thought I'm an old guy (born in 1970), I find you guys.
Big big up from Italy! Your blog is awesome!
"Can't Add, Doesn't Even Try" is almost certainly a backronym, where the word came first and the expansion second. But yes, there were addition and multiplication tables in low memory, exactly
like the ones we learned (or used to learn) in primary school, and since there was no memory protection you could easily overwrite them and corrupt the math operations to produce the wrong results.
Interesting, great job Ken.
Thank you for your sharing.
The 1401 was the first mainframe I had physical access to as a teenager learning Shelly & Cashman Structured COBOL Programming on an IBM 029 keypunch. Wonderful walk down memory lane reading this.
One issue with the article:
In the code example, this appears everywhere: "@[email protected]" (see below)
* Initialize a-h from h0-h7
mcw @[email protected], x1
ilp mcw h0+31&x1, a+31&x1
ma @[email protected], x1
c x1, @[email protected]
bu ilp
Sadly, you and I know that it's NOT an e-mail address, it's similar to how double quotes work in BASIC, i.e.:
finished dcw @Hello World!@
Is there anything you can do to fix this in your blog?
- Chaz
Any plans to run the algorithm on the Babbage Difference Engine next?
Just out of curiosity and also since I'm learning to program the 1401, what is the assembly code you have presented? I don't recognise the formatting and it's probably something more advanced
than what I'm currently working on, but it seems useful to know.
Tree Traversal :: CC 315 Textbook
Chapter 15
Tree Traversal
In the last module, we covered the underlying vocabulary of trees and how we can implement our own tree. To recall, we covered: node, edge, root, leaf, parent, child, and degree.
For this module we will expand on trees and gain a better understanding of how powerful trees can be. As before, we will use the same tree throughout the module for a guiding visual example.
Terms I
Many of the terms used in trees relate to terms used in family trees. Having this in mind can help us to better understand some of the terminology involved with abstract trees. Here we have a sample
family tree.
• Ancestor - The ancestors of a node are those reached from child to parent relationships. We can think of this as our parents and our parent’s parents, and so on.
□ Let’s look at all of the ancestors of each of our nodes in the family tree.
☆ Ava’s ancestors: Uzzi, Joe, Myra. This is because Uzzi is the parent of Ava, Joe is the parent of Uzzi, and Myra is the parent of Joe. Try to work out the following and click the name to
reveal the ancestors.
☆ Uma: Zia, Myra - **Zia** is the parent of Uma and **Myra** is the parent of Zia.
☆ Myra: None - Myra does not have a parent node.
☆ Raju: Myra - **Myra** is the parent of Raju.
☆ Bev: Uzzi, Joe, Myra - **Uzzi** is the parent of Bev, **Joe** is the parent of Uzzi, and **Myra** is the parent of Joe.
• Descendant - The descendants of a node are those reached from parent to child relationships. We can think of this as our children and our children’s children and so on.
□ Let’s look at all of the descendants of each of our nodes in the family tree.
☆ Ava’s descendants: None. Ava has no child nodes and thus no descendants. Try to work out the following and click the name to reveal the descendants.
☆ Uma: Ang - **Ang** is the child of Uma.
☆ Myra: Raju, Joe, Zia, Uzzi, Bert, Uma, Bev, Ava, Ang, Isla, Eoin - All of the nodes in a tree are descendants of the root. To work it out: **Raju, Joe**, and **Zia** are the children of Myra; **Uma** is the child of Zia; **Ang** is the child of Uma; and we can work out the rest from Joe's children.
☆ Raju: None - Raju has no child nodes.
☆ Bev: Isla, Eoin - **Isla** is the child of Bev and **Eoin** is the child of Isla.
• Siblings - Nodes which share the same parent
□ We can think about the siblings of all of our nodes in the family tree.
☆ Ava’s siblings: Bev - Uzzi is the parent node of Ava; Uzzi has two child nodes, Ava and Bev. Try to work out the following and click the name to reveal the siblings.
☆ Uma: None - Zia is the parent node of Uma; Zia has only one child node, Uma.
☆ Myra: None - Myra is the root and thus has no parent node, resulting in no siblings.
☆ Raju: Joe, Zia - Myra is the parent node of Raju; Myra has three child nodes, **Joe**, **Zia**, and Raju.
☆ Bev: Ava - Uzzi is the parent node of Bev; Uzzi has two child nodes, Bev and **Ava**.
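These definitions can be sketched in Python using a child-to-parent mapping for the family tree in this section. The dictionary representation is an assumption for illustration; the node names come from the tree above:

```python
# The family tree stored as a child -> parent mapping (Myra is the root,
# so she has no entry as a key).
parent = {
    "Raju": "Myra", "Joe": "Myra", "Zia": "Myra",
    "Uzzi": "Joe", "Bert": "Joe", "Uma": "Zia",
    "Ava": "Uzzi", "Bev": "Uzzi", "Ang": "Uma",
    "Isla": "Bev", "Eoin": "Isla",
}

def ancestors(node):
    """Follow child -> parent links all the way up to the root."""
    result = []
    while node in parent:
        node = parent[node]
        result.append(node)
    return result

def descendants(node):
    """Every node whose chain of ancestors passes through `node`."""
    return [n for n in parent if node in ancestors(n)]

def siblings(node):
    """Nodes sharing the same parent (the root has none)."""
    if node not in parent:
        return []
    return [n for n in parent if n != node and parent[n] == parent[node]]

assert ancestors("Ava") == ["Uzzi", "Joe", "Myra"]
assert siblings("Raju") == ["Joe", "Zia"]
assert sorted(descendants("Bev")) == ["Eoin", "Isla"]
```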
Recursion Refresh
A recursive program is broken into two parts:
• A base case—a simple version of the problem that can be solved directly, and
• A recursive case—a general solution to the problem that uses smaller versions of the problem to compute the solution to the larger problem.
In principle, the recursive case breaks the problem down into smaller portions until we reach the base case. Recursion presents itself in many ways when dealing with trees.
Trees are defined recursively with the base case being a single node. Then we recursively build the tree up. With this basis for our trees, we can define many properties using recursion rather than iteration.
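For example, counting the nodes in a tree follows the base-case/recursive-case pattern directly. The `Node` class below is a hypothetical minimal tree node, not necessarily the class from the previous module:

```python
# A minimal tree node for illustration: a value plus a list of children.
class Node:
    def __init__(self, value, children=None):
        self.value = value
        self.children = children or []

def size(node):
    # Base case: a node with no children contributes just itself (the sum
    # over an empty list is 0). Recursive case: 1 plus the sizes of the
    # smaller subtrees.
    return 1 + sum(size(child) for child in node.children)

tree = Node("Myra", [Node("Raju"), Node("Joe"), Node("Zia")])
assert size(tree) == 4
```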
Terms II
We can describe the sizes of trees and position of nodes using different terminology, like level, depth, and height.
• Level - The level of a node characterizes the distance between the node and the root. The root of the tree is considered level 1. As you move away from the root, the level increases by one.
□ For our family tree example, what nodes are in the following levels? Think about the answer and then click the corresponding arrow.
☆ Level 1: Myra - Level 1 is always the root.
☆ Level 2: Raju, Joe, Zia - These are the nodes which are 1 edge away from the root.
☆ Level 3: Uzzi, Bert, Uma - These are the nodes which are 2 edges away from the root.
☆ Level 4: Bev, Ava, Ang - These are the nodes which are 3 edges away from the root.
☆ Level 5: Isla - This is the only node which is 4 edges away from the root.
☆ Level 6: Eoin - This is the only node which is 5 edges away from the root.
• Depth - The depth of a node is its distance to the root. Thus, the root has depth zero. Level and depth are related in that: level = 1 + depth.
□ For our family tree example, what nodes have the following depths?
☆ Depth 0: Myra - The root will always be at depth 0.
☆ Depth 1: Raju, Joe, Zia - These are the nodes which are 1 edge away from the root.
☆ Depth 2: Uzzi, Bert, Uma - These are the nodes which are 2 edges away from the root.
☆ Depth 3: Bev, Ava, Ang - These are the nodes which are 3 edges away from the root.
☆ Depth 4: Isla - This is the only node which is 4 edges away from the root.
☆ Depth 5: Eoin - This is the only node which is 5 edges away from the root.
• Height of a Node - The height of a node is the longest path to a leaf descendant. The height of a leaf is zero.
□ For our family tree example, the nodes at each height are:
  - Height 0: Raju, Eoin, Ava, Bert, Ang (leaves always have height 0)
  - Height 1: Isla, Uma (`Isla -> Eoin` and `Uma -> Ang`)
  - Height 2: Bev, Zia (`Bev -> Isla -> Eoin` and `Zia -> Uma -> Ang`)
  - Height 3: Uzzi (`Uzzi -> Bev -> Isla -> Eoin`)
  - Height 4: Joe (`Joe -> Uzzi -> Bev -> Isla -> Eoin`)
  - Height 5: Myra (`Myra -> Joe -> Uzzi -> Bev -> Isla -> Eoin`)
• Height of a Tree - The height of a tree is equal to the height of the root.
□ Our family tree would have height 5
Terms III
When working with multidimensional data structures, we also need to consider how they would be stored in a linear manner. Remember, pieces of data in computers are linear sequences of binary digits.
As a result, we need a standard way of storing trees as a linear structure.
• Path - a path is a sequence of nodes and edges, which connect a node with its descendant. We can look at some paths in the tree above:
  □ From `Q` to `Y`: `QWY`
  □ From `R` to `P`: `RP`
• Traversal is a general term we use to describe going through a tree. The following traversals are defined recursively.
Preorder Traversal
1. Access the root, record its value.
2. Run the preorder traversal on each of the children
• The Pre refers to the root, meaning the root goes before the children.
• Remember: Root Children
Postorder Traversal
1. Run the postorder traversal on each of the children
2. Access the root, record its value
• The Post refers to the root, meaning the root goes after the children.
• Remember: Children Root
When we talk about traversals for general trees we have used the phrase ’the traversal could result in’. We would like to expand on why ‘could’ is used here. Each of these general trees are the same
but their traversals could be different. The key concept in this is that for a general tree, the children are an unordered set of nodes; they do not have a defined or fixed order. The relationships
that are fixed are the parent/child relationships.
| Tree   | Preorder    | Postorder   |
|--------|-------------|-------------|
| Tree 1 | QWYUERIOPTA | YUWEIOPRATQ |
| Tree 2 | QETARIOPWUY | EATIOPRUYWQ |
| Tree 3 | QROPITAEWUY | OPIRATEUYWQ |
MyTree Recursive I
Again, we want to be able to implement a working version of a tree. From the last module, we had functions to add children, remove children, get attributes, and instantiate MyTree. We will now build
upon that implementation to create a true tree.
A recursive program is broken into two parts:
• A base case—a simple version of the problem that can be solved directly, and
• A recursive case—a general solution to the problem that uses smaller versions of the problem to compute the solution to the larger problem.
MyTree with recursion
Recall that in the previous module, we were not yet able to enforce the no cycle rule. We will now enforce this and add other tree functionality.
Disclaimer: In the previous module we had a disclaimer that stated our implementation would not prevent cycles. The following functions and properties will implement recursion. Thus, we can maintain
legal tree structures!
In the first module, we discussed how we can define trees recursively, meaning a tree consists of trees. We looked at the following example. Each red dashed line represented a distinct tree, thus we
had five trees within the largest tree making six trees in total.
We will use our existing implementation from the first module. Now to make our tree recursive, we will include more getter functions as well as functions for traversals and for defining node relationships.
Get depth, height, size, and root
We can define each of these recursively.
Get Depth
• Depth - The depth of a node is its distance to the root. Thus, the root has depth zero.
We can define the depth of a node recursively:
• Base case: we are at the root and the depth is zero
• Recursive case: for any other node, the depth is 1 plus the depth of the parent
function GETDEPTH()
if ROOT
return 0
return 1 + PARENT.GETDEPTH()
end function
Get Height
• Height of a Node - The height of a node is the longest path to a leaf descendant. The height of a leaf is zero.
We can define the height of a node recursively:
• Base case: we are at the leaf and the height is zero
• Recursive case: for any other node, return 1 plus the maximum height of its children
function GETHEIGHT()
if LEAF
return 0
MAX = 0
for CHILD in CHILDREN
CURR_HEIGHT = CHILD.GETHEIGHT()
if CURR_HEIGHT > MAX
MAX = CURR_HEIGHT
return 1 + MAX
end function
Get Root
• Root - the topmost node of the tree; a node with no parent.
We can define returning the root recursively:
• Base case: we are at the root so return it
• Recursive case: for any other node, return the root of the node's parent
function GETROOT()
if ISROOT()
return this tree
return PARENT.GETROOT()
end function
Get Size
We define the size of a tree as the total number of nodes it contains.
function GETSIZE()
SIZE = 1
for CHILD in CHILDREN
SIZE += CHILD.GETSIZE()
return SIZE
end function
Find a Value
To find a value within our tree, we will traverse down a branch as far as we can until we find the value. This will return the tree that has the value as the root.
function FIND(VALUE)
if ITEM is VALUE
return this node
for CHILD in CHILDREN
FOUND = CHILD.FIND(VALUE)
if FOUND is not NONE
return FOUND
return NONE
end function
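Putting the getters and FIND together, a Python sketch might look like the following (hypothetical class and method names; the course's actual MyTree interface may differ):

```python
class MyTree:
    def __init__(self, item):
        self.item = item
        self.parent = None
        self.children = []

    def add_child(self, child):
        child.parent = self
        self.children.append(child)
        return child

    def get_depth(self):
        # Base case: the root has depth 0.
        if self.parent is None:
            return 0
        return 1 + self.parent.get_depth()

    def get_height(self):
        # Base case: a leaf has height 0.
        if not self.children:
            return 0
        return 1 + max(child.get_height() for child in self.children)

    def get_root(self):
        if self.parent is None:
            return self
        return self.parent.get_root()

    def get_size(self):
        # Count this node plus the sizes of all subtrees.
        return 1 + sum(child.get_size() for child in self.children)

    def find(self, value):
        # Return the subtree whose root holds `value`, or None.
        if self.item == value:
            return self
        for child in self.children:
            found = child.find(value)
            if found is not None:
                return found
        return None
```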
MyTree Recursive II
Determine relationships (Ancestor, Descendant, Sibling)
We can determine many relationships within the tree. For example, given a node is it an ancestor of another node, a descendant, or a sibling?
YouTube Video
Is Ancestor?
For this function, we are asking: is this node an ancestor of the current instance? In this implementation, we will start at our instance and work down through the tree trying to find the node in
question. With that in mind, we can define this process recursively:
• Base case: we are at the node in question, so return true OR we are at a leaf so return false.
• Recursive case: run the method from each of the children of the node.
function ISANCESTOR(TREE)
if at TREE
return true
else if at LEAF
return false
for CHILD in CHILDREN
FOUND = CHILD.ISANCESTOR(TREE)
if FOUND
return true
return false
end function
Is Descendant?
For this function, we are asking: is this node a descendant of the current instance? In this implementation, we will start at our instance and work up through the tree trying to find the node in
question. With that in mind, we can define this process recursively:
• Base case: we are at the node in question, so return true OR we are at the root so return false.
• Recursive case: run the method from the parent of the node.
function ISDESCENDANT(TREE)
if at TREE
return true
else if at ROOT
return false
return PARENT.ISDESCENDANT(TREE)
end function
Is Sibling?
For this function, we are asking: is this node a sibling of the current instance? To determine this, we can get the parent of the current instance and then get the parents children. Finally, we check
if the node in question is in that set of children.
function ISSIBLING(TREE)
if TREE in PARENT's CHILDREN
return true
return false
end function
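The three relationship checks above could be sketched as follows (a minimal hypothetical node class; note that, matching the base cases above, a node counts as its own ancestor and descendant, but not as its own sibling):

```python
class Node:
    def __init__(self, item):
        self.item = item
        self.parent = None
        self.children = []

    def add_child(self, child):
        child.parent = self
        self.children.append(child)
        return child

    def is_ancestor(self, tree):
        # True when `tree` lies in the subtree rooted at self,
        # i.e., self is an ancestor of `tree`.
        if self is tree:
            return True
        for child in self.children:
            if child.is_ancestor(tree):
                return True
        return False  # exhausted the subtree without finding `tree`

    def is_descendant(self, tree):
        # True when `tree` lies on the path from self up to the root,
        # i.e., self is a descendant of `tree`.
        if self is tree:
            return True
        if self.parent is None:  # reached the root without finding it
            return False
        return self.parent.is_descendant(tree)

    def is_sibling(self, tree):
        # Siblings share the same parent.
        return (self.parent is not None
                and tree in self.parent.children
                and tree is not self)
```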
Lowest common ancestor
In any tree, we can say that the root is a common ancestor to all of the nodes. We would like to get more information about the common ancestry of two nodes. For this function, we are asking: which
node is the first place where this instance's and the input node's ancestries meet? Similar to our ISDESCENDANT, we will work our way up the tree to find the point where they meet.
• Base case: we are at our tree so return the tree OR we are at an ancestor of our tree so return the instance OR we are at the root so return nothing
• Recursive case: run the method from the parent.
function LOWESTANCESTOR(TREE)
if at TREE
return TREE
else if ISANCESTOR(TREE)
return instance
else if at ROOT
return NONE
return PARENT.LOWESTANCESTOR(TREE)
end function
Path from the root
This function will generate the path which goes from the root to the current instance.
function PATHFROMROOT(PATH)
if NOT ROOT
PARENT.PATHFROMROOT(PATH)
append ITEM to PATH
end function
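The lowest-common-ancestor and path functions might be sketched like this (a minimal hypothetical node class; the `_in_subtree` helper plays the role of the ISANCESTOR check):

```python
class Node:
    def __init__(self, item):
        self.item = item
        self.parent = None
        self.children = []

    def add_child(self, child):
        child.parent = self
        self.children.append(child)
        return child

    def _in_subtree(self, tree):
        # Helper: is `tree` at or below self?
        if self is tree:
            return True
        return any(c._in_subtree(tree) for c in self.children)

    def lowest_ancestor(self, tree):
        # Base cases: `tree` is at or below us, or we ran out of
        # ancestors at the root.
        if self._in_subtree(tree):
            return self
        if self.parent is None:
            return None
        # Recursive case: keep walking up toward the root.
        return self.parent.lowest_ancestor(tree)

    def path_from_root(self, path):
        # Recurse to the root first so items land in root-to-node order.
        if self.parent is not None:
            self.parent.path_from_root(path)
        path.append(self.item)
        return path
```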
MyTree Recursive III
In this module we have talked about two traversals: preorder and postorder. Both of these are defined recursively and the prefix refers to the order of the root.
In a preorder traversal, first we access the root and then run the preorder traversal on the children.
function PREORDER(RESULT)
append ITEM to RESULT
for CHILD in CHILDREN
CHILD.PREORDER(RESULT)
end function
In a postorder traversal, first we run the postorder traversal on the children then we access the root.
function POSTORDER(RESULT)
for CHILD in CHILDREN
CHILD.POSTORDER(RESULT)
append ITEM to RESULT
end function
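Both traversals can be sketched in Python. The example tree below is one shape consistent with the Tree 1 row of the traversal table earlier in this chapter (the original figure is not reproduced here):

```python
class Node:
    def __init__(self, item, children=None):
        self.item = item
        self.children = children or []

    def preorder(self, result):
        result.append(self.item)        # root first ...
        for child in self.children:
            child.preorder(result)      # ... then each child's subtree
        return result

    def postorder(self, result):
        for child in self.children:
            child.postorder(result)     # children's subtrees first ...
        result.append(self.item)        # ... then the root
        return result

q = Node('Q', [
    Node('W', [Node('Y'), Node('U')]),
    Node('E'),
    Node('R', [Node('I'), Node('O'), Node('P')]),
    Node('T', [Node('A')]),
])
print(''.join(q.preorder([])))   # QWYUERIOPTA
print(''.join(q.postorder([])))  # YUWEIOPRATQ
```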
In this section, we discussed more terminology related to trees as well as tree traversals. To recap the new vocabulary:
• Ancestor - The ancestors of a node are those reached from child to parent relationships. We can think of this as our parents and the parents of our parents, and so on.
• Depth - The depth of a node is its distance to the root. Thus, the root has depth zero. Level and depth are related in that: level = 1 + depth.
• Descendant - The descendants of a node are those reached from parent to child relationships. We can think of this as our children and our children’s children and so on.
• Height of a Node - The height of a node is the longest path to a leaf descendant. The height of a leaf is zero.
• Height of a Tree - The height of a tree is equal to the height of the root.
• Level - The level of a node characterizes the distance the node is from the root. The root of the tree is considered level 1. As you move away from the root, the level increases by one.
• Path - a sequence of nodes and edges which connect a node with its descendant.
• Siblings - Nodes which share the same parent
• Traversal is a general term we use to describe going through a tree. The following traversals are defined recursively.
□ Preorder Traversal (Remember: Root Children):
1. Access the root
2. Run the preorder traversal on the children
□ Postorder Traversal (Remember: Children Root):
1. Run the postorder traversal on the children
2. Access the root.
Conservation Laws in Nuclear Decay
In analyzing nuclear decay reactions, we apply the many conservation laws. Nuclear decay reactions are subject to classical conservation laws for the charge, momentum, angular momentum, and energy
(including rest energies). Additional conservation laws, not anticipated by classical physics, include the conservation of lepton number and of baryon number.
Certain of these laws are obeyed under all circumstances, and others are not. We have accepted the conservation of energy and momentum. In all the examples given, we assume that the number of protons
and the number of neutrons is separately conserved. We shall find circumstances and conditions in which this rule is not true. Where we are considering non-relativistic nuclear reactions, it is
essentially true. However, when considering relativistic nuclear energies or those involving weak interactions, we shall find that these principles must be extended.
Some conservation principles have arisen from theoretical considerations, and others are just empirical relationships. Notwithstanding, any reaction not expressly forbidden by the conservation laws
will generally occur, if perhaps at a slow rate. This expectation is based on quantum mechanics. Unless the barrier between the initial and final states is infinitely high, there is always a non-zero
probability that a system will make the transition between them.
It is sufficient to note four fundamental laws governing these reactions to analyze non-relativistic reactions.
1. Conservation of nucleons. The total number of nucleons before and after a reaction are the same.
2. Conservation of charge. The sum of the charges on all the particles before and after a reaction are the same.
3. Conservation of momentum. The total momentum of the interacting particles before and after a reaction is the same.
4. Conservation of energy. Energy, including rest mass energy, is conserved in nuclear reactions.
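The bookkeeping behind laws 1 and 2 can be checked mechanically. Here is a small sketch (the function name and example reaction are illustrative, not from the reference):

```python
def conserves_nucleons_and_charge(reactants, products):
    """Each particle is an (A, Z) pair: mass number and charge number."""
    a_in = sum(a for a, z in reactants)
    z_in = sum(z for a, z in reactants)
    a_out = sum(a for a, z in products)
    z_out = sum(z for a, z in products)
    return a_in == a_out and z_in == z_out

# Alpha decay of uranium-238: U-238 -> Th-234 + He-4
u238, th234, alpha = (238, 92), (234, 90), (4, 2)
print(conserves_nucleons_and_charge([u238], [th234, alpha]))      # True
# A mislabeled product nucleus violates charge conservation:
print(conserves_nucleons_and_charge([u238], [(234, 91), alpha]))  # False
```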
Reference: Lamarsh, John R. Introduction to Nuclear engineering 2nd Edition
2016 MPS Annual Meeting
The 2016 annual MPS meeting took place October 20–21. It featured exciting talks about research at the frontiers of math, physics and theoretical computer science, as well as lively discussions among
the heterogeneous crowd of attending scientists.
The keynote speaker, Mina Aganagic, talked about math and string theory duality. In string theory, it is often the case that there are two very different mathematical descriptions of the same
physical situation. The two descriptions are dual to each other, and the dictionary that translates from one description to the other is highly non-trivial. This leads to very surprising conjectural
equivalences between mathematical objects that seem to be very different. In many cases, mathematicians have been able to prove these equivalences. The equivalences are very powerful, since they can
relate very difficult computations, such as counting curves of given degrees in an algebraic manifold to much simpler classical computations in the dual picture. The dualities can also be used for
defining new quantities such as knot invariants. Aganagic surveyed several notable dualities and ended with the announcement of a recent proof, due to Frenkel, Okounkov and herself, of a quantum
geometric Langlands correspondence, which generalizes the classical geometric Langlands correspondence, itself a major result. The proof relies on an electromagnetic duality theory in six dimensions
and the introduction of a quantum version of K-theory.
• Lisa Manning talked about jamming in biological tissue. Such tissues start out in the early embryonic stage as solid-like structures and later become fluid like. Such transitions are crucial at
the developmental stage. Manning described how movement occurs in completely jammed media, with cells “pushing” their way through the interfaces between other cells, changing the cell
arrangement. She then described the phase diagram for such motion and related it to geometric features of the cell. Such features can be measured in static pictures, so we can learn about cell
dynamics from static data, which is more readily available. The model fits observations very well.
Dan Boneh showed how cryptographers are using sophisticated mathematical constructions to achieve amazing technical goals that are at the heart of electronic commerce. In particular, he showed
how the Tate pairing, a bilinear product which originates in abstract algebraic number theory, is used in credit card chips for fast authentication. Other constructions of cryptographic
primitives coming from lattices are used to construct homomorphic encryption schemes, which allow computations on encrypted data; such schemes could be of great value in cloud computing
environments if we could make them work a bit faster. Boneh challenged the mathematicians in the crowd to come up with new constructions based on higher math, for example, trilinear forms with
some nice mathematical and computational properties, and explained that they could lead to some spectacular new applications.
Paul Seidel described ongoing work related to mirror symmetry. This symmetry is one of the deep dualities originating in string theory, which was described in the talk by Aganagic. It relates two
seemingly different areas of mathematics: the rigid world of algebraic geometry and the more flexible world of symplectic geometry. Understanding mirror symmetry is the subject of one of the
current MPS collaborations. For suitable algebraic manifolds, the string theory of the manifold leads to a formal two-parameter partition function that “counts” holomorphic curves on the
manifold. Seidel described calculations of the partition function in a particularly challenging non-perturbative regime, where one of the parameters is not infinitesimally small.
Andrea Alu described some new devices with magical optical or acoustic properties. A basic feature of optical and acoustical systems is time reversal symmetry. If light or sound can propagate
from A to B, then it can be reversed to propagate from B to A. Alu explained how he designs devices that break time reversal symmetry by introducing a rotational asymmetry in the system. As a
result, the devices can route sound or light in an asymmetrical fashion to different destinations. Another application is in cloaking, in which an object becomes invisible. Alu complemented these
demonstrations with theoretical results which show that, while cloaking with respect to a particular wavelength is possible, it is impossible to achieve for a broad band of the spectrum under
rather mild assumptions on the system, explaining why many attempts are doomed to failure.
Rick Schwartz discussed the classical and very difficult problem of finding arrangements of points on the sphere that minimize some given standard energy function, which are based on distances
between the points of the arrangement. In particular, he discussed the case of five-point configurations, where he was able to prove the rather natural conjecture that the energy-minimizing
configuration consists of the north and south poles and an equilateral triangle on the equator. The computer-assisted proof required breaking up the configurations space into many small pieces
and carrying out delicate numerical estimates for the minimum possible energy in each piece.
Daniel Eisenstein talked about dark energy and cosmic sound. He described how acoustic signals in the plasma, set free a short time after the big bang after traveling at 57 percent of the speed of light, can be used as a particularly accurate ruler for measuring distances between objects in the universe.
Julia Hartman talked about basic invariants of fields. Fields are very basic objects of study in algebra; they consist of systems with operations of addition and multiplication that satisfy all
the usual properties we are familiar with. Of central interest in mathematics is the study of solutions to polynomial equations in a given field. The case of linear equations is covered
completely by linear algebra. The next case of a single quadratic equation in several variables is already very challenging. A basic invariant of the field, the u-invariant is the largest number
of variables for which one can find a quadratic equation with no non-zero solution in the field. Hartman described works from the last few years that have allowed the computation of u-invariants
in many cases of interest.
Keynote Address: Mina Aganagic
UC Berkeley
Simons Investigator in Physics
String Duality and Mathematics
The relationship between mathematics and physics has a long history. Traditionally, mathematics provides the language physicists use to describe nature. In turn, physics brings mathematics to life,
by providing inspiration and interpretation. String theory is changing the nature of this relationship. Aganagic will try to explain why and give listeners a flavor of the emerging field.
Mina Aganagic applies insights from quantum physics to mathematical problems in geometry and topology. She made deep and influential conjectures in enumerative geometry, knot theory and mirror
symmetry using predictions from string theory and M-theory.
Lisa Manning
Syracuse University
Simons Investigator in Mathematical Modeling of Living Systems
Jamming in Biological Tissues
Biological tissues are living materials, with material properties that are important for their function. Recent experiments have shown that many tissues, including some involved in embryonic
development, lung function, wound healing, and cancer progression, are close to a liquid-to-solid, or “jamming,” transition, similar to the one that occurs when oil and water are mixed to make
mayonnaise. In mayonnaise and materials like it, a disordered liquid-to-solid transition occurs when the packing density of oil droplets increases past a critical threshold. Over the past 20 years,
physicists and mathematicians have made progress in understanding the universal nature of this transition. However, existing theories cannot explain observations of jamming transitions in confluent
biological tissues, where there are no gaps between cells and the packing density is always unity. Manning will discuss a theoretical and computational framework for predicting the material
properties of such biological tissues, and show that it predicts a novel type of critical rigidity transition, which takes place at constant packing density and depends only on single cell
properties, such as cell stiffness. She will show that our a priori theoretical predictions with no fit parameters are precisely realized in cell cultures from human patients with asthma, and she
will discuss how we are applying these ideas to understand other processes, such as embryonic development and cancer progression.
Lisa Manning started her research career in the physics of glasses, i.e., how a disordered group of molecules or particles freezes into a rigid solid at a well-defined temperature. She then turned
her attention to morphogenesis, the process by which embryos transform from a spherical egg to a shape we recognize as an insect, plant or mammal, showing that aspects of this process could be
modeled by surface tension in analogy with the description of immiscible liquids. Her most recent work uses ideas from the physics of glasses to describe the mobility of cells organized in sheets and
applies to a broad class of biological tissues, including embryos and cells from asthma patients.
Julia Hartmann
University of Pennsylvania
Simons Fellows in Mathematics
Geometry and Algebra: From Local to Global
Arithmetic geometry views algebraic and arithmetic objects, such as numbers, in a geometric way. This interplay between number theory and algebraic geometry has been a source of inspiration in modern
mathematics, as it permits the study of number theoretic problems via geometric methods. Having led to the solution of a number of conjectures, including Fermat’s Last Theorem, it continues to give
rise to deep and important problems in algebra.
Local-global principles are a central theme in this interplay of subjects, and many important mathematical problems can be expressed in terms of such principles. Local-global principles in algebra
(and other mathematical disciplines) are motivated by analogous geometric principles, by which certain properties of spaces can be determined by considering whether or not they hold locally. The talk
will explain these concepts and outline how patching methods can lead to new local-global principles.
Julia Hartmann has been a professor at the University of Pennsylvania since 2014. Prior to that, she was the head of a research group at RWTH Aachen University and a von Neumann Fellow at the
Institute for Advanced Study.
Hartmann’s research focuses on problems in algebra with relations to differential algebra and arithmetic geometry. The connecting theme of the questions she works on is the study of symmetries, i.e.,
of actions of groups on various algebraic objects. In collaboration with David Harbater, she developed the method of field patching. Among the most exciting applications of field patching are
local-global principles for numerical invariants associated to fields.
Dan Boneh
Stanford University
Simons Investigator in Theoretical Computer Science
Recent Developments in Cryptography
Cryptography, the science of secure communication, has advanced considerably in the last fifteen years. With the introduction of tools, such as bilinear maps, multilinear maps and integer lattices,
applications that were previously out of reach became possible and sometimes even quite practical. This talk will survey some of these recent developments, giving examples of constructions and proofs
techniques, and posing some open problems that are central to further progress.
Dan Boneh is an expert in cryptography and computer security. One of his main achievements is the development of pairing-based cryptography, giving short digital signatures, identity-based encryption
and novel encryption systems.
Daniel Eisenstein
Harvard University
Simons Investigator in Physics
Dark Energy and Cosmic Sound
Daniel Eisenstein will discuss how the acoustic oscillations that propagate in the cosmic plasma during the first million years of the universe provide a robust method for measuring the cosmological
distance scale. The distance that the sound can travel can be computed to high precision and creates a signature in the late-time clustering of galaxies that serves as a standard ruler. Maps from the
Sloan Digital Sky Survey (SDSS) reveal this feature, yielding accurate measurements of the expansion history of the universe. Eisenstein will describe the theory and practice of the acoustic
oscillation method and highlight the latest cosmology results from SDSS.
Daniel Eisenstein is a leading figure in modern cosmology. He is known for utilization of the baryon acoustic oscillations standard ruler for measuring the geometry of the universe, which underpins
several large, upcoming ground and space missions. Eisenstein blends theory, computation and data analysis seamlessly to push the boundaries of current-day research in cosmology.
Paul Seidel
Massachusetts Institute of Technology
Simons Investigator in Mathematics
Symplectic Topology Away from the Large Volume Limit
Symplectic topology is unique within geometry, in that the deeper structure of the spaces under consideration appears only after non-local “instanton corrections” have been taken into account. This
is most readily apparent from a string theory motivation, but it also has a direct impact on classical problems from Hamiltonian mechanics. In the theory, the instanton corrections are set up as
small perturbations, which corresponds to thinking of the target space as having infinitely large size (the “large volume limit”). Mirror symmetry suggests that it would be interesting to keep the
size finite. Attempting to do that has seemingly paradoxical consequences, which one can sometimes get a handle on by changing the space involved. The talk will give an introduction to this problem,
based on simple examples, and explain a little of what is known or expected.
Paul Seidel has done major work in symplectic geometry, in particular on questions inspired by mirror symmetry. His work is distinguished by an understanding of abstract algebraic structures, such as
derived categories, in sufficiently concrete terms to allow one to derive specific geometric results. On the abstract side, Seidel has made substantial advances toward understanding Kontsevich’s
homological mirror symmetry conjecture and has proved several special cases of it. In joint papers with Smith, Abouzaid and Maydanskiy, he has investigated the symplectic geometry of Stein manifolds.
In particular, work with Abouzaid constructs infinitely many nonstandard symplectic structures on any Stein manifold of sufficiently high dimension.
Andrea Alù
The University of Texas at Austin
Simons Investigator in Physics
Breaking Reciprocity and Time-reversal Symmetry with Metamaterials
In this talk, Andrea Alù will discuss recent work focused on breaking reciprocity and time-reversal symmetry in metamaterial structures, spanning acoustics, radio waves, nanophotonics and mechanics,
without relying on magnetic bias. Alù’s and his collaborators’ approaches are based on using suitably tailored mechanical motion, spatio-temporal modulation and large nonlinearities in coupled
resonator systems to realize unusual wave-matter interactions. Alù will discuss the theoretical framework and the modeling, design and implementation of non-reciprocal devices that break Lorentz
reciprocity and achieve electromagnetic isolation without using magnetic bias. He will also discuss the impact of these concepts on things ranging from basic science to integrated technology, and how
this platform may be at the basis of topological insulators for light, sound and mechanical waves.
Andrea Alù’s work on the manipulation of light in artificial materials and metamaterials has shown how clever designs may surpass what had previously been thought to be limitations on wave
propagation in materials. He has developed new concepts for cloaking, one-way propagation of waves in materials, dramatic enhancement of nonlinearities in nanostructures and ultrathin optical devices
based on metasurfaces and twisted metamaterials.
Richard Schwartz
Brown University
Simons Fellows in Mathematics
Five Points on a Sphere
Thomson’s problem, going back to 1904, asks how N points on the sphere are arranged so as to minimize the Coulomb potential, i.e., the sum of the reciprocal distances taken over all pairs of points.
A generalization involves using other power law potentials besides the Coulomb potential, i.e., summing other powers of the distances over all pairs of points. The Coulomb potential corresponds to
exponent -1. The case N = 5 has been notoriously intractable. Schwartz will sketch his computer-assisted but still rigorous proof that the triangular bi-pyramid is the potential-minimizing
configuration with respect to all power laws with exponent in [-13, 0) and the potential-maximizing configuration when the exponent is in (0, 2). As Schwartz will explain, these ranges of exponents
are fairly sharp.
Richard Schwartz was born in Los Angeles in 1966. In his youth, he enjoyed video games and sports, especially tennis. He got a B.S. in math from University of California, Los Angeles in 1987 and a
Ph.D. in math from Princeton University in 1991. He gave an invited talk at the International Congress of Mathematicians in 2002. He is currently the Chancellor’s Professor of Mathematics at Brown
University. He likes to study simply stated problems with a geometric flavor, often with the aid of graphical user interfaces and other computer programs that he writes himself. Aside from his work
in math, he has written and illustrated a number of picture books, including You Can Count on Monsters, Really Big Numbers, Gallery of the Infinite and Man Versus Dog.
A sector of a circle whose radius is r and whose angle is theta has a fixed perimeter P. How do you find the values of r and theta so that the area of the sector is a maximum?
Answer 1
$r = \frac{P}{4}$ and $\theta = 2$
The perimeter of the sector is two radii and the arc cut off by #theta#. So, the perimeter is given by
#P = 2r+rtheta#
The area of a sector is #A = 1/2r^2theta#
Since #P# is fixed, the only variables in #P = 2r+rtheta# are #r# and #theta#
#r = P/(2+theta)# and #theta = (P-2r)/r = P/r-2#. If we rewrite #A# using only #theta# we'll need the quotient rule to differentiate. So let's rewrite #A# using only #r#.
#A = 1/2r^2(P/r-2) = (Pr)/2 - r^2#
We want to maximize #A#, so . . .
#A' = P/2-2r = 0# at #r=P/4#
Note that #A'' = -2# so #A(P/4)# is a maximum, not a minimum.
Use #theta = P/r-2# from above to get #theta = 2#
Answer 2
To find the values of ( r ) and ( \theta ) so that the area of the sector is a maximum, first express the area in a single variable using the perimeter constraint, then apply calculus.
The perimeter of the sector, ( P ), is given by ( P = 2r + r\theta ), so ( \theta = \frac{P}{r} - 2 ).
The area of the sector, ( A ), is given by ( A = \frac{1}{2} r^2 \theta ). Substituting for ( \theta ):
( A = \frac{1}{2} r^2 \left( \frac{P}{r} - 2 \right) = \frac{Pr}{2} - r^2 )
Differentiating with respect to ( r ) and setting the derivative equal to zero: ( \frac{dA}{dr} = \frac{P}{2} - 2r = 0 ), which gives ( r = \frac{P}{4} ).
Since ( \frac{d^2A}{dr^2} = -2 < 0 ), this critical point is a maximum.
Substituting ( r = \frac{P}{4} ) back into ( \theta = \frac{P}{r} - 2 ) gives ( \theta = 4 - 2 = 2 ).
So the area of the sector is maximized when ( r = \frac{P}{4} ) and ( \theta = 2 ) radians.
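As a sanity check, the extremum can be located numerically. The brute-force sketch below (Python; P = 12 is an arbitrary example value) searches radii in (0, P/2) and recovers r = P/4 and θ = 2:

```python
# Numeric check: with perimeter P fixed, A(r) = P*r/2 - r**2 peaks at r = P/4,
# where theta = P/r - 2 = 2.

def sector_area(r, P):
    """Area of a sector with radius r whose perimeter 2r + r*theta equals P."""
    theta = P / r - 2          # from P = 2r + r*theta
    return 0.5 * r**2 * theta  # standard sector-area formula

def best_radius(P, steps=100000):
    """Brute-force the radius in (0, P/2) that maximizes the sector area."""
    best_r, best_a = None, float("-inf")
    for i in range(1, steps):
        r = (P / 2) * i / steps      # theta > 0 requires r < P/2
        a = sector_area(r, P)
        if a > best_a:
            best_r, best_a = r, a
    return best_r

P = 12.0
r_star = best_radius(P)
theta_star = P / r_star - 2
print(round(r_star, 3), round(theta_star, 3))  # close to P/4 = 3 and theta = 2
```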
|
{"url":"https://tutor.hix.ai/question/a-sector-of-a-circle-whose-radius-is-r-and-whose-angle-is-theta-has-a-fixed-peri-8f9afa0101","timestamp":"2024-11-04T11:19:53Z","content_type":"text/html","content_length":"578576","record_id":"<urn:uuid:eae334f8-8b2e-49f8-83d9-86e064647043>","cc-path":"CC-MAIN-2024-46/segments/1730477027821.39/warc/CC-MAIN-20241104100555-20241104130555-00136.warc.gz"}
|
Momentum Tutorial: Understanding Physics Concepts and Formulas
Welcome to our comprehensive tutorial on momentum in physics! Whether you are a student learning the fundamentals of dynamics or a curious individual seeking to understand the physical world around
us, this article is for you. We will take a deep dive into the concept of momentum and how it relates to various aspects of physics. From defining what momentum is and its role in everyday life, to
exploring the mathematical formulas used to calculate it, this tutorial will provide you with a solid understanding of this fundamental concept. So sit back, relax, and get ready to expand your
knowledge on momentum in this engaging and informative read.
Let's get started! In this tutorial, we'll cover all the essential information you need to understand momentum, from basic definitions to complex formulas. Whether you're a student looking for help with homework, a researcher looking for the latest advancements in the field, or someone considering a career in physics, this tutorial has got you covered. First, let's start with the basics: what is
momentum? Momentum is a fundamental concept in physics that describes the quantity of motion an object has. It is defined as the product of an object's mass and velocity, and it plays a crucial role
in understanding the behavior of objects in motion. Next, we'll dive into the different types of momentum and how they are calculated.
There are three types of momentum: linear, angular, and rotational. We'll explain each type in detail and provide clear examples and step-by-step explanations to ensure a thorough understanding. But
momentum doesn't exist on its own - it is closely related to other key concepts in physics, such as force and velocity. We'll explore the relationship between these concepts and explain how they all
work together to determine an object's motion. One of the most important principles of momentum is conservation. We'll discuss how momentum is conserved in various scenarios, including collisions and
Understanding this principle is crucial for predicting and analyzing the outcomes of these events. If you're looking to conduct experiments related to momentum, we've got you covered too. We'll
provide tips and guidelines for setting up and executing experiments safely and accurately. Safety is always a top priority when it comes to hands-on learning. For those interested in pursuing a
career in physics, we'll highlight some of the exciting opportunities and advancements in this field. With momentum being such a fundamental concept, there are endless possibilities for research and
innovation in this area. Finally, for those simply looking to stay updated on the latest research and resources related to momentum, we'll provide a list of reputable sources for further learning.
We want to make sure you have all the tools and information you need to continue expanding your knowledge of this fascinating topic. Now let's work through the same ideas in more detail, starting with the definition. Momentum is defined as the mass of an object multiplied by its velocity. In simpler terms, it is the measure
of an object's motion. Momentum is a crucial concept in physics because it helps us understand the behavior of objects in motion. There are three types of momentum: linear, angular, and rotational.
Linear momentum refers to the movement of an object in a straight line, while angular momentum is the measure of an object's rotation around a fixed point. Rotational momentum, on the other hand, is
the combination of both linear and angular momentum. To calculate momentum, we use the formula p = mv, where p represents momentum, m represents mass, and v represents velocity. Let's look at an
example to better understand this concept. Imagine a car with a mass of 1000 kg traveling at a velocity of 20 m/s. Using the formula, we can calculate its momentum as follows: p = (1000 kg)(20 m/s) =
20,000 kg⋅m/s.
This means that the car has a momentum of 20,000 kg⋅m/s. Now, let's explore the relationship between momentum and other key concepts in physics. One important concept is force, which is defined as an
object's mass multiplied by its acceleration. Force and momentum are closely related, as a force can change an object's momentum. In fact, we can use the formula F = ma to calculate the force
required to change an object's momentum. Another important concept to consider is velocity, which is the measure of an object's speed and direction.
Velocity and momentum are also closely related, as a change in velocity can affect an object's momentum. For example, if an object's velocity increases, its momentum will also increase. Momentum is a
conserved quantity, meaning it remains constant in a closed system. This means that the total momentum before an event will be equal to the total momentum after the event. This principle is known as
the law of conservation of momentum and is crucial in understanding various scenarios, such as collisions and explosions. If you're interested in conducting experiments related to momentum, it's
essential to follow safety guidelines and set up your experiments accurately.
Always wear protective gear and follow proper procedures to ensure your safety. Additionally, make sure to record your results accurately and repeat the experiment multiple times for more reliable
data. For those looking to pursue a career in physics, there are many exciting opportunities and advancements in this field. From studying the behavior of subatomic particles to exploring the
mysteries of the universe, there are endless possibilities for those with a passion for physics. Finally, for those looking to stay updated on the latest research and resources related to momentum,
we recommend checking out reputable sources such as scientific journals, university websites, and online forums. These sources will provide you with valuable information and insights into the world
of momentum.
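The conservation law can be illustrated with a short sketch (Python; the masses and velocities are arbitrary example values, reusing the 1000 kg car from above):

```python
# Perfectly inelastic collision: two bodies stick together.
# Total momentum before must equal total momentum after.

def total_momentum(masses, velocities):
    return sum(m * v for m, v in zip(masses, velocities))

m1, v1 = 1000.0, 20.0   # the 1000 kg car at 20 m/s from the text
m2, v2 = 500.0, -10.0   # a lighter object moving the other way

p_before = total_momentum([m1, m2], [v1, v2])  # 1000*20 + 500*(-10) = 15000
v_after = p_before / (m1 + m2)                 # the combined mass moves at 10 m/s
p_after = total_momentum([m1 + m2], [v_after])

print(p_before, v_after, p_after)  # 15000.0 10.0 15000.0
```

Kinetic energy is lost in this collision, but the momentum totals on either side of the event match exactly, as the conservation law requires.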
Relationships and Conservation
Momentum is a fundamental concept in physics, but it is closely related to other important concepts as well.
In this section, we will explore the connections between momentum and other key concepts in physics, such as force, velocity, and energy. By understanding these relationships, we can gain a deeper
understanding of the role that momentum plays in the physical world. Additionally, we will also cover the principle of conservation of momentum, which states that in a closed system, the total
momentum remains constant. This principle has important implications for understanding the behavior of objects in motion and is crucial for solving many physics problems.
So let's dive in and explore the fascinating relationships between momentum and other fundamental concepts in physics.
Welcome to the experimentation section of our Momentum Tutorial! When it comes to understanding physics concepts, conducting experiments is crucial. Not only does it help solidify your understanding
of the material, but it also allows you to see the principles in action. However, when conducting experiments related to momentum, it is important to take precautions to ensure safety and accuracy.
Here are some tips and guidelines to keep in mind:
- Always wear appropriate safety gear, such as goggles and lab coats, when handling materials that could potentially cause harm.
- Make sure to follow proper lab procedures and protocols.
- Double check all measurements and calculations to ensure accuracy.
- Keep your workspace clean and organized to avoid any accidents or errors.
By following these guidelines, you can conduct safe and accurate experiments related to momentum.
So go ahead and get your hands dirty - or rather, your lab equipment - and see the laws of physics in action!
Career Opportunities
Are you considering a career in physics? Then look no further! The field of physics offers a wide range of exciting and fulfilling career opportunities. With a strong foundation in momentum and other
physics concepts, you can become a part of groundbreaking research and advancements in various industries. From aerospace engineering to renewable energy to medical physics, the applications of
momentum are endless. You could work on developing new technologies for space exploration or designing more efficient wind turbines. You could even be involved in improving medical imaging techniques
or creating sustainable energy solutions for the future. In addition to the diverse career paths, the field of physics also offers plenty of opportunities for growth and advancement.
With new discoveries and advancements being made every day, there is always room for innovation and progress. As a physicist, you could be at the forefront of these exciting developments and make a
significant impact on society. So, if you have a passion for physics and a curiosity for how the world works, pursuing a career in this field can lead to an exciting and fulfilling journey. And this
momentum tutorial is just the beginning of your exploration into the fascinating world of physics.
Types of Momentum
Momentum is a fundamental concept in physics, and understanding its different types is crucial to grasping its overall significance. In this section of our Momentum Tutorial, we will delve into the
various forms of momentum and how they are calculated.
Linear Momentum:
Linear momentum is the most commonly known type of momentum and refers to the motion of an object in a straight line.
It is calculated by multiplying the mass of an object by its velocity. In equation form, it can be represented as: p = m * v, where p is linear momentum, m is mass, and v is velocity.
Angular Momentum:
Angular momentum, as the name suggests, deals with the rotation or spinning of an object. It is calculated by multiplying the moment of inertia (a measure of an object's resistance to rotation) by
its angular velocity. The formula for angular momentum is:
L = I * ω
where L is angular momentum, I is moment of inertia, and ω is angular velocity.
Impulse Momentum:
Impulse momentum refers to the change in an object's momentum due to a force acting on it for a certain amount of time.
It is calculated by multiplying force by time. The equation for impulse momentum is: J = F * Δt, where J is impulse momentum, F is force, and Δt is the change in time. By understanding these different
types of momentum and their calculations, you will have a more comprehensive understanding of this important concept in physics.
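The three formulas in this section can be collected into a small sketch (Python, with arbitrary example values):

```python
# The three formulas from this section as small helper functions.

def linear_momentum(m, v):
    """p = m * v (kg·m/s)."""
    return m * v

def angular_momentum(I, omega):
    """L = I * ω (kg·m²/s)."""
    return I * omega

def impulse(F, dt):
    """J = F * Δt; equals the change in linear momentum (N·s = kg·m/s)."""
    return F * dt

# Example: an impulse changes a body's linear momentum.
p0 = linear_momentum(2.0, 3.0)   # 6.0 kg·m/s
J = impulse(4.0, 0.5)            # 2.0 N·s
p1 = p0 + J                      # 8.0 kg·m/s after the impulse
print(p0, J, p1, angular_momentum(1.5, 2.0))  # 6.0 2.0 8.0 3.0
```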
Further Learning
In order to stay updated on the latest research and resources related to momentum, it is important to consult reputable sources. One such source is the American Physical Society, which publishes
various journals and articles on current developments in the field of physics. Another valuable resource is the Physics World magazine, which provides in-depth coverage of the latest advancements in physics.
Additionally, websites such as Physics.org and ScienceDaily are great sources for staying informed on momentum-related news and studies. It is also helpful to follow prominent physicists and
organizations on social media, as they often share interesting articles and insights on momentum. By regularly consulting these sources, you can continue to expand your understanding and knowledge of
momentum, and stay up-to-date on any new developments.
Career Opportunities
Interested in pursuing a career in physics? Look no further! The field of physics offers a wide range of exciting and dynamic career opportunities for those with a passion for understanding the
fundamental laws of the universe. From research and development in industries such as aerospace and energy, to cutting-edge technologies like quantum computing, there is no shortage of opportunities
for those with a background in physics. Additionally, many universities and government agencies offer positions for physicists to conduct groundbreaking research and make new discoveries. With the
constantly evolving advancements in technology and our understanding of the universe, the possibilities for a career in physics are endless.
So whether you're interested in exploring the depths of space, developing new technologies, or simply expanding our knowledge of the world around us, a career in physics is sure to be both
challenging and rewarding.
Relationships and Conservation
In the world of physics, understanding the relationships between different concepts is crucial. When it comes to momentum, it is important to explore its connections to other key concepts in order to
fully grasp its significance. Momentum is closely related to the concept of inertia, as stated in Newton's First Law of Motion. Inertia refers to an object's resistance to changes in its state of
motion, and momentum is a measure of an object's motion. This means that an object with a greater momentum will have a greater resistance to changes in its motion, making it harder to stop or change
direction. Another important relationship is between momentum and force.
According to Newton's Second Law of Motion, force is equal to an object's mass multiplied by its acceleration; equivalently, the net force on an object equals the rate of change of its momentum. The greater an object's momentum, the greater the impulse needed to change its motion. Conservation laws also play a significant role in understanding momentum. The law of conservation of momentum states that the
total momentum in a closed system remains constant, regardless of any external forces acting on it.
This means that if two objects collide, their combined momentum before and after the collision will be the same. Overall, exploring the connections between momentum and other key concepts in physics
can help us understand its role in the physical world and how it relates to other fundamental principles. By understanding these relationships, we can further our knowledge and apply it to various
real-world situations.
In physics, experimentation is a crucial aspect of understanding and analyzing concepts such as momentum. Not only does it help to solidify theoretical knowledge, but it also allows for the discovery
of new phenomena and advancements in the field. However, conducting experiments related to momentum can be challenging and potentially dangerous if not done correctly.
This is why it is essential to follow certain tips and guidelines to ensure safe and accurate results. Firstly, it is important to have a clear understanding of the equipment and materials needed for
the experiment. This includes knowing their capabilities and limitations, as well as their proper handling and storage. It is also crucial to wear appropriate safety gear, such as goggles and gloves,
to protect yourself from any potential hazards. Additionally, it is essential to carefully plan and set up the experiment before conducting it. This includes considering factors such as the
environment, positioning of equipment, and any potential interferences.
It is also recommended to have a partner or supervisor present during the experiment to assist and monitor for any safety concerns. During the experiment, it is important to follow proper techniques
and procedures to ensure accurate results. This includes taking multiple measurements, recording observations, and repeating the experiment multiple times to eliminate errors. In the event of
unexpected results or accidents during the experiment, it is crucial to stop immediately and assess the situation. It is always better to prioritize safety over completing the experiment. After
completing the experiment, it is important to properly clean and store all equipment and materials. By following these tips and guidelines, you can conduct safe and accurate experiments related to momentum.
Remember to always prioritize safety and never hesitate to ask for assistance if needed.
Types of Momentum
Momentum is a fundamental concept in physics, and it plays a crucial role in understanding the motion of objects. It is defined as the product of an object's mass and velocity, and it is a vector
quantity with both magnitude and direction. There are two main types of momentum: linear momentum and angular momentum. Linear momentum, also known as translational momentum, is the product of an
object's mass and its velocity in a straight line. It is commonly represented by the symbol p and has units of kilogram-meters per second (kg·m/s). On the other hand, angular momentum is a measure of an object's rotational motion. It is defined as the product of an object's moment of inertia and its angular velocity. Angular momentum is commonly represented by the symbol L and has units of kilogram-meters squared per second (kg·m²/s). To calculate linear momentum, we use the formula p = m * v, where m is the mass of the object and v is its velocity. For angular momentum, we use the formula L = I * ω, where I is the
moment of inertia and ω is the angular velocity. Understanding the different types of momentum and how to calculate them is essential in physics. It allows us to analyze and predict the motion of
objects in various scenarios, such as collisions, rotations, and more.
So whether you're a student just starting to learn about momentum or a seasoned physicist looking to brush up on your knowledge, having a solid understanding of these concepts is crucial.
Further Learning
As we continue to delve deeper into the world of momentum, it's important to stay updated on the latest research and resources related to this fascinating concept. Luckily, there are many reputable
sources available that can provide you with all the information you need to expand your knowledge and understanding of momentum. One great resource for staying updated on the latest momentum research
is scientific journals. These publications feature peer-reviewed articles written by experts in the field, ensuring that the information presented is accurate and up-to-date. Some popular journals
for physics include Physical Review Letters, Physics Today, and Journal of Applied Physics.
These journals often have dedicated sections or issues specifically focused on dynamics and momentum, making it easy to find relevant information. Another valuable source for learning more about
momentum is attending conferences and seminars. These events bring together physicists and researchers from around the world to discuss and present their latest findings. Attending these events not
only allows you to learn about cutting-edge research, but also provides opportunities for networking and connecting with others who share your interest in momentum. In addition to journals and
conferences, online resources such as websites, blogs, and forums can also be helpful for staying updated on momentum. Some reputable websites for physics include Physics.org and PhysicsWorld.com.
These sites often feature news articles, videos, and interactive content related to momentum and other physics topics. Lastly, don't forget about books! While online resources are convenient, books
provide a more in-depth and comprehensive understanding of momentum. Some recommended titles include Momentum: Letting Love Lead by John C. Maxwell, The Physics of Everyday Things by James Kakalios,
and Classical Dynamics of Particles and Systems by Stephen T. Thornton and Jerry B.
Marion. These books are not only informative, but also engaging and enjoyable to read. We hope this tutorial has provided you with a comprehensive understanding of momentum. From basic definitions to
complex formulas, we've covered it all. Whether you're a student, researcher, or simply interested in physics, we hope this tutorial has been informative and engaging.
Remember, momentum is all around us, and understanding it can help us better understand the world we live in. Keep learning, keep exploring, and who knows what discoveries you may make!
|
{"url":"https://www.onlinephysics.co.uk/dynamics-tutorials-momentum-tutorial","timestamp":"2024-11-07T16:46:38Z","content_type":"text/html","content_length":"189294","record_id":"<urn:uuid:c5e36126-cf93-4afd-873e-3533918b40d5>","cc-path":"CC-MAIN-2024-46/segments/1730477028000.52/warc/CC-MAIN-20241107150153-20241107180153-00222.warc.gz"}
|
Gecode::Int::GCC::MaxInc< View > Class Template Reference
Compares two indices i, j of two views More...
#include <bnd-sup.hpp>
Public Member Functions
MaxInc (const ViewArray< View > &x0)
bool operator() (const int i, const int j)
Protected Attributes
ViewArray< View > x
View array for comparison.
Detailed Description
template<class View>
class Gecode::Int::GCC::MaxInc< View >
Compares two indices i, j of two views
Definition at line 180 of file bnd-sup.hpp.
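The class's role can be sketched outside of Gecode: judging by the name and the `operator()` signature, it orders two indices i, j of the view array by comparing the corresponding views' maxima. The Python analogue below is illustrative only (not Gecode API); the `(min, max)` pairs stand in for views:

```python
# A sketch (not Gecode code) of an index comparator like MaxInc:
# given an array x of "views", order index i before index j when
# x[i]'s maximum is smaller, i.e. sort indices by increasing maximum.

class MaxInc:
    def __init__(self, x):
        self.x = x  # sequence of (min, max) domain pairs, our stand-in for views

    def key(self, i):
        return self.x[i][1]  # compare by the view's maximum

domains = [(0, 5), (2, 3), (1, 9)]
order = sorted(range(len(domains)), key=MaxInc(domains).key)
print(order)  # indices sorted by increasing max: [1, 0, 2]
```

In C++ such a functor would be handed to a sort routine as a strict weak ordering over indices, leaving the underlying view array untouched.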
Constructor & Destructor Documentation
template<class View>
Gecode::Int::GCC::MaxInc< View >::MaxInc ( const ViewArray< View > & x0 ) [inline]
Member Function Documentation
template<class View>
bool Gecode::Int::GCC::MaxInc< View >::operator() ( const int i,
const int j
) [inline]
Member Data Documentation
View array for comparison.
Definition at line 183 of file bnd-sup.hpp.
The documentation for this class was generated from the following file:
|
{"url":"https://www.gecode.org/doc/3.7.3/reference/classGecode_1_1Int_1_1GCC_1_1MaxInc.html","timestamp":"2024-11-11T04:21:30Z","content_type":"text/html","content_length":"9901","record_id":"<urn:uuid:c4de5b7f-5799-4982-a7b2-208a65d87a2c>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00356.warc.gz"}
|
Perspective Texturemapping
using linear interpolation of 1/Z, U/Z and V/Z
An article written for 3DReview in 1997 by
Mikael Kalms
Linear texturemappers are in common use in the demos of today. Most games have however progressed to perspective corrected texturemapping, and soon most demos probably will follow. 3D graphics cards
usually perform perspective correction for almost no extra cost (speaking in terms of speed), but there are still not enough machines with 3D hardware for a program to require it -- software
rendering fallback code is still necessary.
This article will describe a method of performing perspective corrected texture mapping which suits the Pentium processor (and most of its clones) very well. The most notable exception to this are
Cyrix' clones which have a relatively weak FPU and therefore suffer from performance degradations in every part of the texturemapper.
What does it look like?
Here are some screenshots of triangulated cubes drawn with a blue/black chequered texture. Click on the pictures to see the bigger versions.
Linear Perspective corrected at every pixel Perspective corrected at every 16th pixel horizontally
[Download Executable Example] [Download Executable Example] [Download Executable Example]
NB: Example executables require DOS4GW (not included) Run executables with any parameter to have chequered pattern instead of default texture.
Let's Get to It
It is assumed that the readers are familiar with linear texturemappers and have some maths knowledge about derivatives and linear interpolation. A fair bit of C knowledge will also be helpful for the example sources.
Several approaches for performing perspective corrected tmapping are available. The Pentium relies heavily on its internal caches; therefore drawing the polygon as a set of horizontal lines is
strongly preferred. (This rules out any methods such as "drawing lines of constant Z".) The Pentium also has quite a strong FPU (with the exception of the Cyrix clones, and possibly some others as
well). Floating point math can therefore be used to a small degree.
The method which now will be described relies on the fact that 1/z, u/z and v/z are linear in screen space, and thus can be interpolated linearly without any errors (except for accuracy errors, of course).
Proving that 1/z, u/z and v/z are linear in screen space
This section is not required to be understood - it is only provided for those who wish to have mathematical proof of the theorem that the algorithm is based on.
"a is a linear function of xyz" means that linear (constant-velocity) motion along the xyz axes will translate into linear motion along a's coordinate axis. If one imagines xyz visually, one could
also say that "a is linear in xyz space", which means just the same. The above relationship is expressed like this with formulas:
a = k1x + k2y + k3z + k4
XY are perspective projected, 2d screen coordinates.
xyz are coordinates in 3d space of points on the polygon.
uv are texture coordinates in the texture that is going to be mapped onto the polygon.
abcd and efgh are coefficients for the definition of uv. They are constant for the whole polygon.
ABCD are the coefficients of the plane which the poly lies in. They are constant for the whole polygon.
The relationship between XY and xyz is as follows:
X = x/z
Y = y/z
...which is the usual perspective projection formula.
Solving those for x and y gives:
x = Xz
y = Yz
These equations will come in handy later on.
We only want xyz combinations that are on the plane which the polygon lies on. Therefore all the xyz combinations we are interested in are solutions to the following equation:
Ax + By + Cz = D
uv need to be defined in relation to xyz.
The equations below describe uv as linear functions of xyz:
u = ax + by + cz + d
v = ex + fy + gz + h
1) Proving that 1/z is linear in XY space
Beginning with the plane equation:
Ax + By + Cz = D
Substituting xy with the formulas from the perspective projection:
AXz + BYz + Cz = D
Solving for z (dividing by (AX + BY + C)) and then inverting yields:
1/z = (A/D)X + (B/D)Y + (C/D)
Here it is clearly visible that 1/z is a linear function of XY (1/z = k1X + k2Y + k3). 1/z is thus linear in screen space.
2) Proving that u/z, v/z are linear in screen space
Beginning with the uv formulas:
u = ax + by + cz + d
v = ex + fy + gz + h
Substituting xy with the formulas from the perspective projection:
u = aXz + bYz + cz + d
v = eXz + fYz + gz + h
Dividing by z:
u/z = (aX + bY + c) + d/z
v/z = (eX + fY + g) + h/z
Substituting the 1/z with the formula from the previous calculation:
u/z = aX + bY + c + d((A/D)X + (B/D)Y + (C/D))
v/z = eX + fY + g + h((A/D)X + (B/D)Y + (C/D))
Cleaning up the equation a little gives these final formulas:
u/z = (a + d(A/D))X + (b + d(B/D))Y + (c + d(C/D))
v/z = (e + h(A/D))X + (f + h(B/D))Y + (g + h(C/D))
Here one can also see that both u/z and v/z are on the form k1X + k2Y + k3.
u/z and v/z are thus also linear in screen space.
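These closed forms can be spot-checked numerically. The sketch below (Python, arbitrary example coefficients) picks a plane and a uv-mapping, projects points on the plane to the screen, and verifies that 1/z, u/z and v/z computed from the 3D definitions agree with the linear screen-space expressions derived above:

```python
# Spot-check of the derivation: for points on the plane Ax + By + Cz = D,
# 1/z, u/z and v/z evaluated from the 3D definitions must match the
# linear screen-space forms k1*X + k2*Y + k3 derived in the article.

A, B, C, D = 1.0, 2.0, 0.5, 10.0   # plane Ax + By + Cz = D
a, b, c, d = 0.3, -0.2, 1.0, 4.0   # u = ax + by + cz + d
e, f, g, h = -0.1, 0.7, 0.0, 2.0   # v = ex + fy + gz + h

def check(X, Y):
    z = D / (A * X + B * Y + C)    # from A*X*z + B*Y*z + C*z = D
    x, y = X * z, Y * z            # invert the projection X = x/z, Y = y/z
    u = a * x + b * y + c * z + d
    v = e * x + f * y + g * z + h
    # screen-space linear forms from the article:
    inv_z = (A / D) * X + (B / D) * Y + (C / D)
    u_z = (a + d * A / D) * X + (b + d * B / D) * Y + (c + d * C / D)
    v_z = (e + h * A / D) * X + (f + h * B / D) * Y + (g + h * C / D)
    return (abs(1 / z - inv_z) < 1e-9 and
            abs(u / z - u_z) < 1e-9 and
            abs(v / z - v_z) < 1e-9)

print(all(check(X, Y) for X, Y in [(0.1, 0.2), (1.5, -0.4), (3.0, 2.0)]))  # True
```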
Now then, let's do the tmapper
This is the easy part. :)
Interpolating 1/z, u/z and v/z over the polygon and then calculating uv from those values will give correct uv's, so that's what should be done.
At each vertex in the polygon, compute 1/z, u/z and v/z. Compute their slopes along the polygon's edges, and their horizontal increases. (Since these values are truly linear on screen, the horizontal
increase values are constant for all planar polygons, not only triangles!) Interpolate the three factors along the edges, then horizontally, and at each pixel perform these computations to get uv:
z = 1 / (1/z)
u = (u/z) * z
v = (v/z) * z
Then copy the matching pixel from the texture to the screen.
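As a sketch (Python for readability; the article's example sources are C/assembly), a horizontal span drawn with this method looks like:

```python
# Perspective-correct span: interpolate 1/z, u/z, v/z linearly across the
# scanline and recover (u, v) at every pixel. Values here are illustrative.

def draw_span(x0, x1, iz, uz, vz, diz, duz, dvz):
    """Yield (x, u, v) for pixels x0..x1-1, starting from the span's
    1/z, u/z, v/z and their constant per-pixel increments."""
    for x in range(x0, x1):
        z = 1.0 / iz          # z = 1 / (1/z)
        u = uz * z            # u = (u/z) * z
        v = vz * z            # v = (v/z) * z
        yield x, u, v
        iz += diz             # step the linear quantities, not u and v
        uz += duz
        vz += dvz

# A 5-pixel span where z runs 1 -> 2 and u runs 0 -> 1 across the span:
iz0, iz1 = 1.0, 0.5           # 1/z at the span ends
uz0, uz1 = 0.0, 0.5           # u/z at the span ends (u=0 at z=1, u=1 at z=2)
span = list(draw_span(0, 5, iz0, uz0, 0.0, (iz1 - iz0) / 4, (uz1 - uz0) / 4, 0.0))
print(span[0], span[-1])  # (0, 0.0, 0.0) (4, 1.0, 0.0)
```

Note that at the middle pixel u is 1/3 rather than 1/2: the perspective-correct result differs visibly from linear interpolation.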
This method has a very low setup cost, but the per-pixel cost is relatively high compared to other perspective correction methods. In order to remedy that, a partially approximative method is
possible, as shown in the next paragraph.
Scanline subdivision
Subdividing each horizontal line in the polygon into spans of length N pixels each has proven to be an effective approximation method. N should preferably be a power of 2; typical values for N are 8
or 16.
The subdivision is done on-the-fly in the code part that draws the horizontal line:
1/z, u/z and v/z are stepped N pixels at a time instead of 1. (Pre-multiply the horizontal increases by N when calculating them.) Calculate uv at the current position (x) and at position (x + N), and
draw the pixels from (x) to (x + N - 1) using normal linear uv interpolation. The horizontal uv-increases are computed by doing (u2 - u1) >> log2(N) and (v2 - v1) >> log2(N).
This will remove much of the per-pixel cost, yet retain a very high level of quality.
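The subdivision scheme just described can be sketched like this (a Python illustration with a hypothetical plot callback; floating-point division stands in for the fixed-point shift by log2(N)):

```python
LOG2N = 3
N = 1 << LOG2N        # span length; the article suggests 8 or 16

def draw_line_subdivided(x0, x1, invz, uz, vz, dinvz, duz, dvz, plot):
    """Step 1/z, u/z, v/z N pixels at a time; interpolate u, v linearly
    in between. plot(x, u, v) is a hypothetical per-pixel callback."""
    z = 1.0 / invz
    u1, v1 = uz * z, vz * z
    x = x0
    while x < x1:
        n = min(N, x1 - x)                 # the last span may be shorter
        invz2 = invz + dinvz * n           # exact interpolants at span end
        uz2, vz2 = uz + duz * n, vz + dvz * n
        z2 = 1.0 / invz2
        u2, v2 = uz2 * z2, vz2 * z2
        # in fixed point this would be (u2 - u1) >> LOG2N for full spans
        du, dv = (u2 - u1) / n, (v2 - v1) / n
        u, v = u1, v1
        for i in range(n):
            plot(x + i, u, v)
            u += du
            v += dv
        x += n
        invz, uz, vz = invz2, uz2, vz2
        u1, v1 = u2, v2

# usage: constant z = 2, u advancing one unit per pixel, 20-pixel line
pts = []
draw_line_subdivided(0, 20, 0.5, 0.0, 1.0, 0.0, 0.5, 0.0,
                     lambda x, u, v: pts.append((x, u, v)))
assert len(pts) == 20 and pts[5] == (5, 5.0, 2.0)
```

Note how the last span (4 pixels of the 20-pixel line here) is handled by computing uv at its exact end rather than at the next (x + N) position.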
By hand-writing the horizontal line drawer in assembly language there is yet another feature to take advantage of: CPU/FPU parallelism. [This holds true mostly for the Pentium; the 486's FPU is way
slower than the 486 CPU] The reciprocal calculation (z = 1 / (1/z)) is what takes the most time; by interleaving the uv-calculations for position (x + 2 * N) with the code that draws the span from
pixel (x) to pixel (x + N - 1), much of the overhead of the uv calculation will disappear.
The last span in each horizontal line needs special-case code, as it is shorter than N pixels. uv should be calculated at the exact end of the span, not at the next (x + N) position.
Here are example sources for the texture mapping approaches mentioned above. They are not optimized; they are only provided as detailed examples of how the algorithms work. The sources also implement
subpixeling and subtexeling; without those features perspective correction really isn't much to yell about. :)
Example source of linear texturemapper
Example source of "perfect" perspective corrected texturemapper
Example source of scanline subdivision texturemapper
A note on the dc/dx calculations in the example sources
Copyright© 1997 by Mikael Kalms and 3DReview. All Rights Reserved.
lifting modular symbols for newform of level 35 at p = 5, 7
Let $f$ be the unique normalised eigenform in $S_2(\Gamma_0(35))$ of dimension $2$. It has split multiplicative reduction at $p = 5$ ($a_p = +1$) [and non-split multiplicative reduction at $p = 7$
($a_p = -1$)]. The $p$-adic $L$-function should vanish to order $1$ at $s = 1$ (because the associated abelian variety has rank $0$). I want to compute the valuation of its leading coefficient using
Pollack-Stevens. To do so, I use the following code:
from sage.modular.pollack_stevens.space import ps_modsym_from_simple_modsym_space
A = ModularSymbols(35,2,1).cuspidal_submodule().new_subspace().decomposition()[1]
p = 5
prec = 2
phi = ps_modsym_from_simple_modsym_space(A)
ap = phi.Tq_eigenvalue(p,prec)
phi1,psi1 = phi.completions(p,prec)
phi1p = phi1.p_stabilize_and_lift(p,ap = psi1(ap), M = prec)
Unfortunately, the last command fails after a few seconds (also for $p = 7$) with a
RuntimeError: maximum recursion depth exceeded while calling a Python object
Is there a theoretical problem with computing the $L$-value or is there a problem with the implementation?
2 Answers
Here is a piece of code that pushes the computations as far as possible.
import traceback
from sage.modular.pollack_stevens.space import ps_modsym_from_simple_modsym_space
p = 5
prec = 2
precmore = 8
k = 0  # weight parameter (the form has weight 2)
A = ModularSymbols(35, 2, 1).cuspidal_submodule().new_subspace().decomposition()[1]
phi = ps_modsym_from_simple_modsym_space(A)
# sage: phi
# Modular symbol of level 35 with values in
# Sym^0(Number Field in alpha with defining polynomial x^2 + x - 4)^2
ap = phi.Tq_eigenvalue(p, prec) # this is 1 in QQ
phi1, psi1 = phi.completions (p, precmore)
R = psi1.codomain()
eps = 1
poly = PolynomialRing(R, 'x')( [p ** (k + 1) * eps, -ap, 1] )
v0, v1 = poly.roots( multiplicities=False )
if v0.valuation():
    v0, v1 = v1, v0
alpha = v0
try:
    phi1p = phi1.p_stabilize_and_lift( p
        , prec
        , ap = psi1(ap)
        , alpha = alpha
        , check = False
        , new_base_ring = R )
except Exception:
    traceback.print_exc()
The above code now delivers the following error:
Traceback (most recent call last):
File "<ipython-input-945-35e0bb0e7888>", line 34, in <module>
, new_base_ring = R )
File "/usr/lib/python2.7/site-packages/sage/modular/pollack_stevens/modsym.py", line 1495, in p_stabilize_and_lift
new_base_ring=new_base_ring, check=check)
File "/usr/lib/python2.7/site-packages/sage/modular/pollack_stevens/modsym.py", line 1043, in p_stabilize
V = self.parent()._p_stabilize_parent_space(p, new_base_ring)
File "/usr/lib/python2.7/site-packages/sage/modular/pollack_stevens/space.py", line 557, in _p_stabilize_parent_space
raise ValueError("the level is not prime to p")
ValueError: the level is not prime to p
And looking inside the module with the error, /usr/lib/python2.7/site-packages/sage/modular/pollack_stevens/space.py, there is an intentional check that the prime does not divide the level:
N = self.level()
if N % p == 0:
    raise ValueError("the level is not prime to p")
Explicitly, the road to the error is as follows. We submit phi1 to the method p_stabilize_and_lift. After some steps, the code lands in the method p_stabilize of this instance of the class
PSModularSymbolElement_symk(PSModularSymbolElement) (the instance is phi1).
This method builds the space
V = self.parent()._p_stabilize_parent_space(p, new_base_ring)
and we land in the module space.py. To see the error, we type explicitly with our data:
sage: phi1
Modular symbol of level 35 with values in Sym^0 (5-adic Unramified Extension Field in a defined by x^2 + x - 4)^2
sage: phi1.parent()
Space of modular symbols for Congruence Subgroup Gamma0(35) with sign 1 and values in Sym^0 (5-adic Unramified Extension Field in a defined by x^2 + x - 4)^2
sage: phi1.parent().level()
35
Note: It is hard to say more; some details on the mathematical part are needed. (In my experience, finding programming errors only becomes easy after understanding the special cases. In the
examples for the method giving the final error, all levels are coprime to the submitted prime numbers.) That is all I have. Parts of the code above are adapted to the given example. Guessing the
new_base_ring is not good enough in the given situation, and the alpha also had to be declared explicitly. But the space construction was explicitly prohibited, so I decided to stop here. (There are too
few comments and book/web references in the code, so I really have no further leads.)
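As a Sage-independent sanity check, the unit root alpha of $x^2 - a_p x + p^{k+1}$ (the alpha declared explicitly above) can be approximated by Hensel/Newton lifting in plain Python. This is only an illustrative sketch with p = 5, a_p = 1, k = 0 hard-coded; it needs Python >= 3.8 for the modular inverse via pow:

```python
def hensel_root(f, df, p, x0, prec):
    """Lift a simple root x0 of f modulo p to a root modulo p**prec
    by Newton iteration (precision roughly doubles each step)."""
    x = x0 % p
    mod = p
    while mod < p ** prec:
        mod = min(mod * mod, p ** prec)
        inv = pow(df(x) % mod, -1, mod)    # modular inverse, Python >= 3.8
        x = (x - f(x) * inv) % mod
    return x

p, ap, k, prec = 5, 1, 0, 10
f  = lambda x: x * x - ap * x + p ** (k + 1)   # x^2 - a_p x + p^(k+1)
df = lambda x: 2 * x - ap
alpha = hensel_root(f, df, p, 1, prec)         # start from the root 1 mod 5
assert f(alpha) % p ** prec == 0
assert alpha % p == 1       # alpha is the unit root (valuation 0)
```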
I think one does not need to $p$-stabilize when p | N: http://math.bu.edu/people/rpollack/Pa...
However, only calling phi1.lift instead of phi1.p_stabilize_and_lift also fails.
(And sorry for my late response!)
TK ( 2020-04-19 12:55:46 +0100 )edit
One solution would be to express the K-valued modular symbol (K = NumberField(x²+x-4)) as a K-linear combination of QQ-valued modular symbols and do the procedure for the latter ones. I'm working on this.
TK ( 2020-04-23 09:07:44 +0100 )edit
I'm using N = 188 and p = 3 now. p does not divide N, so we have to p-stabilize (this is different from your answer, where one should just use lift() instead of p_stabilize_and_lift()), and p is inert
in the coefficient ring. In your answer, change the following lines:
p = 3
k = 0
A = ModularSymbols(188, 2, 1).cuspidal_submodule().new_subspace().decomposition()[0]
Now the error message is:
File "sage/rings/padics/qadic_flint_CR.pyx", line 170, in sage.rings.padics.qadic_flint_CR.qAdicCappedRelativeElement.__hash__ (build/cythonized/sage/rings/padics/qadic_flint_CR.c:39531)
    raise TypeError("unhashable type: 'sage.rings.padics.qadic_flint_CR.qAdicCappedRelativeElement'")
TypeError: unhashable type: 'sage.rings.padics.qadic_flint_CR.qAdicCappedRelativeElement'
TK ( 2022-03-02 16:49:21 +0100 )edit
(continued) The reason seems to be that one cannot hash elements of proper extensions of Q_p. How can I resolve this, hopefully without disabling hashing?
TK ( 2022-03-02 16:49:56 +0100 )edit
The infinite recursion happens when trying to change the base ring of a polynomial (%debug is your friend)
914 poly = poly.change_ring(new_base_ring)
ipdb> p poly
(1 + O(5^2))*x^2 + (4 + 4*5 + O(5^2))*x + 5 + O(5^3)
ipdb> p new_base_ring
5-adic Field with capped relative precision 3
ipdb> p poly.parent()
Univariate Polynomial Ring in x over 5-adic Unramified Extension Field in a defined by x^2 + x - 4
I have no idea if this makes sense or not.