IPS Expert GmbH: MER expert
Determination of costs, times, prices, etc. using the calculation method of multiple influencing variables.
Today, statistical analysis is a fundamental principle in problem solving. In practice, easy-to-understand and easy-to-handle graphical approximate solutions are used, since the determination of statistically relevant key figures and mathematical equations demands a high expenditure of time.
Our solution for the calculation of these key figures is a user-friendly tool which offers easy handling of complex relationships while also relying on statistically and mathematically sound methods.
IPS Expert offers you the tool to efficiently process and solve complex statistical-mathematical problems without in-depth knowledge of higher mathematics.
Our solution uses sophisticated statistical and mathematical methods, prepared so that they can be easily used and understood, thereby giving the user a lucid instrument for making the right choices and decisions.
Every event is the result of influencing variables
Deriving impacts from causes and vice versa is the working hypothesis on which the Delphi principle is based.
Constant and variable influencing variables
This principle assumes that each and every event (e.g. the business operating result, the currency exchange rate or the production time of an article) is the result of influencing variables and their dependencies on each other. Depending on the type of influence a variable exerts, a distinction between constant and variable influencing variables must be made.
The influence of constant influencing variables can more or less be described by mathematical formulas.
The influence of variable influencing variables can, on the other hand, only be described by their concrete characteristics, while the influence itself is a result of mathematical formulas or
probabilities. The influence of multiple factors on each other has to be considered here as well.
The Delphi principle is looking to:
-determine the future development of a target value through concrete observations and experience, and to make a prediction of how these observation results might affect the target value as well as the observation itself.
-determine the significant influencing variables for a target value by knowing the make-up of that target value.
Both principles aim to predict future events with a relatively small corridor for errors using the calculation method of multiple influencing variables.
Experiences gleaned are hereby used to further improve the result of the calculation itself.
The future is a result of the past: a result of its rational or irrational interpretation and of the will to influence it. Sophisticated observation of the past, combined with knowing what you could not observe, is what makes the future more understandable and maybe a bit predictable.
The ancient Greeks already used that circumstance to come up with godly prophecies by making daily observations.
Through the undogmatic, individual deciphering and interpretation of symbolic signs, like the ones which the oracle of Delphi gave its worshippers as a response, everyone gained a way to influence his or her own future.
This principle holds true to this day; only the number of factors and influencing variables to recognize, interpret and evaluate has significantly increased.
IPS Expert developed a software solution which incorporates these factors, using statistical methods and procedures to quickly and reliably produce well-founded results. Based on this precise data material, it becomes considerably easier to make the right business decisions and thereby aid the successful development of your business.
Just as it was with the oracle of Delphi, the trick to it is to interpret these “signs” yourself.
The calculation method of multiple influencing variables is used when target values have to be determined that stand in complex correlations to each other or that depend on influencing variables.
Examples hereof are:
-Determination of target work duration
-Determination of lead cycle times
-Determination of production costs
-Price comparison analysis with competitors
-Analysis of purchase prices
-Determination of the cost of auxiliary materials for the costing calculation
-Determination of surcharge based on expected scrap rate
-Calculation of percentage of return material
-Determination of extra work risk
Further areas of application are possible due to the flexible nature of the program design.
IPS Expert GmbH
Schweinfurter Straße 28
97076 Würzburg Germany
Tel.: +49 (0) 931 30 980-0
Fax: +49 (0) 931 30 980-22
Perceived Brightness Index
Knowing the OTF lumens of a flashlight is good data to have. However, when comparing lights, lumens ratings are of limited value. This is because we perceive brightness in a generally logarithmic
manner. Doubling the OTF lumens of a flashlight does not double the perceived brightness. I was wondering if anyone else might consider an index value for flashlights based on their relative
brightness to be useful.
I put together the following chart based on logarithms of base 1+√(2)/2. (I obtained that base from this thread.) Assuming the base number to be correct, an increase or decrease of less than 1 in the index value would not yield a noticeable difference in brightness (to the naked eye). For example, a 170
lumen flashlight and a 210 lumen flashlight will appear to be approximately the same brightness. However, a 10 lumen flashlight would be noticeably brighter than a 5 lumen flashlight.
Or, put another way, lights with similar index values (a difference of less than one) are approximately equal in perceived brightness, so other factors should be considered more important when
choosing between/among them for purchase or use.
I'm interested in feedback, including information regarding the best base number to use for a logarithmic scale.
Note that the following table is a starting point, and not a finished product. It may not be accurate in its present form and might require modification. The goal of starting this thread is to
develop a scale that is reasonably accurate in order to compare the perceived brightness of different lights based on their OTF lumens.
To obtain the index value for a flashlight with a given number of OTF lumens:
index value = ln(OTF lumens) / ln(1+√(2)/2)
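For anyone who wants to play with the scale, here is a small Python sketch of the index as defined above; the base and the "less than one step is not noticeable" rule come from this post, while the function names are just for illustration.

```python
import math

BASE = 1 + math.sqrt(2) / 2  # ~1.707, the base proposed in this thread

def brightness_index(otf_lumens):
    """Perceived brightness index for a light with the given OTF lumens."""
    return math.log(otf_lumens) / math.log(BASE)

def noticeably_different(lumens_a, lumens_b):
    """True if the index values differ by 1 or more (the thread's rule of thumb)."""
    return abs(brightness_index(lumens_a) - brightness_index(lumens_b)) >= 1

print(round(brightness_index(170), 2), round(brightness_index(210), 2))  # ~9.60 and ~9.99
print(noticeably_different(170, 210))  # False: roughly the same perceived brightness
print(noticeably_different(5, 10))     # True: the 10 lumen light looks noticeably brighter
```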
Interesting thread. :twothumbs
I want to add one non-scientific item . . . .
My one flashlight (Low-Medium-High) produces (OTF):
roughly 60 Lumens on Medium
roughly 160 Lumens on High
In MY real life use, (outdoors, very rural setting),
there is such little noticeable difference between the two levels,
that i virtually never even Bother with High level.
Medium is almost as bright (to me, anyway)
and lasts LOTS longer !
This fact really surprised me, folks !
If i hadn't tried it for myself, i wouldn't have believed it.
Yes, I noted in my post that we know that our perception of brightness is logarithmic. I'm in no way claiming to have discovered anything. I was merely trying to apply
that logarithmic quality in a manner useful to those who wish to be able to predict the perceived brightness between two lights based on their OTF lumens.
Astronomers use a similar system utilizing logarithm of base 10^(2/5), a system that has evolved since Hipparchus first ranked stars in six classes of magnitude. The Richter scale is a log base ten
scale for measuring earthquake magnitude. The idea of the scale is nothing new, but as far as I was able to find, has not yet been applied to flashlights.
Or, put another way, lights with similar index values (a difference of less than one) are approximately equal in perceived brightness, so other factors should be considered more important when
choosing between/among them for purchase or use.
I'm interested in feedback, including information regarding the best base number to use for a logarithmic scale.
I look at the values in index values 9 and 10 and wonder are the values too far apart at that level.
I have no method of testing the actual lumens produced in my own torches but using factory specs to judge values when comparing my quark R5 on max with AA at a supposed 109 lumens with the same head
on max with a CR123 at supposedly 206 lumens I perceive an absolutely massive difference, more than I'd call a "step" (a step would be a very subjective measure though, I admit).
Ok, yes, my torch has a 97 lumen "step" between those two batteries but that's not really that far apart from your chart's 87 lumen step and for me at that brightness range my eyes seem to regard a
torch at the bottom of your range to be well dimmer than a torch towards the top and thus not really "approximately equal".
I don't know if that's because my eyes are attuned to seeing the difference between my many torches or because when I use my torches at night it's usually in very very dark open spaces without much
ambient light (unless it's full moon). In daylight I find it much harder to discern the smaller differences in brightness.
I've no idea if any of this rambling of mine assists you to form a view on the value of your chart's values/base or not. I do agree that such a chart has value though. It would be nice to have
something to help dampen the focus on lights being thought of as ten or twenty lumens "better".
Yes, I noted in my post that we know that our perception of brightness is logarithmic. I'm in no way claiming to have discovered anything. I was merely trying to apply that logarithmic quality in
a manner useful to those who wish to be able to predict the perceived brightness between two lights based on their OTF lumens.
Astronomers use a similar system utilizing logarithm of base 10^(2/5), a system that has evolved since Hipparchus first ranked stars in six classes of magnitude. The Richter scale is a log base
ten scale for measuring earthquake magnitude. The idea of the scale is nothing new, but as far as I was able to find, has not yet been applied to flashlights.
yes, they have already been applied to flashlights over 100 years ago. each light emission pattern has a unique multiplier.
Thanks for the feedback.
I look at the values in index values 9 and 10 and wonder are the values too far apart at that level.
I have no method of testing the actual lumens produced in my own torches but using factory specs to judge values when comparing my quark R5 on max with AA at a supposed 109 lumens with the same
head on max with a CR123 at supposedly 206 lumens I perceive an absolutely massive difference, more than I'd call a "step" (a step would be a very subjective measure though, I admit).
Using the current base (approximately 1.71), 109 lumens would correspond to an index value of 8.77. 206 lumens would correspond to an index value of 9.96. The difference between the two is 1.19. So, we
would expect to see a difference, but not an "absolutely massive difference."
I see a few possibilities for the large perceived brightness difference you're seeing around the 9 - 10 index values.
First, the base for the logarithm could be incorrect. If that's the case, then I would expect that error would become more apparent as the index values increased.
A related possibility is that people perceive brightness differently depending on the wavelength of the light. Perhaps the base of the logarithm should not be a constant, but rather a function of the
color temperature or other quality of the light.
Another related possibility is that our sensitivity to brightness varies as the brightness varies.
It's possible (and probable) that different individuals likely have slightly different sensitivities to brightness. Yours might be more sensitive than most, due to genetics, or environmental factors
(e.g., your flashlight hobby could have allowed you to develop a keener sensitivity to brightness).
Another possibility is that the advertised lumens of your Quark are incorrect relative to the particular light you received. This itself could be due to an over/understatement of brightness by the
manufacturer, or just variation in the emitters they used. This possibility would be the easiest to accurately test, since the tests would be completely objective.
I have a Fenix PD30 with a brightness index step of 1.19 from high to turbo. Tonight, I'll put some primaries in it, and see how large that difference appears to me (not very scientific, since I
already know the index step and can't immediately verify the accuracy of the lumens rating, but it's slightly better than nothing). From medium to high there is a step of 0.96, so that one should
barely be noticeable if the base is correct.
Did you compare the brightness levels of your Quark on a white wall, or in a real world situation (e.g., outdoors)?
yes, they have already been applied to flashlights over 100 years ago. each light emission pattern has a unique multiplier.
Perhaps you would be kind enough to link to a source (or provide the necessary info to locate the source in a library) showing where such an index has been in existence for flashlights for over a
century. That index would be extremely useful for CPF members, so it would be great to have access to it.
Perhaps you would be kind enough to link to a source (or provide the necessary info to locate the source in a library) showing where such an index has been in existence for flashlights for over a
century. That index would be extremely useful for CPF members, so it would be great to have access to it.
already did, Stevens' Power Law (linked above) has at least five brightness related stimulus conditions.
Well, personally Stevens' power law wasn't even on my radar until this thread was set up, so without taking anything away from the original law, all credit to the OP for bringing this index to many more people's attention.
I'm sure that if people are sufficiently interested by it then they will look into the subject further and find out other, perhaps prior, laws and indexes as well. Thanks for providing some links to facilitate further reading, but please don't take away from the OP's efforts to bring this to more people's attention simply for the sake of it. IMO this simple table is a much easier way of understanding the increased difference in lumens needed to create the same apparent difference in 'actual' perceived brightness than the slightly obscure references provided by Stevens' law.
I couldn't agree more with RedForest UK. Thanks JCD!
already did, Stevens' Power Law (linked above) has at least five brightness related stimulus conditions.
The links you previously posted do not provide the necessary information for a suitable perceived brightness scale.
If we use Stevens' law, we are interested in the sensation magnitude ψ(I) = k·I^a. Substituting 1/2 for the measured exponent a for a point source (per Stevens' values), the equation becomes ψ(I) = k·I^(1/2), i.e. ψ(I) = k·√I. Clearly this is not a logarithmic equation (although that doesn't mean it's wrong), so if it is correct, my initial assumption that we perceive brightness logarithmically is incorrect.
Since it is not a logarithmic equation, one lumen can't provide a sensation magnitude of 0 as a logarithmic scale would provide. That's fine. We can arbitrarily give a 1 lumen point source a sensation magnitude of 1. This arbitrarily gives us k = 1, so ψ(I) = √I.
So, for:
ψ = 1, Intensity = 1 lumen
ψ = 2, Intensity = 4 lumens
ψ = 3, Intensity = 9 lumens
ψ = 4, Intensity = 16 lumens
ψ = 5, Intensity = 25 lumens
ψ = 6, Intensity = 36 lumens
ψ = 7, Intensity = 49 lumens
ψ = 8, Intensity = 64 lumens
ψ = 9, Intensity = 81 lumens
ψ = 10, Intensity = 100 lumens
The first problem that immediately jumps out is that arbitrarily assigning a one lumen torch a sensation magnitude of 1 eliminates any useful meaning of an increase of 1 in the sensation magnitude value. So, before Stevens' law can potentially serve our purpose, we have to find a suitable non-arbitrary value of k when using lumens as our unit for stimulus intensity.
Even without knowing the correct value for k, we can check to see if the predictions seem reasonable. Each equal step in ψ should offer an approximately equal incremental change in perceived brightness. I would hypothesize that a four lumen torch is much brighter relative to a 1 lumen torch than a 100 lumen torch is, relative to an 81 lumen torch.
Edit to add:
I just noticed that I used the Stevens measured exponent for looking into the light, not looking at the area illuminated. The closest Stevens offers for comparing brightness of the area illuminated is the measured exponent associated with "5º target in dark." That would change our formula such that ψ increases with the cube root of I instead of with the square root of I. So, the formula would predict that increasing from 1 lumen to 8 lumens should yield the same increase in perceived brightness as an increase from 729 lumens to 1000 lumens. I do not believe this to be a reasonable prediction. (End edit)
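To make the two candidate exponents concrete, here is a quick Python sketch; the 1/2 and 1/3 exponents are the Stevens values discussed above, k = 1 is the arbitrary choice from earlier, and the function name is just illustrative:

```python
def stevens_magnitude(intensity_lumens, exponent, k=1.0):
    """Stevens' power law: sensation magnitude psi = k * I**a."""
    return k * intensity_lumens ** exponent

# Square-root exponent (point source): equal psi steps from 1 -> 4 lumens and 81 -> 100 lumens
print(stevens_magnitude(4, 0.5) - stevens_magnitude(1, 0.5))      # 1.0
print(stevens_magnitude(100, 0.5) - stevens_magnitude(81, 0.5))   # 1.0

# Cube-root exponent ("5 degree target in dark"): equal psi steps from 1 -> 8 and 729 -> 1000 lumens
print(stevens_magnitude(8, 1/3) - stevens_magnitude(1, 1/3))        # ~1.0
print(stevens_magnitude(1000, 1/3) - stevens_magnitude(729, 1/3))   # ~1.0
```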
Of course, mine are not the only criticisms of Stevens' law.
The Weber-Fechner law tells us that perceived intensity is a logarithmic function of stimulus intensity, but it doesn't provide us with real-world numbers so our function can fit the data related to
the specific stimulus type we are looking for, so that we might make meaningful predictions. That is the very information we are trying to obtain in this thread. Without it, the W-F law isn't very
useful for us.
Ragiska, do you have anything to actually contribute, or do you just want to repeatedly point out that at some time in the past someone else has done similar calculations?
As to this principle having been applied to flashlights "100 years" ago, that was no doubt handy for early 20th century flashaholics, but it ain't much use to us on the forum though, is it?
I don't normally comment on such things, but JCD took the time and effort to put an idea into an easy to use reference form for the forum to discuss, and the first reply is "you're 2000 years late".
My point is, if you have something to say, say it; if you just want to bring down, complain, pick, whatever the word is, we don't need that, especially in a thread where someone has taken the time to contribute.
Personally I think this is a great thing JCD has done, something to link to when members are wondering over the 220 lumen light or the 240. This will help to put that all into perspective.
Well, personally Stevens' power law wasn't even on my radar until this thread was set up, so without taking anything away from the original law, all credit to the OP for bringing this index to many more people's attention.
I'm sure that if people are sufficiently interested by it then they will look into the subject further and find out other, perhaps prior, laws and indexes as well. Thanks for providing some links to facilitate further reading, but please don't take away from the OP's efforts to bring this to more people's attention simply for the sake of it. IMO this simple table is a much easier way of understanding the increased difference in lumens needed to create the same apparent difference in 'actual' perceived brightness than the slightly obscure references provided by Stevens' law.
Thank you (and others) for the kind words. I want to clarify that the table I provided is not necessarily accurate. It requires assumptions that we have not yet verified to be accurate. In
particular, the base of the logarithmic function needs to be verified. (I can write the equation in a different way to simply make that unknown a constant (or function) of proportionality.)
At a minimum, verification would require accurate lumens ratings and a lot of A-B comparisons (brighter, the same, dimmer?) of many lights from many people. It might also be necessary to have accurate color temperature readings of each light.
So, at this point, the table in post 1 is a starting point only. I expect it to be modified before it is accurate. But, we have to start somewhere, and, collectively, CPF members have the resources
to develop a useful index. I don't doubt that we can make it happen. My hope is that we will make it happen.
i recommend a paper titled "the visual discrimination of intensity and the weber-fechner law" by selig hecht.
i recommend a paper titled "the visual discrimination of intensity and the weber-fechner law" by selig hecht.
The table in the first post of this thread is based on the Weber-Fechner law, where ΔI/I = (√(2)/2)/1 = √(2)/2.
The Hecht paper seems to imply that the formula we are looking for will have the form P = k·ln(I + I₀) + C instead of P = k·ln(I) + C.
I put together the following chart based on logarithms of base 1+√(2)/2. (I obtained that base from post 4 in this thread.) Assuming the base number to be correct, an increase or decrease of less than 1 in the index value would not yield a noticeable difference in brightness (to the naked eye). For example, a
170 lumen flashlight and a 210 lumen flashlight will appear to be approximately the same brightness. However, a 10 lumen flashlight would be noticeably brighter than a 5 lumen flashlight.
Or, put another way, lights with similar index values (a difference of less than one) are approximately equal in perceived brightness, so other factors should be considered more important when
choosing between/among them for purchase or use.
To obtain the index value for a flashlight with a given number of OTF lumens:
index value = ln(OTF lumens) / ln(1+√(2)/2)
Great start to an overlooked topic. With the ever-growing race to produce products that maximize output, I think a lot of people lose sight of the fact that bumping up the current to go from 350 lumens to 450 lumens may not be beneficial from a practical standpoint, as most of us would find it difficult to 'see' the difference.
I am not qualified to comment on the math behind your scale, but it seems that you are trying to define the smallest step up in lumens (relative to the previous step) that would give a perceptual
increase in brightness? How would I define a light on this scale that is about twice as bright as another light? Would the assumption be that in order to compare lights on this Index, they would have
to be of a similar beam pattern? I know that non-flashaholics would say that an XX lumen pencil beam is brighter than an XX lumen flood.
Thanks for your time, and I apologize if the answers to my questions are self-evident in your table.
… it seems that you are trying to define the smallest step up in lumens (relative to the previous step) that would give a perceptual increase in brightness?
Yes, that is the goal.
How would I define a light on this scale that is about twice as bright as another light?
Do you mean twice as bright in terms of lumens produced by the light or in terms of perceived brightness? Doubling lumens would result in an increase of about 1.3 in the perceived brightness index
number. I'm not sure how large an increase in the index would be caused by doubling perceived brightness.
Would the assumption be that in order to compare lights on this Index, they would have to be of a similar beam pattern?
Honestly, I'm not quite sure. Ideally, it will work across all beam types, like an integrating sphere. However, we don't live in an ideal world, and experimental data may reveal that, all else equal,
one beam type is generally perceived to be brighter than a second beam type. That, too, would prove to be useful information.
I think the usefulness will come from being able to see that relative brightness between two lights shouldn't always be an important consideration. For example, if someone is considering Light A and
Light B, if Light A produces 15% more lumens, but Light B has a nicer beam and warmer tint, their respective perceived brightness index values would show that the increased brightness of Light A
isn't great enough to sacrifice the beam quality or tint of Light B.
Thanks for your time, and I apologize if the answers to my questions are self-evident in your table.
No apology is necessary. I don't believe the answers were self evident.
Do you mean twice as bright in terms of lumens produced by the light or in terms of perceived brightness? Doubling lumens would result in an increase of about 1.3 in the perceived brightness
index number. I'm not sure how large an increase in the index would be caused by doubling perceived brightness.
Oops, sorry I wasn't clear. In terms of the Index. I suppose it would be advantageous at some point to be able to say that in general, moving up 'X' index spots would be about a doubling of perceived brightness.
Honestly, I'm not quite sure. Ideally, it will work across all beam types, like an integrating sphere. However, we don't live in an ideal world, and experimental data may reveal that, all else
equal, one beam type is generally perceived to be brighter than a second beam type. That, too, would prove to be useful information.
I think the usefulness will come from being able to see that relative brightness between two lights shouldn't always be an important consideration. For example, if someone is considering Light A
and Light B, if Light A produces 15% more lumens, but Light B has a nicer beam and warmer tint, their respective perceived brightness index values would show that the increased brightness of
Light A isn't great enough to sacrifice the beam quality or tint of Light B.
I understand the goal, and think that beam profile considerations will bring an unnecessary level of complexity to this index. However, through experience, I do know that a more focused beam will tend to be perceived as being brighter even if I qualify 'brighter' as being the total amount of light.
Again, thanks for putting this out there.
I just found this thread... sorry to drag it up from the grave... but it occurred to me that we can't actually see lumens, and that the perceived brightness of the flashlight is really due to what reflects back off of a target... let's call that the lux on the target, perhaps.
This means that a beam pattern with a large spill/corona will be perceived differently than a tight hot beam..because the same lumens will all be in a smaller area for a thrower, and a larger area
for a flooder.
If I have 1,000 lumens, and I shine them in a beam that makes a solid 1 m2 circle of light on my target, I will see 1,000 lux.
If that same emitter is projecting a floody 10 m2 circle of light, I will only see 100 Lux....and it will look dimmer.
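As a simplified sketch of that arithmetic (it assumes all of the light lands uniformly inside the beam area, which real beams never quite do):

```python
def target_lux(otf_lumens, beam_area_m2):
    """Idealized illuminance on the target: lumens spread uniformly over the lit area."""
    return otf_lumens / beam_area_m2

print(target_lux(1000, 1))   # 1000 lux from a tight 1 m^2 spot
print(target_lux(1000, 10))  # 100 lux from a floody 10 m^2 spot
```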
So, if we want to say a light has to have twice the lumens to be able to tell it's brighter... we should also consider that it could in fact even have half the lumens and look brighter, etc.
I think for a floody light at least, I don't need it to produce twice the lux for me to be able to tell it is brighter....but for a thrower, especially at close range, the hot spot tends to stop my
eyes down after a point of diminishing return; essentially, once it's as bright as I can see, I won't know if it got brighter, only if the spot enlarged, etc.... kind of my eye's light meter
overloading/being off scale.
So, the dogma about "needing twice the lumens to tell a difference" when the context is comparing two different lights for example is misleading.
Discrimination of Toxic Flow in Uniswap V3: Part 2
This post is a new installment in an ongoing series by @0xfbifemboy on Uniswap liquidity pools, concentrated liquidity, and fee dynamics. It is the second of multiple posts in a subsequence which
aims to focus on the characterization of toxic flow in ETH/USDC swap data and potential implementations of price discrimination or flow segmentation mechanisms.
In the previous post, we began to characterize toxic and non-toxic swap flow on Uniswap V3’s ETH/USDC pools in greater detail. We identified several patterns of interest; for example, we observed
that ‘fresh’ wallets with limited trading history were likely to make uninformed trades (profitable for the liquidity pool), whereas a very small subset of wallets (fewer than 1%), trading frequently
and at size, originated the majority of toxic flow and the lion’s share of the pool’s losses.
Although we were able to produce some interesting observations about the nature of toxic and nontoxic flow, our post left a number of critical questions unanswered. How robust is our method for
identifying sources of toxic flow? Can we say anything deeper about the behavior of wallets that originate toxic versus nontoxic flow? Finally, and perhaps most critically, how can we bring our
insights together into a practical methodology for price discrimination based on the source of a swap?
In this post, we attempt to bring some of these loose strands together. We explore alternate definitions of toxicity, find them to be largely consistent with our prior work, and create a consolidated
classification of wallets into three toxicity levels. We analyze the trading of these wallets in greater depth. Closing out, we sketch out a tentative proposal for the design of a system that could
be used to upcharge toxic flow or, equivalently, give discounts to retail traders.
An alternate definition of toxicity
Recall that initially we attempted to segment swap flow on a per-wallet basis by looking at aggregated wallet statistics. In particular, we looked at each wallet’s average notional swap size versus
its average PnL, for wallets with at least 250 swaps:
We also calculated the autocorrelation of swap PnL across buckets of 50 swaps each. Noticing the presence of distinct clusters in the above plot, we may make the following tentative definitions:
• High toxicity: Wallets with at least 250 swaps, mean PnL < 0 basis points, mean notional > 10k USD, swap PnL autocorrelation > 0.9
• Medium toxicity: Wallets with at least 250 swaps, mean PnL < 0 basis points, mean notional > 10k USD, swap PnL autocorrelation ≤ 0.9
• Low toxicity: All other wallets
It appeared, based on this categorization, that the bulk of the liquidity pool’s losses originated from the high and medium toxicity groups, suggesting that our grouping of wallets is valid.
However, when trying to analyze situations with relatively unknown “ground truth,” it is always useful to ask the following: How robust are our results to alternate metrics or definitions? We will
attempt to identify toxic wallets in a different way and see how the results compare.
Instead of starting with aggregated statistics, can we instead look at the wallets from which toxic swaps originate? Recall our original observation, where we found that the vast majority of the
liquidity pool’s losses came from large swaps with negative PnL:
We restrict specifically to the set of wallets with at least five ETH/USDC swaps, where all of those swaps were between 80th and 95th percentile in notional size and all of which had a markout PnL of
-5 basis points or less. This yields a list of merely 1,776 wallets, a very small fraction of the over 450k distinct wallets recorded in the ETH/USDC swap data.
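As a rough sketch of how that filter could be expressed (the column names, schema, and the "at least five qualifying swaps" reading are assumptions for illustration, not the exact pipeline behind the figures):

```python
import pandas as pd

# swaps: one row per swap with columns 'wallet', 'notional_usd', 'markout_pnl_bps' (assumed schema)
def candidate_toxic_wallets(swaps: pd.DataFrame) -> pd.Index:
    lo, hi = swaps["notional_usd"].quantile([0.80, 0.95])
    qualifying = swaps[
        swaps["notional_usd"].between(lo, hi) & (swaps["markout_pnl_bps"] <= -5)
    ]
    counts = qualifying.groupby("wallet").size()
    return counts[counts >= 5].index  # wallets with at least five such swaps
```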
Tentatively classifying these wallets as “toxic wallets” and plotting the distribution of each wallet’s average notional swap size, we do see that these wallets have much higher notional swap sizes
than usual:
This is an unsurprising result, as having multiple large swaps was a precondition to being included in this group of wallets to begin with!
One might then ask if the same trend holds for swap PnL. Below, we plot the distribution of average swap PnL per wallet, restricting to wallets with at least 20 swaps in the ETH/USDC pool for
simplicity (this eliminates a great deal of noise from the results):
Interestingly, it appears that our group of toxic wallets exhibits a bimodal distribution in swap PnL. One of the peaks has, on average, positive PnL for the liquidity pool, much like the larger
distribution of non-toxic wallets; however, the other peak is deeply into negative territory, with a mean of -5 basis points or so. The second (leftmost) peak in the distribution probably reflects
wallets that consistently generate swap toxicity for ETH/USDC liquidity providers!
Let us take this leftmost peak of toxic swappers with negative average PnL. To refresh, this subset consists of 817 wallets with the following characteristics:
• Average negative PnL across all swaps
• At least five swaps with PnL below -5 basis points and notional size between the 80th and 95th percentiles
If we look at the swaps originating specifically from these wallets, do we find anything interesting? We can once again segment by notional swap size:
Swap PnL indeed declines as notional swap size increases, and here the rate of decline is quite regular and begins at a lower percentile of notional swap size than when looking at the totality of all
swaps. Additionally, the PnL seems quite noisy for low notional sizes, leading us to suspect that most of the swaps from these wallets are concentrated at higher notional swap sizes:
As expected, the vast majority of the swaps made by these wallets are at very high notional trade sizes!
In aggregate, the notional PnL realized by the liquidity pool as a result of all the swaps from this small group of 817 swappers is a whopping -165 million USD. Recall that the overall profitability
of the liquidity pool using short-term Binance markouts was merely -43 million USD, meaning that the aggregate PnL from all other swappers is positive 122 million USD! We see once again the
compelling value of effective discrimination between toxic and non-toxic flow: if all the swaps originating from these 817 swappers had been charged five additional basis points, the aggregate
liquidity pool PnL would almost certainly be deep “into the black!”
Consolidating our definitions
Now, on top of our first classification of wallets as possessing high, medium, or low toxicity, we now have an equally plausible division of wallets based on the following definitions:
• Toxic wallets: Wallets with negative average PnL which also possess at least 5 swaps, all of which are between the 80th and 95th percentile in notional swap size and all of which have PnL of -5
basis points or less
• Nontoxic wallets: All other wallets
Both sets of definitions are relatively similar, but not identical. How do they compare? It turns out that:
• The high toxicity wallets in the first definition are all classified as toxic wallets in the second definition
• The medium toxicity wallets in the first definition are almost all (>97%) classified as toxic wallets in the second definition
• The low toxicity wallets in the first definition are almost all (>99%) classified as nontoxic wallets in the second definition
We have almost perfect concordance between the two classifications, with two caveats:
• The high toxicity group in the first definition, which looks at autocorrelation of bucketed swap profitability, does seem to be picking out an important and special subset of toxic wallets
• There are 1,408 wallets classified as toxic in the second definition but as low toxicity in the first definition; while a small fraction of the whole, these wallets do seem to genuinely originate
toxic flow, and they are missed by the first categorization mainly because they have fewer than 250 swaps overall
We therefore generate a consolidated ranking of wallets into three toxicity groups:
• High toxicity: Classified as high toxicity in the first definition
• Medium toxicity: Classified as medium toxicity in the first definition or classified as toxic wallets in the second definition
• Low toxicity: All others
While this may seem a little pedantic, it is nevertheless valuable to show that, approaching an unclear problem from two distinct perspectives, we still end up at roughly the same conclusion. It
gives us confidence, furthermore, that our results will be robust across different choices of explicit parameters or thresholds. We recognize, naturally, that this is fundamentally an imperfect
categorization. Certainly there will be real arbitrageur wallets we have missed and, similarly, real instances of retail flow that we may have miscategorized as being high or medium toxicity!
Nevertheless, we believe that reasonable analyses of this data will largely concur with our results here, even if not with perfect precision.
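In code, the consolidation step is just set logic over the two classifications; a minimal sketch, assuming each classification is available as a set of wallet addresses:

```python
def consolidate(high_def1, medium_def1, toxic_def2, all_wallets):
    """Merge the two classifications into high / medium / low toxicity groups."""
    high = set(high_def1)                                 # high toxicity under the first definition
    medium = (set(medium_def1) | set(toxic_def2)) - high  # medium under def 1, or toxic under def 2
    low = set(all_wallets) - high - medium                # everything else
    return {"high": high, "medium": medium, "low": low}
```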
Characteristics of toxic flow
Now that we have a consolidated classification of wallets as possessing high, medium, or low toxicity, what else can we say about the swap behavior of these wallets?
One natural question to ask is how frequently swaps come in. Intuitively, we expect automated trading strategies with systematic alphas to fire off swaps at regular intervals (whenever trading
opportunities or price dislocations are detected). On the other end of the spectrum, a retail trader might fire off swaps at relatively more random, infrequent times: perhaps a trade here when they
feel like it, another trade over the weekend… and so on.
Accordingly, we examine the distribution of each wallet’s median time between consecutive swaps (we use the median here to avoid biasing estimates from temporary pauses in trading and so on, and
restrict to wallets with more than 20 swaps recorded):
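A sketch of how that per-wallet statistic could be computed (same assumed schema as before, plus a datetime 'timestamp' column):

```python
import pandas as pd

def median_interswap_minutes(swaps: pd.DataFrame, min_swaps: int = 20) -> pd.Series:
    """Median minutes between consecutive swaps, per wallet, for sufficiently active wallets."""
    swaps = swaps.sort_values("timestamp")
    gap_minutes = swaps.groupby("wallet")["timestamp"].diff().dt.total_seconds() / 60.0
    medians = gap_minutes.groupby(swaps["wallet"]).median()
    counts = swaps.groupby("wallet").size()
    return medians[counts >= min_swaps]
```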
It appears, consistent with our expectations, that high toxicity wallets trade extremely frequently (typically an average of 2 minutes or so between swaps). Interestingly, medium toxicity wallets have
a more “spread out” distribution of between-swap intervals, taking slightly longer on average but still usually placing orders at a fairly rapid clip. This might suggest that the high-toxicity
wallets participate in arbitrage trades of a more ‘consistent’ nature, perhaps atomic, on-chain arbitrage for example, whereas perhaps medium-toxicity wallets tend to focus on statistical arbitrage
in comparison, with alphas at a larger variety of timescales. Such a model would also be consistent with our prior observation that the high-toxicity wallets tend to make swaps with notional sizes 1–2 orders of magnitude larger than wallets in the medium-toxicity group.
Beyond looking at swap frequency alone, we can also look at how frequently each wallet switches the direction of its swaps. One might naively expect that high-frequency trading strategies would
switch swap direction very often; for example, they might buy ETH at a low price then sell at a higher price the next block; alternatively, even if only “one side” of each trade happens on Uniswap,
we would expect many types of statistical alphas to give us essentially a random distribution of ETH buys vs. sells. In contrast, one might expect that a retail trader would exhibit a great deal of
momentum (from FOMO, etc.) in their trades, leading to longer durations of time between switching swap direction.
We directly examined the distribution of each wallet’s average “run length,” where we look at the length (in terms of numbers of swaps) of “runs” of consecutive, same-direction swaps in each wallet’s
swap history:
(Note that to control the presence of extreme outliers, we are again restricting to wallets with more than 20 swaps.)
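A sketch of the run-length statistic (again on the assumed schema, with a 'direction' column taking values like 'buy'/'sell'):

```python
import pandas as pd

def mean_run_length(swaps: pd.DataFrame, min_swaps: int = 20) -> pd.Series:
    """Average number of consecutive same-direction swaps per 'run', per wallet."""
    swaps = swaps.sort_values("timestamp")

    def one_wallet(directions: pd.Series) -> float:
        run_ids = (directions != directions.shift()).cumsum()  # new run id at every direction change
        return run_ids.value_counts().mean()                   # mean swaps per run

    means = swaps.groupby("wallet")["direction"].apply(one_wallet)
    counts = swaps.groupby("wallet").size()
    return means[counts >= min_swaps]
```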
Clearly, high-toxicity wallets switch swap directionality far more frequently than either the medium- or low-toxicity wallets! It is actually quite interesting that the medium-toxicity wallets switch
swap direction so much less frequently than the high-toxicity wallets; this would be consistent with the medium-toxicity wallets focusing on statistical arbitrage, which resolves on longer time
horizons than the high-toxicity wallets’ strategies and, perhaps, even involve multiple swaps in the same direction as part of a trading response to the same signal.
This naturally brings to mind a subsequent question: on what timescale, exactly, do the alphas of high or medium toxicity wallets persist? For a retail trader who simply decides to buy or sell some
quantity of ETH on a whim, it does not really matter (in a sense) if they execute their trade now, or in ten seconds, or in ten minutes, or perhaps even in ten hours or ten days; some undesirable
variance is introduced via random price movements, of course, but on the whole, they do not expect to have any particular ability to ‘time’ ETH/USDC prices especially well, and should be mostly
indifferent to swap execution now versus in the next hour.
On the other hand, high-frequency traders seeking to exploit statistical correlations, predicted movements, and dislocations between venues should exhibit much higher sensitivity to the choice of
timescale. If Binance price leads ETH price discovery, for example, then Uniswap pools will only remain ‘mispriced’ after a large movement on Binance for a single-digit number of minutes (if even
that long). They should certainly very much not be indifferent to placing a swap now versus placing a swap in the next hour.
We can try to analyze this question by looking at how the average PnL per wallet varies across different markout horizons:
(In the above graph, we are taking the notional-weighted average of the PnL of each wallet’s swaps. The motivation here is that the calculation of each wallet’s PnL should take into account the
relative “bet sizing” of each trade, rather than assuming that notional sizes of different trades are completely independent of each other.)
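A sketch of that weighted average (assuming one markout-PnL column per horizon, e.g. 'pnl_bps_30s', 'pnl_bps_1m', and so on; all names are placeholders):

```python
import pandas as pd

def weighted_pnl_by_horizon(swaps: pd.DataFrame, horizons: list) -> pd.DataFrame:
    """Notional-weighted mean markout PnL per wallet, one column per markout horizon."""
    total_notional = swaps.groupby("wallet")["notional_usd"].sum()
    out = {}
    for h in horizons:                          # e.g. ["30s", "1m", "5m", "15m"]
        weighted = swaps[f"pnl_bps_{h}"] * swaps["notional_usd"]
        out[h] = weighted.groupby(swaps["wallet"]).sum() / total_notional
    return pd.DataFrame(out)
```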
As expected, low-toxicity wallets have a high average PnL (profitable for the liquidity pool), and the PnL of these wallets is largely independent of markout horizon. Intuitively, this makes sense:
if you are a trader with zero edge, then your expected PnL in the future, whether we check 1 minute or 10 minutes after your trade, should still be exactly zero (albeit with higher variance for
longer markouts). Interestingly, though, we do see that high and medium toxicity wallets — especially high toxicity wallets — seem to suffer severe alpha decay on the sub-1-minute horizon, with PnL
largely plateauing afterwards. This does suggest that the trading signals that profitable systematic traders use in the ETH/USDC pool decay to zero alpha throughout the first minute or so (4–5 blocks
on Ethereum mainnet).
We can refine our understanding of these wallets’ trading behaviors even further by calculating wallet PnL using markouts based on the marginal Uniswap ETH/USDC pool price rather than Binance data:
Notice here that the high-toxicity group now appears to exhibit alpha decay on the order of >5 basis points after 10–15 minutes, rather than mostly decaying in the first minute of data. To see the
significance of this observation, imagine that you are an arbitrageur and you notice that ETH is trading 20 basis points higher on Binance, which leads in ETH price discovery, than on Uniswap. You
want to arbitrage the difference, so you swap through the 0.05% fee pool. However, you only buy up enough ETH such that the price of ETH goes up to 15 basis points higher at the margin and no more;
any closer to the Binance price, and you are now overpaying for ETH after taking into account the 5 basis point trading fee.
Eventually, because the price feeds of ETH on Binance and Uniswap are tightly cointegrated processes, you do expect the Uniswap price to converge to the Binance price; however, this might take place
over the course of, say, 10 minutes or so, rather than a sub-1-minute timescale. Because the gap between the post-arbitrage Uniswap price and the Binance price was 5 basis points, you would expect to
see the Uniswap pool price move more 5 basis points in the same direction as the arbitrage swap over the next 10+ minutes, making the arbitrage swap’s PnL seemingly increase in the arbitrageur’s
favor by 5 extra basis points if we calculate markout PnLs based on markout durations in that time interval. This is exactly the empirical pattern we see in the plot above!
Similarly, if a zero-edge retail trader’s swap creates a backrun opportunity, the markout PnL of that swap will decline over the next minute or so as the backrun opportunity is inevitably filled by
an arbitrageur. This, too, is exactly the pattern we see in the plot above — the markout PnL of low-toxicity wallets’ swaps increases in the pool’s favor over the first couple blocks’ worth of time
after the swap before completely plateauing.
All in all, we have managed to make some fascinating observations about the behavior of wallets at different toxicity levels. However, can these findings be applied practically, or are they largely
of academic interest?
Implementation of a discriminator mechanism
Let us take a step back and appreciate the problem before us. How do we actually implement an effective program of price discrimination? Although we have discussed quite a few characteristics of
toxic and non-toxic flow in this post so far, it would be challenging and gas-intensive to implement these in an actual smart contract. Address whitelisting or blacklisting is trivially checked and
evaded; explicit conditions can be reverse-engineered and gamed; finally, performing computation on the EVM is simply a very expensive endeavor, and so overly complex fee discrimination schemes may
increase gas costs for swappers well past the point of tolerability.
We are in active exploration of various implementation methods for fee discrimination schemes. One compelling way to approach the problem is shifting the framing of the question: instead of asking,
How can we implement this in a way compatible with the EVM’s computational requirements?, one might instead ask, How do we shift the computational burden elsewhere so that we can take full advantage
of our characterization of toxic flow? It is not strictly necessary for the protocol itself to perform price discrimination at the smart contract level; instead, one could take advantage of the rich
network structure inherent in modern blockchains and set up systems of incentives that allow price discrimination to occur in a more decentralized fashion.
To make this concrete: suppose that the protocol has a whitelist of certain relayers which receive a privileged (discounted) fee rate. Because relayers profit from being allowed to relay
transactions, they will want to stay on the protocol whitelist, so that they are chosen by more swappers. If protocol governance periodically monitors the PnL of whitelisted relayers to ensure that
whitelisted relayers are consistently sending in nontoxic swap flow, this creates a system of natural incentives for relayers to implement arbitrarily complex wallet profiling schemes. (Hopefully, the research in these posts will prove useful for that purpose!)
Conversely, one might then ask: How could wallets signal signs of nontoxicity, and how can we detect and privilege wallets which signal sufficient nontoxicity? One natural example comes directly to us
from the analysis of alpha decay in the prior section! Recall that the alphas of high and medium toxicity wallets appear to decay (relative to Binance markouts) on the sub-1-minute timescale.
Therefore, if a wallet is willing to execute its swaps on a delayed timescale, for example delaying 5 or 10 minutes after the time of swap submission, that is actually a very strong signal of swap
nontoxicity, and the wallet's swap should likely be incentivized with a fee discount! One may imagine the incentivized-relayer setup previously described easily implementing such a method of flow segmentation.
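As a toy illustration only (none of the thresholds or interfaces below come from an actual protocol design; they are placeholders), the delay-based signal might reduce, on the relayer side, to a rule as simple as:

```python
def quoted_fee_bps(committed_delay_minutes: float,
                   base_fee_bps: float = 5.0,
                   discount_bps: float = 2.0) -> float:
    """Toy relayer rule: swaps that commit to a long execution delay get a discounted fee,
    since any short-horizon alpha should have decayed by the time they execute."""
    return base_fee_bps - discount_bps if committed_delay_minutes >= 10 else base_fee_bps
```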
Further methods of utilizing a blockchain’s network structure can be devised; for example, one could imagine block builders playing some role in such a system. Regardless of the exact implementation,
however, it seems that partial decentralization of the discriminatory element into a multi-actor system is a potentially very powerful method of characterizing and segmenting incoming ETH/USDC swap flow.
In this post, we have made considerable progress in our study of toxic and nontoxic flow on Uniswap ETH/USDC pools. In particular, we have shown that our prior identification method for toxic wallets
was fairly robust with respect to alternate definitions of wallet toxicity. After doing so, we generated a ‘consolidated’ categorization of wallets as having high, medium, or low toxicity. We were
able to look more deeply at the swapping behavior of these wallets, finding that:
• High-toxicity wallets swap more frequently, and change swap directionality more frequently, than either medium or low toxicity wallets
• High-toxicity and medium-toxicity wallets have trading alphas that decay on the timescale of ~1 minute, although it takes on the order of ~10 minutes for Uniswap prices to fully converge to the
prevailing Binance price
We also proposed a relayer-based scheme for swap flow segmentation and gave a simple example of how wallets might be able to signal nontoxicity to a relayer or to the underlying protocol. In sum, we
have worked out an “end-to-end” example of how a protocol might in principle implement a working price discrimination mechanism to give nontoxic flow a substantial fee discount.
However, we have still left many stones unturned. For example, now that we have a fairly good classification of wallets in terms of their swap toxicity, we might begin to ask: how predictable is the
PnL of each group of wallets if we look at retrospective data, such as the volatility of ETH prices in the last 5 minutes? We hope to explore such questions, and more, in future installments.
Detailed Course Information
An exploration of present-day applications of mathematics focused on developing numeracy. Major topics include quantitative reasoning and problem-solving strategies, probability and statistics, and
financial mathematics; these topics are to be weighted approximately equally. This course emphasizes mathematical literacy and communication, relevant everyday applications, and the appropriate use
of current technology. This course is part of the Oregon Common Course Numbering System.
4.000 Credit hours
40.000 TO 48.000 Lecture hours
Syllabus Available
Levels: Credit
Schedule Types: Lecture
Mathematics Division
Mathematics Department
Course Attributes:
Tuition, Science/Math/Computer Science
Must be enrolled in one of the following Levels:
Skills Development
Bachelor Applied Science
May not be enrolled in one of the following Colleges:
College Now
C# Coding Assignment- Visual Studio (MUST USE THE ATTACHED SHELL) - Assignment help
You're a programmer working for a large aerospace company. One of the aerospace engineers comes to you asking for help in setting up some data that is needed for a simulation. They have a vehicle that flies through the air and knows what direction it's moving in, but can only measure its speed in reference to the ground. The engineers always know the direction and speed of the wind when their vehicle is flying because they can measure it from a ground station. They want to know the speed of the vehicle as it moves through the air. If this seems like a hard concept to grasp, that's ok – the engineers have provided you with the following equation to calculate airspeed:

airspeed = (windSpeed * cosine(absoluteValue(vehicleDirection – windDirection))) + vehicleSpeed

The engineers give you the following vehicle speeds and directions:
Vehicle speeds: 1, 3, 5, 11, 16, 18, 22, 26, 30, 34
Vehicle directions: 10, 30, 50, 110, 160, 185, 260, 280, 315, 330

They want you to come up with a random value of wind speed between 10 and 30 along with a random value of wind direction between 0 and 359 for each of the 10 vehicle speed and direction pairs (i.e. at vehicle speed 1, it is moving in direction 10). Then they want to see the airspeed calculation based on those four numbers. You should output a table that looks like this (negative airspeeds are allowed):

NOTE: Your Wind Speed, Wind Direction, and Airspeed values will probably be different since your Wind Speed and Wind Direction values will be random!

You will need a total of five arrays. These all need to be of type double! Two of those arrays will be created as initialized arrays using the given vehicle speeds and vehicle directions (which you can conveniently copy and paste into your code). The other three will be created using an array-creation expression as shown in slide 10 of this week's PowerPoint. To fill your wind speed and wind direction arrays you will need to use a for loop and the .Length property of one of those arrays (all the arrays are the same size, so it doesn't matter which; however, you have to use .Length in case you change the array size later). Since your wind speeds and directions need to be random, you need to look in your text (page 252, section 7.8) to see how to use the Random class. This means you'll need two Random number generators, 1 each for wind speed and direction, and both combining "shifting" and "scaling" (see 7.8.5). You will also need to use a for loop to calculate and fill the airspeed array using the equation above. Outputting your final table will also require a for loop. Again, make sure you're using the .Length property of one of your arrays so that you don't go out of bounds.

Some hints to help get you started. You'll need to use some methods from the Math library: Math.Cos and Math.Abs. Also, the vehicle directions are given in degrees, while Math.Cos takes a number in radians. This means you'll have to convert from degrees to radians, but don't worry, it's a simple conversion: Radians = degrees * (Pi / 180). C# makes this easy by providing you with Math.PI, so somewhere in your code you need to convert the vehicle direction and wind directions into radians. It might look like this: windRadians = windDirection * (Math.PI / 180); As a reminder, your output table should still show the degree values, NOT the radian values.

There are several ways to accomplish this task, but here is one algorithm that might help:
-Declare your variables
-Fill the wind speed and wind direction arrays with the appropriate random values
-Convert your vehicle direction and wind direction values to radians
-Calculate airspeed values
-Output the table

The provided shell also has these steps. MUST USE SHELL BELOW AS A GUIDE:

//Name
//Data
//Assignment
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace assignment8
{
    class assignment8
    {
        static void Main(string[] args)
        {
            //Declare your variables and array…
Export Reviews, Discussions, Author Feedback and Meta-Reviews
Submitted by Assigned_Reviewer_18
Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http:
This work presents a method to efficiently train object detectors in the presence of geometric transformations that can be represented as vector-matrix multiplications. This has recently been
developed for the case of translation transformations ([1,8,14]) but has not been too obvious for other transformations, such as rotations, let alone non-rigid deformations.
The authors propose to adapt the Fourier-based method that originally allowed the development of efficient algorithms for the case of translations so that we can now deal with rotations and other
`cyclic' signal transformations (e.g. the walking pattern of a pedestrian). The condition for this to hold is that the transformation is norm-preserving, can be represented as a matrix
multiplication, x_transformed = Q x, has an inverse Q^{-1} = Q^T and for some s Q^s = I.
The starting point for the previous works was the fact that the 'data matrix' obtained by 'stacking' together all translated versions of a signal is a circulant NxN matrix, where N is the length of
the signal, and as such can be diagonalized using the discrete harmonic basis (or, Discrete Fourier Transform-DFT matrix), Eq. 3, and reference [5].
The authors note that in the case of general transforms (not translations) the 'data matrix' may not be circulant in general - but as long as the conditions above hold, the Gram matrix will be. As
such, one can exploit the DFT-based diagonalization to efficiently train a detector.
Using these, the authors obtain closed-form solutions for ridge regression that only involve inversions of diagonal Gram matrixes - and present also a nice extension for the simultaneous training of
multiple pose detectors.
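For concreteness, the mechanics being summarized are roughly the following (standard circulant/ridge machinery as in [5,14], not equations quoted from the submission): a circulant matrix built from a base vector x diagonalizes as

$$C(\mathbf{x}) = F \operatorname{diag}(\hat{\mathbf{x}})\, F^{H}, \qquad \hat{\mathbf{x}} = \mathcal{F}(\mathbf{x}),$$

with F the unitary DFT matrix, so when the Gram matrix G of all transformed samples is circulant the dual ridge-regression solution $\boldsymbol{\alpha} = (G + \lambda I)^{-1}\mathbf{y}$ collapses to an element-wise division $\hat{\boldsymbol{\alpha}} = \hat{\mathbf{y}} / (\hat{\mathbf{g}} + \lambda)$ in the Fourier domain, which is where the training speed-up comes from.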
Turning to applying this idea to visual data, we are presented with two different methods for constructing the Gram matrix, one involving an analytic construction of the transformation matrix Q, and
another being more `data-driven', where there is in principle no underlying linear transformation, but rather a cyclic pattern, as in the walking cycle of pedestrians, or even out-of-plane rotations
of cars.
Result-wise the authors demonstrate remarkable acceleration in terms of training time, albeit with an occasional loss in performance, even when comparing to a ridge regression baseline.
There is in my understanding a substantial amount of originality in this paper, proposing to combine recent advances in fast learning of object detectors in the presence of translations, with more
challenging transformations. Some 'out of the box' thinking was required here, as it is most common to deal with rotations through polar coordinates - which would make everything harder; it turns out
that by using the Gram matrix we are back to the setting of [14], which is a not-too-evident result; so I was very interested when originally reading the paper.
Still, the paper leaves more to be desired if it is to be significant to the learning/vision community.
The most obvious critique is that the detection results are not compared to a stronger baseline, e.g. an SVM-based detector, so we do not know to what extent the presented method will be practically
useful. Along the same lines, the authors could build a pose-invariant detector out of their system (by using the max over poses, as they discuss) and compare it to the methods that are currently out
there for detection in the presence of pose variation (e.g. [24] or [c] below).
It is also disappointing to see that there is no discussion about how this method could be extended to other problems, beyond ridge regression. In [14], and (a) below, a combination of similar ideas
with the hinge loss has been pursued; it should at least be discussed, and ideally tried out. This is, in my opinion, the most important development that would be needed to make this paper have high
impact. As it stands, the work would be of use only to practitioners who want a 'quick and dirty' detector, which may be only as good as one trained with ridge regression.
In particular it is not true that ridge regression (+ whitening) performs similar to SVMs for detection; it merely serves as an 'ok' proxy. This was made more pronounced in the DPM training work of
R. Girshick, J. Malik 'Training Deformable Part Models with Decorrelated Features' in ICCV 2013 - there was quite a big difference in performance with and without the final SVM training stage.
I found it also disappointing that the presentation is a bit blurred and leaves several gaps to be filled in by the reader's imagination. I provide below a (partial) list of points where this was the case.
Eq. 3:
The DFT matrix F is never defined; so even if the reader has a signal processing background, one still needs to think about the period of the DFT (is it s or m?), whether F^H is the inverse of the forward DFT (I understand it is the inverse), and what the normalization factors are; a single-line equation would have made all of this clear.
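(For instance, a single line under the usual unitary convention, with the period written explicitly, would settle all three questions:

$$F_{jk} = \tfrac{1}{\sqrt{N}}\, e^{-2\pi i jk/N}, \quad j,k = 0,\dots,N-1, \qquad F^{H}F = I \ \Rightarrow\ F^{-1} = F^{H},$$

where N stands for whichever period, s or m, the authors actually intend.)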
l. 157-158: 'the matrices involved in a learning problem become diagonal, which drastically reduces the needed computations': There are many learning problems, and many ways in which matrices may be
involved. Unless the authors pin down their optimization problem, show how matrices are involved in it, and clarify to what extent the computations become reduced, this is a vague statement.
l. 182: 'structure of the covariance matrix is unknown': we do know the structure of the covariance matrix (we have an equation for it in l. 174 - and it is diagonalizable); we simply cannot express
it in terms of the DFT matrix.
l. 182: 'the diagonalization trick ... offers a way out, by computing the Gram matrix, and solving the dual problem'->
The Gram matrix can be computed anyway, and the dual problem can be solved irrespectively of the diagonalization trick. So maybe the authors should be a bit more precise, stating e.g.
'The diagonalization technique allows us to express the Gram matrix in a form which makes an analytical solution of the dual problem possible.'
l. 190: 'a regularized risk': The authors have to provide the form of the regularized risk. It is very common indeed, but presentation-wise it cannot be that the single most important thing (the
optimization objective) is never written down. Strictly speaking the paper does not make sense from here on, since there are no 'primal', 'dual' objectives provided anywhere, but the authors keep
referring to them. Lack of space is not a real excuse, several more verbose parts of the text can easily become condensed.
l. 191: why should we have 's' dual variables? Where do they show up?
In general, it is bad practice to leave all technical details to other references [5]/[23], since this practically requires that the reader reads three papers rather than one. A paper needs to be as
self-contained as possible, and we are talking about things that can be explained with a couple of equations.
l. 198-205: this could be placed right after theorem 1. Here it breaks the presentation's flow
l. 209: the reason why Eq. 6 follows from applying the circulant decomposition to G is not entirely obvious: to a casual reader it may seem as if you were saying F(a/b) = F(a)/F(b), with F being the
Fourier transform, which does not make too much sense.
The result is correct indeed (it is a one line proof), but some hand-holding would help. Stating a few central properties of circulant matrices in Section 2.2 would be useful.
Section 4.1: this is, in my understanding, the same story as [14] (they also pass through the Gram matrix in their optimization, and finally end up solving many simpler problems). If this is not the case, the difference should be clarified; if it is, it should be stated.
Section 5.1: from the style of the presentation, it is not too clear what exactly the authors do; we read about 'possibilities' (l.299,l.322), but it would be better if the authors state in advance
that we use 'such and such a method for this experiment because of this reason' and then go on to describe what the method consists in.
Section 5.3: I could not understand this section. Apparently the authors are describing a technical problem that they faced, but they describe it in a somewhat esoteric way: 'we have no principled way of obtaining negative samples for different poses for the same transformation ... lacking a definitive answer a simple strategy is .. - we can further optimize this by realizing that..'
I understand that it is about negative samples not coming with a pose annotation - but I would guess that if one knows the transformation (e.g. rotation) all one has to do is apply all
transformations to each negative sample, and label all transformed samples as negatives - so I do not see what is special about negative samples.
I cannot see why the samples are constant along the DFT dimension. The labels may be (all zeros), but the features will be changing if we transform the image.
l. 371: rephrasing is needed (break into sentences, describing the setup: Pascal criterion assigns ground truth object to hypothesis; pose error is measured with respect to that object, as #i/#poses
where #i is discretized pose difference)
Table 2: what do you mean by 'with calibration'? Was the proposed method not calibrated?
Appendix: to prove that the data matrix 'will not be circulant in general', we would also need a counterexample for the case of non-translations (I understand that there are plenty - but providing
one is a formal requirement).
One work that seems to me relevant (in the sense that it is learning the transformation Q) is
[a] Transformation Equivariant Boltzmann Machines, Jyri J. Kivinen and Christopher K. I. Williams, ICANN 2011,
The authors should also cite a precursor to the more recent works [1,14]:
[b] Maximum Margin Correlation Filter: A New Approach for Simultaneous Localization and Classification
Andres Rodriguez, Vishnu Naresh Boddeti, B.V.K Vijaya Kumar and Abhijit Mahalanobis
IEEE Transactions on Image Processing 2013
Finally, this is one of the works working on detecting rotated cars, the authors could compare to it (or [28])
[c] Andrea Vedaldi, Matthew B. Blaschko, Andrew Zisserman: Learning equivariant structured output SVM regressors. ICCV 2011
Q2: Please summarize your review in 1-2 sentences
The paper contains an interesting idea, but is somehow inconclusive (it is hard to tell whether this idea would be broadly useful). I found it an interesting read, but I think it requires more work.
Submitted by Assigned_Reviewer_31
Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http:
The paper proposes a fast method to train pose detectors/classifiers in the Fourier domain. The idea of training pose detectors in the Fourier domain has been extensively used for cyclic
translations, since this transformation can be diagonalised in the Fourier Domain, thus dramatically reducing the computational cost. However, up to now this idea has not been extended to more
general transformations. In this paper, the authors show that the “Fourier trick” can be generalised to other cyclic transformations. This is a significant contribution, since the proposed framework
allows one to consider a large class of image transformations that are particularly useful for pose detection, with the benefit of training acceleration provided by the DFT. The authors show that within
their proposed approach, image transformations do not necessarily need to be characterised analytically since they become implicit. Therefore, classifiers can be trained not only for virtually
generated image transformations, but also for natural datasets with pose annotations.
The framework proposed by the authors also allows training of multiple pose classifiers simultaneously. This is another major improvement with respect to previous work, since training times are
dramatically reduced with respect to current approaches that only allow training classifiers for different poses independently.
The article is very well written and organised. A brief but complete overview of relevant related work that clearly points out the advantages of the proposed approach is first presented. Then, the
authors show how they can set up the pose classification problem in terms of circulant matrices, thanks to Theorem 1, by solving the dual problem. For that they propose (as other previous works do) to use Ridge Regression, here in the dual domain, and then retrieve the primal solution, leading to a simple closed formula that can be easily computed and to an impressive computational cost reduction with respect to previous work. The framework is presented in increasing complexity, starting from the training of a single classifier for a single cyclic orthogonal transformation of a
single image (Sect. 3.1), then extending it to multiple classifiers (one for each pose) trained simultaneously but still from a single image (Sect. 3.2), and finally extending it to the
transformation of multiple images (Sect. 4). This final extension benefits from the fact that different Fourier components can be computed independently, making it naturally suited for parallel
implementation. In Sect. 5 the fact that the transformation only appears implicitly in the proposed approach is exploited to enlarge the class of considered transformations (including natural
transformations, e.g. non-rigid, using pose annotations).
Experiments on three very different settings using standard databases show the suitability of the approach, specially regarding training acceleration.
To sum up, this is a very interesting paper that presents original contributions and has the beauty of being based on extremely simple ideas and principles that lead to extremely powerful results. I recommend the paper be accepted with a few minor corrections:
- the meaning of the acronym HOG is never given (Histogram of Oriented Gradients)
- A discussion on how the regularisation parameter lambda is set is missing
- Section 5.3 (Untransformed negative samples) is not clear; please rewrite it with more details and make an effort to make it clearer.
Q2: Please summarize your review in 1-2 sentences
The paper presents several original contributions to the widely used approach of accelerating the training of pose classifiers in the Fourier domain. More specifically, the proposed approach makes it possible to: (i)
take into account much more general transformations than the cyclic translations used in previous works; (ii) train multiple classifiers for different poses simultaneously, with no significant
increase of computational cost w.r.t. training a single classifier. The paper is clear and really well written. I strongly recommend acceptance of this paper.
Submitted by Assigned_Reviewer_41
Q1: Comments to author(s). First provide a summary of the paper, and then address the following criteria: Quality, clarity, originality and significance. (For detailed reviewing guidelines, see http:
This paper presents a new method for accelerating the object detector training stage. The main idea is to align all training image samples to a fixed position and orientation, so that learning a classifier can be performed with far fewer degrees of freedom. The idea is based on the observation that most detection datasets contain objects whose appearance is very similar up to simple geometric image transformations. The actual alignment of feature vectors is performed in the Fourier domain, since rotation, translation, and other geometric transformations can be easily captured in an FFT representation. The authors prove that if the transformations of the training images are norm-preserving and cyclic, the training can be done very efficiently. Experiments on a few datasets show that the proposed method significantly reduces the training time while preserving similar detection accuracy, compared to a full-fledged training approach. The paper's writing is good and easy to understand. However, the paper should also address the following issues to fully demonstrate the advantage of the proposed method:
- Time and accuracy evaluation with more datasets, especially PASCAL. Since the paper makes a strong assumption that training objects are similar only up to a geometric transformation, whether or not this assumption hurts detection accuracy can only be validated with much more extensive experiments.
- Comparing the 3D pose detection results with object detectors designed for 3D pose estimation, such as Estimating the Aspect Layout of Object Categories by Xiang et al.
Q2: Please summarize your review in 1-2 sentences
This paper presents a new idea to reduce the complexity of training object detectors. The mathematics behind the method is well studied and presented. However, the paper lacks convincing experimental results to show that the accuracy trade-off from the increased training speed is minor and worthwhile.
Q1:Author rebuttal: Please respond to any concerns raised in the reviews. There are no constraints on how you want to argue your case, except for the fact that your text should be limited to a
maximum of 6000 characters. Note however, that reviewers and area chairs are busy and may not read long vague rebuttals. It is in your own interest to be concise and to the point.
We thank all reviewers for their helpful comments, and are glad that they (in particular R1 and R2) have enjoyed our paper. We are extremely encouraged that they consider it has a "substantial amount
of originality" and "out of the box thinking" (R1), and the "beauty of being based on extremely simple ideas and principles that lead to extremely powerful results" (R2).
Although R2 and R3 consider the article to be very well written, there is always room for improvement and R1 makes many important suggestions. We want our paper to realize its full potential, and as
such we will integrate them in the final version.
Our choice of Ridge Regression for both our method and the baseline is seen as a weakness by R1. The source of this impression can be traced to [12], which presents whitening/LDA as a quick-and-dirty
alternative to SVM, with large losses on Inria Pedestrians (from 79.6% AP to 75.1%). However, we must stress that LDA is *not* the same as Fourier-space Ridge Regression techniques [1, 8, 14]. One
key difference is that the authors of [12] estimate and invert a 10000x10000 covariance matrix, which is reported to be rank-deficient, while [1, 8, 14] estimate several *independent* 36x36
covariance matrices, which can be done with much greater accuracy. Multiple practitioners have pointed out that these techniques perform better than SVM without bootstrapping [1, 8, 14], and SVM only
achieves marginally better performance (<1%) after many bootstrapping rounds. As an example, the publicly available implementation of [14] achieves 80.7% AP on Inria with Ridge Regression, which is
very far from the figure of 75.1% given in [12], for LDA. To the credit of [12], it is a precursor and an inspiration that shaped these later techniques.
In conclusion, our baseline is indeed strong, but we also have results with SVM that corroborate these previous works. We will include them to dispel any doubts.
R1 also makes a nice suggestion, that we explore algorithms other than Ridge Regression. From [14], extensions of our method for Support Vector Regression (SVR) and others can be obtained easily.
However, rather than go for full generality, we opted to keep the discussion focused, since the solution in Section 4 already deals with many factors simultaneously (arbitrary transformations +
multiple base samples + multiple output classifiers). We hope this is understandable. We will briefly discuss it in the paper, and include the (straightforward but tedious) SVR derivation and
experimental results either as an appendix or a separate technical report (SVR scores only about 1-2% higher AP).
Concerning calibration, our method does not require it (l.366).
We realize that the paragraph of Section 5.3 is probably the least clear and we will reword it accordingly. The problem at hand boils down to the lack of pose annotations in negative samples. For
planar rotation we can easily obtain rotated negative samples with random poses. However, the same operation with walk cycles of pedestrians is not defined. How do we advance the walk cycle of a
non-pedestrian? As a pragmatic solution, we consider that transformations of this type have no effect on negative samples. This allows us to solve this quandary. Incidentally, their DFT in pose-space
turns out to be 0 everywhere, except for the DC value, which allows massive computational savings. We hope this also answers a question by R1.
Unfortunately, there is a bit of a misunderstanding of the main idea of our paper, which we hope we can clarify. We do not perform alignment of samples to a canonical pose in the Fourier domain.
Instead, we provide a fast solution for the case of simultaneously training with *all* poses of the given samples. There are no assumptions of 3D geometry, as we demonstrate with the non-rigid walk
cycle experiment, and the pose transformation is learned within a monolithic HOG template. Our method is complementary to the suggested work on 3D aspect layouts, and can be used to train the
individual part templates for a range of viewpoints (which would remove the need for rectification and allow non-planar aspect parts). We will make this distinction in the final version and include
the suggested reference.
About Pascal VOC, it is not the case that any single transformation is responsible for most of the appearance variability. Our method is not directly applicable to such a scenario without handling
multiple transformation types simultaneously, a non-trivial problem that will be considered in future work. However, there are a multitude of computer vision tasks where handling a single
transformation is extremely useful -- we have demonstrated results in 3 very different settings (pedestrian walk cycles, satellite images, and the azimuth of cars in street scenes). This should take
care of any doubts of broad applicability. We must also emphasize that the KITTI dataset (http://www.cvlibs.net/datasets/kitti/) is large-scale, very realistic and of high significance for a
real-world application (autonomous driving). | {"url":"https://proceedings.neurips.cc/paper_files/paper/2014/file/b710915795b9e9c02cf10d6d2bdb688c-Reviews.html","timestamp":"2024-11-11T12:09:14Z","content_type":"application/xhtml+xml","content_length":"32451","record_id":"<urn:uuid:2113ba3b-8b38-4404-93b2-d330759327ac>","cc-path":"CC-MAIN-2024-46/segments/1730477028228.41/warc/CC-MAIN-20241111091854-20241111121854-00815.warc.gz"} |
Possible Range
The median of a set of five positive integers is one more than the mode and one less than the mean. Can you find the largest range possible?
The median of a set of five positive integers, is one more than the mode, and one less than the mean.
What is the largest possible value of the range of the five integers?
This problem is taken from the
UKMT Mathematical Challenges
Student Solutions
Let the median be $n$, so the numbers are:
_ , _ , $n$, _ , _
The mode is $n-1$, so the smallest two numbers must both be $n-1$, giving us:
$n-1, n-1, n,$ _ , _
The mean is $n+1$, so the total of the five numbers is $5n+5$, so the last two numbers must add up to $2n+7$.
The fourth number must be greater than $n$ (if it was $n$ there would not be a unique mode) so it must be at least $n+1$. That would give a value of $n+6$ for the fifth number.
If the fourth number was any bigger, the fifth number would have to be smaller (giving us a smaller range), so this gives the maximum range:
$n-1, n-1, n, n+1, n+6$
The difference between $n+6$ and $n-1$ is $7$,
so the maximum range is 7. | {"url":"https://nrich.maths.org/problems/possible-range","timestamp":"2024-11-09T03:20:14Z","content_type":"text/html","content_length":"36958","record_id":"<urn:uuid:352b2d1a-eade-4738-a678-f15d039fad65>","cc-path":"CC-MAIN-2024-46/segments/1730477028115.85/warc/CC-MAIN-20241109022607-20241109052607-00154.warc.gz"} |
Eisenstein series for G₂ and the symmetric cube Bloch--Kato conjecture
2021 Theses Doctoral
The purpose of this thesis is to construct nontrivial elements in the Bloch--Kato Selmer group of the symmetric cube of the Galois representation attached to a cuspidal holomorphic eigenform 𝐹 of
level 1. The existence of such elements is predicted by the Bloch--Kato conjecture. This construction is carried out under certain standard conjectures related to Langlands functoriality. The broad
method used to construct these elements is the one pioneered by Skinner and Urban in [SU06a] and [SU06b].
The construction has three steps, corresponding to the three chapters of this thesis. The first step is to use parabolic induction to construct a functorial lift of 𝐹 to an automorphic representation
π of the exceptional group G₂ and then locate every instance of this functorial lift in the cohomology of G₂. In Eisenstein cohomology, this is done using the decomposition of Franke--Schwermer
[FS98]. In cuspidal cohomology, this is done assuming Arthur's conjectures in order to classify certain CAP representations of G₂ which are nearly equivalent to π, and also using the work of
Adams--Johnson [AJ87] to describe the Archimedean components of these CAP representations. This step works for 𝐹 of any level, even weight 𝑘 ≥ 4, and trivial nebentypus, as long as the symmetric cube
𝐿-function of 𝐹 vanishes at its central value. This last hypothesis is necessary because only then will the Bloch--Kato conjecture predict the existence of nontrivial elements in the symmetric cube
Bloch--Kato Selmer group. Here this hypothesis is used in the case of Eisenstein cohomology to show the holomorphicity of certain Eisenstein series via the Langlands--Shahidi method, and in the case
of cuspidal cohomology it is used to ensure that relevant discrete representations classified by Arthur's conjecture are cuspidal and not residual.
The second step is to use the knowledge obtained in the first step to 𝓅-adically deform a certain critical 𝓅-stabilization 𝜎π of π in a generically cuspidal family of automorphic representations of
G₂. This is done using the machinery of Urban's eigenvariety [Urb11]. This machinery operates on the multiplicities of automorphic representations in certain cohomology groups; in particular, it can
relate the location of π in cohomology to the location of 𝜎π in an overconvergent analogue of cohomology and, under favorable circumstances, use this information to 𝓅-adically deform 𝜎π in a
generically cuspidal family. We show that these circumstances are indeed favorable when the sign of the symmetric functional equation for 𝐹 is -1 either under certain conditions on the slope of 𝜎π,
or in general when 𝐹 has level 1.
The third and final step is to, under the assumption of a global Langlands correspondence for cohomological automorphic representations of G₂, carry over to the Galois side the generically cuspidal
family of automorphic representations obtained in the second step to obtain a family of Galois representations which factors through G₂ and which specializes to the Galois representation attached to
π. We then show this family is generically irreducible and make a Ribet-style construction of a particular lattice in this family. Specializing this lattice at the point corresponding to π gives a
three step reducible Galois representation into GL₇, which we show must factor through, not only G₂, but a certain parabolic subgroup of G₂. Using this, we are able to construct the desired element
of the symmetric cube Bloch--Kato Selmer group as an extension appearing in this reducible representation. The fact that this representation factors through the aforementioned parabolic subgroup of
G₂ puts restrictions on the extension we obtain and guarantees that it lands in the symmetric cube Selmer group and not the Selmer group of 𝐹 itself. This step uses that 𝐹 is level 1 to control
ramification at places different from 𝓅, and to ensure that 𝐹 is not CM so as to guarantee that the Galois representation attached to π has three irreducible pieces instead of four.
Thesis Advisor: Urban, Eric Jean-Paul
Degree: Ph.D., Columbia University
Published April 21, 2021 | {"url":"https://academiccommons.columbia.edu/doi/10.7916/d8-k3ys-vh32","timestamp":"2024-11-10T06:28:59Z","content_type":"text/html","content_length":"27131","record_id":"<urn:uuid:afdb802e-0e0f-4dee-9892-503904ae95c7>","cc-path":"CC-MAIN-2024-46/segments/1730477028166.65/warc/CC-MAIN-20241110040813-20241110070813-00817.warc.gz"}
Bivariate zero inflated generalized Poisson regression model in the number of pregnant maternal mortality and the number of postpartum maternal mortality in the Central Java Province in 2017
Excess zeros are one of the problems in Generalized Poisson regression, arising when the proportion of zero-valued responses exceeds 60 percent. One of the statistical methods that has been developed for this situation is Zero-Inflated Generalized Poisson Regression (ZIGPR). If there are two response variables, the appropriate regression analysis is Bivariate ZIGPR (BZIGPR). This study aims to determine the factors that influence the number of pregnant maternal mortality cases and the number of postpartum maternal mortality cases in 91 sub-districts of Pekalongan Residency, Central Java Province, in 2017, using BZIGPR. The predictor variables are the percentage of K1 pregnancy examinations, the percentage of K4 pregnancy examinations, the percentage of deliveries assisted by health workers, the percentage of pregnant women who received Fe3, the percentage of TT2+ immunization in pregnant women, the ratio of midwives per 100,000 population, and the percentage of obstetric complication management. BZIGPR parameters are estimated with the Maximum Likelihood Estimation (MLE) method, which yields non-linear equations, so they are solved with the Berndt-Hall-Hall-Hausman (BHHH) iteration method. Hypothesis testing using the Maximum Likelihood Ratio Test (MLRT) rejects the null hypothesis. BZIGPR produces two regression models, ln μ_li and logit p_li. In the ln μ_li model, all predictor variables affect both responses Y_1 and Y_2. In the logit p_li model, all predictor variables affect response Y_1, while all predictor variables except the percentage of TT2+ immunization in pregnant women and the ratio of midwives per 100,000 population affect response Y_2.
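For readers unfamiliar with the model class, the zero-inflated mixture that BZIGPR builds on is usually written in the following standard form (notation adapted to the μ_li and p_li of this abstract; the dispersion parameter φ and the regressor vectors are generic placeholders):

$$P(Y_{li}=0) = p_{li} + (1-p_{li})\, f(0;\mu_{li},\varphi), \qquad P(Y_{li}=y) = (1-p_{li})\, f(y;\mu_{li},\varphi), \quad y = 1, 2, \dots,$$

where f is the generalized Poisson probability mass function, together with the two link functions referred to above,

$$\ln \mu_{li} = \mathbf{x}_{li}^{\top}\boldsymbol{\beta}_{l}, \qquad \operatorname{logit} p_{li} = \mathbf{x}_{li}^{\top}\boldsymbol{\gamma}_{l}.$$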
{"url":"https://scholar.its.ac.id/en/publications/bivariate-zero-inflated-generalized-poisson-regression-model-in-t","timestamp":"2024-11-06T04:32:31Z","content_type":"text/html","content_length":"66133","record_id":"<urn:uuid:9f827038-3cdd-46c6-8d3f-1aee965486a3>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00484.warc.gz"}
20. Solve the triangle in which a=2 cm,b=1 cm and c=3 cm.
21. ... | Filo
Question asked by Filo student
20. Solve the triangle in which and . 21. In a , if and , find
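A quick check on question 20 (not part of the original page): by the law of cosines, the angle opposite the longest side c = 3 cm satisfies

$$\cos C = \frac{a^{2}+b^{2}-c^{2}}{2ab} = \frac{4+1-9}{2 \cdot 2 \cdot 1} = -1,$$

so C = 180°; since a + b = c, the three given lengths are collinear and the "triangle" is degenerate.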
Topic: Trigonometry; Subject: Mathematics; Class: Class 11 | {"url":"https://askfilo.com/user-question-answers-mathematics/20-solve-the-triangle-in-which-and-21-in-a-if-and-find-31353834323337","timestamp":"2024-11-15T01:10:25Z","content_type":"text/html","content_length":"235233","record_id":"<urn:uuid:f3c7b20a-2dd5-475f-ab59-14ebb234dd87>","cc-path":"CC-MAIN-2024-46/segments/1730477397531.96/warc/CC-MAIN-20241114225955-20241115015955-00136.warc.gz"}
Are you trading or gambling?
How to not be a degenerate trader
Thanks to the r/wallstreetbets subreddit, we have all heard infamous tales of retail traders profiting millions of dollars. Accompanying this is often a cult-like worship by individuals who wish to do the same and are desperate to find the secret to such success.
Greed afterall, is human nature.
When it comes to games of pure chance such as the roulette, we are quick to label it as gambling. For games of chance like poker where you possess some level of control, conventional wisdom tells us
that it is a game of skill. Because there are elements of decision making and skill associated with trading, it is often not viewed as gambling.
But what actually is gambling? Gambling cannot simply be any game involving uncertainty, otherwise casinos would be out of business. Additionally, just because a game involves skill, it doesn't mean that it is not gambling, otherwise Lehman Brothers would never have collapsed.
Gambling occurs when you have a poor understanding of risk, resulting in either (1) negative expected value bets, or (2) poor bet sizing that leads to ruin.
Negative Expected Value Bets
Any single bet has the potential to make or lose you money on its own. The expected value of your bets is how much you make on average across the accumulation of all of your bets. Thus it follows that a negative EV bet is one where you will on average lose money - like the roulette table.
To help reinforce your intuition, let us imagine a game involving coin flipping. You will win $2 if the coin flips heads, or lose $1 if the coin flips tails. Do you take this bet?
It is pretty obvious that this is a good deal for you. This is because on average, you will gain $0.50 with every coin flip. For those interested in the maths, you have a 50% chance of winning $2 and a 50% chance of losing $1, so the expected value is 50% * (+2) + 50% * (-1) = +$0.50. Try to do the same thing yourself with roulette table odds.
A not-so-obvious result that follows from making successive negative expected value bets is that in the long run you are guaranteed to lose all your money (ruin). Intuitively this makes sense, as with each bet you are losing money on average.
It should be noted that you can still make money in the short run due to volatility, so the unintuitive optimal strategy for a night out at Vegas is to bet all your money in one go. If you win, cash
out and walk; if you lose, go home - either way, it makes for an extremely short night.
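A rough simulation makes the point concrete (this sketch is not from the original post; it assumes American-roulette even-money odds of 18/38 purely for illustration):

```csharp
using System;

class BoldPlayDemo
{
    static readonly Random Rng = new Random(1);

    // Estimated probability of turning `bankroll` into `target` before going broke,
    // betting `stake` per spin on an even-money bet that wins with probability 18/38.
    static double DoublingProbability(int bankroll, int target, int stake, int trials)
    {
        int successes = 0;
        for (int t = 0; t < trials; t++)
        {
            int money = bankroll;
            while (money > 0 && money < target)
                money += Rng.NextDouble() < 18.0 / 38.0 ? stake : -stake;
            if (money >= target) successes++;
        }
        return (double)successes / trials;
    }

    static void Main()
    {
        // One bold bet of the whole $100: essentially the single-spin win probability (~47%).
        Console.WriteLine($"Bold $100 bet: {DoublingProbability(100, 200, 100, 100000):P1}");
        // Grinding $1 bets: the house edge compounds over thousands of spins, so doubling is near-impossible.
        Console.WriteLine($"$1 bets:       {DoublingProbability(100, 200, 1, 5000):P1}");
    }
}
```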
Poor Bet Sizing
Bet sizing is simply how large your bets are - do you throw $100 or $100k at a particular stock? However, it is often more useful to think of bet sizing in terms of percentages, as opposed to numeric
figures. In summary the larger the expected return, the larger your bet size.
The concept of bet sizing can be modelled mathematically through Kelly Criterion. It is worth pointing out that it is extremely common for poker players or hedge funds to run Kelly Criterion
suboptimally, by betting less. This dramatically reduces the risk of ruin, and adds a margin of safety to often nebulous assumptions.
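As a rough illustration of the formula being referenced (a sketch, not part of the original post; it uses the standard full-Kelly expression for a bet paying b-to-1 with win probability p):

```csharp
using System;

class KellyDemo
{
    // Full-Kelly fraction of bankroll to stake on a bet that pays `netOdds` to 1
    // with win probability `p`: f* = p - (1 - p) / netOdds.
    static double KellyFraction(double p, double netOdds) => p - (1 - p) / netOdds;

    static void Main()
    {
        // The article's coin flip: win $2 per $1 staked (b = 2), p = 0.5.
        double full = KellyFraction(0.5, 2.0);
        Console.WriteLine($"Full Kelly: stake {full:P0} of bankroll");     // 25 %
        Console.WriteLine($"Half Kelly: stake {full / 2:P0} of bankroll"); // the deliberate "bet less" margin
    }
}
```

Running fractional Kelly (half or less) is exactly the "suboptimal on purpose" behaviour described above: growth is only slightly lower, but the swings and the risk of ruin drop sharply.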
So why is bet sizing important? This is an often overlooked concept, but it is extremely important to prevent ruin (or losing all your money).
To reinforce your intuition, we will once again return to the coin flip game. Suppose you are in a situation where you win $200 for heads, but lose $100 for tails. But, you only have $100 - do you
take this bet?
While this is a positive EV bet, remember that you are only making money on average in the long run. This does not safeguard you against the short term, where you can be making or losing money. Had
you taken the bet and the coin landed tails, you would have lost all your money. This means that you are barred from participating in further potentially profitable bets.
Translating this to investing, if you over-leverage yourself, not only might you lose all your money, but you might miss out on partaking in potentially profitable situations.
A 101 on not gambling:
1. Don’t make negative expected value bets - you are guaranteed to lose all your money in the long run.
2. Scale your bets according to confidence in order to prevent ruin. Overextending yourself can make good bets go bad. | {"url":"https://gamesofchance.substack.com/p/are-you-trading-or-gambling","timestamp":"2024-11-12T20:21:31Z","content_type":"text/html","content_length":"137478","record_id":"<urn:uuid:d7ac4794-5c3a-4dcb-875a-5ab2fbd8194b>","cc-path":"CC-MAIN-2024-46/segments/1730477028279.73/warc/CC-MAIN-20241112180608-20241112210608-00707.warc.gz"} |
How does the central limit theorem apply to sampling distributions? | Hire Someone To Do Assignment
How does the central limit theorem apply to sampling distributions?
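For reference, the textbook statement usually invoked here (not taken from the answer below): for i.i.d. observations X_1, ..., X_n with mean μ and finite variance σ², the sampling distribution of the sample mean is approximately normal once n is large,

$$\bar{X}_n = \frac{1}{n}\sum_{i=1}^{n} X_i \;\approx\; \mathcal{N}\!\left(\mu, \frac{\sigma^{2}}{n}\right),$$

regardless of the shape of the population distribution, which is what justifies normal-based inference about sample means.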
How does the central limit theorem apply to sampling distributions? “The central limit theorem (C.L.) is an important quantity for our analytic study of the law of averages” [8], and its use in this paper is questionable. This C.L. is proved as additional reading which, if we exclude the null hypothesis on empirical infeasibility, was used in Corollary 2.1 in [8]. Here we examine its implication in connection to the sampling process. **Proof of Corollary 2.2.** Considering the log-log diagonal case (see, e.g., [17]), in what sense does the central limit theorem apply to sampling distributions? At the moment, it appears that it does and it is actually true, although we can find some preliminary results in [4, 29]. However, the reasons behind the central limit theorem for sampling distributions being weak in the sense that it doesn’t seem to apply to the zero mean white-noise control, where sampling
has been used to sample from at least Gaussian mean distributions, and it is not true quantitatively enough to capture the specific case of the zero mean oscillator. Furthermore, sampling
distributions are of no interest here because they share nothing with any quantile process, but rather from a rather wide range of distributions: the continuous distribution and the discrete
distribution, so that it is surely negligible compared to the more central limit theorem for any one of them. For a sample of even scale we might say something like, say, the joint product of two
distributions. However, if we represent it by the normalized log-log pair (to be precise, that is, with respect to the nonparametric Gaussian variance), we would have something like, $$\widetilde{\
rho}_{\tau\measuredexp\lbrack-\mathsf{\zeta}^{\tau}\rbrack} = \lbrack\rho_{\tau\measuredexp\lbrack-\mathsf{\zeta}^{\tau}\rbrack}^{- 1}\rbrack \,.$$ here $$\rho_{\tau\measuredexp\lbrack-\mathsf{\zeta}
^{\tau}\rbrack} = \frac{1}{{\sqrt{\lbrack\lbrack\lbrack\lbrack\lbrack\lbrack\lbrack\lbrack\lbrack\lbrack\rbrack\rbrack\rbrack\rbrack\rbracket}} \, }} \lbrack\How does the central limit theorem apply
to sampling distributions? A: Dimensional sampling is the technique used to sample finite dimensional distributions. So the main thing for doing this is – any continuous finite dimensional random, i.
e. function that you can write as a random variable which can be computed sequentially. Now i use the definition of the distance from you defined on standard variables: $$ d(x,y) :=\|x-y\| $$ The
distribution of $y$ is a standard function of $x$, as if you don’t write $y=\frac{\delta}{dx}$, then the distribution of $x$ is given by: $$ x=\frac{(\frac{dy}{dz})^2}{\sqrt{\frac{d}{dz}}}=\frac{\
delta}{x^2} $$ Similarly for $y$ and $z$ you have to observe that $$ y =\int\frac{\partial^{2}z}{\partial x + \partial^2 x} \ dz =\int\frac{\partial^{2}z}{\partial x + \partial^2 z} \ dz=\int\delta^
{2}\frac{(\delta^{2})}{\delta z} \ visit our website x^{2}+\partial^{2}z^{2}} \ dz $$ You can then perform the integration domain of the integration symbol to obtain $$ x+(-t)z =z+(-t)\int\delta^{2}\
frac{\partial^{2}z^{2}}{\partial x^{2}+\partial^{2}z^{2}} \ dz $$ Here or if you define $\delta$ as a zero-spread over the range of $z$, where $z \sim \delta$ and you get through $z$ you have found
$x=\delta(x)=0$. Now, if you then perform the integration of the form $$ x\to x+(-t)z +(-t)\delta(z) ^2 +(-t)\delta^2(z) ^2 +(-t)\delta^3(z) ^2 +(-t)\delta^4(z) ^2+(-t)\frac{t}{1+(t/2)(x+z)} $$ I’ll
have already written some clever work and add it on top. Anyway the answer is: $$ \|\delta(z)\|^2 +\|z\|^4 +\|x\|^2+\|z\|^3 +\int\|\frac{\partial^{2}z^{2}}{\partial xHow does the central limit
theorem apply to sampling distributions? – Evan. I believe the solution is, in order, to collect data and sample them. The problem is to do it efficiently rather than inefficiently, so I was thinking
about ways of using univariate samples to represent sample data rather than sample data that represent data with attributes. I did find something similar to this on other, related topics. I didn’t
even get that question. My reason is that I first discovered that sample data that are attribute-dependent have many attributes that do not belong as attributes in any of the attribute-dependent
sample data. For example, if attributes one and two were used to identify that sample points, one would try this website some attributes to identify that sample points that represent e.g. different
positions of a line that appeared in the sample data. The point that the point represents is an attribute that has not been assigned to a specific sample point, so the attribute-dependent sample data
might be selected for comparison to a given sample point in other sample data. With sample data, you’d have to study each sample point separately to be sure that your attribute-dependent sample data
has all the attributes you desire and attribute-dependent sample data. For example, if I compared the attribute-dependent sample data to another sample data, both attributes have no attributes among
those sample data, and so using attribute-dependent samples data could give me a sample point that I don’t want to look for to the test data and then it wouldn’t include the specified sample points
in the test data. Or if I compared the attribute-dependent sample data to another sample data, I would have to reduce the sample data for that next sample data by adding some attributes which were
not significant or not wanted within the attribute-dependent sample data in order to get those attributes which I wished to study. At all, though, this is an efficient way of doing the simple task of
detecting differences in attribute-dependent samples. I understand that the simple solution is to | {"url":"https://hiresomeonetodo.com/how-does-the-central-limit-theorem-apply-to-sampling-distributions","timestamp":"2024-11-06T17:26:41Z","content_type":"text/html","content_length":"87372","record_id":"<urn:uuid:24738282-7cef-4caf-9fa9-475366f818a1>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00234.warc.gz"} |
Mean Field-Based Dynamic Backoff Optimization for MIMO-Enabled Grant-Free NOMA in Massive IoT Networks
1 School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing, 100044, China
2 Department of Information Systems Technology and Design Pillar, Singapore University of Technology and Design, Singapore, 487372, Singapore
3 Nokia Group, Alcatel Lucent Shanghai Bell, Shanghai, 200120, China
* Corresponding Author: Hongwei Gao. Email:
Journal on Internet of Things 2024, 6, 17-41. https://doi.org/10.32604/jiot.2024.054791
Received 07 June 2024; Accepted 31 July 2024; Issue published 26 August 2024
In the 6G Internet of Things (IoT) paradigm, unprecedented challenges will be raised to provide massive connectivity, ultra-low latency, and energy efficiency for ultra-dense IoT devices. To address
these challenges, we explore the non-orthogonal multiple access (NOMA) based grant-free random access (GFRA) schemes in the cellular uplink to support massive IoT devices with high spectrum
efficiency and low access latency. In particular, we focus on optimizing the backoff strategy of each device when transmitting time-sensitive data samples to a multiple-input multiple-output (MIMO)
-enabled base station subject to energy constraints. To cope with the dynamic varied channel and the severe uplink interference due to the uncoordinated grant-free access, we formulate the
optimization problem as a multi-user non-cooperative dynamic stochastic game (MUN-DSG). To avoid dimensional disaster as the device number grows large, the optimization problem is transformed into a
mean field game (MFG), and its Nash equilibrium can be achieved by solving the corresponding Hamilton-Jacobi-Bellman (HJB) and Fokker-Planck-Kolmogorov (FPK) equations. Thus, a Mean Field-based
Dynamic Backoff (MFDB) scheme is proposed as the optimal GFRA solution for each device. Extensive simulation has been fulfilled to compare the proposed MFDB with contemporary random access approaches
like access class barring (ACB), slotted-Additive Links On-line Hawaii Area (ALOHA), and minimum backoff (MB) under both static and dynamic channels, and the results proved that MFDB can achieve the
least access delay and cumulated cost during multiple transmission frames.
In the 6G IoT paradigm, grant-free (GF) with non-orthogonal multiple access (NOMA) techniques is considered a key enabler for massive ultra-reliable and low-latency communication (mURLLC) services to
facilitate smart transportation, smart factory, smart grid, and other mission-critical applications [1–3]. GF random access allows wireless terminals to transmit their preamble and data to the base
station (BS) in one shot and avoid the four-handshake process in grant-based random access [4]. The combination of GF and NOMA simultaneously solves the problem of access delay, signaling overhead,
as well as the scarcity of orthogonal channel resources in conventional massive access schemes [5–7]. Existing NOMA schemes for GF access include power-domain NOMA (PD-NOMA), code-domain NOMA, or
interleave-based NOMA [8]. While PD-NOMA has been studied extensively in [5–7], it may introduce a long decoding delay for massive GF access devices due to the successive interference cancellation
(SIC) receiver employed to distinguish different PD-NOMA signals sequentially. By contrast, code-domain NOMA, such as Sparse Code Multiple Access (SCMA), allows multiple users to occupy the same resource block at the same time, achieving efficient use of spectrum resources, and SCMA uses the message passing algorithm (MPA) for detection [9]. MPA has low complexity and good performance. When multiple users access simultaneously, it can effectively detect and decode them, which is crucial to support large-scale IoT device access.
Meanwhile, massive multiple-input multiple-output (MIMO) antennas are expected to be equipped on all 6G BSs. By using receiver beamforming (e.g., Zero-Forcing (ZF) [10]) at the BS, GF-NOMA
transmitters can be differentiated based on their spatial characteristics, which means the access devices could be divided into multiple spatial beams (clusters) and each preamble may be reused among
multiple spatial clusters to accommodate even more access devices simultaneously [11–13].
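As a reminder of the standard combiner implied here (textbook zero-forcing, not a formula quoted from [10]–[13]): if the BS receives $\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}$ from the simultaneously transmitting beams, the ZF receive filter is

$$\mathbf{W}_{\mathrm{ZF}} = \left(\mathbf{H}^{H}\mathbf{H}\right)^{-1}\mathbf{H}^{H}, \qquad \mathbf{W}_{\mathrm{ZF}}\,\mathbf{y} = \mathbf{x} + \mathbf{W}_{\mathrm{ZF}}\,\mathbf{n},$$

which nulls the inter-beam (spatial) interference exactly, at the cost of some noise enhancement, so that NOMA detection only has to separate the devices sharing a beam.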
In this work, we investigate the optimal backoff strategy for IoT devices in MIMO-based GF-NOMA systems within the mURLLC paradigm, applicable to scenarios such as intelligent transportation,
autonomous driving, and smart factories. The proposed strategy not only effectively meets the low latency requirements of URLLC but also reduces the probability of interference between users.
Additionally, it can improve the system resource allocation efficiency, thereby enhancing the overall spectrum resource utilization. With GF-NOMA, each IoT device needs to select its access
parameters in a distributed manner, which will cause severe system interference and network access congestion when the number of active devices is large. Conventional ALOHA-like multiple access
schemes have the devices select a backoff time based on a random factor [14,15], which might be efficient for semi-static IoT services but is far from optimal under the highly dynamic environment and
the stringent delay constraint of mURLLC. Theoretically, when a large number of devices compete for limited communication resources with distributed decision-making subject to highly dynamic system
states, this problem can be formulated as a DSG, and the optimal solution can be derived by solving multiple correlated stochastic differential equations (SDEs). When the amount of devices is large,
it becomes prohibitively difficult to solve these SDEs simultaneously. In this work, we propose to employ mean field game (MFG) theory to solve the dynamic stochastic game (DSG) of massive IoT
devices in their GF SCMA processes to minimize their average backoff delay under a limited energy budget. To the best of our knowledge, this is the first work that adopts MFG theory to dynamically
optimize the backoff strategy for multi-beam MIMO based on GF-NOMA. The contributions of this study can be summarized as follows:
• A two-step GF random access scheme is proposed for MIMO BF-based cells, in which SCMA is adopted for multiple IoT devices within the same antenna beam, and ZF is employed to eliminate inter-beam
interference in the uplink.
• We formulate a backoff delay minimization problem in GF-NOMA for mURLLC services as a multiuser non-cooperative DSG, subject to the dynamic channels, energy states, and interference among NOMA
devices. In this DSG, the objective of each device is to seek the optimal dynamic backoff strategy within energy constraints to minimize the long-term backoff delay costs.
• We adopt the MFG to simplify the complex interplay between device backoff strategies. In order to obtain the optimal backoff scheme, we derive the Hamilton-Jacobi-Bellman (HJB) and Fokker-Planck-Kolmogorov (FPK) equations, which characterize the mean-field equilibrium (MFE); a generic form of this coupled system is sketched after this list. By solving these two coupled equations iteratively with the finite difference method (FDM), we obtain the optimal backoff strategy and the evolution of the system states.
• We numerically evaluate the performance of the proposed Mean Field-based Dynamic Backoff (MFDB) scheme in comparison with conventional GF schemes based on access class barring (ACB) and
slotted-Additive Links On-line Hawaii Area (ALOHA). Numerical results show that the proposed scheme can minimize the backoff delay cost and maintain a nearly constant backoff delay when the number of
devices increases rapidly.
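For readers less familiar with this machinery, the coupled system referred to in the third contribution can be written in a generic one-dimensional MFG form (this is the textbook template rather than the paper's exact equations; the cost c, drift f, and diffusion σ stand in for the backoff-delay cost and state dynamics):

$$-\partial_t v(t,x) = \min_{a}\Big\{ c(t,x,a,m) + f(t,x,a)\,\partial_x v(t,x) + \tfrac{\sigma^{2}}{2}\,\partial_{xx} v(t,x) \Big\} \qquad \text{(HJB, solved backward in time)},$$

$$\partial_t m(t,x) + \partial_x\big( f(t,x,a^{*})\, m(t,x) \big) - \tfrac{\sigma^{2}}{2}\,\partial_{xx} m(t,x) = 0 \qquad \text{(FPK, solved forward in time)},$$

where v is a device's value function, m is the mean-field distribution of device states, and a* is the minimizing backoff control from the HJB step; the FDM discretizes both PDEs on a time-state grid and alternates backward and forward sweeps until v and m stop changing, i.e., until the mean-field equilibrium is reached.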
The rest of this paper is organized as follows. The related work and contributions are introduced in Section 2. The system model is presented in Section 3, and the problem formulation is described in
Section 4. The MFG approach and the corresponding Dynamic Backoff Algorithm are proposed in Section 5. Section 6 numerically evaluates the performance of our proposal and other contemporary random
access schemes. Finally, Section 7 concludes the paper.
Combining GF-NOMA and beam-space MIMO can increase system capacity, improve spectral efficiency and reduce access delay, making it a promising solution for wireless communication systems. However,
adopting NOMA can lead to severe co-channel interference, especially in ultra-dense IoT scenarios, where interference analysis and resource allocation become challenges. To address the above issues,
the authors in [16] proposed a Random Access NOMA (RA-NOMA) transmission protocol for IoT networks that employs a timer and power backoff strategy. However, this method significantly increases energy
consumption. This poses a substantial negative impact on devices that require long-term operation and rely on battery power, thereby limiting the effectiveness and feasibility of this method in
practical applications. The authors in [17] proposed a detailed offloading protocol for the GF-SCMA enhanced MEC scheme. However, relying solely on SCMA codebooks to differentiate users in the event
of resource access conflicts is insufficient, as it results in significant resource consumption for codebooks, especially with a large number of devices. In [18], the authors proposed an optimization
method to maximize the service quality of SCMA grant-free access with multipacket reception (MPR). In the event of a collision, the user skips the current frame with the probability of collision, and
the colliding and queuing users continue to wait for the next transmission in a random time slot in another frame according to the random escape strategy. However, the random waiting time for each
user after a collision is not the optimal choice for the system, potentially causing the user equipment to wait during unnecessary periods and increasing overall delay. The above MIMO-NOMA studies
only consider a limited number of devices within the cell, primarily because an increase in the number of devices leads to higher interference and greater resource allocation complexity. Besides, no prior work has optimized the backoff delay of a massive SCMA-based GF-NOMA system while accounting for the dynamic changes of the system states under a limited device energy budget. To the best of our knowledge, this is the first work to propose a dynamic backoff scheme for SCMA-based GF-NOMA with practical MIMO settings.
For interference management and resource allocation in ultra-dense IoT systems, game theory can be employed to analyze the cooperation and competition among rational devices while developing
strategies to maximize their payoff [19]. In the existing resource allocation schemes based on the game theory, the authors of [20] proposed a power allocation framework based on cognitive radio NOMA
which optimized the utility function of each device and proved the existence of Nash equilibrium. The authors of [21] have proposed a Nash Bargaining Solution-based (NBS) game to achieve the optimal
power allocation scheme based on channel conditions in a MIMO-NOMA system while ensuring both allocation fairness and maximum transmission rate. According to these papers, when multiple devices
compete for limited communication resources in a distributed game, the dynamic optimization problem can be transformed into a DSG. However, as described by the authors in [22], the device’s DSG
process in the ultra-dense IoT scenario will generate many SDEs, resulting in the dimensional explosion problem. To overcome the issues mentioned above, the authors of [23] proposed the MFG to
transform the one-to-one interaction between devices into a more tractable interaction between the device and the mean field.
MFG was created to describe the collective behavior of a large number of interacting individuals in a system [23]. It approximates the influence of each individual on the others by an average (mean-field) effect, which makes the macroscopic behavior of the system easier to understand and analyze. Therefore, MFG has been widely
used in optimizing the performance of large scale communication systems, which involves energy efficiency [24], transmission rate [25], and transmission power [26]. The application of MFG to the NOMA
system can transform massive devices into a continuum and simulate their state distributions, thereby simplifying the complex interference into the mean field interference, which is easier to
analyze. In related studies, the authors of [27] proposed a NOMA-based resource allocation scheme for ultra-dense mobile edge computing (MEC) systems. To address this problem, the authors divided it
into two subproblems, device clustering and power allocation. They clustered the devices based on the channel gain and proposed a resource allocation algorithm using the mean-field framework. The
authors of [28] addressed the power control problem in Massive Machine Type Communication (mMTC) systems. When performing successive interference cancellation (SIC) at the receiving end, the
interference is estimated by converting the location-based interference into a more manageable mean-field interference. However, SIC requires strict power ordering, and the complexity of interference
estimation is greatly increased when multiple system states are considered simultaneously. Different from the previous mean-field-based power allocation schemes, in this paper, we investigate the
massive GF-NOMA problem in a dynamic radio environment for the 6G IoT scenario. Our approach focuses on dynamic changes in device energy and channel state with a limited energy budget based on MFG
and SCMA to minimize the backoff delay.
As shown in Fig. 1, we consider a 6G single-cell system in which a BS equipped with L antennas is located at the cell center, and N (n∈N={1,…,N}) single-antenna IoT devices are located in this circular cell following a two-dimensional spatial Poisson distribution with density ρ. Through the fixed grid of beams (GoB), the whole cell coverage area is divided into M beams [29]. Devices within the same beam are grouped into a cluster. Considering that each radio frequency (RF) chain supports at most one device on the same time-frequency resources [30], we assume that the number of RF chains adopted at the BS is equal to the number of beams, and each RF chain serves the devices within its corresponding beam. Devices within the same cluster employ SCMA and the grant-free random access protocol for data uploading. Based on the NB-IoT standard [31], all devices in the cell share the same subcarrier and adopt time division duplexing (TDD) mode. Time t∈𝒯=[0,T] is divided into
frames with equal duration Δt and the frame index is denoted by i∈ℐ={1,…,Iindex} which satisfies T=IindexΔt. Each frame is further divided into K (k∈𝒦={1,…,K}) time-slots (TSs) with duration Δτ per
TS and satisfies Δt=KΔτ. Each device is assumed to upload a status update packet periodically in each frame, and its transmission requires exactly one TS. The channel is described by a block-fading model, which remains unchanged within a frame but may vary between frames. During the packet upload process, we define the backoff delay as the time interval between the start of each frame and the data transmission TS, which can be expressed as Dn(i)∈{Δτ,2Δτ,…,KΔτ}. In grant-free random access (GFRA), each device needs to independently decide its backoff delay Dn(i) at the beginning of each frame, similar to the slotted-ALOHA protocol [32], and its transmission power pn(i,Dn(i)) is adjusted indirectly based on the backoff delay and the quality-of-service (QoS) requirement.
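To make the cell geometry concrete, the following minimal Python sketch samples devices from a two-dimensional Poisson point process in a circular cell and groups them into M angular beam clusters, as in the GoB model described above. All parameter values are illustrative placeholders, not the paper's simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative placeholders (not the paper's parameters)
cell_radius = 500.0      # cell radius in meters
rho = 1e-3               # device density (devices per m^2)
M = 8                    # number of fixed beams (GoB)

# Number of devices ~ Poisson(rho * cell area)
N = rng.poisson(rho * np.pi * cell_radius**2)

# Uniform placement in the disc (equivalent to a homogeneous PPP given N)
r = cell_radius * np.sqrt(rng.uniform(size=N))   # sqrt gives uniform area density
theta = rng.uniform(0.0, 2.0 * np.pi, size=N)

# Assign each device to the angular sector (beam) that covers it
beam_index = (theta // (2.0 * np.pi / M)).astype(int)

# Devices sharing a beam form a cluster and share SCMA resources
cluster_sizes = np.bincount(beam_index, minlength=M)
print("devices:", N, "cluster sizes per beam:", cluster_sizes)
```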
The GFRA procedure is illustrated in Fig. 2, which can be divided into two main stages: broadcasting and data transmission.
Stage I—Broadcasting: Before the beginning of each uplink transmission session within time [0, T], the BS broadcasts the pre-derived optimal MFDB policy set [33] and the statistical channel variation models to all the IoT devices in the cell, as well as the available frequency resources, the trained path loss model, the preamble configuration information, and the reference signals. The preamble configuration information, in particular, details the format that devices must follow to generate SCMA preambles. Upon receiving these broadcast messages, each device performs channel estimation, selects an access beam based on the strength of the reference signals, and selects an SCMA preamble. Besides, it predicts the channel variations over the next T time duration based on the statistical channel variation models received from the BS.
Stage II—Data transmission: The device can derive the channel state of each frame according to the initial channel state and the predicted channel evolution model. Before data transmission, each
device selects its optimal backoff delay using our proposed MFDB scheme (the optimal policy set has been derived by the BS), according to its predicted channel states and remaining energy level.
Following this backoff period, the device generates the preamble based on the configuration information received during the broadcast stage and appends it to the header of the upload packet. The
detailed workings of the MFDB scheme are elaborated in Sections 1–4.
In this work, the uplink channel gain of each IoT device is modeled with two components, namely the path-loss component and the fading component. Assuming that the devices move slowly relative to the investigated transmission period, the path loss ln remains constant during [0,T] (and is thus independent of the time index i) and can be expressed as:
where a is the path loss coefficient, and rn is the distance between the device n and the base station. The small fading component of device n in the beam cluster m is denoted as hnm(i)∈CL×1, modeled
as an Itô process [22,34], i.e.,
where αnm(i,hnm(i)) is the deterministic fading coefficient which can be predicted as described in Stage I— Broadcasting, and σnm(i)Δ𝒲(i) denotes the Wiener process that follows N(0,σnm(i)Δt) for
modeling the channel prediction uncertainty due to the small-scale fading. The initial channel value hnm(0) for all device n and beam m can be estimated from the downlink broadcast reference signal
according to the reciprocity between TDD uplink and downlink channels [35]. Based on the initial channel value and Eq. (2), the channel states of the device n at each frame can be derived.
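Since the small-scale fading is modeled as an Itô process with a deterministic drift α and a Wiener term of level σ, a device can roll the channel forward frame by frame with a simple Euler-Maruyama step. The sketch below is a toy illustration of that prediction; it uses a scalar complex channel as a stand-in for one antenna element, and the drift/volatility functions are placeholders, not the paper's trained models.

```python
import numpy as np

def simulate_channel(h0, alpha_fn, sigma_fn, dt, num_frames, rng):
    """Euler-Maruyama rollout of dh = alpha(i, h) dt + sigma(i) dW."""
    h = np.empty(num_frames + 1, dtype=complex)
    h[0] = h0
    for i in range(num_frames):
        drift = alpha_fn(i, h[i]) * dt
        # Wiener increment ~ N(0, sigma^2 * dt), split over real/imag parts
        noise = sigma_fn(i) * np.sqrt(dt) * (
            rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2)
        h[i + 1] = h[i] + drift + noise
    return h

# Placeholder drift/volatility (illustrative only)
alpha_fn = lambda i, h: -0.05 * h          # mean-reverting drift
sigma_fn = lambda i: 0.1                   # constant uncertainty level

rng = np.random.default_rng(1)
h = simulate_channel(h0=1.0 + 0.0j, alpha_fn=alpha_fn, sigma_fn=sigma_fn,
                     dt=1.0, num_frames=50, rng=rng)
print("predicted |h| over the first frames:", np.abs(h[:5]))
```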
Considering the limited battery capacity of IoT devices, the energy budget of each device within the duration T is assumed to be E0. The evolution of the energy state of device n can be expressed as:
in which En(i) is the remaining energy at the end of frame i and pn(i,Dn(i)) represents the transmission power of device n.
With beamforming, the signal received at the BS can be expressed as:
y(i,D)=\underbrace{\mathbf{w}_{nm}^{H}(i)\,\mathbf{h}_{nm}(i)\,\sqrt{l_{n}\,p_{n}(i,D)}\,S_{n}(i,D)}_{\text{desired signal}}+\underbrace{\sum_{n'\in\Phi_{m}(i,D)\setminus n}\mathbf{w}_{n'm}^{H}(i)\,\mathbf{h}_{n'm}(i)\,\sqrt{l_{n'}\,p_{n'}(i,D)}\,S_{n'}(i,D)}_{\text{intra-beam interference}}+\underbrace{\sum_{m'\in\mathcal{M}\setminus m}\;\sum_{n'\in\Phi_{m'}(i,D)}\mathbf{w}_{n'm'}^{H}(i)\,\mathbf{h}_{n'm'}(i)\,\sqrt{l_{n'}\,p_{n'}(i,D)}\,S_{n'}(i,D)}_{\text{inter-beam interference}}
where Φm(i,D) is the subset of devices in N that select backoff delay D and beam m at frame i, wnm∈C^{L×1} is the beamforming vector of cluster m, and (⋅)^H denotes the conjugate transpose. Sn(i,D) represents the transmission signal of device n, with E(|Sn(i,D)|²)=1. Moreover, n0 is the power spectral density of the white Gaussian noise. Assuming that the BS can estimate perfect uplink CSI, we employ ZF beamforming to eliminate the inter-beam interference [10]. The BF matrix satisfies Ŵ_m^H(i)=H_m^H(i)(H_m(i)H_m^H(i))^{−1}, in which H_m(i)=[h_{1m}(i),…,h_{|Φm(i)|m}(i)] is the collective channel matrix between the devices in cluster m and the BS; the BF vector is then normalized as w_{nm}^H(i)=ŵ_{nm}^H(i)/|ŵ_{nm}^H(i)|, in which ŵ_{nm}^H(i) is the n-th column of Ŵ_m^H(i).
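The ZF combining step can be illustrated with a few lines of NumPy. The sketch below is generic: it builds a ZF combiner via the Moore-Penrose pseudo-inverse (which satisfies W^H H = I for a full-column-rank channel matrix) and then normalizes each device's combining vector, using random channels as a stand-in for estimated CSI. Which interference terms are nulled in practice depends on which channels are stacked into H, per the paper's per-beam construction.

```python
import numpy as np

rng = np.random.default_rng(2)
L, U = 8, 3      # BS antennas, devices whose channels are stacked into H

# Collective channel H = [h_1, ..., h_U] in C^{L x U} (random stand-in for CSI)
H = (rng.standard_normal((L, U)) + 1j * rng.standard_normal((L, U))) / np.sqrt(2)

# ZF combiner via the pseudo-inverse: W_H @ H = I_U before normalization
W_H = np.linalg.pinv(H)                      # shape (U, L)

# Normalize each device's combining vector, w = w_hat / |w_hat|
W_H = W_H / np.linalg.norm(W_H, axis=1, keepdims=True)

# After combining, off-diagonal (cross-device) terms remain ~0, i.e. nulled
print(np.round(np.abs(W_H @ H), 3))
```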
A message passing algorithm (MPA) decoder is assumed to be employed at the BS for SCMA decoding, which allows parallel decoding of the uplink signals from different devices with different SCMA patterns on the same resource block (RB) [9]. Therefore, for a specific device signal, the SCMA signals of the other devices in the same beam and RB can be treated as interference. When device n selects backoff delay Dn(i), its signal-to-interference-plus-noise ratio (SINR) at the BS can be expressed as:
in which |⋅| denotes the Euclidean norm and B is the channel bandwidth.
It should be noted that a low received SINR leads to compromised decoding quality and diminished precoding effectiveness for the adopted ZF receiver, which in turn results in interference among devices distributed across different beams. Therefore, when determining its backoff delay Dn(i), each device needs to ensure that the SINR of its received signal at the BS is greater than the pre-defined SINR threshold γn(i), i.e.,
For notational convenience, we define H^nm(i)=|wnmH(i)hnm(i)|, which satisfies:
in which δnma(i) and δnmb(i) are sign functions satisfying δnma(i)=sgn(αnm(i,hnm(i))) and δnmb(i)=sgn(σnm(i)).
The interference In(i,Dn(i)) experienced by device n is caused by the other devices in the same cluster that happen to choose the same backoff delay, and can be represented as:
By inverting (5), the minimum required power pnreq(i,Dn(i)) is obtained as:
To minimize the energy consumption while satisfying the transmission quality constraint, we select pnreq as the transmission power and assume that the maximum transmission power of a device in each TS is pmax. When the channel condition is too poor, it may happen that pnreq>pmax; in that case, the data packet is dropped in the current frame and the transmission is resumed in the next frame. Therefore, the transmission power can be expressed as:
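In code form, the rule just described (invert the SINR constraint, cap at pmax, otherwise drop the packet) looks roughly like the sketch below. It assumes the SINR model SINR = l·Ĥ²·p/(I + noise power), consistent with the inversion above; all numerical values are placeholders, not the paper's settings.

```python
def transmit_power(gamma_th, interference, noise_power, path_loss, h_eff_sq, p_max):
    """Minimum power meeting SINR >= gamma_th, or None if the packet is dropped.

    Assumes SINR = path_loss * h_eff_sq * p / (interference + noise_power),
    hence p_req = gamma_th * (interference + noise_power) / (path_loss * h_eff_sq).
    """
    p_req = gamma_th * (interference + noise_power) / (path_loss * h_eff_sq)
    if p_req > p_max:
        return None          # channel too poor: drop packet, retry next frame
    return p_req

# Illustrative placeholder values
p = transmit_power(gamma_th=3.0, interference=1e-12, noise_power=4e-13,
                   path_loss=1e-9, h_eff_sq=0.5, p_max=0.2)
print("packet dropped" if p is None else f"transmit power: {p:.3e} W")
```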
In the investigated scenario, each device n needs to select its optimal backoff delay Dn∗={Dn∗(1),…,Dn∗(i),…,Dn∗(Iindex)} for the transmission frames i∈{1,…,Iindex} from the bounded action set Dn∗(i)∈{Δτ,2Δτ,…,KΔτ}. The backoff delay should be minimized to preserve the timeliness of its task data, subject to the long-term energy budget En(0) and the dynamic evolution of its remaining energy state En(i) and channel state hnm(i). Thus, we adopt a strictly convex cost function [36]:
To facilitate the optimization process, Dn(i) can be relaxed to a continuous space, and the obtained optimal value can be converted back to discrete value by rounding. Therefore, the optimization
problem of backoff decisions for device n can be defined as:
D_n^{*}=\arg\min_{D_n}\;\mathbb{E}\!\left[\int_{0}^{T}C_n(t)\,dt\right]
\quad\text{s.t.}\quad
\text{C1: } d\hat{H}_{nm}(t)=\delta_{nm}^{a}(i)\left|\mathbf{w}_{nm}^{H}(i)\,\alpha_{nm}(i,h_{nm}(i))\right|dt+\delta_{nm}^{b}(i)\left|\mathbf{w}_{nm}^{H}(i)\,\sigma_{nm}(i)\right|d\mathcal{W}(t),
\quad\text{C2: } dE_n(t)=-\frac{p_n(t,D_n(t))}{K}\,dt,
\quad\text{C3: } E_n(0)=E_n^{0},
\quad\text{C4: } h_{nm}(0)=h_{nm}^{0},
\quad\text{C5: } E_n(T)\ge 0. \qquad (12)
in which C1 and C2 describe the evolution of the channel gain and the remaining energy state of device n, respectively; C3 and C4 represent the initial energy and channel states, respectively. Each
device n attempts to solve its own version of the optimization problem (12) at the same time, leading to an n-player non-cooperative dynamic stochastic game (DSG). Based on the dynamic programming
theory [37], the optimal solution of (12) within duration [0,T] is to solve the Bellman running cost function in a time-reversed order, which can be defined as:
where X_n(t)=[E_n(t),\hat{H}_{nm}(t)] is the state of device n at time t, composed of the remaining energy E_n(t) and the channel state \hat{H}_{nm}(t). F(E_n(T)) is a penalty function that penalizes exhausting all the energy before time T: if E_n(T)<0, F(E_n(T)) should be an appropriately large positive value, and if E_n(T)\ge 0, F(E_n(T))\approx 0. In this work, a parametric logistic penalty function is adopted: F(E_n(T))=\dfrac{\phi}{1+e^{\rho E_n(T)}}.
Definition 1: The optimal backoff strategy D∗(t)={D1∗(t),…,Dn∗(t),…,DN∗(t)} is a Nash Equilibrium (NE) for the n-player DSG described in (12), if and only if D∗(t) is the optimal control for the
following problem:
D_n^{*}(t)=\arg\min_{D_n(t)}\;\mathbb{E}\!\left[\int_{t}^{T}C_n\big(u,D_n(u),D_{-n}^{*}\big)\,du+F\big(E_n(T)\big)\right]\qquad(15)
where D−n∗ represents the backoff strategies of all the devices except device n. Under the NE definition, none of the devices can achieve a lower cost by unilaterally deviating from its optimal backoff strategy.
Based on [38], a sufficient condition for the existence of the NE is that the running cost (value) function vn(t,Xn(t)) of each of the n devices admits a solution to its HJB equation, which can be guaranteed by the smoothness of the Hamiltonian Ham. In this optimization problem, the HJB equation and the corresponding Hamiltonian for each device are shown in Eqs. (16) and (17), respectively.
Proof: See Appendix A.
To obtain the optimal control strategy Dn(t), given that this is a convex optimization problem, we take the partial derivative of the function and set it to zero, resulting in Eq. (18):
Proof: See Appendix B.
According to the proof in Appendix B, the Hamiltonian is smooth, which implies the existence of the Nash equilibrium [39]. However, it must be noted that in Eq. (18) the interference term I(t,Dn(t)) captures the cumulative effect of the other devices' strategies D−n∗ on the optimal decision Dn∗(t) of device n, which means that n coupled partial differential equations (PDEs) need to be solved simultaneously. As n becomes large, this task becomes prohibitively difficult. To address this scaling problem, we next transform the problem into a MFG, which provides better tractability.
In this section, the MFG [40] is introduced to convert the n-player non-cooperative game into an interaction between only two bodies, namely a generic device and the mean field, such that the problem can be solved no matter how large n is. The MFE is then derived from both the HJB and FPK equations, and the corresponding Mean Field-based Dynamic Backoff (MFDB) algorithm is proposed.
5.1 Problem Reformulation with Mean Field Theory
According to mean field theory [23], a MFG model consists of a generic player who takes rational actions and a mean field representing the collective behavior of all the other (equally rational) players. When the game starts, the generic player devises a decision set for all possible states to minimize its cost, and this decision set is shared among all players. The mean field then uses its probability density function (PDF) to calculate the cumulative impact of all the other players on the generic player under this shared decision set. In response, the generic player adjusts its decisions based on the mean field's feedback, and the mean field in turn updates its impact to reflect the new decision set. This iterative process continues until a NE is achieved. Notably, because a MFG effectively operates as a two-body game, its convergence time does not increase with the number of players.
To formulate a MFG, four hypotheses need to be satisfied:
• H1—A continuum of a large number of players: We assume that a sufficiently large number of IoT devices participate in the game, so that the population can be approximated as infinite. Since the number of clusters is limited and far smaller than the number of devices, the number of devices in each cluster can also be considered infinite, so that the devices can be regarded as a player continuum.
• H2—The players' rational behavior: The devices involved in the game are assumed to behave rationally. Each device implements the optimal backoff delay at any given time, which depends exclusively on its current state Xn(t); this makes the strategies predictable for the other devices.
• H3—The interchangeability of the players: The optimal backoff strategy of each device depends only on its own state and on the interference from the other devices. Therefore, changing the order of the devices does not change their backoff decisions, and devices in the same state will select the same backoff delay. Based on this assumption, we can decide the backoff delay based on the state of the device rather than on n separate strategies.
• H4—The mean field can describe the interaction between players: For a single device n, instead of considering one-to-one interactions, we only consider the joint effect of the |Φm(t,Dn(t))|−1 other devices, namely the intra-beam interference, which consists of the weighted sum of the transmission powers of the other devices in the same cluster under the same backoff delay. Owing to the three characteristics above, we can convert this interference into the mean-field interference based on the backoff delay strategy and the distribution of the system states.
Given that the investigated system satisfies H1–H4, the DSG problem (12) can be transformed into a MFG as follows:
Definition 2: For the state space Xn(t)=[En(t),H^nm(t)], the mean field is the probability distribution of this state space at time t, where the PDF of users in any specific state is:
in which M(t,X) represents the proportion of devices in state X at frame t. 1 denotes the indicator function that returns 1 when the given condition is satisfied, otherwise it returns 0. The density
function M(t,X) will converge to the mean field density m(t,X) as the number of devices n tends to infinity which satisfies:
in which ℋ and ℰ are the set of channel gain and remaining energy of all devices, respectively. m(t,X) is a continuous PDF. The optimal backoff delay can be determined by solving the HJB equation. We
denote the proportion of devices with the same backoff delay D at frame t and the corresponding device state distribution by Λ(t,D) and G(t,D,X), respectively. As n tends to infinity, they can be
converted into λ(t,D)and g(t,D,X), which are continuous PDFs and can be deduced as:
They also satisfy the following conditions:
in which 𝒟 is the set of the backoff delay that the device can choose. Therefore, the number of the devices that select the same TS to transmit in the same cluster |Φm(t,D)| can be expressed as:
Due to the fixed device density ρ, the interference term in (8) will converge to a constant value that depends on the device density as the number of devices increases [41]. In order to describe the
interaction between devices with the mean field, we transform the interference into the mean field interference and guarantee its boundedness. That is Eq. (9) is rewritten as:
where β denotes the normalized interference factor depending on the path loss index and the device density.
The complete proof is presented in Appendix C.
Then, the interference term can be converted from (27) to:
When |Φm(t)| approaches infinity, according to H1–H4, all the devices will rationally select the optimal backoff Dn(t)=Dn∗(t) (from Eq.(18)), therefore (28) can be calculated based on the continuous
mean-field PDF in (21) and (22), such as:
From (28) and (29), it can be seen that devices transmitting with the same backoff delay will approximately suffer the same cumulative interference as the number of devices tends to infinity.
Therefore, we can ignore the device index n and establish the relationship between backoff delay and interference:
5.2 Mean Field-Based Dynamic Backoff Scheme
To this end, the N-body problem in (12) can be converted to an equivalent MFG, viewed as a two-body problem, as illustrated in Fig. 3. Then we explain how the optimal control D∗ to achieve the MFE
will be derived from the interaction between these two bodies:
First body—Generic Device: According to the HJB equation, each device can decide its optimal backoff delay based on its state. The general HJB equation is expressed in (26),
and the index n in (18) can be removed, leading to the optimal backoff policy for a generic device as follows:
Second body—Mean Field: The cumulated interference to a generic device is now sufficiently described by (30), in which the evolution of the mean field PDF can be derived as [23]:
Proof: See Appendix D.
As presented in Fig. 3, the HJB Eq. (26) is employed to derive the optimal backoff strategy (31) to be used by any device in any state (channel state, remaining energy) under the initial mean-field interference (from an arbitrary initial mean-field PDF), while the FPK Eq. (32) allows us to calculate the mean-field interference (30), given that all devices in the system rationally follow the optimal backoff strategies from the HJB. After that, the HJB recalculates the optimal control solution according to the updated mean-field interference, and the FPK then derives the new mean-field evolution based on the updated backoff control. This interactive process is repeated until the optimal control, or its corresponding value function, converges, as shown in Algorithm 1.
In the MFG, when the individual strategies (the optimal policy in (31)) and the mean field reach a stable state, in which no device can reduce its cost by unilaterally changing its strategy, the system reaches a Mean Field Equilibrium (MFE), which can be seen as the equivalent of the Nash equilibrium for the n-player DSG in (16) before the MFG is employed. At this point, each device’s strategy
is the best response to the strategies of all others. In our system, at any time t and state X, the value function v(t,X) and the mean field m(t,X) interact with each other, where the optimal value
v∗(t,X) is determined by solving the HJB equation, as described in (31), and m(t,X) is the solution to the FPK equation in (32). The optimal value v∗(t,X) determines the optimal strategy D∗(t,X) in
(31), which influences the evolution of the mean field m(t,X) via (32). This, in turn, determines the mean interference In(t) in (30), which affects v∗(t,X) through (26). Therefore, the optimal
strategy can be obtained by iteratively solving two coupled forward-backward PDEs. Since all the functions involved are smooth, the iterative algorithm is guaranteed to converge to the optimal mean
field strategy [42], thereby bringing the system into the MFE state.
The computational complexity of the n-player DSG in Section 4 and the proposed MFG-based Algorithm 1 are compared as follows:
• N-player DSG: In this model, each device must account for both its own action and the actions of all the other devices by solving (16)–(18) for the N devices simultaneously. This coupling leads to a significant increase in the action space and the computational complexity as the number of devices N grows; e.g., if the action space of each device is A-dimensional, then the total action space of the system becomes A^N.
• MFG: The MFG simplifies the interactions among the N devices by transforming the complex multi-player game into a two-player game, in which an individual interacts only with the average behavior (mean field) of all the others. The mean-field approximation thus converts the high-dimensional game among N devices into a game between an individual and the mean field of the overall system, which limits the system action space to A^2 and substantially reduces the computational complexity. Consequently, the MFG-based Algorithm 1 converges quickly, since its HJB-FPK iterations involve only two players instead of the N players of the N-body DSG [43].
In this section, we employ the FDM to solve the proposed MFDB scheme numerically, as described in Algorithm 1. Since ZF precoding eliminates the inter-beam interference, all of the following
numerical results are for the device in one spatial beam, and the devices in other beams follow the same strategy. To maintain generality and ensure consistency, the system states E can be normalized
to the interval [0, 1]. Table 1 presents the key simulation parameters employed in our work.
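Algorithm 1 can be summarized as a fixed-point iteration between a backward HJB sweep and a forward FPK sweep on a discretized (t, E, Ĥ) grid. The skeleton below mirrors only that iteration structure; the actual FDM discretizations of Eqs. (26) and (32) are left as pluggable callables (the names hjb_backward_sweep and fpk_forward_sweep are hypothetical, not from the paper), and convergence is checked on the value function, as in the text.

```python
import numpy as np

def solve_mfg(hjb_backward_sweep, fpk_forward_sweep, m0, v_terminal,
              max_iters=100, tol=1e-4):
    """Iterate HJB (backward) and FPK (forward) until the value function converges.

    hjb_backward_sweep(mean_field, v_terminal) -> (value_grid, policy_grid)
    fpk_forward_sweep(policy_grid, m0)         -> mean_field
    The two callables encapsulate the FDM discretizations of the HJB and FPK PDEs.
    """
    mean_field = m0
    v_prev = None
    policy = None
    for it in range(max_iters):
        # Best response of the generic device to the current mean field
        v, policy = hjb_backward_sweep(mean_field, v_terminal)
        # Evolution of the population state distribution under that policy
        mean_field = fpk_forward_sweep(policy, m0)
        if v_prev is not None and np.max(np.abs(v - v_prev)) < tol:
            return policy, mean_field, it          # reached the MFE (fixed point)
        v_prev = v
    return policy, mean_field, max_iters
```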
We assume a semi-static channel with a constant channel gain during the simulations in Figs. 4 and 5. Fig. 4 depicts the optimal backoff decisions Dn∗(t,E) for each device with a constant channel gain Hc=Ĥ0²⋅l=3×10−3, which reveals that the backoff delay for a specific frame decreases as the remaining energy increases. Moreover, when the remaining energy is fixed, the IoT devices can adopt a lower backoff delay as the frame index gets closer to the final one. This is because devices with sufficient energy worry less about running out of energy and incurring penalties before the transmission deadline. Fig. 5 shows the evolution of the optimal mean-field distribution mn∗(t,E), where the initial mean field m(0,E) is uniformly distributed over all energy states under the constant channel gain Hc. The figure reveals that most devices either just use up their energy or still have some energy left by the end of the transmission duration. Only a few devices experience penalties because their initial energy is insufficient to complete the data transmission, leading to an early exhaustion of energy.
As in Eq. (7), the dynamic channel evolution is modeled as a stochastic differential equation with the uncertainty coefficient σ. In Fig. 6, we evaluate the impact of channel uncertainty on the
backoff delay of MFDB by considering the following:
• h1: The certain channel with σ=0.
• h2: The low unpredictable channel with σ=0.1.
• h3: The medium unpredictable channel with σ=1.
• h4: The high unpredictable channel with σ=10.
All the above channel scenarios have the same deterministic part, i.e.,
where Hc=3×10−3, A=2×10−3, f0=0.4, θ=2. It can be observed in Fig. 6a that, as the channel uncertainty σ gets larger, the realized channel quality deviates more from the predicted channel evolution. In Fig. 6b, we depict the effect of channel uncertainty on the backoff delay in the MFDB strategy. It can be seen that, compared with the fully predictable channel h1, the higher the channel uncertainty, the higher the backoff delay selected by the MFDB strategy. This is because, when the channel becomes highly unpredictable,
the device cannot judge whether the remaining energy can support the data transmission. In this case, the device may not accurately estimate the energy required for data transmission at a certain
moment, which will increase the risk of transmission failure and waste precious energy resources. This strategy helps to maintain the availability of the device in the long term and avoid the device
stopping working due to energy exhaustion.
6.3 Comparison with Other Backoff Schemes
In this subsection, we compare the performance of the MFDB scheme with other backoff schemes, which are:
• ACB: The BS generates an ACB factor b0 in each frame and broadcasts it to the device. Then, the device generates a random number b∈[0,1] before sending the data and compares it with the ACB factor.
If b>b0, the device transmits data with a fixed transmission power using SCMA. The base station determines whether the decoding is successful according to the received SINR of a specific device. Then
the base station sends a feedback ACK or NACK signal (“ACK” for success while “NACK” for failure) to inform the device. If receiving a NACK, the device will randomly backoff for 1 to 3 TSs and resend
the data packet.
• Slotted-ALOHA: The device randomly selects a backoff delay in each frame to transmit data with a fixed transmission power using SCMA. The base station determines whether the decoding is successful
according to the received SINR and sends a feedback signal with ACK or NACK. If the decoding fails, the device will randomly backoff for 1 to 3 TSs and retransmit the data.
• Minimum backoff (MB): In this baseline scheme, the device always transmits SCMA data in the first TS of each frame. The interference is determined by the powers and channel states of all devices, which are pre-computed by the BS and broadcast to all devices in each frame [28]. The device decides its transmission power based on this interference level.
To evaluate the backoff delay of the above scheme, we consider the following channel scenarios:
• Constant channel (CC): The channel gain is consistently h0.
• Dynamic channel (DC): According to (7), the DC is modeled as two parts, where the deterministic part follows (36) with different parameter f0=20 and variance σ=0.1.
The normalized energy budget is set to E(0)=0.7 in the following simulations, and the number of simulated devices is 1000. As shown in Fig. 7, for the ACB and slotted-ALOHA schemes, the backoff delay is the average
result of all the devices due to the random backoff. For the MFDB scheme, since all devices follow the same backoff strategy, the figure depicts the expected result for a generic device. Fig. 7
reveals that whether it is under CC or DC, the MB scheme cannot complete the data transmission in all frames. This is because when all the devices are transmitting with the minimum backoff, high
transmitting power will be required to overcome the severe interference among devices. Therefore, the remaining energy in this scheme is used up before the end of the transmission, regardless of the
channel condition. For the MFDB scheme, the backoff delay remains relatively constant under CC, even if the device’s energy decreases throughout the frame evolution. This is due to the fact that the
device is able to predict the continuous decrease of the other devices’ energy with the mean field. And the device is able to dynamically adjust its backoff delay according to the changing channel
under DC.
Moreover, it can be seen from the figure that the MFDB significantly outperforms the ACB and the slotted-ALOHA scheme in terms of backoff delay. That is because the MFDB scheme can dynamically adjust
the backoff in each frame according to its current channel gain and remaining energy. Thus, the MFDB scheme can avoid the case that a large number of devices access the same TS resulting in high
decoding failure probability at the BS and extra delay due to data re-transmissions.
Fig. 8 depicts the cumulated delay cost (CDC) for the four evaluated strategies. According to (11), CDC(t) is defined as CDC(t)=∑i=1tDn2(i). It can be observed that the CDC of the MFDB scheme
demonstrates a significantly slower increase compared to the ACB and slotted-ALOHA schemes, regardless of whether the channel condition is CC or DC. However, because the MB scheme always transmits in the first TS of each frame, the remaining energy is used up early. Once the remaining energy is exhausted, the device can no longer transmit, and the corresponding CDC is denoted as INF.
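For reference, the cumulated delay cost used in Fig. 8 is simply the running sum of squared per-frame backoff delays, as defined above; a direct implementation of the definition is:

```python
import numpy as np

def cumulated_delay_cost(backoff_delays):
    """CDC(t) = sum_{i<=t} D_n(i)^2 for a sequence of per-frame backoff delays."""
    d = np.asarray(backoff_delays, dtype=float)
    return np.cumsum(d**2)

print(cumulated_delay_cost([1.0, 2.0, 1.0, 3.0]))   # -> [ 1.  5.  6. 15.]
```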
Fig. 9 illustrates the average backoff delay vs. the number of devices in the beam for the different backoff strategies under different channel conditions. It can be observed that, regardless of the channel condition, the device with the MFDB strategy always maintains the lowest backoff delay, which shows little growth and is almost independent of the number of devices. When the number of devices is
less than 900, the backoff delay of ACB and slotted-ALOHA tends to be stable, and the backoff delay of ACB is slightly higher than that of slotted-ALOHA. This is because the average number of
transmitting devices per slot in this case is less than the threshold for the number of devices that can be successfully decoded. Moreover, the random factor judgment of the ACB strategy will
increase the backoff delay. When the number of devices is between 900 and 1300, the backoff delay of slotted-ALOHA rapidly exceeds that of ACB. This is due to the increased probability of decoding
failure in this case, and the random factor judgment of ACB can adjust the number of access devices to reduce the probability of decoding failure. When the number of devices exceeds 1300, in this
case, the random factor of ACB also fails to alleviate the decoding failure but increases the backoff delay.
In this work, we investigate the optimal dynamic backoff mechanism for massive random access within a 6G ultra-dense IoT system. Considering a 6G cell employing GF-NOMA and multi-beam MIMO, we design
a clustering scheme based on GoB and an access signaling process based on GFRA. A MFDB scheme is proposed for each cluster to minimize the long-term cost of backoff delay of a generic device.
Numerical results validate that the proposed MFDB can proactively adjust the backoff delay and transmission power according to the predicted channel gain and energy level evolution subject to the
specified energy constraints. Compared with three other GFRA schemes, namely ACB, slotted-ALOHA, and MB, the proposed MFDB mechanism can significantly reduce the average access delay and maintain a
nearly constant backoff delay level even as the number of active devices reaches 2000 on a single subcarrier per cell.
In future work, we intend to set up a real-world experimental environment to implement the proposed MFDB scheme and evaluate its validity. We will also consider additional evaluation metrics, such as energy efficiency, to assess the performance of the proposed method. After that, the proposed MFG approach needs to be extended to multi-cell and multi-channel cellular systems with combined backoff delay, frequency resource, and NOMA preamble selections.
Acknowledgement: This work was supported by the National Natural Science Foundation of China.
Funding Statement: This work was supported by the National Natural Science Foundation of China under Grant 62371036, supported authors Haibo Wang, Hongwei Gao and Pai Jiang. Website: https://
www.nsfc.gov.cn/english/site_1/index.html (accessed on 25 July 2024).
Author Contributions: Haibo Wang provided the problem formulation, proposed the idea of the Mean-field Game-based backoff scheme, and revised the JIOT manuscript many times; Hongwei Gao contributed the major part of the writing of the journal paper and most of the mathematical derivations and simulations. Pai Jiang wrote the related conference paper and performed the basic simulations for the conference paper. During the writing and submission of the JIOT paper, she had graduated from her master's program and could not make further contributions to the journal paper. Matthieu De Mari co-supervised both Pai Jiang and Hongwei Gao in deriving the MFG solutions. Panzer Gu gave guidance on how to adopt the MFG backoff strategy in MIMO-enabled cellular IoT systems and how to design the grant-free access procedure. Yinsheng Liu gave guidance on how to design the MIMO channel model and express it in partial differential equations. All authors reviewed the results and approved the final version of the manuscript.
Availability of Data and Materials: The data that support the findings of this study are available from the corresponding author, Hongwei Gao, upon reasonable request.
Ethics Approval: Not applicable.
Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
1. P. Jiang, H. Wang, and M. De Mari, “Optimal dynamic backoff for grant-free NOMA IoT networks: A mean field game approach,” in 2022 IEEE/CIC Int. Conf. Commun. China (ICCC), Sanshui, Foshan, China,
2022, pp. 997–1002. [Google Scholar]
2. M. Dohler and S. J. Johnson, “Massive non-orthogonal multiple access for cellular IoT: Potentials and limitations,” IEEE Commun. Mag., vol. 55, no. 9, pp. 55–61, Sep. 2017. doi: 10.1109/
MCOM.2017.1600618. [Google Scholar] [CrossRef]
3. Y. Liu, Y. Deng, M. Elkashlan, A. Nallanathan, and G. K. Karagiannidis, “Optimization of grant-free NOMA with multiple configured-grants for mURLLC,” IEEE J. Sel. Areas Commun., vol. 40, no. 4,
pp. 1222–1236, Apr. 2022. doi: 10.1109/JSAC.2022.3143264. [Google Scholar] [CrossRef]
4. J. Choi, J. Ding, N. -P. Le, and Z. Ding, “Grant-free random access in machine-type communication: approaches and challenges,” IEEE Wirel. Commun., vol. 29, no. 1, pp. 151–158, Feb. 2022. doi:
10.1109/MWC.121.2100135. [Google Scholar] [CrossRef]
5. J. Zhang, X. Tao, H. Wu, N. Zhang, and X. Zhang, “Deep reinforcement learning for throughput improvement of the uplink grant-free NOMA system,” IEEE Internet Things J., vol. 7, no. 7, pp.
6369–6379, Jul. 2020. doi: 10.1109/JIOT.2020.2972274. [Google Scholar] [CrossRef]
6. M. Fayaz, W. Yi, Y. Liu, and A. Nallanathan, “Transmit power pool design for grant-free NOMA-IoT networks via deep reinforcement learning,” IEEE Trans. Wirel. Commun., vol. 20, no. 11, pp.
7626–7641, Nov. 2021. doi: 10.1109/TWC.2021.3086762. [Google Scholar] [CrossRef]
7. J. Liu, G. Wu, X. Zhang, S. Fang, and S. Li, “Modeling, analysis, and optimization of grant-free NOMA in massive MTC via stochastic geometry,” IEEE Internet Things J., vol. 8, no. 6, pp.
4389–4402, Mar. 15, 2021. doi: 10.1109/JIOT.2020.3027158. [Google Scholar] [CrossRef]
8. B. Wang, K. Wang, Z. Lu, T. Xie, and J. Quan, “Comparison study of non-orthogonal multiple access schemes for 5G,” in 2015 IEEE Int. Symp. Broadb. Multimed. Syst. Broadcast., Ghent, Belgium, 2015,
pp. 1–5. [Google Scholar]
9. W. Yuan, N. Wu, Q. Guo, Y. Li, C. Xing and J. Kuang, “Iterative receivers for downlink MIMO-SCMA: Message passing and distributed cooperative detection,” IEEE Trans. Wirel. Commun., vol. 17, no. 5
, pp. 3444–3458, May 2018. doi: 10.1109/TWC.2018.2813378. [Google Scholar] [CrossRef]
10. A. Almradi, P. Xiao, and K. A. Hamdi, “Hop-by-Hop ZF beamforming for MIMO full-duplex relaying with co-channel interference,” IEEE Trans. Commun., vol. 66, no. 12, pp. 6135–6149, Dec. 2018. doi:
10.1109/TCOMM.2018.2863723. [Google Scholar] [CrossRef]
11. W. A. Al-Hussaibi and F. H. Ali, “Efficient user clustering, receive antenna selection, and power allocation algorithms for massive MIMO-NOMA systems,” IEEE Access, vol. 7, pp. 31865–31882, 2019.
[Google Scholar]
12. S. Gong, C. Xing, V. K. N. Lau, S. Chen, and L. Hanzo, “Majorization-minimization aided hybrid transceivers for MIMO interference channels,” IEEE Trans. Signal Process., vol. 68, pp. 4903–4918,
2020. [Google Scholar]
13. X. Ge, W. Shen, C. Xing, L. Zhao, and J. An, “Training beam design for channel estimation in hybrid mmWave MIMO systems,” IEEE Trans. Wirel. Commun., vol. 21, no. 9, pp. 7121–7134, Sep. 2022.
doi: 10.1109/TWC.2022.3155157. [Google Scholar] [CrossRef]
14. S. Duan, V. Shah-Mansouri, Z. Wang, and V. W. S. Wong, “D-ACB: Adaptive congestion control algorithm for bursty M2M traffic in LTE networks,” IEEE Trans. Veh. Technol., vol. 65, no. 12, pp.
9847–9861, Dec. 2016. doi: 10.1109/TVT.2016.2527601. [Google Scholar] [CrossRef]
15. T. Tao, F. Han, and Y. Liu, “Enhanced LBT algorithm for LTE-LAA in unlicensed band,” in 2015 IEEE 26th Annu. Int. Symp. Per., Indoor, Mob. Radio Commun. (PIMRC), 2015, pp. 1907–1911. [Google Scholar]
16. M. R. Amini, A. Al-Habashna, G. Wainer, and G. Boudreau, “Performance analysis of random access NOMA for critical mIoT with timer-power back-off strategy,” IEEE Trans. Veh. Technol., vol. 72, no.
8, pp. 10754–10769, Aug. 2023. doi: 10.1109/TVT.2023.3257107. [Google Scholar] [CrossRef]
17. P. Liu, K. An, J. Lei, Y. Sun, W. Liu and S. Chatzinotas, “Grant-free SCMA enhanced mobile edge computing: Protocol design and performance analysis,” IEEE Internet Things J., vol. 11, no. 15, pp.
25895–25909, 2024. doi: 10.1109/JIOT.2024.3386593. [Google Scholar] [CrossRef]
18. L. Wang, J. Xu, T. Qi, X. Jiang, J. Cui and B. Zheng, “An optimization method to maximize the service quality of SCMA grant-free access with MPR,” in 2021 13th Int. Conf. Wirel. Commun. Signal
Process. (WCSP), Changsha, China, 2021, pp. 1–5. doi: 10.1109/WCSP52459.2021.9613275. [Google Scholar] [CrossRef]
19. M. J. Osborne, An Introduction to Game Theory. New York: Oxford University Press, 2004. [Google Scholar]
20. S. S. Abidrabbu and H. Arslan, “Energy-efficient resource allocation for 5G cognitive radio NOMA using game theory,” in 2021 IEEE Wirel. Commun. Netw. Conf. (WCNC), 2021, pp. 1–5. [Google Scholar]
21. M. Fadhil, A. H. Kelechi, R. Nordin, N. F. Abdullah, and M. Ismail, “Game theory-based power allocation strategy for NOMA in 5G cooperative beamforming,” Wirel. Pers. Commun., vol. 122, no. 2,
pp. 1101–1128, 2022. doi: 10.1007/s11277-021-08941-y. [Google Scholar] [CrossRef]
22. R. Zheng, H. Wang, M. De Mari, M. Cui, X. Chu and T. Q. S. Quek, “Dynamic computation offloading in ultra-dense networks based on mean field games,” IEEE Trans. Wirel. Commun., vol. 20, no. 10,
pp. 6551–6565, Oct. 2021. doi: 10.1109/TWC.2021.3075028. [Google Scholar] [CrossRef]
23. J. M. Lasry and P. L. Lions, “Mean field games,” Jpn. J. Math., vol. 2, no. 1, pp. 229–260, Mar. 2007. doi: 10.1007/s11537-007-0657-8. [Google Scholar] [CrossRef]
24. H. Gao et al., “Energy-efficient velocity control for massive numbers of UAVs: A mean field game approach,” IEEE Trans. Veh. Technol., vol. 71, no. 6, pp. 6266–6278, Jun. 2022. doi: 10.1109/
TVT.2022.3158896. [Google Scholar] [CrossRef]
25. T. Li et al., “A mean field game-theoretic cross-layer optimization for multi-hop swarm UAV communications,” J. Commun. Netw., vol. 24, no. 1, pp. 68–82, Feb. 2022. doi: 10.23919/JCN.2021.000035.
[Google Scholar] [CrossRef]
26. M. De Mari, E. Calvanese Strinati, M. Debbah, and T. Q. S. Quek, “Joint stochastic geometry and mean field game optimization for energy-efficient proactive scheduling in ultra dense networks,”
IEEE Trans. Cogn. Commun. Netw., vol. 3, no. 4, pp. 766–781, Dec. 2017. doi: 10.1109/TCCN.2017.2761381. [Google Scholar] [CrossRef]
27. A. Benamor, O. Habachi, I. Kammoun, and J. -P. Cances, “Mean field game-theoretic framework for distributed power control in hybrid NOMA,” IEEE Trans. Wirel. Commun., vol. 21, no. 12, pp.
10502–10514, Dec. 2022. doi: 10.1109/TWC.2022.3184623. [Google Scholar] [CrossRef]
28. L. Li et al., “Resource allocation for NOMA-MEC systems in ultra-dense networks: A learning aided mean-field game approach,” IEEE Trans. Wirel. Commun., vol. 20, no. 3, pp. 1487–1500, Mar. 2021.
doi: 10.1109/TWC.2020.3033843. [Google Scholar] [CrossRef]
29. R. S. Ganesan, W. Zirwas, B. Panzner, K. I. Pedersen, and K. Valkealahti, “Integrating 3D channel model and grid of beams for 5G mMIMO system level simulations,” in 2016 IEEE 84th Veh. Technol.
Conf. (VTC-Fall), 2016, pp. 1–6. [Google Scholar]
30. B. Wang, L. Dai, Z. Wang, N. Ge, and S. Zhou, “Spectrum and energy-efficient beamspace MIMO-NOMA for millimeter-wave communications using lens antenna array,” IEEE J. Sel. Areas Commun., vol. 35,
no. 10, pp. 2370–2382, Oct. 2017. doi: 10.1109/JSAC.2017.2725878. [Google Scholar] [CrossRef]
31. 3GPP TS 36.211 V15.5.0, “Evolved universal terrestrial radio access (E-UTRA); Physical channels and modulation (Release 15),” Mar. 2019. Accessed: Jul. 25, 2024. [Online]. Available: https://
portal.3gpp.org/desktopmodules/Specifications/SpecificationDetails.aspx?specificationId=2425 [Google Scholar]
32. H. Chen, Y. Gu, and S. -C. Liew, “Age-of-information dependent random access for massive IoT networks,” in IEEE INFOCOM 2020—IEEE Conf. Comput. Commun. Workshops (INFOCOM WKSHPS), Toronto, ON,
Canada, 2020, pp. 930–935. [Google Scholar]
33. H. Khan, M. M. Butt, S. Samarakoon, P. Sehier, and M. Bennis, “Deep learning assisted CSI estimation for joint URLLC and eMBB resource allocation,” in 2020 IEEE Int. Conf. Commun. Workshops (ICC
Workshops), Dublin, Ireland, 2020, pp. 1–6. [Google Scholar]
34. M. M. Olama, S. M. Djouadi, and C. D. Charalambous, “Stochastic power control for time-varying long-term fading wireless networks,” EURASIP J. Adv. Signal Process., vol. 2006, no. 1, 2006, Art.
no. 089864. doi: 10.1155/ASP/2006/89864. [Google Scholar] [CrossRef]
35. F. Tang, Y. Zhou, and N. Kato, “Deep reinforcement learning for dynamic uplink/downlink resource allocation in high mobility 5G HetNet,” IEEE J. Sel. Areas Commun., vol. 38, no. 12, pp.
2773–2782, Dec. 2020. doi: 10.1109/JSAC.2020.3005495. [Google Scholar] [CrossRef]
36. S. Lasaulce and H. Tembine, Game Theory and Learning foR Wireless Networks: Fundamentals and Applications. Oxford, Waltham, MA: Academic Press, 2011. Accessed: Jul. 25, 2024. [Online]. Available:
https://www.researchgate.net/publication/278768710_Game_Theory_and_Learning_for_Wireless_Networks_Fundamentals_and_Applications [Google Scholar]
37. R. Bellman, “Dynamic programming and stochastic control processes,” Inf. Control, vol. 1, no. 3, pp. 228–239, 1958. doi: 10.1016/S0019-9958(58)80003-0. [Google Scholar] [CrossRef]
38. Y. Jiang, Y. Hu, M. Bennis, F. Zheng, and X. You, “A mean field game-based distributed edge caching in fog radio access networks,” IEEE Trans. Commun., vol. 68, no. 3, pp. 1567–1580, Mar. 2020.
doi: 10.1109/TCOMM.2019.2961081. [Google Scholar] [CrossRef]
39. T. Başar and G. J. Olsder, Dynamic Noncooperative Game Theory. Philadelphia, PA: Society for Industrial and Applied Mathematics, 1999. [Google Scholar]
40. X. Ge, H. Jia, Y. Zhong, Y. Xiao, Y. Li and B. Vucetic, “Energy efficient optimization of wireless-powered 5G full duplex cellular networks: A mean field game approach,” IEEE Trans. Green Commun.
Netw., vol. 3, no. 2, pp. 455–467, Jun. 2019. doi: 10.1109/TGCN.2019.2904093. [Google Scholar] [CrossRef]
41. B. Blaszczyszyn, M. Jovanovic, and M. K. Karray, “Performance laws of large heterogeneous cellular networks,” in 2015 13th Int. Symp. Model. Optim. Mo., Ad Hoc, Wirel. Netw. (WiOpt), May 2015,
pp. 597–604. [Google Scholar]
42. M. De Mari and T. Quek, “Energy-efficient proactive scheduling in ultra dense networks,” in 2017 IEEE Int. Conf. Commun. (ICC), Paris, 2017, pp. 1–6. [Google Scholar]
43. M. Burger and J. M. Schulte, Adjoint Methods for Hamilton-Jacobi-Bellman Equations. Münster, Germany: Universität Münster, 2010. [Google Scholar]
As vn(t,xn(t)) is the value function of cost Cn(t) at the state Xn(t), according to Bellman’s principle of optimality, increasing time t to t+dt, leads to:
By performing Taylor’s expansion on vn(t,Xn(t)), we get:
Then, by substituting (35) into (34), subtracting vn(t,Xn(t)) from both sides of the equation, dividing both sides by dt, and letting dt approach zero (so that the o(dt) term becomes negligible), (34) can be written as:
Since Xn(t)=[En(t),Ĥnm(t)] and Cn(t)=Dn²(t), we obtain the HJB equation.
From (40), the optimal backoff delay Dn∗(t) can be derived as:
D_n^{*}(t)=\arg\min_{D_n(t)}\Bigg[-\frac{\gamma_0}{K\,l_n\,\hat{H}_{nm}^{2}(t)}\Big[I_n\big(t,D_n(t)\big)+\big|\mathbf{w}_{nm}^{H}(t)\,n_0\big|^{2}B\Big]\frac{\partial v^{*}\big(t,X_n(t)\big)}{\partial E_n(t)}+\delta_{nm}^{a}(t)\big|\mathbf{w}_{nm}^{H}(t)\,\alpha\big(t,h_{nm}(t)\big)\big|\frac{\partial v^{*}\big(t,X_n(t)\big)}{\partial \hat{H}_{nm}(t)}+\frac{\delta_{nm}^{b}(t)\big|\mathbf{w}_{nm}^{H}(t)\,\sigma_{h,i}\big|^{2}}{2}\frac{\partial^{2} v^{*}\big(t,X_n(t)\big)}{\partial \hat{H}_{nm}^{2}(t)}+D_n^{2}(t)\Bigg]\qquad(37)
For the first derivative of the Hamiltonian with respect to Dn(t):
Taking the derivative of the interference term, we can obtain:
in which |Φm(t,Dn(t))| represents the number of devices in cluster m whose backoff delay is Dn(t). It has no explicit mathematical relationship with the backoff delay, but when the MFG is used to solve the problem, |Φm(t,Dn(t))| can be obtained from the mean-field density, which is differentiable. Hence the partial derivative of the interference term with respect to Dn(t) exists, and the Hamiltonian is smooth. The minimum of the Hamiltonian over Dn(t) is attained where its first-order partial derivative with respect to Dn(t) equals zero, i.e.,
Therefore, the backoff delay can be derived as (18).
The interference in (8) can be transformed into:
in which
in which rm is the radius of cluster m. Since the number of other devices in each cluster can be estimated from the cluster area and the device density ρ, which satisfy |Φm(t)|−1=ρ⋅πrm², the interference can be derived as:
where β=ρπ(1+2a−2−rm2−a).
Let y(X) be a smooth and compactly supported test function; then it can be deduced that:
By taking the partial derivative of t on both sides of the equation and applying the chain rule of derivation, we can get:
When n tends to infinity, (45) converts to:
Applying integration by parts on (46), convert it to:
When assuming y(X)=1, (47) can be converted to:
Since p(t,D^{*}(t),X)=\dfrac{\gamma}{l\,\hat{H}^{2}(t)}\big(I(t,D^{*}(t))+N_0 B\big), and p(t,D^{*}(t),X) is not affected by E for a given D_n(t), the chain rule of differentiation gives:
Since \dfrac{\partial^{2} v^{*}(t,X_n(t))}{\partial E^{2}}=0 and \dfrac{\partial p(t,D_n(t),X)}{\partial E}=0, the final form of the FPK equation can be derived as (32).
This work is licensed under a Creative Commons Attribution 4.0 International License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. | {"url":"https://www.techscience.com/jiot/v6n1/57757/html","timestamp":"2024-11-07T23:51:17Z","content_type":"application/xhtml+xml","content_length":"330233","record_id":"<urn:uuid:522f1eee-7e01-421d-9ab0-e4288ae7f2ba>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00597.warc.gz"}
Tutorial hydroelectric turbine
Design: propeller turbine, screw, aerial, marine, turbine, tidal, wind, kaplan, foil, wings, 3D. Discover heliciel software: Modeling turbine kaplan in heliciel guide vanes draft tube turbine
Tutorial example small hydropower plant design 3/3: sections, design draft tube
hydropower turbine propeller:
Tutorial hydroelectric turbine design 3/3 :Choice sections, design draft tube:
Reminder of the rules guiding the choice of sections:
We have seen that there are two basic rules guiding the choice of sections: "minimize pressure losses" and "minimize the cost of parts by concentrating energy"; these two rules respectively lead to an increase and a decrease of the dimensions.
We have seen that the energy losses are important if:
• The fluid velocity is high and the roughness is important
• Speed variation due to the change of section is important and sudden
• the change of direction is important and sudden
and that these energy losses will therefore be minimal if:
• the speed is low and surfaces are smooth
• the speed variation due to a change in section is gradual and low
• the change of direction is low and progressive
Adduction Area
The adduction area leads the flow to the guide vanes (stator). Its sections should be as large as possible, to avoid losses and the surface vortices created by the acceleration of the fluid. In installations such as Kaplan turbines, this zone ends in a volute, which directs the fluid tangentially into the guide vanes so as to reduce the deflection angle, and thus the pressure drops, caused by the passage through the vanes. The role of this area is to reduce the section as gradually as possible down to the section of the "precious" (expensive) components, in order to concentrate the energy and reduce manufacturing costs. The losses in converging cones are less sensitive to the cone angle than in diverging cones, and remain reasonable even for large angles. For example, the pressure-loss coefficient of a 30-degree convergent cone is about 0.1, whereas a divergent cone of the same angle has a coefficient of about 2.5! This partly explains why the adduction area can accept more abrupt section reductions than the draft tube. The calculation of the sections of this area presents no difficulty: we will use Mecaflux to assess and validate the sections of this part with regard to the pressure losses.
We will evaluate, for example, the pressure loss of a convergent cone going from 3 meters to 1.9 meters at 30 degrees, for a flow rate of 8 m³/s:
We get a head loss of 25 mm for the inlet cone.
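As a rough cross-check of that figure, the singular head loss of a fitting is commonly estimated as h = K·v²/(2g). The sketch below uses the 0.1 loss coefficient quoted above and a reference velocity taken as the mean of the inlet and outlet velocities; the exact value Mecaflux reports depends on which reference velocity and friction terms it includes, so treat this only as an order-of-magnitude check.

```python
import math

Q = 8.0                      # flow rate, m^3/s
d_in, d_out = 3.0, 1.9       # cone inlet / outlet dimensions, m (assumed diameters)
K = 0.1                      # loss coefficient for a ~30 degree convergent cone
g = 9.81                     # gravity, m/s^2

v_in = Q / (math.pi * d_in**2 / 4)      # ~1.13 m/s
v_out = Q / (math.pi * d_out**2 / 4)    # ~2.82 m/s

# Singular head loss referenced to the mean of inlet and outlet velocities
v_ref = 0.5 * (v_in + v_out)
head_loss = K * v_ref**2 / (2 * g)
print(f"estimated head loss ~ {head_loss * 1000:.0f} mm of water")
```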
Guide vanes and turbine area
We calculated the loss of the stator (guide vanes) in the previous chapter on the calculation of the stator or guide vanes: we obtained 0.7 meter of pressure loss in the guide vanes (stator). At the beginning of that chapter we calculated our propeller with HELICIEL, introducing a tangential flow of 4.4 rad/s. In the results area, we have access to the pressure difference generated between the upstream and downstream sides of the propeller. This pressure difference gives us the "pressure loss" generated by the turbine (delta pressure upstream/downstream of the turbine). With the pressure value measured in HELICIEL, 20339 Pa (remember for simplicity that 1 bar ≈ 10 meters of water, so 0.2 bar ≈ 2 meters of head loss), we can estimate the head required to cross the turbine at the calculated operating point: 2 meters.
Total pressure loss consumed by the guide vanes and turbine area = 0.7 + 2 = 2.7 meters
The gross head available is 4 meters, so we could generate more tangential flow with our distributor (this is useful information for the optimization and adjustment of the distributor vanes).
• Accompanying the change in velocity induced
Before defining the diffuser (draft tube), we must take into account the relationship between section, speed and flow rate for an incompressible fluid. We have a volume flow of 12 m³/s passing through our system. Because our fluid is incompressible, the inlet flow rate is identical to the outlet flow rate, and the section is related to the speed by the law:
□ axial velocity (m/s) = flow rate (m³/s) / section (m²)
To avoid uncontrolled speed changes, which translate into losses, it is important to check the sections against the desired or estimated speed in the different parts of the system.
In the results area of the HELICIEL software, select the tab "fluid velocities" to read the axial velocity downstream (propeller outlet). The relation axial velocity (m/s) = flow rate (m³/s) / section (m²) therefore gives the section that our duct should have at the turbine outlet to accompany the speed change caused by the turbine without additional disturbance:
Section output turbine:
□ axial velocity(m/sec) = rate of flow (m3/sec) / section (m²)
□ => section (m²) = flow rate (m³/s) / axial velocity (m/s)
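Applied to the numbers of this tutorial (Q = 12 m³/s and an axial outlet velocity of about 2 m/s, as read in HELICIEL), this relation gives the duct section required right at the turbine outlet; the short sketch below simply evaluates it.

```python
import math

Q = 12.0          # volume flow rate through the system, m^3/s
v_axial = 2.0     # axial velocity at the turbine outlet (from HELICIEL), m/s

section = Q / v_axial                        # required section, m^2
diameter = 2 * math.sqrt(section / math.pi)  # equivalent circular duct diameter
print(f"outlet section = {section:.1f} m^2, i.e. ~{diameter:.2f} m diameter")
```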
area diffuser, draft tube:
We saw in the chapter on turbine design that the kinetic energy cannot be fully captured by a propeller, since this would completely stop the flow at the outlet and require an infinite volume of fluid storage at the turbine outlet. It is, however, possible to enlarge the sections so as to approach a very low speed. About 40% of the kinetic energy of the axial velocity at the turbine outlet is at stake: this velocity energy can be recovered and converted into a local depression at the turbine outlet by means of a gradual section variation.
draft tube
If a vacuum is generated at the turbine outlet, this will result (according to Bernoulli's theorem) in an increase of the flow rate through the turbine, generating a power gain. So we will create a diverging conical draft tube, which will locate a depression at the turbine outlet by varying its section (and thus the fluid velocity) between its inlet and its outlet. If this speed change is too abrupt, we will create a loss of energy through turbulence and the draft tube will not be very effective. The optimum cone angle generating the speed variation with minimal losses is approximately 6°, but for reasons of economy of material and space, draft tube cones are usually about 12 degrees.
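To see what these cone angles imply geometrically, here is a small sketch of a simple conical diffuser; the diameters are derived from the 12 m³/sec flow and the 2 m/sec to 0.5 m/sec speed change used below, so they are illustrative values, not dimensions taken from the text:

```python
import math

def cone_length(d_in_m, d_out_m, full_angle_deg):
    """Length of a conical diffuser expanding from d_in to d_out
    with the given full cone angle."""
    half_angle = math.radians(full_angle_deg / 2.0)
    return (d_out_m - d_in_m) / (2.0 * math.tan(half_angle))

def diameter_for(q_m3s, v_ms):
    """Diameter of a circular section carrying flow q at velocity v."""
    area = q_m3s / v_ms
    return math.sqrt(4.0 * area / math.pi)

d_in = diameter_for(12.0, 2.0)    # ~2.76 m at the draft tube inlet
d_out = diameter_for(12.0, 0.5)   # ~5.53 m at the draft tube outlet
print(cone_length(d_in, d_out, 6.0))    # ~26 m for the low-loss 6 degree cone
print(cone_length(d_in, d_out, 12.0))   # ~13 m for the more compact 12 degree cone
```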
Tip: The calculation of the depression created by the draft tube can be done with the Bernoulli application of the Mecaflux software:
Enter Data:
• Density: water 1000 kg/m3
• Point 1:
□ Altitude: we will not consider altitude here and set it to 0.
□ Pressure at draft tube inlet (point 1): check "result" because this is the value we are looking for
□ Draft tube inlet speed (turbine outlet): the speed at the turbine outlet is found in the results field of the Heliciel software. We will take an average value of 2 m/sec
• Point 2:
□ Altitude: set 0.
□ Pressure at draft tube outlet (point 2): this is the outlet into the open air in the basin, so 100000 Pa (atmospheric pressure)
□ Draft tube outlet speed (point 2): the outlet speed is estimated from the flow of our system (12 m³/s) divided by the maximum section we can achieve, taking into account cost constraints and dimensions while complying with a minimum cone angle (usually around 10 to 12 degrees)
We estimated here that the draft tube can realize a speed variation from 2 m/sec to 0.5 m/sec. This generates a pressure at the inlet of the draft tube of 98125 Pa, i.e. a depression of 98125 − 100000 = −1875 pascals:
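That 98125 Pa value follows directly from Bernoulli's equation between the draft tube inlet (point 1) and outlet (point 2) at equal altitude, p1 = p2 + ρ(v2² − v1²)/2. A minimal sketch reproducing it, with the same assumptions as the Mecaflux data entry above:

```python
RHO_WATER = 1000.0   # kg/m^3

def inlet_pressure(p_outlet_pa, v_inlet_ms, v_outlet_ms, rho=RHO_WATER):
    """Bernoulli between two points at equal altitude:
    p1 + rho*v1^2/2 = p2 + rho*v2^2/2  ->  p1 = p2 + rho*(v2^2 - v1^2)/2."""
    return p_outlet_pa + rho * (v_outlet_ms**2 - v_inlet_ms**2) / 2.0

p1 = inlet_pressure(100000.0, 2.0, 0.5)
print(p1)            # -> 98125.0 Pa at the draft tube inlet
print(p1 - 100000)   # -> -1875.0 Pa of depression relative to atmosphere
```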
With Heliciel we resume our propeller to evaluate the gain in power caused by the depression located downstream of the propeller: we use the parameter menu / simulate depression located downstream, and enter the value of 1875 pascals:
We launch a search for Optimum speed to rebuild a propeller at the optimum operating point in these new conditions:
The optimum speed and power are increased by the suction parameter: the depression is used by Heliciel to increase the through-flow speed according to Bernoulli. This allows a quick, provisional assessment of the power gain, but a new study with the flow modified by this depression should be made to adjust the actual efficiency...
We conclude this small tutorial here, but it is obvious that we could further optimize our system, including increasing the tangential flow introduced by the distributor, so that our plant is perfectly suited to the site.
12.4 Photon Momentum
Learning Objectives
By the end of this section, you will be able to do the following:
• Relate the linear momentum of a photon to its energy or wavelength, and apply linear momentum conservation to simple processes involving the emission, absorption, or reflection of photons
• Account qualitatively for the increase of photon wavelength that is observed, and explain the significance of the Compton wavelength
The information presented in this section supports the following AP® learning objectives and science practices:
• 5.D.1.6 The student is able to make predictions of the dynamical properties of a system undergoing a collision by application of the principle of linear momentum conservation and the principle of
the conservation of energy in situations in which an elastic collision may also be assumed. (S.P. 6.4)
• 5.D.1.7 The student is able to classify a given collision situation as elastic or inelastic, justify the selection of conservation of linear momentum and restoration of kinetic energy as the
appropriate principles for analyzing an elastic collision, solve for missing variables, and calculate their values. (S.P. 2.1, 2.2)
Measuring Photon Momentum
The quantum of EM radiation we call a photon has properties analogous to those of particles we can see, such as grains of sand. A photon interacts as a unit in collisions or when absorbed, rather
than as an extensive wave. Massive quanta, like electrons, also act like macroscopic particles—something we expect, because they are the smallest units of matter. Particles carry momentum as well as
energy. Despite photons having no mass, there has long been evidence that EM radiation carries momentum. In fact, Maxwell and others who studied EM waves predicted that they would carry momentum. It
is now a well-established fact that photons do have momentum. In fact, photon momentum is suggested by the photoelectric effect, where photons knock electrons out of a substance. Figure 12.17 shows
macroscopic evidence of photon momentum.
Figure 12.17 shows a comet with two prominent tails. What most people do not know about the tails is that they always point away from the sun rather than trailing behind the comet. Comet tails are
composed of gases and dust evaporated from the body of the comet and ionized gas. The dust particles recoil away from the sun when photons scatter from them. Evidently, photons carry momentum in the
direction of their motion, away from the sun, and some of this momentum is transferred to dust particles in collisions. Gas atoms and molecules in the blue tail are most affected by other particles
of radiation, such as protons and electrons emanating from the sun, rather than by the momentum of photons.
Connections: Conservation of Momentum
Not only is momentum conserved in all realms of physics, but all types of particles are found to have momentum. We expect particles with mass to have momentum, but now we see that massless particles
including photons also carry momentum.
Momentum is conserved in quantum mechanics just as it is in relativity and classical physics. Some of the earliest direct experimental evidence of this came from scattering of X-ray photons by
electrons in substances, named Compton scattering after the American physicist, Arthur H. Compton (1892–1962). Around 1923, Compton observed that X-rays scattered from materials had a decreased
energy and correctly analyzed this as being due to the scattering of photons from electrons. This phenomenon could be handled as a collision between two particles—a photon and an electron at rest in
the material. Energy and momentum are conserved in the collision. (See Figure 12.18) He won a Nobel Prize in 1929 for the discovery of this scattering, now called the Compton effect, because it
helped prove that photon momentum is given by
12.22 $p = \frac{h}{\lambda},$
where $h$ is Planck’s constant and $\lambda$ is the photon wavelength. Note that relativistic momentum given as $p = \gamma m u$ is valid only for particles having mass.
We can see that photon momentum is small, since $p = h/\lambda$ and $h$ is very small. It is for this reason that we do not ordinarily observe photon momentum. Our mirrors do not recoil when light reflects from them, except perhaps in cartoons. Compton saw the effects of photon momentum because he was observing X-rays, which have a small wavelength and a relatively large momentum, interacting with the lightest of particles, the electron.
Example 12.5 Electron and Photon Momentum Compared
(a) Calculate the momentum of a visible photon that has a wavelength of 500 nm. (b) Find the velocity of an electron having the same momentum. (c) What is the energy of the electron, and how does it
compare with the energy of the photon?
Finding the photon momentum is a straightforward application of its definition: $p = \frac{h}{\lambda}$. If we find the photon momentum is small, then we can assume that an electron with the same momentum will be nonrelativistic, making it easy to find its velocity and kinetic energy from the classical formulas.
Solution for (a)
Photon momentum is given by the equation
12.23 $p = \frac{h}{\lambda}.$
Entering the given photon wavelength yields
12.24 $p = \frac{6.63 \times 10^{-34}\ \text{J} \cdot \text{s}}{500 \times 10^{-9}\ \text{m}} = 1.33 \times 10^{-27}\ \text{kg} \cdot \text{m/s}.$
Solution for (b)
Since this momentum is indeed small, we will use the classical expression $p = mv$ to find the velocity of an electron with this momentum. Solving for $v$ and using the known value for the mass of an electron gives
12.25 $v = \frac{p}{m} = \frac{1.33 \times 10^{-27}\ \text{kg} \cdot \text{m/s}}{9.11 \times 10^{-31}\ \text{kg}} \approx 1{,}460\ \text{m/s}.$
Solution for (c)
The electron has kinetic energy, which is classically given by
12.26 $KE_e = \frac{1}{2} m v^2.$
12.27 $KE_e = \frac{1}{2}\,(9.11 \times 10^{-31}\ \text{kg})(1{,}455\ \text{m/s})^2 = 9.64 \times 10^{-25}\ \text{J}.$
Converting this to eV by multiplying by $(1\ \text{eV})/(1.602 \times 10^{-19}\ \text{J})$ yields
12.28 $KE_e = 6.02 \times 10^{-6}\ \text{eV}.$
The photon energy $E$ is
12.29 $E = \frac{hc}{\lambda} = \frac{1{,}240\ \text{eV} \cdot \text{nm}}{500\ \text{nm}} = 2.48\ \text{eV},$
which is about five orders of magnitude greater.
Photon momentum is indeed small. Even if we have huge numbers of them, the total momentum they carry is small. An electron with the same momentum has a 1,460 m/s velocity, which is clearly
nonrelativistic. A more massive particle with the same momentum would have an even smaller velocity. This is borne out by the fact that it takes far less energy to give an electron the same momentum
as a photon. But on a quantum-mechanical scale, especially for high-energy photons interacting with small masses, photon momentum is significant. Even on a large scale, photon momentum can have an
effect if there are enough of them and if there is nothing to prevent the slow recoil of matter. Comet tails are one example, but there are also proposals to build space sails made of aluminized
polyester resin that use huge low-mass mirrors to reflect sunlight. In the vacuum of space, the mirrors would gradually recoil and could actually take spacecraft from place to place in the solar
system. (See Figure 12.19.)
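For readers who want to reproduce the comparison in Example 12.5, here is a short numerical check (an illustrative script, not part of the original text; constants as quoted in the example, so the last digits differ slightly by rounding):

```python
h = 6.63e-34          # Planck's constant, J*s
m_e = 9.11e-31        # electron mass, kg
eV = 1.602e-19        # joules per electron volt
hc_eV_nm = 1240.0     # hc in eV*nm

wavelength = 500e-9                      # m
p = h / wavelength                       # photon momentum, kg*m/s
v = p / m_e                              # nonrelativistic electron speed, m/s
ke_e = 0.5 * m_e * v**2                  # electron kinetic energy, J
e_photon = hc_eV_nm / 500.0              # photon energy, eV

print(p)             # ~1.33e-27 kg*m/s
print(v)             # ~1.46e3 m/s
print(ke_e / eV)     # ~6.0e-6 eV
print(e_photon)      # 2.48 eV, about five orders of magnitude larger
```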
Relativistic Photon Momentum
There is a relationship between photon momentum $p$ and photon energy $E$ that is consistent with the relation given previously for the relativistic total energy of a particle as $E^2 = (pc)^2 + (mc^2)^2$. We know $m$ is zero for a photon, but $p$ is not, so that $E^2 = (pc)^2 + (mc^2)^2$ becomes
12.30 $E = pc,$
or
12.31 $p = \frac{E}{c}\ (\text{photons}).$
To check the validity of this relation, note that $E = hc/\lambda$ for a photon. Substituting this into $p = E/c$ yields
12.32 $p = \frac{hc/\lambda}{c} = \frac{h}{\lambda},$
as determined experimentally and discussed above. Thus, $p = E/c$ is equivalent to Compton’s result $p = h/\lambda$. For a further verification of the relationship between photon energy and momentum, see Example 12.6.
relationship between photon energy and momentum, see Example 12.6.
Photon Detectors
Almost all detection systems talked about thus far—eyes, photographic plates, photomultiplier tubes in microscopes, and CCD cameras—rely on particle-like properties of photons interacting with a
sensitive area. A change is caused and either the change is cascaded or zillions of points are recorded to form an image we detect. These detectors are used in biomedical imaging systems, and there
is ongoing research into improving the efficiency of receiving photons, particularly by cooling detection systems and reducing thermal effects.
Example 12.6 Photon Energy and Momentum
Show that $p = E/c$ for the photon considered in Example 12.5.
We will take the energy $E$ found in Example 12.5, divide it by the speed of light, and see if the same momentum is obtained as before.
Given that the energy of the photon is 2.48 eV and converting this to joules, we get
12.33 $p = \frac{E}{c} = \frac{(2.48\ \text{eV})(1.60 \times 10^{-19}\ \text{J/eV})}{3.00 \times 10^{8}\ \text{m/s}} = 1.33 \times 10^{-27}\ \text{kg} \cdot \text{m/s}.$
This value for momentum is the same as found before (note that unrounded values are used in all calculations to avoid even small rounding errors), an expected verification of the relationship $p = E/c$. This also means the relationship between energy, momentum, and mass given by $E^2 = (pc)^2 + (mc^2)^2$ applies to both matter and photons. Once again, note that $p$ is not zero, even when $m$ is.
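The same verification of $p = E/c$ can be done in a couple of lines (again an illustrative check using the constants quoted above):

```python
c = 3.00e8            # speed of light, m/s
eV = 1.60e-19         # joules per electron volt

E = 2.48 * eV         # photon energy from Example 12.5, in joules
p = E / c             # photon momentum
print(p)              # ~1.3e-27 kg*m/s, agreeing with p = h/lambda up to rounding
```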
Problem-Solving Suggestion
Note that the forms of the constants $h = 4.14 \times 10^{-15}\ \text{eV} \cdot \text{s}$ and $hc = 1240\ \text{eV} \cdot \text{nm}$ may be particularly useful for this section’s Problems and Exercises.
What is Group Theory?
2/27/2014
If you're studying undergrad math, physics or chemistry, chances are you've heard of this thing called a "Group" that is studied in "Group Theory". What is it and why is it so
important? I'll explain in a simple way.
The most familiar group is our number system; we have a bunch of numbers, and we have this thing called "$+$" which can take two numbers and churn out another number. A group is just a
set of objects, and you have one "operation" (usually called "$\ast$" or "$+$", but those are just names) that tells you how to combine objects to produce other objects in that set,
subject to a few rules. Why devise such a "strange chasing game" in a set of objects? That's because such systems are omnipresent in a stupendous array of phenomena:
• Most number systems (whole numbers, fractions, real numbers, complex numbers...)
• "Weird" number systems that can have strange rules like $1 + 1 = 0$ are used in binary computation, and are crucial in encrypting information for secrecy. Similarly, the "clock
algebra" has $2359 + 0001 = 0000$.
Clock algebra: the numbers "reset" after advancing past a certain limit, like 24 hours. (Image by Martin Pettitt)
• Transformation groups - transformations of space, like a rotation $A$ or a reflection $B$ etc, can be combined into new transformations by defining $B \ast A$ to mean "do $A$ then
do $B$". This is absolutely fundamental in physics, from the physical laws of "our space" to investigating the laws of "new spaces", like Quantum Mechanics and Relativity were; and
in chemistry, from studying crystals to calculating molecular orbitals. Check "Noether's Theorem" for more.
Ice is made of water molecules arranged in a regular lattice. Its symmetry can be characterized by the transformations of space that leave the lattice unchanged. (Image generated by a
program I wrote)
• The Rubik's Cube Group - every possible configuration is a combination of the basic moves (read the moves from right to left). In the figure below,
1. $\mathrm{X1} = \mathsf{NoMove}$
2. $\mathrm{X2} = \mathsf{BackTwist}$
3. $\mathrm{X3} = \mathsf{TopTwist} \ast\mathsf{BackTwist}$ (read from right to left, so Back Twist then Top Twist)
4. $\mathrm{X4} = \mathsf{LeftTwist} \ast\mathsf{TopTwist} \ast\mathsf{BackTwist}$
I know that each twist can go in two opposing directions, but you get the point.
Each configuration of the Rubik's Cube can be described by the sequence of moves used to reach it. (Image by Tom Davis)
• Pretty much every "algebraic system" is built on special kinds of groups - e.g. groups of vectors, groups of matrices, groups of polynomials (for instance, $\mathsf{Quadratic} + \mathsf{Cubic}$ is again a polynomial)...
• Many, many, many more...
This exemplifies one power of Mathematics, where a myriad of disparate phenomena and concepts can have a single simple abstract underpinning. Group Theory is the study of that
underpinning, thus its results and implications reach far and wide.
Technical Note
When I said "subject to a few rules", I was referring to the following rules that hold for any group with object set $G$ and operation $\ast$:
1. Associativity. Any objects $a$, $b$ and $c$ in $G$ must satisfy $(a \ast b) \ast c = a \ast (b \ast c)$. You can verify that this works in the above examples—another "strange"
abstract underpinning!
2. Identity. $G$ must contain a special object $e$ called the "identity" so that $e \ast a = a = a \ast e$ for every object $a$ in $G$. When adding numbers, $e = 0$ so $0 + a = a = a + 0$. For the Rubik's Cube, $e = \mathsf{NoMove}$ doesn't change the cube, and performing $\mathsf{TopTwist}$ before or after a $\mathsf{NoMove}$ is the same as doing just the $\mathsf{TopTwist}$.
3. Inverse. Each object $a$ in $G$ must correspond to an "inverse" $b$ so that $a \ast b = e = b \ast a$. When adding numbers, the inverse of $a$ is $-a$ so that $a + (-a) = 0 = (-a) + a$. For the Rubik's Cube, the inverse of each twist is the same twist but in the opposite direction; $\mathsf{TopTwist} \ast \mathsf{TopOppositeTwist} = \mathsf{NoMove} = \mathsf{TopOppositeTwist} \ast \mathsf{TopTwist}$.
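A tiny illustrative sketch (Python) that checks these three rules for the 24-hour clock algebra from the list above — addition of minutes modulo 1440, with the HHMM formatting used only for display:

```python
MINUTES_PER_DAY = 24 * 60   # the "clock algebra" is addition modulo 1440

def add(a, b):
    return (a + b) % MINUTES_PER_DAY

def hhmm(minutes):
    return f"{minutes // 60:02d}{minutes % 60:02d}"

# 2359 + 0001 = 0000, as in the example above
print(hhmm(add(23 * 60 + 59, 1)))          # -> '0000'

# Rule 1: associativity (spot-checked on a few triples)
assert all(add(add(a, b), c) == add(a, add(b, c))
           for a in (5, 700, 1400) for b in (59, 720) for c in (1, 1439))

# Rule 2: identity element e = 0
assert all(add(0, a) == a == add(a, 0) for a in range(MINUTES_PER_DAY))

# Rule 3: every element has an inverse, namely (1440 - a) % 1440
assert all(add(a, (MINUTES_PER_DAY - a) % MINUTES_PER_DAY) == 0
           for a in range(MINUTES_PER_DAY))
```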
Semiconductor memory device
In a DRAM, an external cycle count circuit detects an operation cycle of a signal RAS which is externally inputted, and a signal expressing the result is outputted to a CBR signal generating circuit
and a self refresh signal generating circuit. In response to outputs from the respective signal generating circuits, an internal RAS signal generating circuit outputs a refresh instruction signal
INRAS for CBR refresh and self refresh. For self refresh, the longer the operation cycle of the signal RAS immediately before self refresh begins, the longer the refresh cycle is set. For CBR refresh, when the operation cycle of the signal RAS is long, a CBR refresh instruction signal is generated in accordance with only a part of the operations of the signal RAS. By reducing the frequency of refresh, consumption power is reduced. By means of control which considers a parameter influencing the internal temperature of a semiconductor memory device such as a DRAM, consumption power is reduced and the operation speed is improved.
The present invention relates to a semiconductor memory device having a refresh function, such as a DRAM, and more particularly, to a reduction in electric power which is expended during a refresh operation.
Conventionally, dynamic type semiconductor memory devices such as a DRAM have various types of refresh functions, considering that there is a limit to a holding time for holding stored data. For
example, there are a RAS only refresh function for performing a refresh operation by externally inputting a refresh row address and a control signal RAS (row address strobe signal), a CAS before RAS
auto-refresh (CBR refresh) function which requires to input two types of control signals RAS and CAS (column address strobe signal) and generate a refresh address within a semiconductor memory
device, a self refresh function in which a semiconductor memory devices itself generates a control signal and a refresh address which are needed for a refresh operation asynchronously to an
externally inputted signal, etc.
Now, a self refresh function for a conventional DRAM disclosed in Japanese Laid-Open Patent Application Gazette No. 1-13292 will be briefly described.
FIG. 6 is a block circuitry diagram showing a part of a conventional semiconductor memory device which performs a self refresh function, and FIG. 7 is a signal timing chart of various parts of the
conventional semiconductor memory device of FIG. 6. Disposed within a semiconductor memory device 1 are a self refresh control circuit 3, an oscillating circuit 4, a frequency-divider circuit 5, an
external RAS input control circuit 6, an internal RAS generation control circuit 7, an internal address counter control circuit 8, an internal address counter circuit 9, an NOR circuit 16, and a NAND
circuit 17. The semiconductor memory device 1 includes other circuits in addition to those shown in the drawings. Denoted at .PHI.OSC is a signal at a point A, denoted at .PHI.OSCD is a signal at a
point B, denoted at RAS0 is a signal at a point C, denoted at RAS1 is a signal at a point D, and denoted at IntRAS is a signal at a point E.
The signals flow in the circuit of FIG. 6 in the following manner. A signal RAS is supplied to the self refresh control circuit 3 and the external RAS input control circuit 6, a signal CAS is
supplied to the self refresh control circuit 3, the internal address counter control circuit 8 and the NOR circuit 16. The oscillating circuit 4 generates a signal .PHI.OSC, in response to an output
signal from the self refresh control circuit 3. The signal .PHI.OSC is supplied to the frequency-divider circuit 5. The frequency-divider circuit 5 divides the signal .PHI.OSC to generate a signal
.PHI.OSCD. The signal .PHI.OSCD is supplied to the internal RAS generation control circuit 7, while another signal is supplied from the frequency-divider circuit 5 to the external RAS input control circuit 6. A signal RAS0 which is generated by the external RAS input control circuit 6 and a signal RAS1 which is generated by the internal RAS generation control circuit 7 are supplied to the NAND circuit 17. The NAND circuit 17 generates a signal IntRAS. The signal IntRAS is supplied to the internal address counter control circuit 8. Meanwhile, the NAND circuit 17 supplies another signal as an internal signal RAS. Further, a signal which is generated by the internal address counter control circuit 8 is supplied to the internal address counter circuit 9 and the NOR circuit 16. The NOR circuit 16 generates an internal CAS signal.
FIG. 7 is a timing chart showing an example of the timing at which the respective signals above operate. A certain time t0 after the CAS signal changes to the logic voltage "L," the RAS signal changes to the logic voltage "L." Following this, after a certain time, the IntRAS signal is generated as a signal which is asynchronous to the external signal. Using the IntRAS signal and an internal address from the internal address counter, self refresh is operated sequentially.
In such a conventional semiconductor memory device having a self refresh function, a signal .PHI.OSCD is outputted in a constant cycle T from the frequency-divider circuit 5, the signal RAS1 is
outputted from the internal RAS generation control circuit 7 in response to this signal, and further, a refresh operation is performed in response to the signal IntRAS, i.e., a internal RAS signal,
which is outputted from the NOR circuit 17. Thus, since the signal .PHI.OSCD which is generated by the frequency-divider circuit 5 has a constant cycle, a self refresh cycle for a self refresh
operation is set constant, regardless of a normal operation condition before the self refresh operation.
By the way, when an operation cycle for normal operations is short, i.e., in a high-speed operation condition, since a consumption current during operations increases, an internal temperature of the
device increases. Due to a characteristic of a memory cell which utilizes a capacitance, as the internal temperature of the device increases, a data holding time becomes shorter. Hence, in the
conventional semiconductor memory device having a self-refresh function, the self refresh cycle is set within a predetermined range which is known by experience not to cause data to be lost. This cycle corresponds to the data holding time under the condition in which the internal temperature of the device becomes highest, i.e., when the normal operations preceding the self refresh operation are high-speed operations. Because of this, in the conventional semiconductor memory device, self refresh is performed in a short cycle even when the normal operation is not performed at a high speed and the data holding time is sufficiently long, which results in an unnecessary consumption current during self refresh.
Beyond this example, in conventional semiconductor memory devices, since parameters which influence the temperature characteristic are not sufficiently considered, there is considerable waste in consumption power, operation speed, etc.
The present invention has been made noting such points. Accordingly, an object of the present invention is to grasp a parameter which influences a temperature characteristic and to operate a
semiconductor memory device in accordance with the parameter, to thereby improve the performance of the semiconductor memory device, including a reduction in consumption power and an improvement in
an operation speed.
A first semiconductor memory device according to the present invention comprises: a memory part; a control part for controlling writing of data in the memory part, reading of data from the memory
part, erasure of data, holding of data and the like in response to a signal which is externally inputted; and operation cycle detecting means for detecting an operation cycle of the signal which is
externally inputted.
When the first semiconductor memory device is a DRAM, the control part is structured so as to perform a RAS only refresh operation in response to a signal RAS which is externally inputted, and the
operation cycle detecting means is structured so as to detect an operation cycle of the signal RAS during the RAS only refresh operation.
When the first semiconductor memory device is a DRAM, the control part is structured so as to perform a CAS before RAS auto-refresh (CBR refresh) operation in response to a signal RAS which is
externally inputted, and the operation cycle detecting means is structured so as to detect an operation cycle of the signal RAS during the CAS before RAS auto-refresh operation.
One of the parameters influencing the internal temperature of a semiconductor memory device is the frequency of operations performed by the control part within the device. Since the operation cycle detecting means disposed within such a structure detects the operation cycle of the externally inputted signal which instructs the control part to operate, the semiconductor memory device is controlled in consideration of the temperature dependence thereof.
A second semiconductor memory device which functions as a DRAM according to the present invention comprises: a memory part; a control part for controlling writing of data in the memory part, reading
of data from the memory part, holding of data and the like in accordance with a signal which is externally inputted; and self refresh means for performing refresh asynchronously to the signal which
is externally inputted, wherein the self refresh means gradually extends each self refresh cycle after a certain period of time during a self refresh operation.
This allows a cycle of a self refresh instruction signal to become gradually longer after a certain period of time during the self refresh operation. On the other hand, while the internal temperature
of the device increases during the refresh operation, as the self refresh cycle becomes longer, the internal temperature of the device gradually decreases to thereby extend a data holding time.
Hence, by gradually extending the self refresh cycle, it is possible to reduce consumption power while maintaining a data holding function.
A third semiconductor memory device which functions as a DRAM according to the present invention comprises: a memory part; a control part for controlling writing of data in the memory part, reading
of data from the memory part, holding of data and the like in response to a signal which is externally inputted; operation cycle detecting means for detecting an operation cycle of the signal which
is externally inputted; and self refresh means for performing refresh asynchronously to the signal which is externally inputted, wherein the self refresh means selects one of the plurality of self
refresh cycles in accordance with an operation cycle of the externally inputted signal which is detected by the operation cycle detecting means during a self refresh operation.
In the third semiconductor memory device, the self refresh means is structured to select a longer self refresh cycle as the operation cycle of the signal which is externally inputted is longer.
This allows to change the self refresh cycle in accordance with the operation cycle of the externally inputted signal which influences the internal temperature of the semiconductor memory device, so
that consumption power during the self refresh operation is reduced by means of simple and quick controlling.
A fourth semiconductor memory device which functions as a DRAM according to the present invention comprises: a memory part; a control part for controlling writing of data in the memory part, reading
of data from the memory part, holding of data and the like in response to a signal which is externally inputted; operation cycle detecting means for detecting an operation cycle of the signal which
is externally inputted; and self refresh means for performing refresh asynchronously to the signal which is externally inputted, wherein the self refresh means selects one of the plurality of self
refresh cycles in accordance with an operation cycle of the externally inputted signal which is detected by the operation cycle detecting means at the beginning of a self refresh operation, and
gradually extends each self refresh cycle after a certain period of time during the self refresh operation.
In such a structure, it is possible to largely reduce consumption power.
A fifth semiconductor memory device which functions as a DRAM according to the present invention comprises: a memory part; a control part for controlling writing of data in the memory part, reading
of data from the memory part, holding of data and the like in response to a signal which is externally inputted; operation cycle detecting means for detecting an operation cycle of the signal which
is externally inputted; and CBR refresh means for performing CBR refresh in a basic cycle which is determined in accordance with an operation cycle of the externally inputted signal, wherein in
response to the specific operation cycle of the externally inputted signal which is detected by the operation cycle detecting means during a CBR refresh operation, the CBR refresh means performs the
CBR refresh operation in a cycle which is obtained by changing the basic cycle.
In the fifth semiconductor memory device, the CBR refresh means can be structured so as to change a cycle for performing the CBR refresh operation more largely than a change in the operation cycle
which is detected by the operation cycle detecting means.
In such a structure, when the operation cycle of the externally inputted signal which influences the internal temperature of the semiconductor memory device changes, the cycle of the CBR refresh
operation which is determined in accordance with the operation cycle of the externally inputted signal is reduced more largely than a change in the operation cycle of the externally inputted signal,
for instance. Hence, it is possible to control the CBR refresh operation considering a change in the internal temperature of the semiconductor memory device, and to reduce consumption power during
CBR refresh.
A sixth semiconductor memory device according to the present invention comprises: a memory part; a control part for controlling writing of data in the memory part, reading of data from the memory
part, erasure of data, holding of data and the like in response to a signal which is externally inputted; operation cycle detecting means for detecting an operation cycle of the signal which is
externally inputted; refresh means for performing refresh for holding data which are stored in the memory part; and instruction signal generating means for generating a refresh instruction signal
which operates the refresh means, wherein the instruction signal generating means changes a cycle of the refresh instruction signal so that a frequency of refresh becomes smaller as an operation
cycle of the externally inputted signal which is detected by the operation cycle detecting means becomes longer.
In such a structure, the frequency of the refresh operation changes depending on the operation cycle of the externally inputted signal, so that the frequency of the refresh operation becomes smaller
as the operation cycle of the externally inputted signal becomes longer. In general, a semiconductor memory device has a characteristic that an internal temperature of the semiconductor memory device
becomes low when the device operates at a low speed, and a data holding time of the semiconductor memory device becomes longer when the internal temperature of the device is low. Hence, even if the
frequency of the refresh operation is reduced in a low-speed operation condition with a long operation cycle to extend a refresh cycle, data are not lost. Therefore, by reducing the frequency of the
refresh operation during operations at a low speed, consumption power is reduced. Since a reduction in consumption power prevents the internal temperature of the device from increasing, the data
holding time becomes even longer. Thus, by reducing the frequency of the refresh operation, it is possible to reduce consumption power while maintaining a data holding function.
FIG. 1 is a block diagram showing a structure of a part of a semiconductor memory device which performs a self refresh function and a CBR self refresh function according to a preferred embodiment;
FIG. 2 is a timing chart of each signal as it is when an operation cycle of a signal RAS is short in the semiconductor memory device according to the preferred embodiment;
FIG. 3 is a timing chart of each signal as it is when the operation cycle of the signal RAS is long in the semiconductor memory device according to the preferred embodiment;
FIG. 4 is a timing chart showing the details of a method of controlling each signal during CBR refresh in the semiconductor memory device according to the preferred embodiment;
FIG. 5 is a characteristic diagram showing a relationship between an operation cycle, an internal temperature and a data holding time of a semiconductor memory device;
FIG. 6 is a block diagram showing a part of a conventional semiconductor memory device which performs a self refresh function; and
FIG. 7 is a timing chart of each signal in the conventional semiconductor memory device.
In the following, a preferred embodiment of the present invention will be described with reference to the drawings.
FIG. 1 is a block diagram showing a structure of a part of a semiconductor memory device which performs a self refresh function and a CBR self refresh function.
As shown in FIG. 1, a mode detection circuit 110 for detecting an operation mode includes a RAS only refresh and normal read/write detection circuit 111 (abbreviated as "RAS only Ref. Normal R/W
DETECT CKT" in FIG. 1), a CBR refresh detection circuit 112 (abbreviated as "CBR Ref DETECT CKT" in FIG. 1), and a self refresh detection circuit 113 (abbreviated as "Self Ref. DETECT CKT" in FIG.
1). On the output side of the mode detection circuit 110, there are disposed a first and a second internal timers 114 and 118, an external cycle count circuit 115, a CBR refresh signal generating
circuit 116 (abbreviated as "CBR Ref. SIGNAL GEN CKT" in FIG. 1), a self refresh signal generating circuit 117 (abbreviated as "Self Ref. SIGNAL GEN CKT" in FIG. 1), two frequency-divider circuits
119 and 120, and an internal RAS signal generating circuit 121. The symbols RAS, CAS, MNORM, MCBR, MSELF, TMR11, TMR21-23, NORMPROC0-2, CBRPROC0-2, CCBR, CSELF, and INRAS denote signals.
FIGS. 2 and 3 are timing charts of the respective signals RAS, CAS, MNORM, MCBR, MSELF, TMR11, NORMPROC0-2, CBRPROC0-2, and INRAS. Processes in which the respective signals are generated, cycles and
the like of the respective signals will be described later. First, only relationships between inputting and outputting of each signal within each circuit will be described.
In the circuit shown in FIG. 1, the signal RAS and the signal CAS are externally inputted to the RAS only refresh and normal read/write detection circuit 111, the CBR refresh detection circuit 112
and the self refresh detection circuit 113 which are disposed within the mode detection circuit 110. Further, the RAS only refresh and normal read/write detection circuit 111 generates the signal
MNORM. The signal MNORM is supplied to the internal timer 114, the external cycle count circuit 115, and the internal RAS signal generating circuit 121. The CBR refresh detection circuit 112
generates the signal MCBR. The signal MCBR is supplied to the internal timer 114, the external cycle count circuit 115, and the CBR refresh signal generating circuit 116. The self refresh detection
circuit 113 generates the signal MSELF. The signal MSELF is supplied to the self refresh signal generating circuit 117 and the second internal timer 118.
Next, the first internal timer 114 generates the signal TMR11 in response to the two signals MNORM and MCBR. This signal is supplied to the external cycle count circuit 115. The external cycle count
circuit 115, in response to the three signals TMR11, MNORM and MCBR, generates 3-bit signals NORMPROC0-2 and CBRPROC0-2 each expressing the speed of the signal RAS which is externally inputted. The
signals NORMPROC0-2 and CBRPROC0-2 are supplied to the self refresh signal generating circuit 117. The signal CBRPROC0-2 is also supplied to the CBR refresh signal generating circuit 116. Further,
the signal RAS or CAS which is externally inputted is also directly or indirectly supplied to the first internal timer 114, the external cycle count circuit 115, the CBR refresh signal generating
circuit 116, the self refresh signal generating circuit 117, and the internal RAS signal generating circuit 121.
As described later, the external cycle count circuit 115, functioning as operation cycle detecting means for detecting an operation cycle of the signal RAS which is externally inputted, generates the
3-bit signals NORMPROC0-2 and CBRPROC0-2 as a detection result on the operation cycle of the signal RAS which is externally inputted.
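Purely as an illustration of what such operation cycle detecting means does (a hypothetical software model, not the disclosed circuit; the function name and the thresholds are invented for the example), the external cycle count circuit can be pictured as counting RAS pulses inside a fixed timer window and encoding the count into a 3-bit speed value, where a larger value means a shorter external operation cycle:

```python
def speed_code(ras_pulses_in_window, max_code=7):
    """Hypothetical model: more RAS pulses inside the timer window
    (i.e. a shorter external operation cycle) gives a larger 3-bit code."""
    thresholds = (1, 2, 4, 8, 16, 32, 64)   # illustrative only
    code = sum(1 for t in thresholds if ras_pulses_in_window >= t)
    return min(code, max_code)

print(speed_code(3))    # slow operation  -> small code
print(speed_code(40))   # fast operation  -> large code
```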
On the other hand, receiving the signal MSELF, the second internal timer 118 generates the signal TMR21 which sets a cycle for performing self refresh. The signal TMR21 is directly supplied to the
self refresh signal generating circuit 117. The signal TMR21 is divided sequentially by the frequency-divider circuits 119 and 120, and the resultant signals TMR22 and TMR23 are each supplied to the
self refresh signal generating circuit 117.
Receiving the signals CBRPROC0-2 and MCBR, the CBR refresh signal generating circuit 116 generates the signal CCBR. The signal CCBR is supplied to the internal RAS signal generating circuit 121. The
self refresh signal generating circuit 117, receiving the signals NORMPROC0-2, CBRPROC0-2, MCBR, and TMR21 to TMR23, generates the signal CSELF. The signal CSELF is supplied to the internal RAS
signal generating circuit 121. The internal RAS signal generating circuit 121, in response to the signals MNORM, CCBR and CSELF, generates the signal INRAS which serves as a refresh instruction
signal. The signal INRAS is supplied to memory cells and the like. The CBR refresh signal generating circuit 116, the self refresh signal generating circuit 117 and the internal RAS signal generating
circuit form instruction signal generating means for generating the signal INRAS which serves as the refresh instruction signal. Although omitted in FIG. 1, the semiconductor memory device further
includes a memory cell array which is formed by arranging a number of memory cells and a control circuit which functions as refresh means for supplying a current for holding data to the respective
memory cells in response to the signal RAS.
Next, operations within the circuit as described above will be described with reference to FIGS. 2 and 3.
FIG. 2 is a timing chart of each signal as it is when an operation cycle of the signal RAS is short. In FIG. 2, a long period P10 is a RAS only refresh period of the signal INRAS. This is a mode in
which the signal MNORM outputted from the RAS only refresh and normal read/write detection circuit 111 stays at a logic voltage "H." A period P20 of the signal INRAS is a CBR refresh period. This is
a mode in which the signal MCBR outputted from the CBR refresh detection circuit 112 stays at the logic voltage "H." The CBR refresh period P20 starts after the signal CAS changes to the logic
voltage "L" and the signal RAS changes to the logic voltage "L" to invoke a self refresh mode.
Further, after a certain period of time since the CBR refresh period P20 started, the signal MSELF outputted from the self refresh detection circuit 113 changes to the logic voltage "H." At a rise of the
signal MSELF, a self refresh period P30, in which the internal signal INRAS automatically performs refresh, starts. In the present preferred embodiment, the self refresh period P30 consists of partial periods P31 to P33, and the cycle of the internal signal INRAS becomes progressively longer during the partial periods P31, P32 and P33, in this order. While a normal operation cycle is about 200 nsec, a data holding time is 200 msec, so the data holding time of a DRAM is about 10.sup.6 times as long as the normal operation cycle. Although FIG. 2 shows the difference between the normal operation cycle and the self refresh cycle as only small for convenience of illustration, the self refresh cycle is about 1000 times as long as the normal operation cycle.
During the RAS only refresh period P10 above, refresh is performed with the same cycle as the signal RAS which is externally inputted. In the example shown in FIG. 2, since the operation cycle of the
signal RAS is short, RAS only refresh is performed in a short cycle. In the present preferred embodiment, operations are similar between a normal read/write period, which is the normal operation
cycle, and the RAS only refresh period P10.
Further, in the present preferred embodiment, responding to the signal TMR11 from the first internal timer 114, the external cycle count circuit 115, which functions as the operation cycle detecting means described earlier, changes the value of the 3-bit signal NORMPROC0-2 at a RAS only refresh time t11.
Since FIG. 2 shows a case in which the RAS only refresh operation before entering the self refresh mode is at a high speed, in the external cycle count circuit 115, the 3-bit signal NORMPROC0-2 at time t11 is a signal which expresses a large numerical value. In other words, at the time t11, the most significant signal NORMPROC2 has the logic voltage "H," the signal NORMPROC1 has the logic
voltage "H," and the signal NORMPROC0 has the logic voltage "L." Since the value of the signal NORMPROC0-2 is large, the self refresh signal generating circuit 117 generates a signal which is based
on the signal TMR21, having a cycle which is not divided, as the self refresh cycle (i.e., the cycle of the internal signal INRAS) for the period P31 under the self refresh mode. Entering the period P32 after a predetermined period of time, the self refresh signal generating circuit 117 generates a signal which is based on the signal TMR22, having a cycle twice as long as that of the signal TMR21, as the self refresh cycle. Entering the period P33 after a further predetermined period of time, the self refresh signal generating circuit 117 generates a signal which is based on the signal TMR23, having a cycle which is four times as long as that of the signal TMR21, as the self refresh cycle.
Hence, in self refresh control according to the present embodiment, since the self refresh cycle is controlled so as to become progressively longer after a certain period of time during the self
refresh period, a consumption current during the self refresh operation eventually becomes about 1/4 of a conventional consumption current. On the other hand, since the data holding time becomes
longer as described later as the self refresh cycle becomes longer, even if the self refresh cycle becomes gradually longer, this does not damage a data holding function.
Next, FIG. 3 is a timing chart showing a case where the operation cycle of the signal RAS is long, i.e., during low-speed operations. As shown in FIG. 3, during the RAS only refresh period P10, RAS
only refresh is performed within a longer cycle than in the case shown in FIG. 2. That is, the operation is performed at a low speed. Since the RAS only refresh operation before entering the self
refresh mode is performed at a low speed, the numerical value which is expressed by the 3-bit signal NORMPROC0-2 at a time t21 defined by the signal TMR11 from the first internal timer 114 is small.
In other words, the most significant signal NORMPROC2 has the logic voltage "L," the signal NORMPROC1 has the logic voltage "H," and the signal NORMPROC0 has the logic voltage "L." Since the value of
the signal NORMPROC0-2 is small, the self refresh cycle during the period P32 under the self refresh mode becomes a signal which is based on the signal TMR22 having a cycle which is twice as long as
that of the signal TMR21, from the beginning. Following this, in the period P33 after a certain time, the self refresh cycle becomes a signal which is based on the signal TMR23 having a cycle which
is four times as long as that of the signal TMR21.
Hence, when the operation is performed at a low speed during the RAS only refresh period shown in FIG. 3, the refresh frequency immediately after entering the self refresh mode is 1/2 of that in the case where the operation is performed at a high speed during the RAS only refresh period shown in FIG. 2; from the beginning there is no period P31 with a short cycle, and therefore the consumption current is further reduced. The effect of reducing the consumption current is particularly noticeable during operations in which a normal operation, such as RAS only refresh or normal read/write, and a self refresh operation frequently switch with each other.
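A hypothetical software model of the self refresh timing described above (only an illustration of the behavior, not the disclosed circuit; the function name, the speed-code threshold and the abstraction of the period boundaries are assumptions made for the example):

```python
def self_refresh_divider(speed_code, elapsed_periods):
    """Illustrative model: start from a divider chosen by the detected speed
    (1 for a fast prior operation, 2 for a slow one), then double it after
    each fixed period, up to the longest cycle (divider 4, i.e. TMR23)."""
    start = 1 if speed_code >= 4 else 2       # fast history -> short initial cycle
    divider = start * (2 ** elapsed_periods)
    return min(divider, 4)                    # 1 -> TMR21, 2 -> TMR22, 4 -> TMR23

# Fast prior operation (FIG. 2): periods P31, P32, P33 use TMR21, TMR22, TMR23
print([self_refresh_divider(6, t) for t in range(3)])   # [1, 2, 4]
# Slow prior operation (FIG. 3): starts directly at TMR22, then TMR23
print([self_refresh_divider(2, t) for t in range(3)])   # [2, 4, 4]
```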
Next, a description will be given on the details of a method of controlling the refresh cycle during the CBR refresh period P20. FIG. 4 is a timing chart magnifying a part which corresponds to the
CBR refresh period P20 shown in FIG. 2 or 3. In FIG. 4, the CBR refresh period P20 includes partial periods P21 to P25. The partial period P21 is a CBR refresh period of a high speed operation, while
the partial periods P22 to P24 are CBR refresh periods of a low speed operation.
First, it is assumed that while high-speed CBR refresh is performed during the partial period P21, at an end time t31, a high-speed operation is detected from the value of the signal NORMPROC0-2. In
this case, the internal signal INRAS is a signal which has the same cycle with the signal RAS which is externally inputted. Hence, the cycle of the externally inputted signal RAS becomes long during
the partial period P22 to slow down the operation, whereby the cycle of the internal signal becomes long in synchronization to the cycle of the externally inputted signal RAS.
If it is detected at an end time t32 of the partial period P22 that the operation is a slow-speed operation, in the next partial period P23, since it is detected that the operation is a slow-speed
operation during the preceding partial period P22, the internal signal INRAS is generated for every other signal RAS which is externally inputted. That is, although the time at which the internal
signal INRAS is outputted itself is the same as the time at which the signal RAS is outputted, it is not always the case that the internal signal INRAS is outputted every time the signal RAS is
outputted. In other words, although a CBR refresh instruction signal is generated from the signal RAS which is externally inputted in response to the internal signal INRAS, when it is detected that
the operation cycle of the signal RAS is long, the device is controlled so that the CBR refresh instruction signal is generated in synchronization to only a part of the signal RAS which is externally
inputted. This is the same in the next partial period P24.
In the next partial period P25, if it is detected at an end time t34 of the preceding partial period P24 that the operation is still a slow-speed operation, the device is controlled so that the internal signal INRAS is generated for every three signals RAS which are externally inputted. That is, the number of pulses of the externally inputted signal RAS for which the CBR refresh instruction signal is not generated increases.
Hence, in the present embodiment, when the operation cycle of the signal RAS which is externally inputted becomes long, the device is controlled so that a period is created having a cycle which is
not the same as the operation cycle of the externally inputted signal RAS and which does not cause CBR refresh, i.e., so that the device operates at a low speed. When CBR refresh becomes slow, since
a consumption current and an internal temperature of the device are reduced, a data holding time of the memory cells becomes long and an actual refresh cycle becomes long. Since the consumption
current is further reduced when the actual refresh cycle becomes long in this manner, a margin to the data holding time of the memory cells increases. In this embodiment, since the actual refresh
cycle appears every other externally inputted signal RAS or every three externally inputted signals RAS, consumption power becomes 1/2 or 1/3.
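Likewise, the thinning of CBR refresh described above can be pictured as gating the internal signal INRAS from the external signal RAS: every pulse during fast operation, every other pulse once a slow cycle has been detected, and every third pulse if the operation stays slow. A hypothetical behavioral sketch (the skip factors follow the embodiment; the function itself is invented for illustration):

```python
def inras_pattern(num_ras_pulses, skip_factor):
    """Emit the internal refresh signal INRAS on every skip_factor-th
    external RAS pulse (1 = every pulse, 2 = every other, 3 = every third)."""
    return [(i % skip_factor) == 0 for i in range(num_ras_pulses)]

# Fast operation (periods P21/P22): refresh on every external RAS pulse
print(inras_pattern(4, 1))   # [True, True, True, True]
# Slow operation detected (periods P23/P24): every other pulse
print(inras_pattern(4, 2))   # [True, False, True, False]
# Operation stays slow (period P25): every third pulse
print(inras_pattern(6, 3))   # [True, False, False, True, False, False]
```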
While in the present embodiment a 3-bit signal is outputted as the signals NORMPROC0-2 and CBRPROC0-2, which express the detection result on the operation cycle obtained by the external cycle count circuit 115, the self refresh operation cycle is extended up to four times in accordance with this signal, and the CBR refresh cycle is extended up to once every three externally inputted signals RAS, the present invention is not limited to such a preferred embodiment. Rather, finer control is possible.
Further, although the basic CBR refresh cycle, which is defined as a cycle which is twice as long as the operation cycle of the signal RAS which is externally inputted, is changed to a cycle which is
further twice or three times longer in the present embodiment, the basic CBR refresh cycle may be changed to a long CBR refresh cycle which is separated from the cycle of the signal RAS which is
externally inputted. In this case as well, by changing the CBR refresh cycle more largely than the change in the externally inputted signal RAS, it is possible to reduce consumption power while maintaining a data holding function under conditions in which the semiconductor memory device operates very frequently or very rarely.
Next, a description will be given on a relationship between the operation speed, i.e., the operation cycle of the semiconductor memory device, the internal temperature of the device and the data
holding time.
FIG. 5 is a characteristic diagram showing the relationship between the operation cycle, the internal temperature and the data holding time, using an ambient temperature as a parameter. In FIG. 5,
the horizontal axis shows the operation cycle tRC, the vertical axis on the left-hand side shows the internal temperature of the device, and the vertical axis on the right-hand side shows the data
holding time. The characteristic curves C25, C50 and C75 are characteristic curves corresponding to cases where the ambient temperature is 25.degree. C., 50.degree. C. and 75.degree. C.,
respectively. In either case, the operation cycle becomes shorter, and the internal temperature of the device increases when the device operates at a high speed. On the other hand, when the operation
cycle tRC becomes long, the internal temperature of the device decreases and the data holding time becomes long.
While the present embodiment requires that the operation cycle is detected to control the actual refresh cycle so that consumption power is reduced, this may be combined with other controlling in
which the internal temperature of the device is directly detected to control the refresh cycle. In short, by controlling the semiconductor memory device while utilizing a parameter which influences
the internal temperature of the device, it is possible to reduce consumption power, improve the operation speed, etc.
For example, it is possible to control a delay time of a delay circuit which is disposed in the semiconductor memory device, using a signal which is detected by the operation cycle detecting means.
For instance, since the internal temperature increases and the delay time of a delay circuit becomes long as the operation cycle of the semiconductor memory device becomes short, it is possible, using a signal which is detected by the operation cycle detecting means, to structure the circuitry so as to shorten the delay time.
Further, while there are a reference voltage signal from a reference voltage generating circuit, an input switching level and the like for a circuit having a temperature characteristic, such a signal
or level can be corrected in accordance with the operation cycle of the externally inputted signal.
1. A semiconductor memory device, comprising:
a memory part;
a control part for controlling writing of data in said memory part, reading of data from said memory part, holding of data in response to a signal which is externally inputted; and
operation cycle detecting means for detecting an operation cycle of the signal which is externally inputted.
2. The semiconductor memory device of claim 1, wherein said semiconductor memory device is a DRAM,
said control part is structured so as to perform a RAS only refresh operation in response to a signal RAS which is externally inputted, and
said operation cycle detecting means is structured so as to detect an operation cycle of said signal RAS during the RAS only refresh operation.
3. The semiconductor memory device of claim 1, wherein said semiconductor memory device is a DRAM,
said control part is structured so as to perform a CAS before RAS auto-refresh (CBR refresh) operation in response to
a signal RAS and a signal CAS which are externally inputted, and
said operation cycle detecting means is structured so as to detect an operation cycle of said signal RAS during said CBR refresh operation.
4. A semiconductor memory device which functions as a DRAM, comprising:
a memory part;
a control part for controlling writing of data in said memory part, reading of data from said memory part, holding of data in response to a signal which is externally inputted; and
self refresh means for performing refresh asynchronously to said signal which is externally inputted,
wherein the self refresh means gradually extends each self refresh cycle after a certain period of time during a self refresh operation.
5. A semiconductor memory device which functions as a DRAM, comprising:
a memory part;
a control part for controlling writing of data in said memory part, reading of data from said memory part, holding of data in response to a signal which is externally inputted;
operation cycle detecting means for detecting an operation cycle of the signal which is externally inputted; and
self refresh means for performing refresh asynchronously to the signal which is externally inputted,
wherein said self refresh means selects one of said plurality of self refresh cycles in accordance with an operation cycle of an externally inputted signal which is detected by said operation
cycle detecting means during a self refresh operation.
6. The semiconductor memory device of claim 5, wherein said self refresh means is structured to select a longer self refresh cycle as the operation cycle of said signal which is externally inputted
is longer.
7. The semiconductor memory device of claim 5, wherein said signal which is externally inputted is RAS.
8. A semiconductor memory device which functions as a DRAM, comprising:
a memory part;
a control part for controlling writing of data in said memory part, reading of data from said memory part, holding of data in response to a signal which is externally inputted;
operation cycle detecting means for detecting an operation cycle of the signal which is externally inputted; and
self refresh means for performing refresh asynchronously to said signal which is externally inputted,
wherein said self refresh means selects one of said plurality of self refresh cycles in accordance with an operation cycle of an externally inputted signal which is detected by said operation
cycle detecting means at the beginning of a self refresh operation, and gradually extends each self refresh cycle after a certain period of time during the self refresh operation.
9. The semiconductor memory device of claim 7, wherein said signal which is externally inputted is RAS.
10. A semiconductor memory device which functions as a DRAM, comprising:
a memory part;
a control part for controlling writing of data in said memory part, reading of data from said memory part, holding of data in response to a signal which is externally inputted;
operation cycle detecting means for detecting an operation cycle of said signal which is externally inputted; and
CBR refresh means for performing CBR refresh in a basic cycle which is determined in accordance with an operation cycle of an externally inputted signal,
wherein in response to the specific operation cycle of the externally inputted signal which is detected by said operation cycle detecting means during a CBR refresh operation, said CBR refresh
means performs the CBR refresh operation in a cycle which is obtained by changing said basic cycle.
11. The semiconductor memory device of claim 10, wherein said CBR refresh means is structured so as to change a cycle for performing said CBR refresh operation more largely than a change in the
operation cycle which is detected by said operation cycle detecting means.
12. The semiconductor memory device of claim 10, wherein said signal which is externally inputted is RAS.
13. A semiconductor memory device, comprising:
a memory part;
a control part for controlling writing of data in said memory part, reading of data from said memory part, holding of data in response to a signal which is externally inputted;
operation cycle detecting means for detecting an operation cycle of said signal which is externally inputted;
refresh means for performing refresh for holding data which are stored in said memory part; and
instruction signal generating means for generating a refresh instruction signal which operates said refresh means,
wherein said instruction signal generating means changes a cycle of the refresh instruction signal so that a frequency of refresh becomes smaller as an operation cycle of an externally inputted
signal which is detected by said operation cycle detecting means becomes longer.
Referenced Cited
U.S. Patent Documents
5321662 June 14, 1994 Ogawa
5495452 February 27, 1996 Cha
5515331 May 7, 1996 Kim
Foreign Patent Documents
0 301 794 February 1989 EPX
0 632 463 January 1995 EPX
64-13292 January 1989 JPX
WO 94/12934 June 1994 WOX
Other references
• IBM Technical Disclosure Bulletin, vol. 33, No. 2, Jul. 1990, pp. 68-72,, "Intelligent DRAM Refresh Controller". | {"url":"https://patents.justia.com/patent/5828619","timestamp":"2024-11-13T14:51:43Z","content_type":"text/html","content_length":"104401","record_id":"<urn:uuid:76cf5d1f-1482-4f9b-9e95-6554abffc291>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00291.warc.gz"} |
How do you write two and a half in AP style?
Writing 101: AP Style Basics
1. Examples: Harper Lee spent two and a half years writing “To Kill A Mockingbird.” USA Today was the first outlet to cover the story.
2. Examples: Katie’s birthday is on July 9.
3. Example: Michael Jordan played for the University of North Carolina for three years in college.
Is half hyphenated in AP style?
For mixed numbers, use 1 1/2, 2 5/8, etc., with a full space between the whole number and the fraction. Other fractions require a hyphen and individual figures, with a space between the whole number
and the fraction. For example, 1 3-16.
How do you write 1.5 in AP style?
AP style tip: Spell out amounts less than 1 in stories, using hyphens between the words: two-thirds, four-fifths, seven-sixteenths, etc. Use figures for precise amounts larger than 1, converting to
decimals whenever practical.
Are fractions hyphenated AP style?
Fractions. Spell out fractions less than one in text, using a hyphen. Two-thirds, four-fifths, etc. Use figures for precise amounts larger than one, converting to decimals whenever practical.
Do you hyphenate two and a half?
Two and a half hours. A. There is no need for hyphens if you’re using the phrase as a noun: We’ll be there in two and a half hours; two and a half hours is plenty of time. If you are using a phrase
like that as a modifier, however, you’ll need hyphens to hold it all together: a two-and-a-half-hour trip.
How do you write two and a half?
There is no need for hyphens if you’re using the phrase as a noun: We’ll be there in two and a half hours; two and a half hours is plenty of time. If you are using a phrase like that as a modifier,
however, you’ll need hyphens to hold it all together: a two-and-a-half-hour trip.
Do you hyphenate numbers in AP style?
(Do not use hyphens.) An exception to spelling out numbers for planes, ships, etc. is “Air Force One,” the president’s plane. Use Roman numerals if they are part of the official designation.
Do you hyphenate numbers?
Use a hyphen when writing two-word numbers from twenty-one to ninety-nine (inclusive) as words. But don’t use a hyphen for hundreds, thousands, millions and billions.
Do you hyphenate numbers and a half?
Do you put hyphens between numbers?
You should always hyphenate numbers when you are describing compound numbers between 21 and 99 (except 30, 40, 50, 60, 70, 80 and 90). A compound number is any number that consists of two words; for
example, eighty-eight, twenty-two, forty-nine. Numbers higher than 99 do not need a hyphen.
How do you write numbers in AP Style?
Using AP Style
1. Numbers. Spell out numbers one through nine, but write numbers 10 and above as numerals.
2. Percentages. Write percentages as numerals, followed by the word “percent.”
3. Ages. Write ages using numerals.
4. Dollar amounts.
5. Street addresses.
6. Dates.
7. Job titles.
8. Film, book, and song titles.
Is two and a half years hyphenated?
Hyphen With Number of Years Don’t use hyphens when you’re just talking about a span of time. We’ve lived here for four and a half years. Two and a half years is plenty of time to learn how to play
How do you write 2 and a half in a fraction?
You would say “two and one half.” The other format is an improper fraction where the numerator is greater than the denominator (5/2). Mathematicians would say that is five halves. You will find both
types of fractions in your problems. Both of those examples represent the same value (2 1/2 = 5/2).
What is 2.5 as a mixed number?
Thus, 2.5 as a mixed number is the same as 2.5 as a mixed fraction. Anyway, here are step-by-step instructions showing you how to get 2.5 as a mixed number: The whole number is the number to the left
of the decimal point. Therefore, the whole number (W) is equal to 2.
Is two and a half hyphenated?
Should one half be hyphenated?
One half need not be hyphenated when used as a noun; however, it must be hyphenated when used as an adjective: 1. I am entitled to one half of the pizza.
Where do you put the hyphen in numbers?
NUMBERS: from twenty-one to ninety-nine, when spelled out, are hyphenated. FRACTIONS: Hyphenate a fraction when it is used as a adjective (e.g., a two-thirds majority). Write as two words when used
as a noun (e.g. two thirds of the participants). Use figures for sums of money, except when they begin a sentence. | {"url":"https://www.wren-clothing.com/how-do-you-write-two-and-a-half-in-ap-style/","timestamp":"2024-11-06T20:37:18Z","content_type":"text/html","content_length":"64071","record_id":"<urn:uuid:3ee8b8eb-065d-4898-8fe1-76de4406d83c>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.47/warc/CC-MAIN-20241106194801-20241106224801-00700.warc.gz"} |
Resolving Builtin Function or Method Object Not Subscriptable Error with NumPy
The “builtin function or method object is not subscriptable” error in Python often occurs when using libraries like NumPy. This error happens when you mistakenly use square brackets [] instead of
parentheses () to call a function or method. It’s a common issue for developers, as it highlights the importance of correctly distinguishing between different types of objects and their appropriate
usage in Python.
Understanding the Error
The error “builtin function or method object is not subscriptable” occurs when you try to use square brackets ([]) to access elements of a function or method, which is not allowed. This often happens
if you mistakenly use square brackets instead of parentheses when calling a function.
In Python, subscriptable objects are those that support indexing or slicing operations, such as lists, tuples, dictionaries, and strings. These objects implement the __getitem__ method, allowing you
to access their elements using square brackets (e.g., my_list[0]).
For example, if you try to call a function like numpy.array with square brackets instead of parentheses, you’ll get this error:
import numpy as np
arr = np.array[1, 2, 3] # This will raise the error
To fix it, use parentheses:
arr = np.array([1, 2, 3]) # Correct usage
This ensures you’re calling the function correctly and not trying to index it.
Common Causes
Here are the common causes of the 'builtin_function_or_method' object is not subscriptable error when using NumPy:
1. Calling a Function Without Parentheses: Forgetting to include parentheses when calling a function results in referencing the function object itself rather than its return value. For example,
using np.array instead of np.array().
2. Confusion Between Functions and Variables: Accidentally using the name of a function without calling it, leading to referencing the function object instead of its return value. For instance, max
instead of max().
3. Method Chaining Errors: Mistakenly chaining method calls without parentheses, resulting in accessing the method object rather than its return value. For example, array.mean instead of array.mean().
4. Incorrect Use of Square Brackets: Using square brackets to call a function or method instead of parentheses. For example, np.array[1, 2, 3] instead of np.array([1, 2, 3]).
These errors typically occur when trying to index or slice a function or method instead of a data structure like a list or array.
Example Scenarios
Here are some example scenarios where the 'builtin_function_or_method' object is not subscriptable error might occur while using NumPy:
1. Incorrectly using square brackets with a NumPy function:
import numpy as np
arr = np.array([1, 2, 3, 4, 5])
result = np.mean[arr] # Incorrect usage
# TypeError: 'builtin_function_or_method' object is not subscriptable
2. Attempting to subscript a method instead of calling it:
import numpy as np
arr = np.array([1, 2, 3, 4, 5])
result = arr.sum[0] # Incorrect usage
# TypeError: 'builtin_function_or_method' object is not subscriptable
3. Confusing a function call with indexing:
import numpy as np
arr = np.array([1, 2, 3, 4, 5])
max_value = np.max[arr] # Incorrect usage
# TypeError: 'builtin_function_or_method' object is not subscriptable
These examples illustrate common mistakes that lead to this error. Make sure to use parentheses () to call functions or methods.
How to Fix the Error
1. Use Parentheses for Function Calls:
import numpy as np
array = np.array([1, 2, 3])
print(array) # Correct: Using parentheses
2. Avoid Using Square Brackets with Functions:
import numpy as np
array = np.array([1, 2, 3])
print(array[0]) # Correct: Accessing array element
print(np.array[0]) # Incorrect: Trying to subscript a function
3. Check for Method Calls:
import numpy as np
array = np.array([1, 2, 3])
shape = array.shape() # Incorrect: shape is an attribute, not a method
shape = array.shape # Correct: Accessing attribute without parentheses
4. Ensure Correct Method Usage:
import numpy as np
array = np.array([1, 2, 3])
reshaped_array = array.reshape((3, 1)) # Correct: Using parentheses
reshaped_array = array.reshape[3, 1] # Incorrect: Using square brackets
These steps should help you avoid the ‘builtin function or method object is not subscriptable’ error.
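If it is unclear whether a name refers to callable code or to indexable data, a quick check such as the one below can help (this snippet is an added illustration, not part of the original article):

import numpy as np

arr = np.array([1, 2, 3])

# callable() is True for functions and methods; only the ndarray itself is indexable
for obj in (arr, np.mean, arr.sum):
    print(type(obj).__name__, "callable:", callable(obj))

The last object prints as builtin_function_or_method, which is exactly the object named in the error message when you try to subscript it.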
Common Error When Working with NumPy
When working with NumPy, it’s essential to use correct syntax to avoid the ‘builtin function or method object is not subscriptable’ error.
This error occurs when you try to access an attribute or element of a function or method using square brackets `[]`, which is incorrect.
To fix this issue, make sure to use parentheses `()` to call functions or methods correctly. For example, instead of np.max[arr], use np.max(arr).
Additionally, avoid using square brackets with functions, as they are not subscriptable. Instead, access array elements using square brackets, like array[0].
Correct Syntax
• Access attributes or methods without parentheses: array.shape
• Use parentheses for method calls: array.reshape((3, 1))
By following these guidelines and using proper syntax, you can avoid this common error when working with NumPy. | {"url":"https://terramagnetica.com/builtin-function-or-method-object-is-not-subscriptable-error-while-using-numpy/","timestamp":"2024-11-09T18:44:49Z","content_type":"text/html","content_length":"68651","record_id":"<urn:uuid:8d418761-cc55-4a8d-a45c-c98059a12f87>","cc-path":"CC-MAIN-2024-46/segments/1730477028142.18/warc/CC-MAIN-20241109182954-20241109212954-00443.warc.gz"} |
1. The physico-mechanical phenomena in a lubricated contact
The elastohydrodynamic (EHD) theory of lubrication explains the phenomena occurring in the contact region of two elastic bodies that are separated by a thin layer of liquid and move relative to one another. We will call such a contact an EHD contact.
The EHD theory of lubrication differs from the classical hydrodynamic theory of lubrication in that it considers the normal and tangential displacements of the moving surfaces of elastic bodies of arbitrary shape, the viscoelastic and thermal phenomena in the liquid and in the bodies, the strong dependence of the viscosity of the liquid on pressure and temperature, and the conditions of limited lubrication. Taking these factors into account makes it possible to determine reliably the basic characteristics of the contact (the thickness of the lubricant film, the stresses and the temperature) and to use them in the calculation of technical EHD systems, whereas the classical theory of lubrication gives errors of one or several orders of magnitude.
On the basis of such calculations, recommendations can be made for the choice of materials and geometry of the parts of mechanisms and devices, the type and method of lubrication, the surface treatment, the cooling and installation conditions, and the lubrication regime. The calculation formulas and algorithms of the EHD theory can form a basis for the computer-aided design of moving joints.
As a rule, a lubricated contact works under extreme conditions: the pressure, the shear rate, the temperature and the temperature gradient all reach very large values, while the time a particle of lubricant takes to pass through the contact area is usually very short. All of this creates difficulties for constructing the theory and for setting up experiments. Apparently, no single experimental installation, apart from installations that directly reproduce the working conditions of a rolling-sliding contact, can cover all the characteristic values of the mechanical and thermodynamic parameters of the contact. Therefore mathematical modelling of EHD processes, analytical and numerical investigation of the models, and comparison of the results with experiment constitute the most effective and fundamental approach to solving problems of EHD lubrication.
In an EHD rolling-sliding contact the lubricant moves together with the surfaces of the bodies and is drawn into the gap between them. The large contact pressure deforms the bodies, enlarges the gap region and, in the stationary case, makes it almost plane-parallel.
As the lubricant pressure increases from atmospheric to its maximum value, the viscosity of the lubricant increases many times over. Near the outlet of the EHD contact the gap narrows, and the lubricant encounters great resistance to escaping from the narrow slit, which is almost closed on all sides except the one from which the lubricant arrives. As a result, a lubricating film of considerable thickness is formed in the contact region.
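To give a feeling for how strong this viscosity increase is, the following short sketch (added here for illustration) evaluates the simple Barus relation η(p) = η0·exp(α·p); the values of η0 and α are typical assumed figures for a mineral oil, not data taken from this text:

import math

eta0 = 0.1    # assumed ambient viscosity of a mineral oil, Pa*s
alpha = 2e-8  # assumed pressure-viscosity coefficient, 1/Pa

for p in (1e8, 5e8, 1e9):  # pressures of 0.1, 0.5 and 1 GPa
    eta = eta0 * math.exp(alpha * p)  # Barus law: exponential growth with pressure
    print(f"p = {p:.0e} Pa -> eta = {eta:.2e} Pa*s")

With these assumed values the viscosity at a pressure of the order of 1 GPa is many orders of magnitude above its ambient value, which is why the lubricant is effectively trapped in the contact and a film of considerable thickness can form.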
The shape of the gap, the pressure distribution and the film thickness can be determined experimentally, from a numerical solution of the EHD equations, or by approximate methods. The film thickness obtained in any of these ways exceeds by roughly an order of magnitude the thickness calculated from the usual hydrodynamic theory of lubrication for rigid bodies and a liquid of constant viscosity.
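For reference, the pressure in the film is usually found from the Reynolds equation; in its simplest one-dimensional, steady and isothermal form (a standard result quoted here for context, not taken from this text) it reads

d/dx [ (ρ·h³ / (12·η)) · dp/dx ] = u_m · d(ρ·h)/dx,

where h(x) is the gap, p(x) the pressure, ρ the density, η the viscosity and u_m = (u1 + u2)/2 the mean velocity of the two surfaces. In the EHD problem this equation is coupled to the elastic deformation of the bodies and to the pressure dependence of η, which is what makes the numerical solution difficult.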
Sliding and the considerable pressure gradients in the contact lead to large shear rates in the lubricant. The heat generated by shear raises the lubricant temperature by tens of degrees and increases the temperature of the bodies near the contact. The temperature field in the contact region can be found from the joint solution of the equations of motion and energy in the lubricant and the heat conduction equations in the bodies.
The large shear rates, high pressures and short process times lead to complicated behaviour of the lubricant, in particular to viscoelastic effects. A joint treatment of the thermal processes and of the complicated rheological behaviour of the lubricant is necessary for estimating, or at least approximately calculating, the stresses and temperature in the contact. The classical theory of lubrication, which does not consider these factors, and the solution of hydrodynamic problems for Newtonian liquids lead to stresses in the lubricant that differ from the experimental values by more than an order of magnitude.
2. Scope of the theory and limits of its applicability
The EHD theory of lubrication can be applied in the calculation and design, and also in the failure analysis, of those mechanisms and devices which contain moving lubricated contacts with large contact pressures. In practice this concerns almost all areas of technology. The materials of the contacting bodies are steel and other metals, and polymers. The lubricants are equally varied: ordinary technical oils based on mineral and synthetic oils, water, liquid metals, glass melts, etc.
The widest area of application of the EHD theory of lubrication is rolling bearings and gear drives. The calculation of a rolling bearing includes the determination of its dynamics and kinematics, the film thickness in the contacts, the contact angles, the stiffness, the moment of resistance to rotation, the forces of interaction of the separator with the rolling bodies, and several other characteristics. This is a difficult problem of determining the motion of many bodies in a two-phase medium, and it cannot be solved by traditional methods without applying the EHD theory.
An EHD calculation of a gear drive should give the film thickness and pressure distribution in the contact of the teeth and the temperature field in the contact zone, and should lead to an estimate of durability and to a well-founded criterion for jamming (scuffing).
Heavily loaded sliding bearings are likewise inexpedient to calculate on the basis of the classical theory of lubrication; the deformations of the surfaces and the dependence of the lubricant viscosity on pressure must be taken into account.
Applications of the EHD theory in biomechanics (joints, and the motion of bodies, for example erythrocytes, through elastic channels in a liquid environment) are also of interest.
Other applications include seals, friction drives, the thrust collars of gear wheels, movable spline connections, guides of various types, and the end-face contacts of rollers.
From this variety of areas of application of the EHD theory of lubrication it follows that its calculations and recommendations are of a general technical character. The universal calculation of film thickness applies to any moving lubricated contact, irrespective of the mechanism and friction unit in which it occurs.
As basis for calculations the uniform mathematical apparatus serves. The equations describe movement of a thin layer of a liquid, contact deformations and a field of temperatures in a contact zone. | {"url":"https://www.tribo-lab.com/index.php?option=com_content&view=article&id=1:introduction&catid=1:theory-of-librycation&Itemid=2","timestamp":"2024-11-11T03:03:24Z","content_type":"application/xhtml+xml","content_length":"28648","record_id":"<urn:uuid:6bb7b751-1ddb-41ea-8ffa-4cd9d26733e0>","cc-path":"CC-MAIN-2024-46/segments/1730477028216.19/warc/CC-MAIN-20241111024756-20241111054756-00394.warc.gz"} |
Ana Caraiani's papers
Huxley 668
180 Queen's Gate
London SW7 2AZ
Ana Caraiani
Imperial College Department of Mathematics
Publications and preprints (click on title to view abstract):
Recent progress on Langlands reciprocity for GL[n]: Shimura varieties and beyond (with Sug Woo Shin), to appear in Proceedings of the 2022 IHES summer school on the Langlands
program. PDF arXiv
The goal of these lecture notes is to survey progress on the global Langlands reciprocity conjecture for GL[n] over number fields from the last decade and a half. We highlight results and
conjectures on Shimura varieties and more general locally symmetric spaces, with a view towards the Calegari–Geraghty method to prove modularity lifting theorems beyond the classical setting of
On the modularity of elliptic curves over imaginary quadratic fields (with James Newton) PDF arXiv
In this paper, we establish the modularity of every elliptic curve E/F, where F runs over infinitely many imaginary quadratic fields, including ℚ(√(-d)) for d=1,2,3,5. More precisely, let F be
imaginary quadratic and assume that the modular curve X[0](15), which is an elliptic curve of rank 0 over ℚ, also has rank 0 over F. Then we prove that all elliptic curves over F are modular. More
generally, when F/ℚ is an imaginary CM field that does not contain a primitive fifth root of unity, we prove the modularity of elliptic curves E/F under a technical assumption on the image of the
representation of Gal(F̅/F) on E[3] or E[5].
The key new technical ingredient we use is a local-global compatibility theorem for the p-adic Galois representations associated to torsion in the cohomology of the relevant locally symmetric
spaces. We establish this result in the crystalline case, under some technical assumptions, but allowing arbitrary dimension, arbitrarily large regular Hodge–Tate weights, and allowing p to be small
and highly ramified in the imaginary CM field F.
The cohomology of Shimura varieties with torsion coefficients, ICM–International Congress of Mathematicians. Vol. 3. Sections 1–4, 1744–1766. PDF Book Video
In this article, we survey recent work on some vanishing conjectures for the cohomology of Shimura varieties with torsion coefficients, under both local and global conditions. We discuss the p-adic
geometry of Shimura varieties and of the associated Hodge–Tate period morphism, and explain how this can be used to make progress on these conjectures. Finally, we describe some applications of
these results, in particular to the proof of the Sato–Tate conjecture for elliptic curves over CM fields.
The geometric Breuil–Mézard conjecture for two-dimensional potentially Barsotti–Tate Galois representations (with Matthew Emerton, Toby Gee, and David Savitt), to appear in
Algebra Number Theory. PDF arXiv
We establish a geometrisation of the Breuil–Mézard conjecture for potentially Barsotti–Tate representations, as well as of the weight part of Serre's conjecture, for moduli stacks of two-dimensional
mod p representations of the absolute Galois group of a p-adic local field.
Components of moduli stacks of two-dimensional Galois representations (with Matthew Emerton, Toby Gee, and David Savitt), Forum Math. Sigma 12 (2024), e31, 1–62. PDF arXiv Journal
In a previous article we introduced various moduli stacks of two-dimensional tamely potentially Barsotti–Tate representations of the absolute Galois group of a p-adic local field, as well as related
moduli stacks of Breuil–Kisin modules with descent data. We study the irreducible components of these stacks, establishing in particular that the components of the former are naturally indexed by
certain Serre weights.
Local geometry of moduli stacks of two-dimensional Galois representations (with Matthew Emerton, Toby Gee, and David Savitt), to appear in Proceedings of the International
Colloquium on 'Arithmetic Geometry', TIFR Mumbai, Jan. 6–10, 2020. PDF arXiv
We construct moduli stacks of two-dimensional mod p representations of the absolute Galois group of a p-adic local field, as well as their resolutions by moduli stacks of two-dimensional
Breuil–Kisin modules with tame descent data. We study the local geometry of these moduli stacks by comparing them with local models of Shimura varieties at hyperspecial and Iwahori level.
On the étale cohomology of Hilbert modular varieties with torsion coefficients (with Matteo Tamiozzo), Compositio Math. 159 (2023), no. 11, 2279–2325. PDF arXiv Journal
We study the étale cohomology of Hilbert modular varieties, building on the methods introduced for unitary Shimura varieties in [CS17, CS19]. We obtain the analogous vanishing theorem: in the
"generic" case, the cohomology with torsion coefficients is concentrated in the middle degree. We also probe the structure of the cohomology beyond the generic case, obtaining bounds on the range of
degrees where cohomology with torsion coefficients can be non-zero. The proof is based on the geometric Jacquet--Langlands functoriality established by Tian--Xiao and avoids trace formula
computations for the cohomology of Igusa varieties. As an application, we show that, when p splits completely in the totally real field and under certain technical assumptions, the p-adic local
Langlands correspondence for GL[2](ℚ[p]) occurs in the completed homology of Hilbert modular varieties.
New frontiers in Langlands reciprocity, EMS Magazine (2021), no. 119. PDF Journal
In this survey, I discuss some recent developments at the crossroads of arithmetic geometry and the Langlands programme. The emphasis is on recent progress on the Ramanujan–Petersson and Sato–Tate
conjectures. This relies on new results about Shimura varieties and torsion in the cohomology of locally symmetric spaces.
Vanishing theorems for Shimura varieties at unipotent level (with Daniel R. Gulotta and Christian Johansson), J. Eur. Math. Soc. 25 (2023), no. 3, 869–911. PDF arXiv Journal
We show that the compactly supported cohomology of Shimura varieties of Hodge type of infinite Γ[1](p^∞)-level (defined with respect to a Borel subgroup) vanishes above the middle degree, under the
assumption that the group of the Shimura datum splits at p. This generalizes and strengthens the vanishing result proved in "Shimura varieties at level Γ[1](p^∞) and Galois representations". As an
application of this vanishing theorem, we prove a result on the codimensions of ordinary completed homology for the same groups, analogous to conjectures of Calegari–Emerton for completed
(Borel–Moore) homology.
On the generic part of the cohomology of non-compact unitary Shimura varieties (with Peter Scholze), Annals of Math. 199 (2024), no. 2, 483–590. PDF arXiv Journal
We prove that the generic part of the mod l cohomology of Shimura varieties associated to quasi-split unitary groups of even dimension is concentrated above the middle degree, extending previous
work to a non-compact case. The result applies even to Eisenstein cohomology classes coming from the locally symmetric space of the general linear group, and has been used in [ACC+18] to get good
control on these classes and deduce potential automorphy theorems without any self-duality hypothesis.
Our main geometric result is a computation of the fibers of the Hodge–Tate period map on compactified Shimura varieties, in terms of similarly compactified Igusa varieties.
Potential automorphy over CM fields (with Patrick B. Allen, Frank Calegari, Toby Gee, David Helm, Bao V. Le Hung, James Newton, Peter Scholze, Richard Taylor, and Jack A.
Thorne), Annals of Math. 197 (2023), no. 3, 897–1113. PDF arXiv Journal
Let F be a CM number field. We prove modularity lifting theorems for regular n-dimensional Galois representations over F without any self-duality condition. We deduce that all elliptic curves E over
F are potentially modular, and furthermore satisfy the Sato–Tate conjecture. As an application of a different sort, we also prove the Ramanujan Conjecture for weight zero cuspidal automorphic
representations for GL[2](𝔸[F]).
Perfectoid Shimura varieties, in Perfectoid spaces: Lectures from the 2017 Arizona Winter School. PDF Book Video
This is an expanded version of the lecture notes for the minicourse I gave at the 2017 Arizona Winter School. In these notes, I discuss Scholze's construction of Galois representations for torsion
classes in the cohomology of locally symmetric spaces for GL[n], with a focus on his proof that Shimura varieties of Hodge type with infinite level at p acquire the structure of perfectoid spaces. I
also briefly discuss some recent vanishing results for the cohomology of Shimura varieties with infinite level at p.
Shimura varieties at level Γ[1](p^∞) and Galois representations (with Daniel R. Gulotta, Chi-Yun Hsu, Christian Johansson, Lucia Mocz, Emanuel Reinecke, and Sheng-Chi Shih),
Compositio Math. 156 (2020), no. 6, 1152–1230. PDF arXiv Journal Video
We show that the compactly supported cohomology of certain U(n,n) or Sp(2n)-Shimura varieties with Γ[1](p^∞)-level vanishes above the middle degree. The only assumption is that we work over a CM
field F in which the prime p splits completely. We also give an application to Galois representations for torsion in the cohomology of the locally symmetric spaces for GL[n]/F. More precisely, we
use the vanishing result for Shimura varieties to eliminate the nilpotent ideal in the construction of these Galois representations. This strengthens recent results of Scholze and Newton-Thorne.
Patching and the p-adic Langlands program for GL(2,ℚ[p]) (with Matthew Emerton, Toby Gee, David Geraghty, Vytautas Paškūnas, and Sug Woo Shin), Compositio Math. 154 (2018),
no. 3, 503–548. PDF arXiv Journal
We present a new construction of the p-adic local Langlands correspondence for GL(2, ℚ[p]) via the patching method of Taylor–Wiles and Kisin. This construction sheds light on the relationship
between the various other approaches to both the local and global aspects of the p-adic Langlands program; in particular, it gives a new proof of many cases of the second author's local-global
compatibility theorem, and relaxes a hypothesis on the local mod p representation in that theorem.
Kisin modules with descent data and parahoric local models (with Brandon Levin), Ann. Sci. Éc. Norm. Supér. (4) 51 (2018), no. 1, 181–213. PDF arXiv Journal
We construct a moduli space Y^μ,τ of Kisin modules with tame descent datum τ and with fixed p-adic Hodge type μ, for some finite extension K/ℚ[p]. We show that this space is smoothly equivalent to
the local model for Res[K/ℚ[p]]GL[n], cocharacter {μ}, and parahoric level structure. We use this to construct the analogue of Kottwitz–Rapoport strata on the special fiber Y^μ,τ indexed by the
μ-admissible set. We also relate Y^μ,τ to potentially crystalline Galois deformation rings.
On the generic part of the cohomology of compact unitary Shimura varieties (with Peter Scholze), Annals of Math. 186 (2017), no. 3, 649–766. PDF arXiv Journal
The goal of this paper is to show that the cohomology of compact unitary Shimura varieties is concentrated in the middle degree and torsion-free, after localizing at a maximal ideal of the Hecke
algebra satisfying a suitable genericity assumption. Along the way, we establish various foundational results on the geometry of the Hodge-Tate period map. In particular, we compare the fibres of
the Hodge-Tate period map with Igusa varieties.
p-adic q-expansion principles on unitary Shimura varieties (with Ellen Eischen, Jessica Fintzen, Elena Mantovan, and Ila Varma), Directions in Number Theory: Proceedings of
the 2014 WIN3 Workshop. Springer International Publishing (2016), 197–243. PDF arXiv Book
We formulate and prove certain vanishing theorems for p-adic automorphic forms on unitary groups of arbitrary signature. The p-adic q-expansion principle for p-adic modular forms on the Igusa tower
says that if the coefficients of (sufficiently many of) the q-expansions of a p-adic modular form f are zero, then f vanishes everywhere on the Igusa tower. There is no p-adic q-expansion principle
for unitary groups of arbitrary signature in the literature. By replacing q-expansions with Serre-Tate expansions (expansions in terms of Serre-Tate deformation coordinates) and replacing modular
forms with automorphic forms on unitary groups of arbitrary signature, we prove an analogue of the p-adic q-expansion principle. More precisely, we show that if the coefficients of (sufficiently
many of) the Serre-Tate expansions of a p-adic automorphic form f on the Igusa tower (over a unitary Shimura variety) are zero, then f vanishes identically on the Igusa tower.
This paper also contains a substantial expository component. In particular, the expository component serves as a complement to Hida's extensive work on p-adic automorphic forms.
On the image of complex conjugation in certain Galois representations (with Bao V. Le Hung), Compositio Math. 152 (2016), no. 7, 1476–1488. PDF arXiv Journal
We compute the image of any choice of complex conjugation on the Galois representations associated to regular algebraic cuspidal automorphic representations and to torsion classes in the cohomology
of locally symmetric spaces for GL[n] over a totally real field F.
Patching and the p-adic local Langlands correspondence (with Matthew Emerton, Toby Gee, David Geraghty, Vytautas Paškūnas, and Sug Woo Shin), Cambridge Journal of Math. 4
(2016), no. 2, 197–287. PDF arXiv Journal Video
We use the patching method of Taylor–Wiles and Kisin to construct a candidate for the p-adic local Langlands correspondence for GL[n](F), F a finite extension of ℚ[p]. We use our construction to
prove many new cases of the Breuil–Schneider conjecture.
Monodromy and local-global compatibility for l=p, Algebra Number Theory 8 (2014), no. 7, 1597–1646. PDF arXiv Journal
We strengthen the compatibility between local and global Langlands correspondences for GL[n] when n is even and l=p. Let L be a CM field and Π a cuspidal automorphic representation of GL[n](A[L])
which is conjugate self-dual and regular algebraic. In this case, there is an l-adic Galois representation associated to Π, which is known to be compatible with local Langlands in almost all cases
when l=p by recent work of Barnet-Lamb, Gee, Geraghty and Taylor. The compatibility was proved only up to semisimplification unless Π has Shin-regular weight. We extend the compatibility to
Frobenius semisimplification in all cases by identifying the monodromy operator on the global side. To achieve this, we derive a generalization of Mokrane's weight spectral sequence for log
crystalline cohomology.
Local-global compatibility and the action of monodromy on nearby cycles, Duke Math. J. 161 (2012), no. 12, 2311–2413. PDF arXiv Journal Video
We strengthen the local-global compatibility of Langlands correspondences for GL[n] in the case when n is even and l≠p. Let L be a CM field and Π be a cuspidal automorphic representation of GL[n](A
[L]) which is conjugate self-dual. Assume that Π[∞] is cohomological and not "slightly regular", as defined by Shin. In this case, Chenevier and Harris constructed an l-adic Galois representation R
[l](Π) and proved the local-global compatibility up to semisimplification at primes v not dividing l. We extend this compatibility by showing that the Frobenius semisimplification of the restriction
of R[l](Π) to the decomposition group at v corresponds to the image of Π[v] via the local Langlands correspondence. We follow the strategy of Taylor-Yoshida, where it was assumed that Π is
square-integrable at a finite place. To make the argument work, we study the action of the monodromy operator N on the complex of nearby cycles on a scheme which is locally etale over a product of
semistable schemes and derive a generalization of the weight-spectral sequence in this case. We also prove the Ramanujan–Petersson conjecture for Π as above.
Oberwolfach reports:
Patching and p-adic local Langlands, in Algebraische Zahlentheorie, joint with M. Emerton, T. Gee, D. Geraghty, V. Paškūnas, and S.W. Shin, Oberwolfach Report No. 32 (2014), 31–33. PDF
Undergraduate papers:
Multiplicative semigroups related to the 3x+1 problem, Adv. Appl. Math. 45 (2010), no. 3, 373–389. PDF Journal | {"url":"https://www.ma.imperial.ac.uk/~acaraian/papers.php","timestamp":"2024-11-14T07:16:24Z","content_type":"application/xhtml+xml","content_length":"28210","record_id":"<urn:uuid:c5f31a8f-bcac-405a-bceb-c844a9d9d32e>","cc-path":"CC-MAIN-2024-46/segments/1730477028545.2/warc/CC-MAIN-20241114062951-20241114092951-00667.warc.gz"} |
6.1: Spatial Degrees of Freedom, Normal Coordinates and Normal Modes
To deal with the complexity of the vibrational motion in polyatomic molecules, we need to utilize the three important concepts listed as the title of this section. By a spatial degree of freedom, we
mean an independent direction of motion. A single atom has three spatial degrees of freedom because it can move in three independent or orthogonal directions in space, i.e. along the x, y, or z-axes
of a Cartesian coordinate system. Motion in any other direction results from combining velocity components along two or three of these directions. Two atoms have six spatial degrees of freedom
because each atom can move in any of these three directions independently.
Equivalently, we also can say one atom has three spatial degrees of freedom because we need to specify the values of three coordinates \((x_1, y_1, z_1)\) to locate the atom. Two atoms have six
spatial degrees of freedom because we need to specify the values of six coordinates, \((x_1, y_1, z_1)\) and \((x_2, y_2, z_2)\), to locate two atoms in space. In general, to locate N atoms in space,
we need to specify 3N coordinates, so a molecule comprised of N atoms has 3N spatial degrees of freedom.
Exercise \(\PageIndex{1}\)
Identify the number of spatial degrees of freedom for the following molecules: \(Cl_2\), \(CO_2\), \(H_2O\), \(CH_4\), \(C_2H_2\), \(C_2H_4\), \(C_6H_6\).
The motion of the atomic nuclei in a molecule is not as simple as translating each of the nuclei independently along the x, y, and z axes because the nuclei, which are positively charged, are coupled
together by the electrostatic interactions with the electrons, which are negatively charged. The electrons between two nuclei effectively attract them to each other, forming a chemical bond.
Consider the case of a diatomic molecule, which has six degrees of freedom. The motion of the atoms is constrained by the bond. If one atom moves, a force will be exerted on the other atom because of
the bond. The situation is like two balls coupled together by a spring. There are still six degrees of freedom, but the motion of atom 1 along x, y, and z is not independent of the motion of atom 2
along x, y, and z because the atoms are bound together.
It therefore is not very useful to use the six Cartesian coordinates, \((x_1, y_1, z_1)\) and \((x_2, y_2, z_2)\), to describe the six degrees of freedom because the two atoms are coupled together.
We need new coordinates that are independent of each other and yet account for the coupled motion of the two atoms. These new coordinates are called normal coordinates, and the motion described by a
normal coordinate is called a normal mode.
A normal coordinate is a linear combination of Cartesian displacement coordinates. A linear combination is a sum of terms with constant weighting coefficients multiplying each term. The coefficients
can be imaginary or any positive or negative number including +1 and -1. For example, the point or vector r = (1, 2, 3) in three-dimensional space can be written as a linear combination of unit
\[r = 1 \bar {x} + 2 \bar {y} + 3 \bar {z} \label {6-1}\]
A Cartesian displacement coordinate gives the displacement in a particular direction of an atom from its equilibrium position. The equilibrium positions of all the atoms are those points where no
forces are acting on any of the atoms. Usually the displacements from equilibrium are considered to be small. For illustration, the Cartesian displacement coordinates for HCl are defined in Table \(\
PageIndex{1}\), and they are illustrated in Figure \(\PageIndex{1}\).
Table \(\PageIndex{1}\): Cartesian displacement coordinates for HCl.*
\[q_1 = x_{H} - x^e_H\]
\[q_2 = y_{H} - y^e_H\]
\[q_3 = z_H - z^e_H\]
\[q_4 = x_{Cl} - x^e_{Cl}\]
\[q_5 = y_{Cl} - y^e_{Cl}\]
\[q_6 = z_{Cl} - z^e_{Cl}\]
*The superscript e designates the coordinate value at the equilibrium position.
Note that the position of one atom can be written as a vector \(r_1\) where \(r_1 = (x_1, y_1, z_1)\), and the positions of two atoms can be written as two vectors \(r_1\) and \(r_2\) or as a
generalized vector that contains all six components \(r = (x_1, y_1, z_1, x_2, y_2, z_2)\). Similarly the six Cartesian displacement coordinates can be written as such a generalized vector \(q =
(q_1, q_2, q_3, q_4, q_5, q_6)\).
Figure \(\PageIndex{1}\): The Cartesian displacement coordinates for HCl. Note that the internuclear axis is the x-axis.
For a diatomic molecule it is easy to find the linear combinations of the Cartesian displacement coordinates that form the normal coordinates and describe the normal modes. Just take sums and
differences of the Cartesian displacement coordinates. Refer to Table \(\PageIndex{1}\) and Figure \(\PageIndex{1}\) for the definition of the q's. The combination q1 + q4 corresponds to translation
of the entire molecule in the x direction; call this normal coordinate Tx. Similarly we can define Ty = q2 + q5 and Tz = q3 + q6 as translations in the y and z directions, respectively. Now we have
three normal coordinates that account for three of the degrees of freedom, the three translations of the entire molecule.
What do we do about the remaining three degrees of freedom? Here let's use a simple rule for doing creative science: if one thing works, try something similar and examine the result. In this case, if
adding quantities works, try subtracting them. Examine the combination q2 - q5. This combination means that H is displaced in one direction and Cl is displaced in the opposite direction. Because of
the bond, the two atoms cannot move completely apart, so this small displacement of each atom from equilibrium is the beginning of a rotation about the z-axis. Call this normal coordinate Rz.
Similarly define Ry = q3 - q6 to be rotation about the y-axis. We now have found two rotational normal coordinates corresponding to two rotational degrees of freedom.
The remaining combination, q1 - q4, corresponds to the atoms moving toward each other along the x-axis. This motion is the beginning of a vibration, i.e. the oscillation of the atoms back and forth
along the x-axis about their equilibrium positions, and accounts for the remaining sixth degree of freedom. We use Q for the vibrational normal coordinate.
\[Q = q_1 - q_4 \label {6-2}\]
To summarize: a normal coordinate is a linear combination of atomic Cartesian displacement coordinates that describes the coupled motion of all the atoms that comprise a molecule. A normal mode is
the coupled motion of all the atoms described by a normal coordinate. While diatomic molecules have only one normal vibrational mode and hence one normal vibrational coordinate, polyatomic molecules
have many.
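The idea of a normal coordinate as a linear combination can be checked numerically. The short sketch below is an added illustration (not part of the original text); it forms the six combinations described above from a vector of the six Cartesian displacement coordinates in Table \(\PageIndex{1}\):

import numpy as np

# q = (q1, ..., q6): displacements of H (first three) and Cl (last three) from equilibrium
q = np.array([0.01, 0.0, 0.0, -0.01, 0.0, 0.0])  # example: the atoms move toward each other along x

combinations = {
    "Tx": [1, 0, 0, 1, 0, 0],   # q1 + q4, translation along x
    "Ty": [0, 1, 0, 0, 1, 0],   # q2 + q5, translation along y
    "Tz": [0, 0, 1, 0, 0, 1],   # q3 + q6, translation along z
    "Rz": [0, 1, 0, 0, -1, 0],  # q2 - q5, rotation about z
    "Ry": [0, 0, 1, 0, 0, -1],  # q3 - q6, rotation about y
    "Q":  [1, 0, 0, -1, 0, 0],  # q1 - q4, the vibration
}

for name, c in combinations.items():
    print(name, "=", np.dot(c, q))

For this particular displacement only Q is non-zero, which is consistent with the atoms moving toward each other along the bond.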
Exercise \(\PageIndex{2}\)
Draw and label six diagrams, each similar to Figure \(\PageIndex{1}\), to show the 3 translational, 2 rotational and 1 vibrational normal coordinates of a diatomic molecule.
Exercise \(\PageIndex{3}\)
Vibrational normal modes have several distinguishing characteristics. Examine the animations for the normal modes of benzene shown in Figure \(\PageIndex{3}\) to identify and make a list of these
characteristics. Use a molecular modeling program to calculate and visualize the normal modes of another molecule.
The list of distinguishing characteristics of normal modes that you compiled in Exercise \(\PageIndex{3}\) should include the following four properties. If not, reexamine the animations to confirm
that these characteristics are present.
1. In a particular vibrational normal mode, the atoms move about their equilibrium positions in a sinusoidal fashion with the same frequency.
2. Each atom reaches its position of maximum displacement at the same time, but the direction of the displacement may differ for different atoms.
3. Although the atoms are moving, the relationships among the relative positions of the different atoms do not change.
4. The center of mass of the molecule does not move.
For the example of HCl, see Table \(\PageIndex{1}\), the first property, stated mathematically, means
\[q_1 = A_1 \sin (\omega t) \quad \text {and} \quad q_4 = A_4 \sin (\omega t) \label {6-3}\]
The maximum displacements or amplitudes are given by A1 and A4, and the frequency of oscillation (in radians per second) is ω for both displacement coordinates involved in the normal vibrational mode
of HCl. Substitution of Equations (6-3) for the displacement coordinates into the expression determined above for the vibrational normal coordinate, Equation (6-2), yields
\[ Q = q_1 - q_4 = A_1 \sin (\omega t) - A_4 \sin (\omega t) \label {6-4}\]
This time-dependent expression describes the coupled motions of the hydrogen and chlorine atoms in a vibration of the HCl molecule. In general for a polyatomic molecule, the magnitude of each atom's
displacement in a vibrational normal mode may be different, and some can be zero. If an amplitude, the A, for some atom in some direction is zero, it means that atom does not move in that direction
in that normal mode. In different normal modes, the displacements of the atoms are different, and the frequencies of the motion generally are different. If two or more vibrational modes have the same
vibrational frequency, these modes are called degenerate.
You probably noticed in Exercise \(\PageIndex{3}\) that the atoms reached extreme points in their motion at the same time but that they were not all moving in the same direction at the same time. These characteristics are described by the second and third properties from the list above. For the case of HCl, the two atoms always move in exactly opposite directions during the vibration. Mathematically, the negative sign in the equation that we developed for the normal coordinate, Q (Equation \ref{6-2}), accounts for this relationship.
This timing with respect to the direction of motion is called the phasing of the atoms. In a normal mode, the atoms move with a constant phase relationship to each other. The phase relationship is
represented by a phase angle φ in the argument of the sine function that describes the time oscillation, sin(ωt + φ). The angle is called a phase angle because it shifts the sine function on the time
axis. We can illustrate this phase relationship for HCl. Use the trigonometric identity
\[- \sin {\theta} = \sin (\theta + 180^o) \label {6-5}\]
in Equation \ref{6-4} to obtain
\[Q = A_1 \sin (\omega t ) + A_4 \sin (\omega t + 180^o) \label {6-6}\]
to see that the phase angle for this case is \(180^o\).
The phase angle \(\varphi\) accounts for the fact that the H atom and the Cl atom reach their maximum displacements in the positive x-direction, +A1 and +A4, at different times. Generally in a normal mode the phase angle \(\varphi\) is \(0^o\) or \(180^o\). If \(\varphi = 0^o\) for both atoms, the atoms move together, and they are said to be in-phase. For the vibration of a diatomic molecule such as HCl, the phase angle for one atom is \(\varphi = 0^o\), and the phase angle for the other atom is \(\varphi = 180^o\). The atoms therefore move in opposite directions at any time, and the atoms are said to be \(180^o\) out-of-phase. When \(\varphi\) is \(180^o\), the two atoms reach the extreme points in their motion at the same time, but one is in the positive direction and the other is in the negative direction.
Phase relationships can be seen by watching a marching band. All the players are executing the same marching motion at the same frequency, but a few may be ahead or behind the rest. You might say,
"They are out-of-step." You also could say, "They are out-of-phase."
To illustrate the fourth property for HCl, recall that the center of mass for a diatomic molecule is defined as the point where the following equation is satisfied.
\[m_H d_H = m_{Cl} d_{Cl} \label {6-7}\]
The masses of the atoms are given by \(m_H\) and \(m_{Cl}\), and \(d_H\) and \(d_{Cl}\) are the distances of these atoms from the center of mass.
Exercise \(\PageIndex{4}\)
Find the distances, dH and dCl, of the H and Cl atoms from the center of mass in HCl given that the bond length is 0.13 nm. In general for a diatomic molecule, AB, what determines the ratio dA/dB,
and which atom moves the greater distance in the vibration?
In general, to satisfy the center of mass condition, a light atom is located further from the center of mass than a heavy atom. To keep the center of mass fixed during a vibration, the amplitude of
motion of an atom must depend inversely on its mass. In other words, a light atom is located further from the center of mass and moves a longer distance in a vibration than a heavy atom.
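A short numerical sketch (added here as an illustration; the atomic masses 1.0 u and 35.5 u are approximate assumed values) shows this inverse dependence on mass for HCl:

m_H, m_Cl = 1.0, 35.5   # approximate atomic masses in u (assumed)
bond_length = 0.13      # nm

# Center of mass condition: m_H * d_H = m_Cl * d_Cl, with d_H + d_Cl = bond_length
d_H = bond_length * m_Cl / (m_H + m_Cl)
d_Cl = bond_length * m_H / (m_H + m_Cl)
print(f"d_H = {d_H:.4f} nm, d_Cl = {d_Cl:.4f} nm")

# Keeping the center of mass fixed during the vibration requires m_H * A_1 = m_Cl * A_4
print("A_1 / A_4 =", m_Cl / m_H)

The light H atom sits much farther from the center of mass and moves roughly 35 times farther during the vibration than the heavy Cl atom.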
Exercise \(\PageIndex{5}\)
Find the ratio of A1 to A4 from Equation that keeps the HCl center of mass stationary during a vibration. Find values for A1 and A4 that satisfy the condition
\[A_1^2 + A_4^2 = 1.\]
Exercise \(\PageIndex{6}\)
For a vibrating HCl molecule, use the four properties of a normal vibrational mode, listed previously, to sketch a graph showing the position of the H atom (plot 1) and the position of the Cl atom
(plot 2) as a function of time. Both plots should be on the same scale. Hint: place x on the vertical axis and time on the horizontal axis.
In general, a molecule with 3N spatial degrees of freedom has 3 translational normal modes (along each of the three axes), 3 rotational normal modes (around each of the three axes), and 3N-6 (the
remaining number) different vibrational normal modes of motion. A linear molecule, as we have just seen, only has two rotational modes, so there are 3N-5 vibrational normal modes. Rotational motion
about the internuclear axis in a linear molecule is not one of the 3N spatial degrees of freedom derived from the translation of atoms in three-dimensional space. Rather, such motion corresponds to
other degrees of freedom, rotational motion of the electrons and a spinning motion of the nuclei. Indeed, the electronic wavefunction for a linear molecule is characterized by some angular momentum
(rotation) about this axis, and nuclei have a property called spin.
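The mode counting in the preceding paragraph can be checked with a few lines of code (added here as a supplement to the text; the molecule list is just a sample from the exercise below):

def normal_mode_counts(n_atoms, linear):
    """Return (translational, rotational, vibrational) normal-mode counts."""
    translations = 3
    rotations = 2 if linear else 3
    vibrations = 3 * n_atoms - translations - rotations
    return translations, rotations, vibrations

for name, n, linear in [("Cl2", 2, True), ("CO2", 3, True), ("H2O", 3, False), ("C6H6", 12, False)]:
    t, r, v = normal_mode_counts(n, linear)
    print(f"{name}: {t} translational, {r} rotational, {v} vibrational")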
Exercise \(\PageIndex{7}\)
Identify the number of translational, rotational, and vibrational normal modes for the following molecules: \(Cl_2, CO_2, H_2O, CH_4, C_2H_2, C_2H_4, C_2H_6, C_6H_6\). Using your intuition, draw
diagrams similar to the ones in Exercise \(\PageIndex{3}\) to show the normal modes of \(H_2O\) and \(C_2H_4\). It is difficult to identify the normal modes of triatomic and larger molecules by
intuition. A mathematical analysis is essential. It is easier to see the normal modes if you use a molecular modeling program like Spartan or Gaussian to generate and display the normal modes.
You probably found in trying to complete Exercise \(\PageIndex{7}\) that it is difficult to identify the normal modes and normal coordinates of triatomic and large molecules by intuition. A
mathematical analysis is essential. A general analysis based on the Lagrangian formulation of classical mechanics is described separately. | {"url":"https://chem.libretexts.org/Bookshelves/Physical_and_Theoretical_Chemistry_Textbook_Maps/Book%3A_Quantum_States_of_Atoms_and_Molecules_(Zielinksi_et_al)/06%3A_Vibrational_States/6.01%3A_Spatial_Degrees_of_Freedom_Normal_Coordinates_and_Normal_Modes","timestamp":"2024-11-07T00:06:27Z","content_type":"text/html","content_length":"143467","record_id":"<urn:uuid:51041e60-3c9a-4e10-b0d2-ce71a92ee88a>","cc-path":"CC-MAIN-2024-46/segments/1730477027942.54/warc/CC-MAIN-20241106230027-20241107020027-00582.warc.gz"} |
The Genetics of Space Amazons
post by
Jan Christian Refsgaard (jan-christian-refsgaard)
· 2021-12-30T22:14:14.201Z ·
12 comments
If you wanted to colonize other planets with meat suit humans, then females are the superior choice as they have lower nutritional needs. Here I imagine a fictional Amazon gene that could lead to a
skewed sex ratio favoring females.
• Human Females need less food than males
• Human Females silence a random X chromosome because otherwise every protein from the X chromosome would be twice as abundant in XX females as in XY males
• If Humans could be engineered to have a female biased sex ratio, then they would have an advantage when traveling to new worlds because:
□ they will need less food.
□ their space ships would require slightly less fuel as females tend to be lighter.
□ they could increase their population size faster as there are more gestating members of the species.
So this is a cool premise for an egalitarian feminist spacefaring society... or a harem anime :)
Imagine a novel X chromosome mutation that allows the X chromosome to silence the Y chromosome, such that these XY mutants have a female phenotype and reproduction strategy. To make this less confusing we denote the mutated X chromosome as A, the Amazon chromosome.
In this new society there are the following genotypes with a male phenotype: XY, and the following with a female phenotype: XX, AX and AY (AA would also be female, but cannot exist as the males have no A chromosome to contribute).
Try to pause and guess the equilibrium sex ratio... Is it 1/4 of each one because there are now 4 sexes, is it 1/3 XY and 2/3 female genotypes, or something completely different?
Amazon mating: The Next Generation
Let's try to "mate" these new sex genotypes and see what would be produced
XY and XX (the standard): XX has 100% chance of giving an X and XY has 50/50 for each, so the offspring will be 50% XY and 50% XX
XY + XX -> 50% XX and 50% XY
Let's try to mate XY with AX:
XY + AX -> 25% XX, 25% AX, 25% XY and 25% AY
Finally XY with AY, if we assume that egg cells carrying a Y are nonviable, then AY amazons will always donate their A giving:
XY + AY -> 50% AX + 50% AY
We can put this in a table where the columns are the mating pairs and the rows are offspring probabilities. (For mathematical reasons explained later we add a Male + Male column, which has 0% chance of producing offspring.)
Offspring   XY + XY   XX + XY   AX + XY   AY + XY
XY          0         0.5       0.25      0
XX          0         0.5       0.25      0
AX          0         0         0.25      0.5
AY          0         0         0.25      0.5
Looking at this table, do you now have a new guess for the equilibrium sex ratio?
Let's say the space ship starts with 2 XY males and 8 AX Amazons; let's calculate the expected number for each of the 4 genotypes after 1 generation (assuming each female mates twice). We can express the 10 astronauts as a vector where the items of the vector correspond to the indices in the table above, so like this: (2, 0, 8, 0).
If we also consider the first row of the table as a vector, then we can multiply the two vectors as follows:
This is called the vector dot product; if we "keep sliding" the vector down the table, then it is called a matrix dot product. Thus to calculate the next generation genotype vector we simply need to slide through the table (and multiply by the number of times the females mate, which is 2).
So after 1 generation there are now 16 people and a sex ratio of 1:3, and a 1:1:1:1 genotype distribution. One generation later we have:
Interesting! The same distribution, thus if the Amazons decide to get only kids per female they will have a stable population:
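For readers who want to reproduce the numbers above, here is a small sketch of the calculation as I read it from the post (the matrix and starting vector are taken from the table and the 2-males/8-AX starting crew; the code itself is mine, not the author's):

import numpy as np

# Rows and columns ordered XY, XX, AX, AY; column g means "a female of genotype g mated with an XY male"
offspring = np.array([
    [0.0, 0.5, 0.25, 0.0],   # XY offspring
    [0.0, 0.5, 0.25, 0.0],   # XX offspring
    [0.0, 0.0, 0.25, 0.5],   # AX offspring
    [0.0, 0.0, 0.25, 0.5],   # AY offspring
])

matings_per_female = 2
population = np.array([2, 0, 8, 0])   # 2 XY males, 8 AX amazons

for generation in range(1, 4):
    population = matings_per_female * offspring @ population
    males = population[0]
    print(f"gen {generation}: {population}, sex ratio {males:.0f}:{population.sum() - males:.0f}")
# gen 1 gives [4 4 4 4] (1:3 males to females), and the 1:1:1:1 split then repeats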
If you are a scifi author and want to write an Amazons in space story, then you can use this world building premise without needing to cite me :D
Optional: Other XY + AY offspring patterns
As mentioned above the rules for XY + AY are not clear. It all depends on how a female sex cell handles lacking an X chromosome. Is the X chromosome vital for the viability of the egg cell itself, or is it only vital for cell proliferation, which happens after merging with a sperm cell? Under all circumstances YY is not viable, as the X chromosome contains a lot of important genes.
Let's game out all possible combinations
If egg cells containing a Y chromosome are viable from AY females, then two other mating patterns are possible depending on what happens if a Y sperm tries to fuse with a Y egg cell.
1. Y eggs 'reject' Y sperm Then:
□ AY donates A -> 50% XY + 50% AX
□ AY donates Y -> 100% XY
□ Thus 75% XY + 25% AX
2. Y eggs 'accept' Y sperm, but the cell fails to divide
□ AY donates A -> 50% XY + 50% AX
□ AY donates Y -> 50% XY + 50% egg fails to divide
□ Thus 2/3 XY + 1/3 AX
12 comments
Comments sorted by top scores.
comment by ChristianKl · 2021-12-31T10:47:38.746Z · LW(p) · GW(p)
It seems to me to make more sense to just go for artificial embryo selection. This especially goes for a spaceship with 10 people which is an amount that's a lot lower than what's generally
considered to be necessary for a population to keep working.
comment by Jajk (jajk-winter) · 2021-12-31T20:06:36.563Z · LW(p) · GW(p)
This seems to be a very cool idea, but if (when?) we actually have the tech to do this, there would likely be better solutions to the issues this is supposed to help with. Or, our spacefaring
capabilities would be such that the marginal advantage of Space Amazons would be very small.
Still I do hope someone makes this into a cool sci fi story.
↑ comment by Jan Christian Refsgaard (jan-christian-refsgaard) · 2022-01-01T01:36:35.558Z · LW(p) · GW(p)
Totally agree, it's also Christian's critique of the idea :)... Maybe it could be relevant for aliens on a smaller planet as they could leave their planet more easily, and would thus be less advanced than us when we become space faring :)... Or a scifi where the different tech trees progress differently, like steam punk
comment by tailcalled · 2021-12-31T09:44:16.624Z · LW(p) · GW(p)
Wouldn't this prevent the A chromosome from having recombination with other A chromosomes? Which seems like a problem due to Muller's ratchet.
↑ comment by Jan Christian Refsgaard (jan-christian-refsgaard) · 2021-12-31T16:24:27.527Z · LW(p) · GW(p)
Good point. In principle the X chromosome already has this issue when you get it from your father. If the A chromosome is simply a normal X chromosome with an insertion of a set of proteins that blocks silencing, then you can still have recombination; if we assume the Amazon proteins are all located in the same LD region then mechanically everything is as in the post, but we do not have the Muller's ratchet problem.
Also the A only recombines with X as AY is female and therefore never mates with an AX or AY
comment by [deleted] · 2021-12-31T00:41:27.454Z · LW(p) · GW(p)
"Sci fi plot alert": what happens if due to random chance/genetic drift the "A" version of the X chromosome becomes more common and the last male dies? This would be more probable the smaller the
population is. And, I dunno, a space rock snipes the sperm storage freezer. (similar to what happened in Seveneves)
↑ comment by Jan Christian Refsgaard (jan-christian-refsgaard) · 2021-12-31T09:27:04.774Z · LW(p) · GW(p)
When the space ship lands there is a 1% chance that no males are among the first 16 births (0.75^16 ≈ 1%).
Luckily males are fertile for longer, so if the second generation had no men the first generation still works.
If the A had a mutation such that AX did not have a 50% chance of passing on an A, then the gender ratio would be even more extreme; if the last man dies an AY female could probably artificially inseminate a female.
You can update the matrix and do the dot product to see how those different rules pan out. If you have a specific ratio you want to try then I can calculate it for you; calculating a target gender ratio will require a mathematician, as this is a Markov process and their transition matrices are hard to calculate from a target steady state, if you are a mere mortal like me | {"url":"https://lw2.issarice.com/posts/zmXrKHZRQ3jsc9pER/the-genetics-of-space-amazons","timestamp":"2024-11-07T21:46:26Z","content_type":"text/html","content_length":"116979","record_id":"<urn:uuid:b23c2f17-4d9a-4ed1-9a3d-a873e687dbdc>","cc-path":"CC-MAIN-2024-46/segments/1730477028017.48/warc/CC-MAIN-20241107212632-20241108002632-00108.warc.gz"}
How long is a piece of string?
Present students with a piece of string (4–5 metres) and discuss how to estimate its length. Show students how a standard informal measure can be used as a benchmark to estimate the length of the string.
Watch the Armspan video which explains the process.
You can download the Armspan video transcript.
The body proportions of children and teenagers differ from those of adults. To compare, measure the arm span for each student in the class using the 1 metre benchmark method.
Use the data to draw a histogram to demonstrate the variation in the arm span measurement amongst the students.
Calculate the mean of the class data and compare it with the benchmark measurement for adults of 1 metre.
Use the average arm span length calculated from the class data to gain an estimate of the length of the string.
Measure the string using a tape measure and compare the estimate to the exact length of the string. | {"url":"https://topdrawer.aamt.edu.au/Statistics/Activities/How-long-is-a-piece-of-string","timestamp":"2024-11-01T20:27:52Z","content_type":"text/html","content_length":"81599","record_id":"<urn:uuid:6f8b0d11-67b4-40e4-afbc-9ff0c65c0c37>","cc-path":"CC-MAIN-2024-46/segments/1730477027552.27/warc/CC-MAIN-20241101184224-20241101214224-00163.warc.gz"} |
Game theory
Game theory is the study of calculated decision making. It is mainly used in economics, political science, and psychology. Modern game theory started with the idea of mixed-strategy equilibrium in two-person zero-sum games. Zero-sum means that one person's gains exactly equal the net losses of the other participants. The games defined in game theory are well-defined mathematical objects. A game contains a set of players, a set of moves or strategies, and a specification of payoffs for each combination of strategies. The extensive and normal forms are used to define non-cooperative games. The extensive form can be used to formalize games with a time sequencing of moves. Games in this form are represented as trees. Each vertex denotes a point of choice for a player. The player is specified by a number listed by the vertex. The lines out of the vertex indicate the possible actions of that player. At the bottom of the tree, the payoffs are specified.
The extensive form can be described as a multi-player generalization of a decision tree, and it can capture games with imperfect information. The normal form is otherwise known as the strategic form; it is generally represented by a matrix which shows the players, strategies and payoffs. When a game is represented in normal form, it is assumed that each player acts
without knowing the actions of the other. Game theory can be used to study a wide variety of human and animal behaviors. It was originally developed in economics to understand a large collection of
economic behaviors.
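As a concrete illustration of the normal form (an example added here for clarity, not taken from the original article — the payoff numbers are the textbook prisoner's dilemma), a two-player game can be stored as payoff matrices and its pure-strategy equilibria found by checking best responses:

# Normal-form game: rows are player 1's strategies, columns are player 2's.
# Payoffs are (player 1, player 2); the numbers are the usual prisoner's dilemma values.
strategies = ["cooperate", "defect"]
payoffs = {
    ("cooperate", "cooperate"): (-1, -1),
    ("cooperate", "defect"):    (-3,  0),
    ("defect",    "cooperate"): ( 0, -3),
    ("defect",    "defect"):    (-2, -2),
}

def is_nash(s1, s2):
    """A strategy pair is a pure Nash equilibrium if neither player gains by deviating alone."""
    p1, p2 = payoffs[(s1, s2)]
    best1 = all(payoffs[(alt, s2)][0] <= p1 for alt in strategies)
    best2 = all(payoffs[(s1, alt)][1] <= p2 for alt in strategies)
    return best1 and best2

equilibria = [(s1, s2) for s1 in strategies for s2 in strategies if is_nash(s1, s2)]
print(equilibria)   # [('defect', 'defect')]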
Game theory has now been applied to political, sociological and psychological behaviors. A game is cooperative if the players are able to form binding commitments; in non-cooperative games, such communication is not allowed. Hybrid games contain elements of both cooperative and non-cooperative games. Other ways of classifying games include symmetric and asymmetric games, simultaneous and sequential games, games of perfect and imperfect information, combinatorial games, infinitely long games, discrete games, continuous games, differential games, many-player games and metagames.
Such games are used extensively in computation. Game theory is applied in computer science and multi-agent systems, and in the design of algorithms as well. The advent of technology has played a significant part in these applications of game theory. | {"url":"http://www.madisonhorror.com/game-theory.html","timestamp":"2024-11-10T17:02:20Z","content_type":"application/xhtml+xml","content_length":"4237","record_id":"<urn:uuid:8b831676-4133-4536-b303-c7df26210d33>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.61/warc/CC-MAIN-20241110170046-20241110200046-00110.warc.gz"}
Jonathan Bostic
Professional Title
Associate Professor of Mathematics Education
About Me (Bio)
My scholarship agenda is guided by an embodied cognition perspective, framed by an overarching sociocultural metaperspective, and my own teaching and learning experiences. Students and teachers
should become active problem solvers who demonstrate mathematical proficiency. My primary area of scholarship is exploring validity issues and trends within the context of measurement in mathematics
education. Secondarily, I investigate ways to enhance instructional contexts to better support teaching and learning, especially learners’ mathematical proficiency. This agenda is pursued by
scholarship focused on mathematics tasks, learning environment, and teachers as they influence students’ outcomes (e.g., problem-solving performance, contextualization of problem solving, etc.).
Citations of DRK-12 or Related Work (DRK-12 work is denoted by *)
• Bostic, J., Krupa, E., & Shih, J. (2019). Assessment in mathematics education contexts: Theoretical frameworks and new directions. New York, NY: Routledge.*
• Bostic, J., Krupa, E., & Shih, J. (2019). Quantitative measures of mathematical knowledge: Researching instruments and perspectives. New York, NY: Routledge.*
• Bostic, J., Matney, G., Sondergeld, T., & Stone, G. (2020, April). Validation as design-based research: Implications for practice and theory. Paper presented at the annual meeting of the American Educational Research Association, San Francisco, CA.*
• Bostic, J., Matney, G., Sondergeld, T., & Stone, G. (2019, July). Developing a problem-solving measure for grade 4. In Editors (Eds.), Proceedings of the 43rd Meeting of the International Group
for the Psychology of Mathematics Education. Pretoria, South Africa.* | {"url":"https://cadrek12.org/users/jonathan-bostic","timestamp":"2024-11-06T08:29:45Z","content_type":"text/html","content_length":"25692","record_id":"<urn:uuid:300cf139-aa64-4640-be47-fc487d137f81>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00859.warc.gz"} |
The Stacks project
Proposition 75.29.3. Let $S$ be a scheme. Let $X$ be a quasi-compact and quasi-separated algebraic space over $S$. Let $G \in D_{perf}(\mathcal{O}_ X)$ be a perfect complex which generates $D_\mathit
{QCoh}(\mathcal{O}_ X)$. Let $E \in D_\mathit{QCoh}(\mathcal{O}_ X)$. The following are equivalent
1. $E \in D^-_\mathit{QCoh}(\mathcal{O}_ X)$,
2. $\mathop{\mathrm{Hom}}\nolimits _{D(\mathcal{O}_ X)}(G[-i], E) = 0$ for $i \gg 0$,
3. $\mathop{\mathrm{Ext}}\nolimits ^ i_ X(G, E) = 0$ for $i \gg 0$,
4. $R\mathop{\mathrm{Hom}}\nolimits _ X(G, E)$ is in $D^-(\mathbf{Z})$,
5. $H^ i(X, G^\vee \otimes _{\mathcal{O}_ X}^\mathbf {L} E) = 0$ for $i \gg 0$,
6. $R\Gamma (X, G^\vee \otimes _{\mathcal{O}_ X}^\mathbf {L} E)$ is in $D^-(\mathbf{Z})$,
7. for every perfect object $P$ of $D(\mathcal{O}_ X)$
1. the assertions (2), (3), (4) hold with $G$ replaced by $P$, and
2. $H^ i(X, P \otimes _{\mathcal{O}_ X}^\mathbf {L} E) = 0$ for $i \gg 0$,
3. $R\Gamma (X, P \otimes _{\mathcal{O}_ X}^\mathbf {L} E)$ is in $D^-(\mathbf{Z})$. | {"url":"https://stacks.math.columbia.edu/tag/0GFH","timestamp":"2024-11-03T16:24:03Z","content_type":"text/html","content_length":"20055","record_id":"<urn:uuid:89def4f8-d6c1-4e46-9824-734e4d18bbcc>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00296.warc.gz"}
Finding max and min
I have a problem I need help with.
I have a row of numbers on the top row. i.e. 15 20 25 40 50 80 100 150 200 250 300 400 450 500 600 750.
I would like to fill up the min and max column with numbers such that for the 3rd row, min will be 50 and max will be 100 since there are numbers at the column 50 and column 100 that covers the min
and max.
in the case of the 4th row, it would be 50 for min and 150 for max.
for 6th row, min and max would be 50. for the last row, it would be 50 for min and 100 for max.
the min and max values are based on the first row.
how do i write a macro or formula to accomplish this task?
your help will be greatly appreciated.
Sorry I don't quite understand your question.
"I have a row of numbers on the top row. i.e. 15 20 25 40 50 80 100 150 200 250 300 400 450 500 600 750.
I would like to fill up the min and max column with numbers such that for the 3rd row, min will be 50 and max will be 100 since there are numbers at the column 50 and column 100 that covers the min
and max."
Kindly rephrase the question. | {"url":"http://www.eluminary.org/en/QnA/Finding_max_and_min_(Excel)","timestamp":"2024-11-07T04:08:08Z","content_type":"text/html","content_length":"10023","record_id":"<urn:uuid:305e9b90-60e9-47b8-ab9c-3d047af4056a>","cc-path":"CC-MAIN-2024-46/segments/1730477027951.86/warc/CC-MAIN-20241107021136-20241107051136-00592.warc.gz"} |
Adding and Subtracting Polynomials
Adding polynomials requires us to first identify then combine like terms.
Remember that like terms are terms with the exact same variable part.
Add the polynomials
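The worked example that originally appeared here was an image and did not survive extraction; as a stand-in with made-up polynomials, the idea looks like this:

(3x^2 + 2x - 1) + (x^2 - 5x + 4)
= (3x^2 + x^2) + (2x - 5x) + (-1 + 4)
= 4x^2 - 3x + 3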
When subtracting polynomials you distribute the negative one first and then identify and combine like terms.
Subtract the polynomials
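Again the original example was an image; as a stand-in with made-up polynomials, distributing the negative one gives:

(3x^2 + 2x - 1) - (x^2 - 5x + 4)
= 3x^2 + 2x - 1 - x^2 + 5x - 4
= 2x^2 + 7x - 5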
We may encounter function notation.
In this case, just add or subtract the given polynomials as we did in the above examples.
For the given functions find (f + g)(x) and (f - g)(x)
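The original functions were given in an image; with the made-up f(x) = 3x^2 + 2x - 1 and g(x) = x^2 - 5x + 4 used above, the two results read:

(f + g)(x) = f(x) + g(x) = 4x^2 - 3x + 3
(f - g)(x) = f(x) - g(x) = 2x^2 + 7x - 5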
Video Examples on YouTube: | {"url":"https://www.openalgebra.com/2012/11/adding-and-subtracting-polynomials.html","timestamp":"2024-11-08T11:23:42Z","content_type":"application/xhtml+xml","content_length":"64973","record_id":"<urn:uuid:53af48e5-e8e2-465b-80dd-63da3cafcd0e>","cc-path":"CC-MAIN-2024-46/segments/1730477028059.90/warc/CC-MAIN-20241108101914-20241108131914-00163.warc.gz"} |
Development Platform
Using pointer will cause the board freeze
Created Sun, 27 May 2012 05:11:33 +0000 by legendary_kx
Sun, 27 May 2012 05:11:33 +0000
I just bought the chipkit uno32 and I want to use it for rosserial. I am able to compile the ros_lib libraries and upload the code onto the uno32. When I start the rosserial communication, the board will just hang. I discovered that using pointers on the uno32 will cause the board to freeze. Rosserial uses a lot of pointers in its libraries. I am using mpide-023. Below is an example of code using a pointer.
After you compile and upload the code, LED 13 should blink. But LED 13 does not blink and the board just freezes. Is it a pic32 compiler problem?
boolean change = true;

void setup() {
  pinMode(13, OUTPUT);

  int * l;   // pointer is declared but never initialized
  *l = 10;   // writes 10 to whatever address l happens to hold -- this is the write that hangs the board
}

void loop() {
  digitalWrite(13, change ? HIGH : LOW);
  change = !change;
}
Sun, 27 May 2012 12:45:13 +0000
Nope it is a problem with your program...
That is with your code:
int * l;
*l = 10;
You say to stuff 10 into the location where l is pointing. But l is not initialized... So it is stuffing it in some random location like 0...
Should be more like:
int ll;
int * l;
l = &ll;
*l = 10;
Tue, 29 May 2012 02:24:49 +0000
Thx Kurt. I finally get the rosserial running in my chipkit uno32.
Sat, 27 Oct 2012 05:28:43 +0000
how did you get it to work? I am able to complie rosserial, but when I upload to the arduino (I am actually using Max32) the Hello World example, and try to look at the connection i lose
connection...any luck with that? | {"url":"https://chipkit.org/archive/782","timestamp":"2024-11-05T06:17:34Z","content_type":"text/html","content_length":"16023","record_id":"<urn:uuid:75e51580-9bc0-4e00-9b74-e67b63d3f5f6>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00137.warc.gz"} |
The present disclosure provides a method for predicting a water invasion frontier for a complex well in a bottom-water gas reservoir. The technical solution is as follows: with consideration of forms
of different well types and a gravitational pressure drop in production, a coupled permeability of a corresponding well type acting on a gas-water interface under the action of a baffle plate is
expressed by using an equivalent percolation resistance method; a productivity of each infinitesimal section of the corresponding well type is calculated; a position of the gas-water interface is
calculated with an equivalent permeability and a pseudo-productivity as characteristic parameters; water breakthrough is determined when there is a point over a water avoidance height, and a change
in water invasion frontier form may be expressed with water quality point position data at this time.
This patent application claims the benefit and priority of Chinese Patent Application No. 202310310739.1, filed with the China National Intellectual Property Administration on Mar. 28, 2023, the
disclosure of which is incorporated by reference herein in its entirety as part of the present application.
The present disclosure belongs to the field of gas reservoir development, and in particular, to a method for predicting a water invasion frontier for a complex well in a bottom-water gas reservoir.
Bottom-water gas reservoirs are often connected to large-scale water bodies, and thus have the following characteristics during development: (1) there is rich reservoir energy that can continuously
replenish energy for gas reservoir development to guarantee stable high production of gas reservoirs at early and middle stages; (2) as natural gas is continuously produced from a reservoir, a
pressure drop funnel is formed in an immediate vicinity of a wellbore in the reservoir, and coning, cresting, and the like of bottom water will occur, with reservoir seepage gradually changing from a
single-phase flow into a gas-liquid two-phase flow; and (3) after breakthrough of water, the productivity of a gas well declines rapidly and a water-gas ratio rises sharply; and if no effective
measure is adopted, the production of the gas well may be stopped due to water flooding, etc.
In terms of water invasion frontier research, predecessors have achieved many research results. However, these results mainly focus on obtaining of water breakthrough time and critical parameters.
There are few research methods on the coning (cresting) process of bottom water, and further in-depth studies are required. In terms of well types, predecessors' results mainly focus on a straight
well and a horizontal well and present almost no water invasion frontier characterization with regard to an inclined well and an undulating well. In practical production, ideal horizontal well and
straight well barely exist. Therefore, it is necessary to study complex wells. On the basis of morphological analysis on a water invasion frontier for a complex well, considering the function of a
baffle plate is of certain guiding significance for existing development.
An objective of the present disclosure is to predict a water invasion frontier for a complex well in a bottom-water gas reservoir by using an equivalent percolation resistance method and correcting a
coupled permeability in case of heterogeneity in order to solve the existing problems of difficult water invasion frontier prediction for a complex well, indefinite water invasion frontier form
characterization, lack of heterogeneous water breakthrough prediction with a baffle plate, and the like.
To achieve the above objective, the present disclosure provides a method for predicting a water invasion frontier for a complex well in a bottom-water gas reservoir, including the following steps:
□ preparing static parameters of a reservoir, fluid characteristic parameters, production characteristic parameters, and baffle plate characteristic parameters, where the static parameters of
the reservoir include a porosity, a reservoir permeability, a formation temperature, a constant interfacial pressure at a gas-water boundary, a reservoir thickness, an initial gas saturation,
and an irreducible water saturation; the fluid characteristic parameters include a natural gas relative density, a natural gas viscosity, and a water quality point position; the production
characteristic parameters include a well length, a well diameter, a bottom-hole radius, a frictional resistance coefficient of an inner well wall, a water avoidance height, a flowing
bottom-hole pressure, and an inclination angle; and the baffle plate characteristic parameters include a length, a width, and a thickness of a baffle plate, a baffle plate position, a baffle
plate permeability, and a baffle plate form;
□ inputting the static parameters of a reservoir, the fluid characteristic parameters, the production characteristic parameters, and the baffle plate characteristic parameters into a predicting
model comprising a processor and a memory storing program codes, wherein the processor performs the stored program codes for:
□ S100: based on different well types, the static parameters, and the production characteristic parameters, calculating a potential φ generated by each infinitesimal section in an infinite
formation, assigning an initial pressure P[wfi]^0 in units of MPa to all infinitesimal sections, calculating ideal productivity q[i]^0 in units of m^3/d of each infinitesimal section, and
calculating a pressure drop generated when natural gas flows through each infinitesimal section using a multi-phase flow model based on the production characteristic parameters and the
calculated ideal productivity;
□ S200: calculating an equivalent permeability K[ij ]and a coupled permeability K[oi ]based on a relative position of an infinitesimal section and a water quality point, specifically including
the following steps:
□ S2001: obtaining the equivalent permeability by an equation
$K_{ij} = \dfrac{L}{\frac{L_1}{K_1} + \frac{L_2}{K_2} + \dots + \frac{L_n}{K_n}}$
□ using an equivalent percolation resistance method based on a permeability distribution combined with a positional relationship between each infinitesimal section and a water quality point,
where L represents a distance from the water quality point to the infinitesimal section, in units of m; L[1], L[2], . . . , and L[n ]represent lengths of corresponding permeability regions in
a connecting line, respectively, in units of m; K[ij ]represents the equivalent permeability in a path from the infinitesimal section to the water quality point, in units of mD; and K[1], K
[2], . . . , and K[n ]represent permeabilities of regions, in units of mD; and
□ S2002: calculating the coupled permeability using a weighting method by a process of dividing a distance of a corresponding infinitesimal section from any water quality point by a sum of
distances of the infinitesimal section from all water quality points, which is then multiplied by the equivalent permeability between the corresponding infinitesimal section and any water
quality point, and accumulating obtained values to obtain the coupled permeability K[oi], in units of mD;
□ S300: performing pressure drop iteration; performing calculation on pressure drop data calculated in S100 and the initial pressure P[wfi]^0 by an equation
$\sum_{i=1}^{N} \frac{\mu}{4 \pi K_{Oi}} q_i \varphi_{ij} = p_e - p_j + \rho g (z_i - z_j)$
□ to obtain a new corrected infinitesimal section pressure P[wfi]^1, where μ represents the natural gas viscosity, in units of mPa·s; K[oi ]represents the coupled permeability of each
infinitesimal section, in units of mD; N represents a number of infinitesimal sections; q[i ]represents a productivity of an infinitesimal section; φ[ij ]represents the potential generated by
an ith infinitesimal section in a jth infinitesimal section; P[e ]represents a constant boundary pressure; P[j ]represents a bottom-hole pressure, in units of MPa; ρ represents a natural gas
density, in units of g/cm^3; g represents the gravitational acceleration, in units of cm/s^2; z[i ]and z[j ]represent longitudinal positions of an infinitesimal section and a water quality
point, respectively, in units of m; then calculating the corrected productivity q[i]^1, repeating step S100, continuously correcting productivity data, and stopping iteration until a
difference between a pressure value and a last calculated pressure value is less than one ten-thousandth, the productivity data obtained by calculation at this time being a coupled
productivity q[i ]of each infinitesimal section;
□ S400: solving a water breakthrough prediction model for different well types with consideration of the baffle plate, and calculating a real influencing effect of an infinitesimal section of a
well on a water quality point in combination with the coupled productivity of the infinitesimal section;
□ S4001: obtaining the equivalent permeability using the method in step S2001 based on the permeability distribution combined with the positional relationship between each infinitesimal section
and a water quality point, where the equivalent permeability of a mirror reflection point to a corresponding water quality point is also K[ij]^t; and
□ S4002: obtaining a real productivity of an infinitesimal section under the influence of a water quality point at a corresponding time, defined as a pseudo-productivity qni[ij]^t, based on the
equivalent permeability data K[11]^t, K[12]^t, . . . , and K[ij]^t calculated in step S4001, where the pseudo-productivity is numerically equal to a ratio of an equivalent permeability to a
coupled permeability multiplied by the productivity of the corresponding infinitesimal section, in units of m^3/d;
□ S500: obtaining a displacement of each infinitesimal section with parameters in S400 and S4002, specifically including the following steps:
□ S5001: obtaining a position of a mirror reflection point of the infinitesimal section using a mirror reflection principle, and calculating distances of a water quality point from the
infinitesimal section and a reflected infinitesimal section; and
□ S5002: calculating water quality point displacements dz[ij], dy[ij], and dx[ij ]in unit times from the pseudo-productivity qni[ij]^t calculated in S400 combined with basic parameters of
natural gas; calculating a displacement of a water quality point caused by each infinitesimal section at each time by the following equation:
$dz = v_z \, dt = \sum_{1}^{M} \sum_{-\infty}^{+\infty} \sum_{j=1}^{N} \frac{qni_{ij} (z_i - z_j)}{4 \pi \phi (1 - S_{wi} - S_{gr}) [(z_i - z_j) + r_{ij}^2]^{1.5}} \, dt$
□ where dz represents a longitudinal displacement of the water quality point in unit time, in units of m; dt represents the unit time, in units of d; qni[ij ]represents the pseudo-productivity
of the ith infinitesimal section for the jth water quality point, in units of m^3/d; z[i ]and z[j ]represent longitudinal positions of the infinitesimal section and the water quality point,
respectively, in units of m; ϕ represents the porosity, dimensionless; S[wi ]and S[gr ]represent the irreducible water saturation and an irreducible gas saturation, respectively,
dimensionless; r[ij ]represents a distance of the water quality point from an x-y plane of the infinitesimal section, in units of m; obtaining displacements in x and y directions,
respectively, with changes of a trigonometric function, and after accumulation, updating a real-time position P[j](X[t], Y[t], Z[t]) of the water quality point; and
□ S600: repeating steps S400 and S500 by comparing with a position of a wellbore and increasing the unit time in combination with position data in S5002 until a longitudinal height zit of a
water quality point in a wellbore region is greater than the water avoidance height Z[w], and stopping calculation, current time being water breakthrough time;
□ outputting, from the predicting model, the water breakthrough time and position data of the water quality point;
□ plotting a water breakthrough form of an undulating well with a baffle plate by using water quality point data on the day before water breakthrough according to the water breakthrough time
obtained by the repetition in S600, and drawing a schematic diagram of a water invasion frontier rising process under circumstance of homogeneity of the different well types, a semi-enclosed
baffle plate, and a semi-permeable baffle plate based on water quality point position data from the first day to the day before water breakthrough; and
□ obtaining, from the schematic diagram of the water invasion frontier rising process, a location of water coning, according to which a gas injector is arranged to inject gas for pressure maintenance.
In the method for predicting a water invasion frontier for a complex well in a bottom-water gas reservoir described above, the different well types in S100 have the following differences:
□ a horizontal well has an inclination angle of 0 and a gravitational pressure drop of 0; an inclined well has a particular inclination angle and a gravitational pressure drop; a straight well
is vertical and has a relatively maximum gravitational pressure drop; and an undulating well has a particular inclination angle, varying longitudinal coordinates of well sections, a wavy
form, and increased energy in a descending section and an increased pressure drop in a ascending section under the action of gravity.
In the accompanying drawings:
FIG. 1 is a diagram illustrating a technical route of a method according to the present disclosure;
FIG. 2 is a prediction curve chart of a water invasion frontier of a horizontal well with a semi-permeable baffle plate;
FIG. 3 is a prediction curve chart of a water invasion frontier of a horizontal well with a semi-enclosed baffle plate; and
FIG. 4 is a prediction curve chart of a water invasion frontier of a horizontal well in a certain bottom-water gas reservoir.
The present disclosure is further described below with reference to embodiments and the accompanying drawings.
The present disclosure provides a method for predicting a water invasion frontier for a complex well in a bottom-water gas reservoir. FIG. 1 is a diagram illustrating a technical route of the method.
The method includes the following steps:
□ a first step: prepare static parameters of a reservoir, fluid characteristic parameters, production characteristic parameters, and baffle plate characteristic parameters, where the static
parameters of the reservoir include a porosity, a reservoir permeability, a formation temperature, a constant interfacial pressure at a gas-water boundary, a reservoir thickness, an initial
gas saturation, and an irreducible water saturation; the fluid characteristic parameters include a natural gas relative density, a natural gas viscosity, and a water quality point position;
the production characteristic parameters include a well length, a well diameter, a bottom-hole radius, a frictional resistance coefficient of an inner well wall, a water avoidance height, a
flowing bottom-hole pressure, and an inclination angle; and the baffle plate characteristic parameters include a length, a width, and a thickness of a baffle plate, a baffle plate position, a
baffle plate permeability, and a baffle plate form;
□ a second step: based on different well types, the static parameters, and the production characteristic parameters, calculate a potential generated by each infinitesimal section in an infinite
formation, assign an initial pressure P[wfi]^0 in units of MPa to all infinitesimal sections, calculate ideal productivity q[i]^0 in units of m^3/d of each infinitesimal section, and
calculate a pressure drop generated when natural gas flows through each infinitesimal section using a multi-phase flow model based on the production characteristic parameters and the
calculated ideal productivity; calculate a coupled permeability using a weighting method by a process of dividing a distance of a corresponding infinitesimal section from any water quality
point by a sum of distances of the infinitesimal section from all water quality points, which is then multiplied by an equivalent permeability between the corresponding infinitesimal section
and any water quality point, and accumulate obtained values to obtain the coupled permeability. The infinitesimal section is obtained by dividing the gas well into multiple small sections.
□ a third step: perform pressure drop iteration; perform calculation on pressure drop data calculated in the second step and the initial pressure P[wfi]^0 by an equation
$\sum_{i=1}^{N} \frac{\mu}{4 \pi K_{Oi}} q_i \varphi_{ij} = p_e - p_j + \rho g (z_i - z_j)$
□ to obtain a new corrected infinitesimal section pressure P[wfi]^1; then calculate the corrected productivity q[i]^1, repeat the second step, continuously correct productivity data, and stop
iteration until a difference between a pressure value and a last calculated pressure value is less than one ten-thousandth, the productivity data obtained by calculation at this time being a
coupled productivity q[i ]of each infinitesimal section;
□ a fourth step: obtain the equivalent permeability K[ij]^t by an equation
$K_{ij} = \dfrac{L}{\frac{L_1}{K_1} + \frac{L_2}{K_2} + \dots + \frac{L_n}{K_n}}$
□ using an equivalent percolation resistance method based on a permeability distribution combined with a positional relationship between each infinitesimal section and a water quality point at
this time, where the equivalent permeability of a mirror reflection point to a corresponding water quality point is also K[ij]^t; and obtain a real productivity of an infinitesimal section
under the influence of a water quality point at a corresponding time, defined as a pseudo-productivity qni[ij]^t, where the pseudo-productivity is numerically equal to a ratio of an
equivalent permeability to a coupled permeability multiplied by the productivity of the corresponding infinitesimal section; and
□ a fifth step: calculate a displacement of a water quality point caused by each infinitesimal section at each time by the following equation:
$dz = v_z \, dt = \sum_{1}^{M} \sum_{-\infty}^{+\infty} \sum_{j=1}^{N} \frac{qni_{ij} (z_i - z_j)}{4 \pi \phi (1 - S_{wi} - S_{gr}) [(z_i - z_j) + r_{ij}^2]^{1.5}} \, dt$;
□ obtain displacements in x and y directions, respectively, with changes of a trigonometric function, and after accumulation, update a real-time position of the water quality point; repeat the
fourth step by comparing with a position of a wellbore and increasing the unit time until a longitudinal height zit of a water quality point in a wellbore region is greater than the water
avoidance height Z[w], and stop calculation. A water breakthrough form of a corresponding gas well can be plotted with the water quality point position data at this time.
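To make the two permeability definitions concrete, the fragment below is a simplified illustration written for this description, not the patented implementation: the geometry, units and numbers are invented, and the water quality points and layer boundaries are assumed to be known. As I read steps S2001–S2002, the equivalent permeability along a path is the harmonic (series) average of the layer permeabilities weighted by the path length in each layer, and the coupled permeability of an infinitesimal section is the distance-weighted combination of its equivalent permeabilities to all water quality points.

import numpy as np

def equivalent_permeability(path_lengths, layer_perms):
    """Series (harmonic) average: K_ij = L / sum(L_k / K_k)."""
    path_lengths = np.asarray(path_lengths, dtype=float)
    layer_perms = np.asarray(layer_perms, dtype=float)
    return path_lengths.sum() / np.sum(path_lengths / layer_perms)

def coupled_permeability(distances, equivalent_perms):
    """Distance-weighted combination of the equivalent permeabilities
    from one infinitesimal section to every water quality point."""
    distances = np.asarray(distances, dtype=float)
    equivalent_perms = np.asarray(equivalent_perms, dtype=float)
    weights = distances / distances.sum()
    return np.sum(weights * equivalent_perms)

# Invented numbers, only to show the call pattern (lengths in m, permeabilities in mD):
k_eq = equivalent_permeability([12.0, 3.0, 20.0], [0.8, 0.05, 0.6])
print(f"equivalent permeability along one path: {k_eq:.3f} mD")

k_oi = coupled_permeability([15.0, 22.0, 40.0], [0.31, 0.45, 0.52])
print(f"coupled permeability of the section:   {k_oi:.3f} mD")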
Research is conducted on a method for predicting a water invasion frontier for a complex well in a bottom-water gas reservoir in a gas-water distribution mode with a baffle plate based on a reservoir
seepage mirror reflection principle, a complex potential theory, and a velocity stack principle. A method for predicting a water invasion frontier of a straight well and a horizontal well in a
bottom-water gas reservoir in a gas-water distribution mode with a baffle plate is established, and water invasion frontier forms for a semi-permeable baffle plate (FIG. 2) and a semi-enclosed baffle
plate (FIG. 3) are written, calculated, and plotted using the C# language.
Taking YB102-2H well for example, YB102-2H was put into production on Dec. 16, 2015, and water broke through on Apr. 1, 2020. The well is a horizontal well, and the production section thereof is 140
m long. Liner-type well completion is adopted. The water avoidance height is 35 m. The inner and outer diameters of the wellbore are 108.62 mm and 127 mm, respectively. The production allocation is
330 thousand m^3/day. The initial formation pressure is 69.5 MPa. The reservoir temperature is 150° C. The reservoir permeability is 0.64 mD. The reservoir thickness is 45 m. The reservoir porosity
is 0.049. The irreducible water saturation is 0.103. The method for predicting a water invasion frontier with a baffle plate is adopted. The water breakthrough time is calculated to be 1589 days, and
the actual water breakthrough time is 1568 days, with a small difference therebetween, which is within an error range. FIG. 4 is a diagram illustrating the water invasion frontier form of the
horizontal well.
The prediction of water breakthrough time may provide a basis for ground surface water control plans, such as optimizing the configurations of compressor and separator, increasing the processing
capacity of water treatment systems, to ensure stable production of gas wells. The water breakthrough in a gas well is generally caused by the formation water coning. Compared to the water bodies on
both sides, the water coning is the main reason for water breakthrough in a gas well. The schematic diagram of a water invasion frontier rising process may provide the location of the water coning.
After the water breakthrough in the gas well, gas injector can be deployed on the ground according to the location of the water coning, thereby achieving the effect of maintaining a stable pressure.
The gas injection position is the water coning point. The gas, such as nitrogen or carbon dioxide, may be injected into the gas reservoir by the gas injector. On one hand, it blocks the advancement
of formation water, and on the other hand, due to the injection of gas, it can maintain pressure stability and improve the recovery factor of gas reservoirs.
Compared with existing methods, the present disclosure has the following beneficial effects: (1) a water invasion frontier form is predicted according to mirror reflection and superposition
principles, and a predicted water breakthrough time can be given; (2) calculation parameters can be changed according to different well types to adapt to a plurality of complex wells in production;
and (3) the water invasion frontier position data at each time is obtained by programming, which is time-saving and labor-saving.
Finally, it should be noted that the above embodiments are merely intended to describe rather than limit the technical solutions of the present disclosure. Although the present disclosure is
described in detail with reference to the preferred embodiments, those skilled in the art should understand that modifications or equivalent substitutions may be made to the technical solutions of the
present disclosure without departing from the spirit and scope of the technical solutions of the present disclosure, and such modifications or equivalent substitutions should be included within the
scope of the claims of the present disclosure.
1. A method for predicting a water invasion frontier for a complex well in a bottom-water gas reservoir, comprising the following steps:
$K_{ij} = \dfrac{L}{\frac{L_1}{K_1} + \frac{L_2}{K_2} + \dots + \frac{L_n}{K_n}}$
$\sum_{i=1}^{N} \frac{\mu}{4 \pi K_{Oi}} q_i \varphi_{ij} = p_e - p_j + \rho g (z_i - z_j)$
$dz = v_z \, dt = \sum_{1}^{M} \sum_{-\infty}^{+\infty} \sum_{j=1}^{N} \frac{qni_{ij} (z_i - z_j)}{4 \pi \phi (1 - S_{wi} - S_{gr}) [(z_i - z_j) + r_{ij}^2]^{1.5}} \, dt$
S100: preparing static parameters of a reservoir, fluid characteristic parameters, production characteristic parameters, and baffle plate characteristic parameters, wherein the static parameters
of the reservoir comprise a porosity, a reservoir permeability, a formation temperature, a constant interfacial pressure at a gas-water boundary, a reservoir thickness, an initial gas saturation,
and an irreducible water saturation; the fluid characteristic parameters comprise a natural gas relative density, a natural gas viscosity, and a water quality point position; the production
characteristic parameters comprise a well length, a well diameter, a bottom-hole radius, a frictional resistance coefficient of an inner well wall, a water avoidance height, a flowing bottom-hole
pressure, and an inclination angle; and the baffle plate characteristic parameters comprise a length, a width, and a thickness of a baffle plate, a baffle plate position, a baffle plate
permeability, and a baffle plate form;
S200: based on different well types, the static parameters, and the production characteristic parameters, calculating a potential φ generated by each infinitesimal section in an infinite
formation, assigning an initial pressure Pwfi0 in units of MPa to all infinitesimal sections, calculating ideal productivity qi0 in units of m3/d of each infinitesimal section, and calculating a
pressure drop generated when natural gas flows through each infinitesimal section using a multi-phase flow model based on the production characteristic parameters and the calculated ideal
S300: calculating an equivalent permeability Kij and a coupled permeability Koi based on a relative position of an infinitesimal section and a water quality point, specifically comprising the
following steps:
S3001: obtaining the equivalent permeability by an equation
using an equivalent percolation resistance method based on a permeability distribution combined with a positional relationship between each infinitesimal section and a water quality point;
S3002: calculating the coupled permeability using a weighting method by a process of dividing a distance of a corresponding infinitesimal section from any water quality point by a sum of
distances of the infinitesimal section from all water quality points, which is then multiplied by the equivalent permeability between the corresponding infinitesimal section and any water quality
point, and accumulating obtained values to obtain the coupled permeability Koi, in units of mD;
S400: performing pressure drop iteration; performing calculation on pressure drop data calculated in S200 and the initial pressure Pwfi0 by an equation
to obtain a new corrected infinitesimal section pressure Pwfi1; then calculating the corrected productivity qi1, repeating step S200, continuously correcting productivity data, and stopping
iteration until a difference between a pressure value and a last calculated pressure value is less than one ten-thousandth, the productivity data obtained by calculation at this time being a
coupled productivity qi of each infinitesimal section;
S500: solving a water breakthrough prediction model for different well types with consideration of the baffle plate, and calculating a real influencing effect of an infinitesimal section of a
well on a water quality point in combination with the coupled productivity of the infinitesimal section;
S5001: obtaining the equivalent permeability Kijt at a current time using the method in step S3001 based on the permeability distribution combined with the positional relationship between each
infinitesimal section and a water quality point at the current time, wherein the equivalent permeability of a mirror reflection point to a corresponding water quality point is also Kijt; and
S5002: obtaining a real productivity of an infinitesimal section under the influence of a water quality point at a corresponding time, defined as a pseudo-productivity qniijt, based on the
equivalent permeability data K11t, K12t,..., and Kijt calculated in step S5001, wherein the pseudo-productivity is numerically equal to a ratio of an equivalent permeability to a coupled
permeability multiplied by the productivity of the corresponding infinitesimal section, in units of m3/d;
S600: obtaining a displacement of each infinitesimal section with parameters in S500 and S5002, specifically comprising the following steps:
S6001: obtaining a position of a mirror reflection point of the infinitesimal section using a mirror reflection principle, and calculating distances of a water quality point from the
infinitesimal section and a reflected infinitesimal section; and
S6002: calculating water quality point displacements dzij, dyij, and dxij in unit times from the pseudo-productivity qniijt calculated in S500 combined with basic parameters of natural gas;
calculating a displacement of a water quality point caused by each infinitesimal section at each time by the following equation:
obtaining displacements in x and y directions with changes of a trigonometric function, and after accumulation, updating a real-time position Pj(Xt, Yt, Zt) of the water quality point;
S700: repeating steps S500 and S600 by comparing with a position of a wellbore and increasing the unit time in combination with position data in S6002 until a longitudinal height zjt of a water
quality point in a wellbore region is greater than the water avoidance height Zw, and stopping calculation, current time being water breakthrough time; and
S800: plotting a water breakthrough form of an undulating well with a baffle plate by using water quality point data on the day before water breakthrough according to the water breakthrough time
obtained by the repetition in S700, and drawing a schematic diagram of a water invasion frontier rising process under circumstance of homogeneity of the different well types, a semi-enclosed
baffle plate, and a semi-permeable baffle plate based on data processed in the S800.
2. The method for predicting a water invasion frontier for a complex well in a bottom-water gas reservoir according to claim 1, wherein the different well types in S200 have the following
a horizontal well has an inclination angle of 0 and a gravitational pressure drop of 0; an inclined well has a particular inclination angle and a gravitational pressure drop; a straight well is
vertical and has a relatively maximum gravitational pressure drop; and an undulating well has a particular inclination angle, varying longitudinal coordinates of well sections, a wavy form, and
increased energy in a descending section and an increased pressure drop in an ascending section under the action of gravity.
Patent History
Publication number: 20240328301
Filed: Mar 28, 2024
Publication Date: Oct 3, 2024
Inventors: Xiaohua TAN (Chengdu City), Taixiong QING (Chengdu City), Xiaoping LI (Chengdu City), Lei DING (Chengdu City), Xian PENG (Chengdu City), Longxin LI (Chengdu City)
Application Number: 18/620,276
International Classification: E21B 47/005 (20060101); E21B 47/07 (20060101); E21B 47/08 (20060101); E21B 47/117 (20060101); | {"url":"https://patents.justia.com/patent/20240328301","timestamp":"2024-11-05T11:14:13Z","content_type":"text/html","content_length":"93989","record_id":"<urn:uuid:705db1be-45fd-40c1-9f62-b397ef7fa2cd>","cc-path":"CC-MAIN-2024-46/segments/1730477027878.78/warc/CC-MAIN-20241105083140-20241105113140-00089.warc.gz"} |
How A Scottish Naval Engineer And His Horse Discovered Solitons • theGIST
How A Scottish Naval Engineer And His Horse Discovered Solitons
Daniel Giovannini looks at the incredible contribution that one man and his horse (and the Union Canal) made to modern science.
Although no reliable, easily accessible records are available on this topic, it is probably safe to say that not many mathematical discoveries have ever been made on horseback. It may or may not have
been sunny in August 1834. The weather conditions for that fortuitous day are unknown but, for the sake of realism and to set the scene, let us just assume that it was overcast, with a light drizzle.
The story takes place in the Scottish countryside, on the Union Canal at Hermiston, near Edinburgh. At the time naval engineer John Scott Russell, born and educated in Glasgow, was working on the
design of the keels of canal boats. In Hermiston, Russell was riding his horse, following and observing a boat being rapidly drawn along the canal by a pair of horses. As he later recounted in a
nicely penned report for the British Association for the Advancement of Science [1], what he noticed when the boat suddenly stopped was a most unusual wave that detached from the prow.
The swell quickly moving away from the boat and J. Scott Russell’s insight turned out to be surprisingly resilient. It also had a rather unexpected, though tardy, impact on mathematics and applied
mathematics. Such a wave would later be dubbed a soliton and, in the following decades and throughout the twentieth century, would play a central role in the theory of nonlinear differential
equations, hydrodynamics, nonlinear optics and communications engineering.
What Russell saw was a wave rolling forward “with great velocity, assuming the form of a large solitary elevation, a rounded, smooth and well-defined heap of water”. This beautiful but subtly odd
phenomenon was enough for him to spur his horse and go on the pursuit. The wave kept travelling along the canal, at about 14 km/h when Russell overtook it, apparently undisturbed and preserving its
form (a bump of about 40 centimetres in height, extending for some 9 metres). At least two of the properties that would later be recognised as defining characteristics of solitons must have been at
once apparent to Russell: the shape of the wave remained stable and, although propagating forward, localised at each instant within a certain region without the dispersion that we would usually
associate with an ordinary wave, which instead would eventually flatten out or topple over.
The chase lasted for a mile or two, after which Russell lost sight of the persistent wave in the windings of the canal. Following prolonged investigations performed using a tank built in his back
garden for this very purpose, Russell concluded that the strange behaviour of what he called the wave of translation was due to the relative shallowness and narrowness of the canal. The stable waves
produced in such a body of water, at odds with the principles of hydrodynamics known in the mid-19th century, also showed bizarre particle-like behaviours: a wave of translation too big could split
into two, and two waves propagating at different velocities wouldn’t merge, but rather overtake each other and carry on undisturbed.
The first full theoretical treatment of Russell's wave of translation, also known as the solitary wave or soliton, was only published in the 1870s by Joseph Boussinesq and Lord Rayleigh. The first
mathematical model of waves on shallow water surfaces was published much later, in 1895, by Diederik Korteweg and Gustav de Vries. What is now known as the Korteweg-de Vries equation is the
prototypical textbook nonlinear partial differential equation whose solutions can be exactly and unambiguously found. Solitons constitute one of its families of solutions. This fact shouldn’t
surprise us at all, since solitons were in fact first observed as waves propagating along shallow water surfaces, which is just what Korteweg and de Vries had set out to describe.
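For the mathematically inclined reader, the Korteweg-de Vries equation mentioned above can be written, in one common normalisation, together with its single-soliton solution (added here for reference; the original article keeps the mathematics informal):
\[ \frac{\partial u}{\partial t} + 6u\frac{\partial u}{\partial x} + \frac{\partial^3 u}{\partial x^3} = 0, \qquad u(x,t) = \frac{c}{2}\,\mathrm{sech}^2\!\left(\frac{\sqrt{c}}{2}\,(x - ct - a)\right), \]
where \(c > 0\) is the wave speed and \(a\) fixes the initial position of the crest. Since the amplitude \(c/2\) grows with the speed \(c\), taller solitons travel faster, which fits Russell's observation of waves overtaking one another without merging.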
Russell went on with his life. He single-handedly revolutionised naval design and made the first experimental observation of the Doppler effect. During all this time, though, he was convinced that
his wave of translation would one day be considered of fundamental importance.
In many instances, the scientific method is all about accumulating evidence that either supports or refutes existing hypotheses, models and theories that haven’t yet been put to the test. Sometimes,
those models and theories just have to be put aside, waiting to be picked up at some point in the future by someone who can put those nifty mathematical tools to good use. And in fact, while
mathematicians kept finding soliton solutions popping up everywhere in new, increasingly complex nonlinear systems, during the second half of the 20th century solitons started finding practical
applications, for instance, in optics — where solitons’ intrinsic stability comes in handy when designing optical fibres for long-distance transmission.
Solitons also appear in the description of many optical phenomena that involve nonlinear crystals (where the optical properties of the crystal do not respond linearly to the electric field of the
incoming light), as well as optical fibres. These effects now constitute the basic toolbox of modern optics, and have been the subject of active research and countless fundamental and technological
applications since the invention of the laser. As the same nonlinear models describe a wide range of physical systems, similar at least from the mathematical side, solitons also appear in the
treatment of phenomena as diverse as shock waves and plasma, low-frequency oscillations in complex chemical structures such as DNA and fluid dynamics.
Solitons are, in a way, a recurring trait in the family tree of nonlinear differential equations. Furthermore, they also show up among the families of solutions of other equations that are directly
related to, or nonlinear generalisations of, equations that shaped our understanding of the quantum world. The Schrödinger equation in its most basic form is one of the foundations of quantum
mechanics. Introduced by Erwin Schrödinger in 1926, the equation that now goes by his name describes the time evolution of the quantum state associated with a physical system. The nonlinear version
of the equation and its soliton solutions, moving through obscure mathematical backdoors, appear in the analysis of the interactions of some classes of subatomic particles. At the same time, however,
the nonlinear Schrödinger equation can also help describe rogue waves: unusually large spontaneous ocean surface waves that represent a threat even to bulkier ships and ocean liners.
Solitons are an excellent example of a ubiquitous mathematical concept that, after being modelled over a simple physical system, have turned up in a wide variety of disciplines. If one day you happen
to be cycling along the Union Canal and a barge stops nearby, look out for the solitary wave it may generate and follow it. It could get you far.
1. Russell JS. Report of the Committee on Waves, appointed by the British Association at Bristol in 1836. Published in the BA reports VI, 417–468; 1837.
| {"url":"https://the-gist.org/2012/07/how-a-scottish-naval-engineer-and-his-horse-discovered-solitons/","timestamp":"2024-11-02T01:38:57Z","content_type":"text/html","content_length":"140190","record_id":"<urn:uuid:bd5dc26c-1729-4198-9bdf-31aac9552610>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00810.warc.gz"}
Reducing variance in AB testing
“Variance is the core of experimental analysis.” Almost all the key statistical concepts around experimentation relate to variance. Assuming i.i.d. samples, we can estimate the variance as follows:
\[ \begin{aligned} \bar{Y} &= \frac{1}{n} \sum_i Y_i \\ var(Y) &= \hat\sigma^2 = \frac{1}{n-1} \sum_i (Y_i - \bar{Y})^2 \\ var(\bar{Y}) &= var\left(\frac{1}{n} \sum_i Y_i\right) = \frac{1}{n^2} \cdot n \cdot var(Y) = \frac{\hat\sigma^2}{n} \end{aligned} \]
If we overestimate the variance, it is more likely to get false negatives; if we underestimate the variance then it is more likely to get false positives. To understand this, consider the following
two-sample test:
\[ T = \frac{\Delta}{\sqrt{Var(\Delta)}} \]
Overestimating variance increases the denominator, which then decreases the estimated T score, which will lead to false negatives as we will mistakenly fail to reject the null. On the other hand, if we underestimate variance, we will end up rejecting the null more often as the above denominator will be smaller.
High variance in particular is an issue because it affects power analysis and increases the necessary sample size of the experiment. In fact, assuming that significance level \(\alpha=0.05\), power
can be defined by \(\delta\), the minimum delta of practical significance:
\[ \text{Power}_{\delta} = P(|T| \geq 1.96 \mid \text{true diff is } \delta) \]
where \(T\) is the t-statistic value. Then, assuming treatment and control are of equal size, the total number of samples you need to achieve 80% power is (p. 189, Kohavi, Tang, and Xu (2020)):
\[ n \approx \frac{16 \sigma^2}{\delta^2} \]
As a result, the sample size increases with variance (and decreases with \(\delta^2\)). Because of the effect of variance on sample size, we tend to try to artificially reduce variance with some of the following techniques (a short CUPED sketch follows the list):
• Remove outliers (e.g., bots)
• Cap or log-transform variable of interest
• Use post-stratification to reduce variance within strata (or do a stratified experiment)
• Control variables in a regression
• Use CUPED (https://www.statsig.com/blog/cuped)
• Randomize at a more granular level
• Run a within subject design A/B test (https://dovetail.com/research/within-subjects-design/) | {"url":"https://www.madinterview.com/practice/question/ab-testing-reducing-variance-in","timestamp":"2024-11-13T01:19:21Z","content_type":"text/html","content_length":"23793","record_id":"<urn:uuid:69a229d5-6929-43fa-a130-0b51677c3bb1>","cc-path":"CC-MAIN-2024-46/segments/1730477028303.91/warc/CC-MAIN-20241113004258-20241113034258-00195.warc.gz"} |
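As an illustration of the CUPED item in the list above, here is a minimal Python sketch of the adjustment; the arrays, seed, and effect sizes are entirely made up and only serve to show the mechanics (variance of the mean before and after the adjustment).
```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: x is the pre-experiment value of the metric, y the in-experiment value.
x = rng.normal(10.0, 2.0, size=10_000)
y = 0.8 * x + rng.normal(0.0, 1.0, size=10_000)

# Plain estimate of var(Y-bar) = sigma^2 / n.
var_ybar = y.var(ddof=1) / len(y)

# CUPED: y_cv = y - theta * (x - mean(x)), with theta = cov(x, y) / var(x).
theta = np.cov(x, y, ddof=1)[0, 1] / x.var(ddof=1)
y_cv = y - theta * (x - x.mean())
var_ybar_cuped = y_cv.var(ddof=1) / len(y_cv)

print(f"variance of the mean, raw:   {var_ybar:.6f}")
print(f"variance of the mean, CUPED: {var_ybar_cuped:.6f}")  # smaller whenever corr(x, y) != 0
```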
American Mathematical Society
Two kinds of strong pseudoprimes up to $10^{36}$
by Zhenxiang Zhang;
Math. Comp. 76 (2007), 2095-2107
DOI: https://doi.org/10.1090/S0025-5718-07-01977-1
Published electronically: April 17, 2007
Let $n>1$ be an odd composite integer. Write $n-1=2^sd$ with $d$ odd. If either $b^d \equiv 1$ mod $n$ or $b^{2^rd}\equiv -1$ mod $n$ for some $r=0, 1, \dots , s-1$, then we say that $n$ is a strong
pseudoprime to base $b$, or spsp($b$) for short. Define $\psi _t$ to be the smallest strong pseudoprime to all the first $t$ prime bases. If we know the exact value of $\psi _t$, we will have, for
integers $n<\psi _t$, a deterministic efficient primality testing algorithm which is easy to implement. Thanks to Pomerance et al. and Jaeschke, the $\psi _t$ are known for $1 \leq t \leq 8$.
Conjectured values of $\psi _9,\dots ,\psi _{12}$ were given by us in our previous papers (Math. Comp. 72 (2003), 2085–2097; 74 (2005), 1009–1024). The main purpose of this paper is to give exact
values of $\psi _t'$ for $13\leq t\leq 19$; to give a lower bound of $\psi _{20}'$: $\psi _{20}'>10^{36}$; and to give reasons and numerical evidence of K2- and $C_3$-spsp's $<10^{36}$ to support the
following conjecture: $\psi _t=\psi _t'<\psi _t''$ for any $t\geq 12$, where $\psi _t'$ (resp. $\psi _t''$) is the smallest K2- (resp. $C_3$-) strong pseudoprime to all the first $t$ prime bases. For
this purpose we describe procedures for computing and enumerating the two kinds of spsp’s $<10^{36}$ to the first 9 prime bases. The entire calculation took about 4000 hours on a PC Pentium IV/
1.8GHz. (Recall that a K2-spsp is an spsp of the form: $n=pq$ with $p,q$ primes and $q-1=2(p-1)$; and that a $C_3$-spsp is an spsp and a Carmichael number of the form: $n=q_1q_2q_3$ with each prime
factor $q_i\equiv 3$ mod $4$.)
References
• Richard Crandall and Carl Pomerance, Prime numbers, 2nd ed., Springer, New York, 2005. A computational perspective. MR 2156291
• Ivan Damgård, Peter Landrock, and Carl Pomerance, Average case error estimates for the strong probable prime test, Math. Comp. 61 (1993), no. 203, 177–194. MR 1189518, DOI 10.1090/
• Richard K. Guy, Unsolved problems in number theory, 3rd ed., Problem Books in Mathematics, Springer-Verlag, New York, 2004. MR 2076335, DOI 10.1007/978-0-387-26677-0
• Gerhard Jaeschke, On strong pseudoprimes to several bases, Math. Comp. 61 (1993), no. 204, 915–926. MR 1192971, DOI 10.1090/S0025-5718-1993-1192971-8
• Gary L. Miller, Riemann’s hypothesis and tests for primality, J. Comput. System Sci. 13 (1976), no. 3, 300–317. MR 480295, DOI 10.1016/S0022-0000(76)80043-8
• Louis Monier, Evaluation and comparison of two efficient probabilistic primality testing algorithms, Theoret. Comput. Sci. 12 (1980), no. 1, 97–108. MR 582244, DOI 10.1016/0304-3975(80)90007-9
• Carl Pomerance, J. L. Selfridge, and Samuel S. Wagstaff Jr., The pseudoprimes to $25\cdot 10^{9}$, Math. Comp. 35 (1980), no. 151, 1003–1026. MR 572872, DOI 10.1090/S0025-5718-1980-0572872-7
• Michael O. Rabin, Probabilistic algorithm for testing primality, J. Number Theory 12 (1980), no. 1, 128–138. MR 566880, DOI 10.1016/0022-314X(80)90084-0
• Zhenxiang Zhang, Finding strong pseudoprimes to several bases, Math. Comp. 70 (2001), no. 234, 863–872. MR 1697654, DOI 10.1090/S0025-5718-00-01215-1
• Zhenxiang Zhang, Finding $C_3$-strong pseudoprimes, Math. Comp. 74 (2005), no. 250, 1009–1024. MR 2114662, DOI 10.1090/S0025-5718-04-01693-X
• Zhenxiang Zhang and Min Tang, Finding strong pseudoprimes to several bases. II, Math. Comp. 72 (2003), no. 244, 2085–2097. MR 1986825, DOI 10.1090/S0025-5718-03-01545-X
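As a side note for readers who want to experiment with the definition used in the abstract, the following short Python sketch (not part of the paper) tests whether an odd number $n$ satisfies the strong-pseudoprime condition to a base $b$, i.e. $b^d \equiv 1$ or $b^{2^rd} \equiv -1$ mod $n$ with $n-1 = 2^sd$:
```python
def passes_strong_test(n: int, b: int) -> bool:
    """Return True if odd n > 2 passes the strong (Miller-Rabin) test to base b."""
    if n < 3 or n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:          # write n - 1 = 2^s * d with d odd
        d //= 2
        s += 1
    x = pow(b, d, n)
    if x == 1 or x == n - 1:
        return True
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return True
    return False

# 2047 = 23 * 89 is composite yet passes the test to base 2,
# so it is a strong pseudoprime to base 2 (in fact the smallest one, psi_1).
print(passes_strong_test(2047, 2))  # True
```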
Similar Articles
• Retrieve articles in Mathematics of Computation with MSC (2000): 11Y11, 11A15, 11A51
• Retrieve articles in all journals with MSC (2000): 11Y11, 11A15, 11A51
Bibliographic Information
• Zhenxiang Zhang
• Affiliation: Department of Mathematics, Anhui Normal University, 241000 Wuhu, Anhui, People’s Republic of China
• Email: zhangzhx@mail.wh.ah.cn, ahnu_zzx@sina.com
• Received by editor(s): March 8, 2006
• Received by editor(s) in revised form: July 8, 2006
• Published electronically: April 17, 2007
• Additional Notes: The author was supported by the NSF of China Grant 10071001
Dedicated: Dedicated to the memory of Kencheng Zeng (1927–2004)
• © Copyright 2007 American Mathematical Society
The copyright for this article reverts to public domain 28 years after publication.
• Journal: Math. Comp. 76 (2007), 2095-2107
• MSC (2000): Primary 11Y11, 11A15, 11A51
• DOI: https://doi.org/10.1090/S0025-5718-07-01977-1
• MathSciNet review: 2336285 | {"url":"https://www.ams.org/journals/mcom/2007-76-260/S0025-5718-07-01977-1/?active=current","timestamp":"2024-11-03T13:29:59Z","content_type":"text/html","content_length":"67237","record_id":"<urn:uuid:b13667f1-d6d6-4a51-ad74-63f631d5856a>","cc-path":"CC-MAIN-2024-46/segments/1730477027776.9/warc/CC-MAIN-20241103114942-20241103144942-00699.warc.gz"} |
Calculations from inspection questions | Community
Would it, or is it possible to create a calculation from answers in an inspection? For example, I am inspecting an air conditioner on a recreational vehicle or camper and I need to determine how
efficient the air conditioner is functioning. This is called Delta-T. To do this I measure the temperature of the air entering the air conditioner and the temperature of the air leaving the air
conditioner. The difference between the two numbers is the delta-T.
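For illustration only, the arithmetic behind such a calculated field is a single subtraction; the readings below are hypothetical and not tied to any particular inspection tool.
```python
return_air_temp_f = 75.0  # temperature of air entering the air conditioner (hypothetical)
supply_air_temp_f = 57.0  # temperature of air leaving the air conditioner (hypothetical)

delta_t = return_air_temp_f - supply_air_temp_f
print(f"Delta-T: {delta_t:.1f} F")  # 18.0 F
```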
I would like to have a question that ask for the temperature of air entering the air conditioner, followed by a question asking for the temperature of the air leaving the air conditioner. Finally,
the third question would automatically calculate the delta-T from the answers from the previous two questions. | {"url":"https://community.safetyculture.com/ideas/calculations-from-inspection-questions-451","timestamp":"2024-11-04T21:44:47Z","content_type":"text/html","content_length":"177008","record_id":"<urn:uuid:0e97ad18-cc48-40c7-97d3-5fe01ae3a2cb>","cc-path":"CC-MAIN-2024-46/segments/1730477027861.16/warc/CC-MAIN-20241104194528-20241104224528-00523.warc.gz"} |
Units of Conductivity - SI Units for Conductance, Thermal & Electrical
Conductivity Units | SI Units for Conductance, Thermal Conductivity & Electrical Conductivity
Introduction to Conductivity Units
Conductivity is a fundamental property that measures how well a material conducts electric current or heat. The conductivity or specific conductance of an electrolyte solution indicates its ability to
carry electric current. In scientific and engineering fields, it is crucial to have standardized units for conductivity measurements.
The ability of an electrolyte solution to transmit electricity due to the electrolyte’s free ions that are no longer bound in the lattice is referred to as the conductivity or specific conductance of
that electrolyte’s solution. The International System of Units (SI Units) provides a set of standard measurements for different types of conductivity.
In this article, we will delve into the world of conductivity units, including the SI units for conductance, thermal conductivity, and electrical conductivity. We will also explore their applications
and significance in various industries and scientific research.
Unit of Conductance in Chemistry
Conductance is the reciprocal of resistance and represents how easily electricity flows through a material. The unit of conductance is Siemens (S).
The higher the conductance, the more conductive the material is, indicating efficient electron flow. Conductance measurements are essential in chemical analysis and determining the electrolytic
behavior of substances.
Thermal Conductivity Unit
Thermal conductivity is the property that determines a material’s ability to conduct heat. It is measured in Watts per meter-Kelvin (W/m-K), which is the SI unit of thermal conductivity. Materials
with higher thermal conductivity are better heat conductors.
SI Unit of Thermal Conductivity
Thermal conductivity is a critical property for materials used in various thermal applications. The SI unit of thermal conductivity is Watts per meter-Kelvin (W/m-K). This unit quantifies a
material’s efficiency in conducting heat.
SI Unit of Conductivity
1. Unit of Conductivity in Siemen per meter: Conductivity, in general, is the measure of a material’s ability to conduct electricity or heat. The SI unit of conductivity is Siemens per meter (S/m).
It quantifies the conductive capability of a material across a given distance.
2. Unit of Conductivity in Ohm: Conductivity is sometimes expressed in ohms, which is the unit of electrical resistance. However, it is important to distinguish between resistance and conductance,
with the latter being measured in Siemens (S).
SI Unit of Molar Conductivity
Molar conductivity is a specific type of conductivity that considers the concentration of the electrolyte solution.
The SI Unit of Molar Conductivity is Siemens meter squared per mole (S m² mol⁻¹). Molar conductivity is particularly useful in studying electrolytic solutions and their behavior.
Unit of Specific Conductivity
Specific conductance, also known as specific conductivity, is commonly expressed in Siemens per centimeter (S/cm).
It provides valuable insights into the conductive behavior of a solution at a specific concentration and helps to compare the conductive capabilities of solutions at different concentrations.
Si unit of Electrical Conductivity
Electrical conductivity measures a material’s ability to conduct electricity. The SI Unit of Electrical Conductivity is expressed in Siemens per meter (S/m). Electrical conductivity is essential in
various industries, including electronics and electrical engineering.
Unit of Equivalent Conductance
Equivalent conductance is a parameter used in electrochemistry to compare the conductivities of different ions in a solution. It is measured in Siemens per mole (S mol-1).
Water Conductivity Units
Water conductivity measures the ability of water to carry an electrical current. Water conductivity units are expressed in microsiemens per centimeter (μS/cm) or millisiemens per centimeter (mS/cm).
Water conductivity is crucial in assessing water quality and environmental conditions.
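To tie the units above together, here is a small illustrative Python sketch (the resistance and cell geometry are made-up values): conductance is the reciprocal of resistance, conductivity follows from the cell geometry, and the result can be re-expressed in S/cm and µS/cm.
```python
# Conductance G (siemens) is the reciprocal of resistance R (ohms).
R_ohm = 50.0                # hypothetical measured resistance
G_siemens = 1.0 / R_ohm

# Conductivity sigma (S/m) of a uniform sample: sigma = G * L / A,
# where L is the electrode spacing (m) and A the cross-sectional area (m^2).
L_m = 0.01                  # assumed 1 cm spacing
A_m2 = 1.0e-4               # assumed 1 cm^2 area
sigma_S_per_m = G_siemens * L_m / A_m2

# Unit conversions: 1 S/m = 0.01 S/cm = 10,000 uS/cm.
sigma_S_per_cm = sigma_S_per_m / 100.0
sigma_uS_per_cm = sigma_S_per_m * 10_000.0

print(f"G = {G_siemens:.4f} S")
print(f"sigma = {sigma_S_per_m:.2f} S/m = {sigma_S_per_cm:.4f} S/cm = {sigma_uS_per_cm:.0f} uS/cm")
```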
Frequently Asked Questions on Conductivity Units
What is the thermal conductivity unit symbol?
Thermal conductivity refers to a material's capacity to conduct/transfer heat. It is commonly represented by the sign 'k,' however it can also be represented by κ and λ.
What is the unit of molar conductivity and its significance in electrochemistry?
The unit of molar conductivity is Siemens per meter squared per mole (Sm2mol-1). It helps study electrolytic solutions and the conductive behavior of ions in solution.
How is electrical conductivity expressed, and why is it essential?
Electrical conductivity is expressed in Siemens per meter (S/m). It is crucial in various industries, including electronics and electrical engineering, to assess a material's ability to conduct
electricity effectively.
What is the relationship between resistance and conductivity?
Resistance and conductivity are inversely related. As conductivity increases, resistance decreases, and vice versa. Conductivity measures how well a material conducts electricity, while resistance
quantifies the opposition to the flow of current. | {"url":"https://infinitylearn.com/surge/topics/conductivity-units-si-units-for-conductance-thermal-conductivity-electrical-conductivity/","timestamp":"2024-11-02T21:46:21Z","content_type":"text/html","content_length":"173514","record_id":"<urn:uuid:1ff423cb-8f3b-48b7-9efc-2b27a56ad6ab>","cc-path":"CC-MAIN-2024-46/segments/1730477027730.21/warc/CC-MAIN-20241102200033-20241102230033-00028.warc.gz"} |
How do I evaluate int_1^4 sqrt(t)(5 + 2t) dt? | HIX Tutor
How do I evaluate #int_1^4 sqrt(t)(5 + 2t) dt#?
Answer 1
I would expand the argument as:
#sqrt(t)(5+2t) = 5sqrt(t) + 2t sqrt(t)#
I can now separate the integrals, as:
#int_1^4 5sqrt(t) dt + int_1^4 2t sqrt(t) dt#
Remembering that #sqrt(t)=t^(1/2)# and the rules to manipulate exponents, we get:
#int_1^4 5t^(1/2) dt + int_1^4 2t^(3/2) dt#
Taking the constants out of the integrals and remembering that the integral of #x^n# is #x^(n+1)/(n+1)#:
#[5t^(3/2)/(3/2) + 2t^(5/2)/(5/2)]_1^4 = [10/3 t^(3/2) + 4/5 t^(5/2)]_1^4 = (10/3*8 + 4/5*32) - (10/3 + 4/5) = 722/15 ~~ 48.13#
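As an added sanity check (not part of the original answer), the value can be confirmed symbolically, for instance with SymPy:
```python
import sympy as sp

t = sp.symbols('t')
value = sp.integrate(sp.sqrt(t) * (5 + 2 * t), (t, 1, 4))
print(value, float(value))  # 722/15, about 48.13
```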
Answer 2
To evaluate ( \int_{1}^{4} \sqrt{t}(5 + 2t) \, dt ), first distribute the square root over the polynomial to get ( 5t^{1/2} + 2t^{3/2} ), then integrate each term separately. This yields ( \left[\tfrac{10}{3}t^{3/2} + \tfrac{4}{5}t^{5/2}\right]_{1}^{4} = \tfrac{722}{15} \approx 48.13 ).
| {"url":"https://tutor.hix.ai/question/how-do-i-evaluate-int-1-4sqrt-t-5-2-t-dt-8f9afa0772","timestamp":"2024-11-07T08:47:36Z","content_type":"text/html","content_length":"577275","record_id":"<urn:uuid:f764d6f5-8f0c-4533-960b-fafa2297960c>","cc-path":"CC-MAIN-2024-46/segments/1730477027987.79/warc/CC-MAIN-20241107083707-20241107113707-00192.warc.gz"}
Summability of the transport density in the import-export transport problem with Riemannian cost
Accepted Paper
Inserted: 3 dec 2021
Last Updated: 26 oct 2024
Journal: Discrete and Continuous Dynamical Systems
Year: 2023
In this paper, we consider a mass transportation problem with transport cost given by a Riemannian metric in a bounded domain $\Omega$, where a mass $f^+$ is sent to a location $f^-$ in $\Omega$ with
the possibility of importing or exporting masses from or to the boundary $\partial\Omega$. First, we study the $L^p$ summability of the transport density $\sigma$ in the Monge-Kantorovich problem
with Riemannian cost between two diffuse measures $f^+$ and $f^-$. Using some technical geometrical estimates on the transport rays, we will show that $\sigma$ belongs to $L^p(\Omega)$ as soon as the
source measure $f^+$ and the target one $f^-$ are both in $L^p(\Omega)$, for all $p \in [1,\infty]$. Moreover, we will prove that the transport density between a diffuse measure $f^+$ and its Riemannian
projection onto the boundary (so, the target measure is singular) is in $L^p(\Omega)$ provided that $f^+ \in L^p(\Omega)$ and $\Omega$ satisfies a uniform exterior ball condition. Finally, we will
extend the $L^p$ estimates on the transport density $\sigma$ to the case of a transport problem with import-export taxes. | {"url":"https://cvgmt.sns.it/paper/5369/","timestamp":"2024-11-02T04:27:44Z","content_type":"text/html","content_length":"8963","record_id":"<urn:uuid:8c32c3b7-82a2-4132-8ae9-d71cfba77031>","cc-path":"CC-MAIN-2024-46/segments/1730477027677.11/warc/CC-MAIN-20241102040949-20241102070949-00888.warc.gz"} |
Proofs in Mathematics by Alexander Bogomolny
Proofs in Mathematics
by Alexander Bogomolny
Publisher: Interactive Mathematics Miscellany and Puzzles 2013
Number of pages: 272
I'll distinguish between two broad categories. The first is characterized by simplicity. In the second group the proofs will be selected mainly for their charm. Most of the proofs in this book should
be accessible to a middle grade school student.
Download or read it online for free here:
Read online
(online html)
Similar books
A Gentle Introduction to the Art of Mathematics
Joseph Fields
Southern Connecticut State University
The point of this book is to help you with the transition from doing math at an elementary level (concerned mostly with solving problems) to doing math at an
advanced level (which is much more concerned with axiomatic systems and proving statements).
Book of Proof
Richard Hammack
Virginia Commonwealth University
This textbook is an introduction to the standard methods of proving mathematical theorems. It is written for an audience of mathematics majors at Virginia Commonwealth
University, and is intended to prepare the students for more advanced courses.
Basic Concepts of Mathematics
Elias Zakon
The Trillia Group
The book will help students complete the transition from purely manipulative to rigorous mathematics. It covers basic set theory, induction, quantifiers, functions and relations,
equivalence relations, properties of the real numbers, fields, etc.
An Introduction to Mathematical Reasoning
Peter J. Eccles
Cambridge University Press
This book introduces basic ideas of mathematical proof to students embarking on university mathematics. The emphasis is on constructing proofs and writing clear mathematics.
This is achieved by exploring set theory, combinatorics and number theory. | {"url":"https://www.e-booksdirectory.com/details.php?ebook=9675","timestamp":"2024-11-13T14:22:30Z","content_type":"text/html","content_length":"11077","record_id":"<urn:uuid:227241c0-3b81-4c87-9478-039596a00e45>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00283.warc.gz"} |
Mass transfer effects on the three-dimensional second-order boundary layer flow at the stagnation point of blunt bodies
Solutions are presented to the second-order boundary layer equations to show the effects of mass transfer on wall shear and heat transfer in the case of an incompressible three-dimensional stagnation
point flow. On impermeable walls, longitudinal curvature reduces the short cross-sectional component of the shear stress near the general stagnation point, whereas the long cross-sectional component
is increased; heat transfer is reduced. Transverse curvature augments the short and reduces the long cross-sectional components of shear stress, while increasing heat transfer. These tendencies are
intensified by blowing, due to artificial thickening of the boundary layer; here second-order effects may prevail over classical theory. Suction reduces boundary layer thickness; at large suction
rates second-order effects are small compared to classical results.
Mechanics Research Communications
Pub Date:
Keywords: Blunt Bodies; Boundary Layer Flow; Heat Transfer; Mass Transfer; Stagnation Point; Three Dimensional Boundary Layer; Boundary Layer Equations; Curvature; Pressure Effects; Shear Flow; Shear Stress; Stagnation Flow; Suction; Surface Geometry; Tables (Data); Wall Flow; Fluid Mechanics and Heat Transfer | {"url":"https://ui.adsabs.harvard.edu/abs/1974MeReC...1..285P/abstract","timestamp":"2024-11-12T16:31:21Z","content_type":"text/html","content_length":"36721","record_id":"<urn:uuid:1808329a-97e4-47bf-8937-7407257e5bf0>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.63/warc/CC-MAIN-20241112145015-20241112175015-00058.warc.gz"}
Download GTU MBA 2016 Summer 3rd Sem 2830203 Security Analysis And Portfolio Management Question Paper
Download GTU (Gujarat Technological University) MBA (Master of Business Administration) 2016 Summer 3rd Sem 2830203 Security Analysis And Portfolio Management Previous Question Paper
Seat No.: ________ Enrolment No.___________
MBA - SEMESTER 3 - EXAMINATION - SUMMER 2016
Subject Code: 2830203 Date: 09/05/2016
Subject Name: Security Analysis and Portfolio Management
Time: 10.30 AM TO 01.30 PM Total Marks: 70
1. Attempt all questions.
2. Make suitable assumptions wherever necessary.
3. Figures to the right indicate full marks.
Q1 Answer the following multiple choice questions: 06
Which of the following terms represent an upper price limit for a stock
based on the quantity of the willing seller?
A. Support B. Trend line
C. Resistance D. Channel
A main difference between real and nominal return proceeds is that,
A. A real return adjust for
inflation and nominal
return do not
B. Real return use actual cash
flows and nominal use expected
cash flows
C. Real return adjust for
commissions and
nominal returns do not
D Real returns show highest
possible return and nominal
show lowest possible return
Non-systematic risk is further more identified as
3. A. No diversifiable risk B. Market risk
C. Random risk D. Company specific risk
Suppose you have 20 stocks and you want to derive efficient frontier,
how many co-variances do you have to calculate?
4. A. 90 B. 190
C. 20 D. 400
Mr X is just retired as a government officer. Which investment would
grade upper most with regard to protection is,
5. A. Preferred stock B. Real estate
C. Common stock D. Government bonds
Consider two stock in portfolio A and B
E (R) S.D.
A 15% 10%
B 20% 30%
If the returns of the two stocks perfectly negatively correlated what is
the weightage of two stocks that risk of portfolio driven down to zero?
6. A. 75% and 25% B. 60% and 40%
C. 80% and 20% D. 66.67% and 33.33%
Q.1 (b) Explain the meaning of the following terms: 04
1. Circuit breaker
2. Anchoring
3. Short sell
4. Regret aversion
Q.1 (c) Write a note on IPO investments. 04
Q.2 (a) Define investments. Discuss the various marketable and non-marketable
investment avenues available to investors.
(b) What do you mean by efficient market hypothesis? Also explain the
forms of market efficiency.
(b) A highly volatile stock earns the following returns over a six-year period:
R1 = 10%, R2 = 30%, R3 = 15%, R4 = -12%, R5 = 35%, R6 = 12%
Calculate and interpret the following values:
1. Arithmetic mean
2. Cumulative wealth index
3. Standard deviation
Q.3 (a) What are the basic assumption and inputs required for CAPM? Explain
CML and SML. Also establish intra-relation between them.
(b) The earning of a company has been growing at 15% over the past several
years and is expected to increase at this rate for next seven years and
thereafter at 9% in perpetuity it is currently earning Rs 4 per share and
paying Rs 2 per dividend. What shall be present value of share with
discount rate of 12% for the first seven years and 10% thereafter?
Q.3 (a) Select an industry of your choice and do the industry analysis in the
current economic scenario.
(b) The following table gives analyst expected return on two stocks for
particular market:
Market return Aggressive stock Defensive stock
8% 3% 10%
25% 40% 20%
1. What are the betas of the stocks?
2. What is the expected return on each stock if market return is
equally likely to be 8% and 25%?
3. If the risk free rate is 9% and market return is equally likely to
be 8% or 25%, what is SML?
4. What is the alpha of two stocks?
Q.4 (a) Write a note on the following:
1. Technical analysis
2. Dow theory and components
(b) The rates of return on stock X and market portfolio for last 12
months are given below:
Month 1 2 3 4 5 6 7 8 9 10 11 12
1. Calculate and interpret the beta of stock X.
2. What is the characteristic line for stock X?
Q.4 (a) Write a note on the following:
1. Single index model
2. Arbitrage pricing theory
(b) Calculate the systematic and unsystematic risks for the given securities
from the following data.
Security Return (%) Variance Beta
Tata Power 33.90 126.34 0.36
Mahindra & Mahindra 25.09 106.70 0.74
Market index 28.63 39.52 1
Q.5 Mr. X has recently completed MBA Finance from GTU as major
in finance and he has been hired as a financial planner by a leading
financial corporation. His boss has assigned him the task of
investing Rs 10,00,000 for a client who has been asked to consider
only the following investment alternatives, Stock A and Stock B.
The research wing of the company has developed the probability
distribution for the state of the economy and estimated value of
rate of return under each state of economy. The following
information is available for your research purpose:
State of economy Probability Stock A Stock B
1 0.20 5 20
2 0.30 15 14
3 0.40 18 35
4 0.10 02 10
1. What are expected returns and standard deviations of returns for
stock A and B? What is your recommendation of client in terms
of variability for the two stocks? Which stock is more consistent?
Justify your answers.
2. If correlation coefficient of two stocks is 0.80 and investor wants
to invest 40% in stock A and remaining in stock B, what is the
expected return and risk of the portfolio of the two stocks?
Q.5 Consider the following information for three mutual funds, X, Y and Z
and the market.
Mean return S.D. Beta
X 15% 20% 0.90
Y 17% 24% 1.10
Z 19% 27% 1.20
Market index 16 20 1.00
The mean risk free rate was 10%.
1 Calculate the Treynor measure, Sharpe measure and Jensen
measure for the three mutual funds and the market index.
2 Explain the real life application of the Treynor measure, Sharpe
measure and Jensen measure with reference to the above
This post was last modified on 19 February 2020 | {"url":"https://firstranker.com/fr/frdA190220A1531529/download-gtu-mba-2016-summer-3rd-sem-2830203-security-analysis-and-portfolio-management-question-paper","timestamp":"2024-11-06T17:03:35Z","content_type":"text/html","content_length":"106960","record_id":"<urn:uuid:45fab2f8-478d-4700-aa58-461002db16d5>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00321.warc.gz"} |
14 kg to lbs
To convert kilograms (kg) to pounds (lbs), you can use the following step-by-step instructions:
Step 1: Understand the conversion factor
1 kilogram (kg) is equal to 2.20462 pounds (lbs). This means that to convert kg to lbs, you need to multiply the weight in kg by 2.20462.
Step 2: Set up the conversion equation
Let’s denote the weight in kg as “x” and the weight in lbs as “y”. The conversion equation can be written as:
x kg = y lbs
Step 3: Substitute the given value
In this case, the given weight is 14 kg. So, we can substitute x = 14 kg into the equation:
14 kg = y lbs
Step 4: Solve for y
To find the weight in lbs (y), we need to solve the equation. Multiply both sides of the equation by the conversion factor (2.20462):
14 kg * 2.20462 = y lbs
Step 5: Perform the calculation
Using a calculator, multiply 14 by 2.20462:
14 * 2.20462 = 30.86468
Step 6: Round the answer (if necessary)
Since weight is typically rounded to the nearest whole number, we can round 30.86468 to 31 lbs.
Step 7: Write the final answer
Therefore, 14 kg is approximately equal to 31 lbs.
Note: The conversion factor used in this example is an approximation. If you require a more precise conversion, you can use a more accurate conversion factor.
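The same steps can be wrapped in a small Python helper (shown here purely as an illustration; the rounding mirrors the example above):
```python
KG_TO_LBS = 2.20462  # approximate conversion factor

def kg_to_lbs(kg: float, round_to_whole: bool = False) -> float:
    """Convert a weight in kilograms to pounds."""
    lbs = kg * KG_TO_LBS
    return float(round(lbs)) if round_to_whole else lbs

print(kg_to_lbs(14))        # 30.86468
print(kg_to_lbs(14, True))  # 31.0
```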
| {"url":"https://unitconvertify.com/weight/14-kg-to-lbs/","timestamp":"2024-11-03T09:23:00Z","content_type":"text/html","content_length":"43374","record_id":"<urn:uuid:0436ae93-0c54-4595-9bca-931ef8382042>","cc-path":"CC-MAIN-2024-46/segments/1730477027774.6/warc/CC-MAIN-20241103083929-20241103113929-00837.warc.gz"}
Gr 3_OA_MultiplicationPropertyCommutative_Problem_Construct_SameOrDifferent | Bridging Practices Among Connecticut Mathematics Educators
Gr 3_OA_MultiplicationPropertyCommutative_Problem_Construct_SameOrDifferent
Third graders are developing an understanding of the commutative property through single-digit multiplication in Same or Different?. Students are asked if two rectangles are the same given reverse
length and width measurements. Argumentation language is present when asking students to explain their thinking using claim, evidence, and warrants.
Microsoft Word version: 3_OA_MultiplicationPropertyCommutative_Problem_Construct_SameOrDifferent
PDF version: 3_OA_MultiplicationPropertyCommutative_Problem_Construct_SameOrDifferent | {"url":"https://bridges.education.uconn.edu/2015/06/18/gr-3_oa_multiplicationpropertycommutative_problem_construct_sameordifferent/","timestamp":"2024-11-11T16:38:17Z","content_type":"text/html","content_length":"53617","record_id":"<urn:uuid:2a6366f8-52bf-4fb8-9574-fb3bf67d4baf>","cc-path":"CC-MAIN-2024-46/segments/1730477028235.99/warc/CC-MAIN-20241111155008-20241111185008-00610.warc.gz"} |
Realty Stock Current Market Value
O Stock USD 58.88 0.23 0.39%
Realty Income's market value is the price at which a share of Realty Income trades on a public exchange. It measures the collective expectations of Realty Income investors about its performance.
Realty Income is selling at 58.88 USD as of the 3rd of November 2024; that is 0.39% down since the beginning of the trading day.
Realty Income Price To Book Ratio
Is Retail REITs space expected to grow? Or is there an opportunity to expand the business' product line in the future? Factors like these will boost the valuation of Realty Income. If investors know Realty will grow in the future, the company's valuation will be higher.
The financial industry is built on trying to define current growth potential and future valuation accurately. All the valuation information about Realty Income listed above has to be considered, but the key to understanding future value is determining which factors weigh more heavily than others.
Dividend Share: 3.088 | Earnings Share: 1.08 | Revenue Per Share: 6.01 | Quarterly Revenue Growth: 0.316 | Return On Assets: 0.0211
The market value of Realty Income
is measured differently than its book value, which is the value of Realty that is recorded on the company's balance sheet. Investors also form their own opinion of Realty Income's value that differs
from its market value or its book value, called intrinsic value, which is Realty Income's true underlying value. Investors use various methods to calculate intrinsic value and buy a stock when its
market value falls below its intrinsic value. Because Realty Income's market value can be influenced by many factors that don't directly affect Realty Income's underlying business (such as a pandemic
or basic market pessimism), market value can vary widely from intrinsic value.
Please note, there is a significant difference between Realty Income's value and its price as these two are different measures arrived at by different means. Investors typically determine
if Realty Income is a good investment
by looking at such factors as earnings, sales, fundamental and technical indicators, competition as well as analyst projections. However, Realty Income's price is the amount at which it trades on the
open market and represents the number that a seller and buyer find agreeable to each party.
Realty Income 'What if' Analysis
In the world of financial modeling, what-if analysis is part of sensitivity analysis performed to test how changes in assumptions impact individual outputs in a model. When applied to Realty Income's
stock, what-if analysis refers to analyzing how a change in your past investing horizon would affect the profitability against the current market value of Realty Income.
If you had invested in Realty Income on May 7, 2024 and sold it all today, you would have earned a total of 0.00 from holding Realty Income over that period. Realty Income is related to or competes with
Federal Realty, National Retail, Kimco Realty, Agree Realty, Simon Property, and Acadia Realty. Realty Income, The Monthly Dividend Company, is an S&P 500 company dedicated to providing stockholders with dependable mo...
Realty Income Upside/Downside Indicators
Understanding different market momentum indicators often help investors to time their next move. Potential upside and downside technical ratios enable traders to measure Realty Income's stock current
market value against overall
market sentiment
and can be a good tool during both bulling and bearish trends. Here we outline some of the essential indicators to assess Realty Income
upside and downside potential
and time the market with a certain degree of confidence.
Realty Income Market Risk Indicators
Today, many novice investors tend to focus exclusively on investment returns with little concern for Realty Income's investment risk. Other traders do consider volatility but use just one or two very
conventional indicators such as Realty Income's standard deviation. In reality, there are many statistical measures that can use Realty Income historical prices to predict the future Realty Income's
Hype Prediction: Low 57.93, Estimated 58.88, High 59.83
Intrinsic Valuation: Low 47.33, Real 48.28, High 64.77
Naive Forecast: Low 55.05, Next 56.00, High 56.95
21 Analysts Consensus: Low 57.66, Target 63.36, High 70.33
Realty Income Backtested Returns
As of now, Realty Stock is very steady. Realty Income maintains a Sharpe Ratio (i.e., Efficiency) of 0.0174, which implies the firm had a 0.0174% return per unit of risk over the last 3 months. We have found thirty technical indicators for Realty Income, which you can use to evaluate the volatility of the company.
Please check Realty Income's Semi Deviation of 0.9033, risk adjusted performance of 0.0384, and Coefficient Of Variation of 2104.69 to confirm if the risk estimate we provide is consistent with the expected return of 0.0165%. Realty Income has a performance score of 1 on a scale of 0 to 100. The company holds a Beta which implies not very significant fluctuations relative to the market. As returns on the market increase, Realty Income's returns are expected to increase less than the market. However, during the bear market, the loss of holding Realty Income is expected to be smaller as well.
Realty Income right now holds a risk of 0.95%. Please check Realty Income's standard deviation, expected shortfall, and period momentum indicator, as well as the relationship between the maximum drawdown and the rate of daily change, to decide if Realty Income will be following its historical price patterns.
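The two headline risk figures quoted above, the Sharpe ratio and beta, can be reproduced from any return series with a few lines of Python; the arrays below are placeholders rather than actual Realty Income data.
```python
import numpy as np

# Placeholder daily returns (in %) for the stock and a market benchmark.
stock = np.array([0.12, -0.35, 0.40, 0.05, -0.10, 0.22, -0.08, 0.15])
market = np.array([0.10, -0.20, 0.30, 0.02, -0.05, 0.18, -0.04, 0.12])
risk_free = 0.0  # assumed daily risk-free rate

# Sharpe ratio: mean excess return per unit of return volatility.
sharpe = (stock.mean() - risk_free) / stock.std(ddof=1)

# Beta: covariance of stock and market returns divided by the market variance.
beta = np.cov(stock, market, ddof=1)[0, 1] / market.var(ddof=1)

print(f"Sharpe ratio: {sharpe:.4f}")
print(f"Beta: {beta:.4f}")
```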
Weak predictability
Realty Income has weak predictability. Overlapping area represents the amount of predictability between Realty Income time series from 7th of May 2024 to 5th of August 2024 and 5th of August 2024 to
3rd of November 2024. The more autocorrelation exist between current time interval and its lagged values, the more accurately you can make projection about the future pattern of Realty Income price
movement. The serial correlation of 0.2 indicates that over 20.0% of current Realty Income price fluctuation can be explain by its past prices.
Correlation Coefficient 0.2
Spearman Rank Test 0.29
Residual Average 0.0
Price Variance 1.81
Realty Income lagged returns against current returns
Autocorrelation, which is Realty Income stock's lagged correlation, explains the relationship between observations of its time series of returns over different periods of time. The observations are
said to be independent if autocorrelation is zero. Autocorrelation is calculated as a function of mean and variance and can have practical application in
Realty Income's stock expected returns. We can calculate the autocorrelation of Realty Income returns to help us make a trade decision. For example, suppose you find that Realty Income has exhibited
high autocorrelation historically, and you observe that the stock is moving up for the past few days. In that case, you can expect the price movement to match the lagging time series.
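Concretely, the lag-1 autocorrelation this section describes can be estimated as the correlation between each return and the previous one (placeholder numbers again, not real quotes):
```python
import numpy as np

returns = np.array([0.4, -0.2, 0.1, 0.3, -0.5, 0.2, 0.0, -0.1, 0.25, -0.15])

# Lag-1 autocorrelation: correlation between r_t and r_(t-1).
lag1 = np.corrcoef(returns[:-1], returns[1:])[0, 1]
print(f"lag-1 autocorrelation: {lag1:.3f}")
```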
Current and Lagged Values
Realty Income regressed lagged prices vs. current prices
Serial correlation can be approximated by using the Durbin-Watson (DW) test. The correlation can be either positive or negative. If Realty Income stock is displaying a positive serial correlation,
investors will expect a positive pattern to continue. However, if Realty Income stock is observed to have a negative serial correlation, investors will generally project negative sentiment on having
a locked-in long position in Realty Income stock over time.
Realty Income Lagged Returns
When evaluating Realty Income's market value, investors can use the concept of autocorrelation to see how much of an impact past prices of Realty Income stock have on its future price. Realty Income
autocorrelation represents the degree of similarity between a given time horizon and a lagged version of the same horizon over the previous time interval. In other words, Realty Income
autocorrelation shows the relationship between Realty Income stock current value and its past values and can show if there is a momentum factor associated with investing in Realty Income.
Pair Trading with Realty Income
One of the main advantages of trading using pair correlations is that every trade hedges away some risk. Because there are two separate transactions required, even if Realty Income position performs
unexpectedly, the other equity can make up some of the losses. Pair trading also minimizes risk from directional movements in the market. For example, if an entire industry or sector drops because of
unexpected headlines, the short position in Realty Income will appreciate offsetting losses from the drop in the long position's value.
0.7 ARE Alexandria Real Estate
0.71 BDN Brandywine Realty Trust
0.72 BXP Boston Properties
0.59 RC Ready Capital Corp
0.53 UK Ucommune International
0.48 EQC Equity Commonwealth
0.45 PW Power REIT
The ability to find closely correlated positions to Realty Income could be a great tool in your tax-loss harvesting strategies, allowing investors a quick way to find a similar-enough asset to
replace Realty Income when you sell it. If you don't do this, your portfolio allocation will be skewed against your target asset allocation. So, investors can't just sell and buy back Realty Income -
that would be a violation of the tax code under the "wash sale" rule, and this is why you need to find a similar enough asset and use the proceeds from selling Realty Income to buy it.
The correlation of Realty Income is a statistical measure of how it moves in relation to other instruments. This measure is expressed in what is known as the correlation coefficient, which ranges
between -1 and +1. A perfect positive correlation (i.e., a correlation coefficient of +1) implies that as Realty Income moves, either up or down, the other security will move in the same direction.
Alternatively, perfect negative correlation means that if Realty Income moves in either direction, the perfectly negatively correlated security will move in the opposite direction. If the correlation
is 0, the equities are not correlated; they are entirely random. A correlation greater than 0.8 is generally described as strong, whereas a correlation less than 0.5 is generally considered weak.
Correlation analysis and pair trading evaluation for Realty Income can also be used as hedging techniques within a particular sector or industry, or even over random equities, to generate a better risk-adjusted return on your portfolios.
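A toy version of the pair-correlation screen (tickers and numbers here are illustrative only; real inputs would come from whatever price feed you use):

import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
sector = rng.normal(0, 0.01, 252)                            # shared REIT-sector factor
o_returns = pd.Series(sector + rng.normal(0, 0.005, 252))    # stand-in for Realty Income (O)
are_returns = pd.Series(sector + rng.normal(0, 0.005, 252))  # stand-in for a candidate pair, e.g. ARE

rho = o_returns.corr(are_returns)        # Pearson coefficient in [-1, +1]
print(f"Return correlation: {rho:.2f}")  # above ~0.8 is usually read as strong, below ~0.5 as weak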
Realty Income technical stock analysis applies models and trading practices based on price and volume transformations, such as moving averages, the relative strength index, regressions, price and return correlations, business cycles, stock market cycles, or different charting patterns.
A focus of Realty Income technical analysis is to determine if market prices reflect all relevant information impacting that market. A technical analyst looks at the history of Realty Income trading
pattern rather than external drivers such as economic, fundamental, or social events. It is believed that price action tends to repeat itself due to investors' collective, patterned behavior. Hence
technical analysis focuses on identifiable price trends and conditions.
More Info... | {"url":"https://widgets.macroaxis.com/market-value/O","timestamp":"2024-11-04T07:18:28Z","content_type":"text/html","content_length":"342430","record_id":"<urn:uuid:0b061166-c0cd-4456-be1a-4a34e276f5ea>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00354.warc.gz"} |
Hotel Revenue Management Excel Formulas and Templates
As a hotelier, you know that hotel revenue management is a complex discipline, which demands two things:
• Comprehensive understanding of market dynamics
• Effective utilization of available tools
And speaking of tools specifically, Excel has long been a go-to tool for revenue managers in the hospitality industry.
With its powerful formulas and analytical capabilities, Excel can help your business unlock its revenue management potential and gain a competitive edge in today's dynamic hospitality landscape.
From data analysis to forecasting to pricing optimization and performance tracking, Excel can empower your revenue managers to make informed decisions while driving growth and enhancing profitability.
Today, we will explore some of the essential Excel formulas that can simplify your hotel revenue management process.
Let’s get started.
Essential Excel Formulas for Hotel Revenue Management
Excel offers several formulas that can assist in various aspects of hotel revenue management. Here are a few to make a note of:
1. Occupancy Rate
The occupancy rate in a hotel is the percentage of rooms occupied during a specific period. Calculating this metric can help your hotel assess how effectively you utilize your available room inventory.
This metric helps your business make data-driven decisions regarding pricing, marketing strategies, and operations to optimize revenue while ensuring efficient resource allocation. It is a vital
performance indicator that helps your hotel gauge demand, track trends, and make informed decisions to maximize profitability.
Calculating room occupancy rate in Excel is a straightforward process. Here's how to do it:
Step 1. Prepare your data
Create three columns in an Excel sheet and name them as Date, Number of occupied rooms, and Total number of available rooms.
Step 2. Enter your data
Enter the corresponding data for each date in the respective columns. Add the accurate number of occupied rooms and total available rooms each day.
Step 3. Calculate the occupancy rate
In a new column, you can calculate the occupancy rate using the formula:
Occupancy Rate (%) = (Number of Occupied Rooms / Total Available Rooms) X 100
• For example, if your number of occupied rooms is in column B and the total available rooms are in column C, you can use the formula =(B2/C2)*100 in the first row of the occupancy rate column.
• Apply the formula: Drag the formula down to apply it to all the rows in the occupancy rate column. Excel will automatically adjust the formula for each row based on the corresponding occupied and
available room values.
• Format the occupancy rate: Format the occupancy rate column as a percentage. Select the cells in the occupancy rate column, right-click, and choose the "Format Cells" option. In the "Number" tab,
select "Percentage" and choose the desired decimal places.
• Calculate the average occupancy rate (optional): To calculate the average occupancy rate over a specific period, you can use the AVERAGE formula. Select the cells in the occupancy rate column
corresponding to the desired period and use the AVERAGE formula to calculate the average occupancy rate.
For example, if the occupancy rates are in cells D2 to D31, you can use the formula =AVERAGE(D2:D31) to calculate the average occupancy rate for that period. This calculation provides valuable
insights into how effectively your hotel utilizes its room inventory and helps inform pricing and revenue management strategies.
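If you also keep the same daily figures outside Excel, the identical arithmetic in Python/pandas looks roughly like this (column names and numbers are illustrative, not a fixed standard):

import pandas as pd

df = pd.DataFrame({
    "date": pd.date_range("2024-01-01", periods=3, freq="D"),
    "occupied_rooms": [72, 85, 90],
    "available_rooms": [100, 100, 100],
})

df["occupancy_rate_pct"] = df["occupied_rooms"] / df["available_rooms"] * 100   # same as =(B2/C2)*100
print(df)
print(f"Average occupancy: {df['occupancy_rate_pct'].mean():.1f}%")             # same idea as =AVERAGE(D2:D31)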
2. Average Daily Rate (ADR)
The Average Daily Rate (ADR) refers to the average price paid per room occupied in a specific period. You can calculate ADR to assess your pricing strategy and track revenue performance.
When you monitor ADR, your hotel business can evaluate the effectiveness of pricing decisions, identify trends, and make adjustments to optimize revenue as well as profitability. It is a crucial
metric for benchmarking and analyzing market competitiveness while helping your hotel stay competitive and make informed pricing decisions.
Calculating ADR rate in Excel is simple. Here's how to do it:
Step 1. Prepare and enter your data
Create three columns in an Excel sheet for the date, room revenue, and the number of occupied rooms for each day. Then, enter the corresponding data for each date in the respective columns.
Step 2. Calculate the ADR
In a new column, calculate the ADR for each day using the below formula:
ADR = Room Revenue / Number of Occupied Rooms
• For example, if your room revenue is in column B and the number of occupied rooms is in column C, you can use the formula =B2/C2 in the first row of the ADR column.
• Apply the formula: Drag the formula down to apply it to all the rows in the ADR column. Excel will automatically adjust the formula for each row based on the corresponding room revenue and the
number of occupied rooms values.
• Format the ADR: Format the ADR column as a currency or a desired number format. Select the cells in the ADR column, right-click, and choose the "Format Cells" option. In the "Number" tab, select
the desired format with decimal places, such as currency or a specific number format.
• Calculate the average ADR (optional): Use the AVERAGE formula to calculate the average ADR over a specific period. To do this, select the cells in the ADR column corresponding to the desired
period and use the AVERAGE formula to calculate the average ADR.
For example, if the ADR values are in cells D2 to D31, you can use the formula =AVERAGE(D2:D31) to calculate the average ADR for that specific period.
3. Revenue per Available Room (RevPAR)
RevPAR is a key performance metric that assesses your hotel's revenue generation efficiency. The RevPAR formula divides total room revenue by the number of available rooms, providing insight into the
revenue contribution per room.
When you monitor RevPAR trends over time and compare them to industry benchmarks, revenue managers can assess your hotel's performance and make data-driven decisions to enhance revenue.
Calculating RevPAR in Excel is simple. Here's how to do it:
Step 1. Prepare and enter your data
Create three columns in a new Excel sheet: Date, Total room revenue, and the number of available rooms for each day. Enter the corresponding data for each date in the respective columns and make sure
they are accurate.
Step 2. Calculate RevPAR
In a new column, calculate RevPAR for each day using the below formula:
RevPAR = Total Room Revenue / Number of Available Rooms
• For instance, if your total room revenue is in column B and the number of available rooms is in column C, you can use the formula =B2/C2 in the first row of the RevPAR column.
• Apply the formula: Drag the formula down to apply it to all the rows in the RevPAR column. Excel will automatically adjust the formula for each row based on the corresponding values entered.
• Format the RevPAR: Format the RevPAR column as a currency or a desired number format. Select the cells in the RevPAR column, right-click, and choose the "Format Cells" option. In the "Number"
tab, select the desired format with decimal places, such as currency or a specific number format.
• Calculate the average RevPAR (optional): You can use the AVERAGE formula to calculate the average RevPAR over a specific period. Select the cells in the RevPAR column corresponding to the desired
period and use the AVERAGE formula to calculate the average RevPAR.
For example, if the RevPAR values are in cells D2 to D31, you can use the formula =AVERAGE(D2:D31) to calculate the average RevPAR for that specific period.
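ADR and RevPAR can be scripted the same way, and it is worth knowing that RevPAR equals ADR multiplied by the occupancy rate, which gives you a quick consistency check across the three sheets. A rough pandas sketch with illustrative numbers:

import pandas as pd

df = pd.DataFrame({
    "room_revenue": [10800.0, 12750.0, 13500.0],
    "occupied_rooms": [72, 85, 90],
    "available_rooms": [100, 100, 100],
})

df["adr"] = df["room_revenue"] / df["occupied_rooms"]          # ADR = room revenue / occupied rooms
df["revpar"] = df["room_revenue"] / df["available_rooms"]      # RevPAR = room revenue / available rooms
df["occupancy"] = df["occupied_rooms"] / df["available_rooms"]
# Consistency check: RevPAR == ADR * occupancy rate
assert ((df["adr"] * df["occupancy"] - df["revpar"]).abs() < 1e-9).all()
print(df[["adr", "revpar"]].mean())                            # period averages, like =AVERAGE(...)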
4. GOPPAR (Gross Operating Profit per Available Room)
GOPPAR (Gross Operating Profit per Available Room) is a key performance metric that assesses the profitability of every available room in your hotel. It considers the gross operating profit generated
by the hotel and divides it by the total number of available rooms, providing a per-room profitability figure.
Unlike metrics such as RevPAR or ADR, which focus solely on revenue, GOPPAR considers both revenue and expenses. It comprehensively assesses the hotel's profitability by considering all operating
costs associated with generating that revenue. This KPI helps revenue managers evaluate your business's financial health and make informed decisions to drive the hotel's profitability.
Calculating GOPPAR in Excel is easy. Here's how to do it:
Step 1. Prepare and enter your data
Create four columns in a new Excel sheet: Date, Total revenue, Operating expenses, and the No. of available rooms per day. Enter the corresponding data for each date in the respective columns and make sure they are precise.
Step 2. Calculate GOP (Gross Operating Profit)
In a new column, calculate the GOP (Gross Operating Profit) for each day using the below formula:
GOP (Gross Operating Profit) = Total Revenue – Operating Expenses
• For instance, if your total revenue is in column B and operating expenses are in column C, you can use the formula =B2-C2 in the first row of the gross operating profit column.
Step 3. Calculate the GOPPAR
In a new column, calculate the GOPPAR for each day using the below formula:
GOPPAR = Gross Operating Profit / Number of Available Rooms
• For instance, if the gross operating profit is in column D and the number of available rooms is in column E, you can use the formula =D2/E2 in the first row of the GOPPAR column.
• Apply the formula: Drag the formula down to apply them to all the rows in the respective columns. Excel will automatically adjust the formula for each row based on the corresponding data.
• Format the GOPPAR: Format the GOPPAR column as a currency or a desired number format. Select the cells in the GOPPAR column, right-click, and choose the "Format Cells" option. In the "Number"
tab, select the desired format with decimal places, such as currency or a specific number format.
• Calculate the average GOPPAR (optional): Use the AVERAGE formula to calculate the average GOPPAR over a specific period. Select the cells in the GOPPAR column corresponding to the desired period
and use the AVERAGE formula to calculate the average GOPPAR.
For example, if the GOPPAR values are in cells F2 to F31, you can use the formula =AVERAGE(F2:F31) to calculate the average GOPPAR for that specific period.
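The same two-step GOP-then-GOPPAR calculation, sketched in pandas with made-up numbers:

import pandas as pd

df = pd.DataFrame({
    "total_revenue": [15000.0, 17500.0],
    "operating_expenses": [9000.0, 10200.0],
    "available_rooms": [100, 100],
})

df["gop"] = df["total_revenue"] - df["operating_expenses"]   # Step 2: gross operating profit
df["goppar"] = df["gop"] / df["available_rooms"]             # Step 3: GOP per available room
print(df)
print(f"Average GOPPAR: {df['goppar'].mean():.2f}")          # like =AVERAGE(F2:F31)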
5. TRevPAR (Total Revenue per Available Room)
TRevPAR considers all revenue streams your hotel generates, not just room revenue. It includes revenue from rooms, food and beverage, spa services, conference facilities, and other ancillary sources.
This KPI provides a holistic view of the hotel's revenue-generating potential and allows revenue managers to accurately assess the overall financial performance.
TRevPAR is an important metric because it allows your business to evaluate the effectiveness of revenue management strategies and pricing decisions. It provides insights into your hotel's ability to
maximize revenue from both rooms and other revenue streams, contributing to the overall business profitability.
Calculating TRevPAR in Excel is easy. Here's how to do it:
Step 1. Prepare and enter your data
Create three columns in a new Excel sheet: Date, Total revenue, and the number of available rooms per day. Enter the corresponding data for each date in the respective columns and make sure they are accurate.
Step 2. Calculate the total revenue
In a new column, sum up all the revenue streams for each day to obtain the total revenue by using the below formula:
=SUM(Range of Revenue Cells)
• For instance, if your revenue streams are in columns B to F, you can use the formula =SUM(B2:F2) in the first row of the total revenue column.
Step 3. Calculate the TRevPAR
In another column, divide the total revenue by the number of available rooms for each day to calculate TRevPAR. Use this formula:
TRevPAR = Total Revenue / Number of Available Rooms
• For instance, if the total revenue is in column G and the number of available rooms is in column H, you can use the formula =G2/H2 in the first row of the TRevPAR column.
• Apply the formula: Drag the formula down to apply them to all the rows in the respective columns. Excel will automatically adjust the formula for each row based on the corresponding data.
• Format the TRevPAR: Select the cells in the TRevPAR column, right-click, and choose the "Format Cells" option. In the "Number" tab, select the desired format with decimal places, such as currency
or a specific number format.
• Calculate the average TRevPAR (optional): You can use the AVERAGE formula to calculate the average TRevPAR over a specific period. Select the cells in the TRevPAR column corresponding to the
desired period and use the AVERAGE formula to calculate the average TRevPAR.
For example, if the TRevPAR values are in cells I2 to I31, you can use the formula =AVERAGE(I2:I31) to calculate the average TRevPAR for that specific period.
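Summing several revenue streams before dividing, as the TRevPAR steps describe, looks like this in pandas (the stream names are examples, not a fixed list):

import pandas as pd

df = pd.DataFrame({
    "rooms": [10800.0, 12750.0],
    "food_beverage": [2300.0, 2950.0],
    "spa": [600.0, 450.0],
    "available_rooms": [100, 100],
})

revenue_streams = ["rooms", "food_beverage", "spa"]
df["total_revenue"] = df[revenue_streams].sum(axis=1)        # Step 2: like =SUM(B2:F2)
df["trevpar"] = df["total_revenue"] / df["available_rooms"]  # Step 3: like =G2/H2
print(df[["total_revenue", "trevpar"]])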
6. RevPASH (Revenue per Available Seat Hour)
RevPASH (Revenue per Available Seat Hour) is a way to measure revenue generated per available seat hour in your restaurants, bars, and event spaces. It is calculated by dividing the total revenue by
the number of available seat hours.
This metric helps your hotel assess the efficiency of the seating capacity utilization while optimizing your pricing strategies. It helps your revenue managers make informed decisions regarding
staffing, operations, and revenue generation in their F&B outlets.
RevPASH helps your hotel identify opportunities for revenue growth and improve the overall financial performance of your food and beverage operations.
Calculating RevPASH in Excel is easy. Here's how to do it:
Step 1. Prepare and enter your data
Create three columns in a new Excel sheet: Date, Total revenue, and the number of available seat hours per day or specific time period. Enter the corresponding data for each date in the respective
columns and make sure they are correct.
Step 2. Calculate the RevPASH
In a new column, divide the total revenue by the number of available seat hours for each day or period. Use the below formula:
RevPASH = Total Revenue / Number of Available Seat Hours
• For example, if your total revenue is in column B and the number of available seat hours is in column C, you can use the formula =B2/C2 in the first row of the RevPASH column.
• Apply the formula: Drag the formula down to apply them to all the rows in the respective columns. Excel will automatically adjust the formula for each row based on the corresponding data.
• Format the RevPASH: Format the RevPASH column as a currency or a desired number format. Select the cells in the RevPASH column, right-click, and choose the "Format Cells" option. In the "Number"
tab, select the desired format with decimal places, such as currency or a specific number format.
• Calculate the average RevPASH (optional): You can use the AVERAGE formula to calculate the average RevPASH over a specific period. Select the cells in the RevPASH column corresponding to the
desired period and use the AVERAGE formula to calculate the average RevPASH.
For example, if the RevPASH values are in cells D2 to D31, you can use the formula =AVERAGE(D2:D31) to calculate the average RevPASH for that specific period.
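One detail the steps above leave implicit is how to get available seat hours in the first place; a common convention is seats multiplied by opening hours, as in this illustrative sketch:

import pandas as pd

df = pd.DataFrame({
    "fnb_revenue": [4200.0, 5100.0],
    "seats": [80, 80],
    "hours_open": [10, 12],
})

df["available_seat_hours"] = df["seats"] * df["hours_open"]      # e.g. 80 seats x 10 hours = 800
df["revpash"] = df["fnb_revenue"] / df["available_seat_hours"]   # revenue per available seat hour
print(df[["available_seat_hours", "revpash"]])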
Benefits of Using Excel Spreadsheet for Hotel Revenue Management
While advanced revenue management systems are available, not all hotels have the resources to adopt them. Excel, being a versatile tool, can revolutionize hotel revenue management. Learn how Excel
spreadsheets can help streamline your hotel's revenue management process.
1. A Cost-Effective Solution
Excel is a cost-effective solution if you have a limited budget. Unlike complex revenue management systems that may require substantial investments, Excel is readily available at no additional cost.
You can optimize revenue, streamline processes, and drive profitability without incurring significant expenses.
2. Easy to Access and Use
Excel's widespread usage and familiarity make it an accessible and user-friendly tool for hotel revenue management. It minimizes the learning curve and allows for quick implementation. Its intuitive
interface and familiar spreadsheet format enable easy navigation, data entry, and analysis, which makes it a go-to choice for your tasks.
3. Flexible and Customizable
Excel spreadsheets offer unparalleled flexibility and customization options. Since hotel revenue management requirements vary across properties, Excel empowers you to tailor it according to your
needs. Customizable formulas, calculations, and formatting options allow you to create personalized revenue management templates that align precisely with your business's unique metrics, reporting
periods, and analysis preferences.
4. Provides Better Data Organization and Analysis
Excel is excellent at organizing and analyzing data, making it an ideal tool for hotel revenue management. It offers a structured framework for efficiently recording and managing data related to room
revenue, ancillary revenue, expenses, and occupancy rates.
With Excel's powerful built-in functions, your business can perform various calculations, generate insightful reports, and derive meaningful metrics such as RevPAR, ADR, and occupancy rate. These
capabilities enable your revenue managers to make data-driven decisions and gain a comprehensive understanding of your hotel's performance.
5. Assists with Financial Analysis and Forecasting
Excel empowers your hotel business to conduct in-depth financial analysis and forecasting, critical components of revenue management. With the help of historical data, your revenue managers can
create accurate projections, monitor trends, and identify opportunities for revenue optimization.
By leveraging Excel's formulas and data visualization features, such as charts and graphs, you can present financial insights clearly and effectively, facilitating strategic planning and informed decision-making.
Final Words
Excel's significance in hotel revenue management cannot be overstated. Its data analysis capabilities, forecasting tools, pricing optimization features, performance tracking, and flexibility make it
an invaluable asset for revenue managers. With Excel as an ally, your hotel can unlock revenue potential, drive profitability, and stay ahead in the ever-evolving hospitality landscape.
Join the newsletter to get the latest updates. | {"url":"https://blog.axisrooms.com/hotel-revenue-management-excel/","timestamp":"2024-11-05T11:56:18Z","content_type":"text/html","content_length":"74547","record_id":"<urn:uuid:17ffe56d-a9eb-425d-9aaa-53e7c537cd28>","cc-path":"CC-MAIN-2024-46/segments/1730477027881.88/warc/CC-MAIN-20241105114407-20241105144407-00383.warc.gz"} |
Module 0 Day 13 Your Turn Part 1
The purpose of this question was to show the power of using exterior angles: It can make questions like these very simple! The method Professor Loh shows later in the video, using interior angles, is
a little more complicated, but it's still worth knowing how to make use of interior angles.
The first step is to know what is the sum of all the interior angles in a polygon with \(n\) sides. You can try to look for a pattern:
For a triangle, the sum of angles is \(180.\)
For a quadrilateral, the sum is \(360\) (think of a square).
<Pentagons may be difficult, skip for now.>
For a hexagon, the sum is \(720\) (think of a regular hexagon!).
Based on this pattern, you can guess that the sum of angles goes up by \(180\) every time you add a side! But why? The clever reason comes from dividing the polygon into triangles.
You can see Professor Loh do this at 2:18. Every time you add another side, you get another triangle that makes up the polygon. And, adding up the angles of the polygon is the same as adding up the
angles of every triangle in the "triangulation" (make sure this step makes sense!).
That means that the sum of the angles is just (sum of angles inside a triangle) \( \times \) (number of triangles in the polygon). Since the angle sum for a triangle is \(180,\) it's just \(180 \times \text{ number of triangles}.\)
The number of triangles that an \(n\)-sided polygon is divided into is just \((n-2)\). You can figure this out by looking at a pattern (\(n=3\) is \(1\) triangle, \(n=4\) is \(2\) triangles, etc.).
Then the formula for the sum of angles is just \(180 \times (n-2).\) This is a very important formula! Make sure you understand why it works instead of memorizing it.
Back to the problem in the video: We don't know how many sides there are, so let's say there are \( n\) sides. Then since every angle is \(170,\) the sum of angles is \(170n.\) But we just derived a
formula for the sum of angles: It's \(180(n-2).\) So we can set these expressions equal to each other:
$$ 170n = 180(n-2) $$
Now we just solve this equation and we are done!
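If you want to check your final answer: expanding the right side gives $$ 170n = 180n - 360, $$ so \(10n = 360\) and \(n = 36.\) A polygon whose interior angles are each \(170\) degrees therefore has \(36\) sides.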
I hope that helped. Happy learning! | {"url":"https://forum.poshenloh.com/category/119/module-0-day-13-your-turn-part-1","timestamp":"2024-11-12T13:32:57Z","content_type":"text/html","content_length":"42765","record_id":"<urn:uuid:1b1376f4-a9d3-4144-a918-2ffd2dc51cd7>","cc-path":"CC-MAIN-2024-46/segments/1730477028273.45/warc/CC-MAIN-20241112113320-20241112143320-00155.warc.gz"} |
My main research centres on quantum non-locality. In general the world changes through local interactions which cannot move from one place to another faster than light. However when two particles
interact to form an entangled state, are separated by a large distance, and then one is measured, the wave function describing the other particle collapses seemingly instantaneously. It’s as if the
act of measuring one particle affects the other faster than light. There are several different interpretations of how nature allows this, none of which have universal agreement amongst physicists.
For example, one of the interpretations, many worlds, says that every outcome of the measurement happens simultaneously, and that we should view the world including ourselves as splitting into
parallel universes. In this view the collapse of the state is the change of our knowledge about the other particle conditional on a particular measurement outcome.
I create thought experiments with spatially separated particles, looking at behaviours which, like the collapse above, do not exist in classical physics. Although the basic rules giving the evolution
of states and the probabilities of the outcomes for any measurements are agreed, there are many surprising behaviours which follow. Bell Inequality non-locality and quantum information science
including teleportation are so remarkable they won the Nobel prize in 2022. I hope that by understanding thought experiments we will get a clear understanding of how nature operates, and reconcile
its implications for our view of the world. My main results fall into several groups: Conservation Laws, Teleportation, Bell Inequalities, Interaction-Free Dynamics, Entanglement, Non-Local
Operations, Frames of Reference, Post-Quantum Theories, and Weak Measurements. Below I give an overview. For more details you can look at my paper posts in “News”, or read the papers themselves via
the links in “Publications”.
Conservation Laws
Conservation Laws along with their underlying symmetries underpin physics, and are familiar to anyone who ever studied conservation of energy in physics or chemistry. However in quantum mechanics,
the underlying randomness of a measurement led to the laws being written in terms of distributions of e.g. energy being conserved. S Popescu and I argue that conservation holds in each individual
measurement outcome. This result, which we proved in a simplified case, is a significant strengthening of the usual conservations laws, which we hope to extend to all physical interactions.
Teleportation
Teleportation is the process of using pre-shared entanglement, classical communication and local operations to send a quantum state from one place to another. It's remarkable that it's possible at
all, since the uncertainty principle prevents us from measuring all properties of a state simultaneously. I showed that one can teleport a state evolving backward in time. I used this to give a
simplified instantaneous* measurement verifying that an entangled backward evolving state is an accurate combined description of two remote systems at a given time. I also worked on the theory side
of quantum relay experiments in Nicolas Gisin’s group.
*The quantum part of the measurement is instantaneous.
Bell Inequalities
John Bell showed that quantum mechanics predicts non-local correlations between measurements performed on spatially-separated systems. He created inequalities on the correlations which all classical
local theories obey, and which quantum mechanics violates. N Gisin, N Linden, S Massar, S Popescu and I created a family of inequalities for arbitrarily many outcomes, which is strongly resistant to
noise. With my collaborators I also created inequalities to detect true n-party non-locality, a useful inequality with three possible measurements on each party (usually there were two), and showed
how to close the memory loophole.
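As a numerical illustration of the kind of correlations at stake (a textbook CHSH computation, not drawn from the papers above): local classical models keep the CHSH combination S within |S| <= 2, while measurements on a singlet state at the standard angle choices reach 2*sqrt(2).

import numpy as np

def E(a, b):
    # Correlation of spin measurements along angles a and b on a shared singlet state
    return -np.cos(a - b)

a0, a1 = 0.0, np.pi / 2              # Alice's two settings
b0, b1 = np.pi / 4, 3 * np.pi / 4    # Bob's two settings

S = E(a0, b0) - E(a0, b1) + E(a1, b0) + E(a1, b1)
print(abs(S), 2 * np.sqrt(2))        # ~2.828 > 2: the classical (local) bound is violated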
Interaction Free Flows
Newton’s third law states that for every action there is an equal and opposite reaction. However in quantum mechanics objects can sometimes be measured with no interaction at all, as first
demonstrated by the Elitzur-Vaidman Bomb. Together with Y Aharonov and S Popescu, I showed that angular momentum, a conserved quantity, can flow across a region of space where there is a vanishingly
small chance of finding any particles to carry it. This shows that the flow of conserved quantities, e.g. energy, can be completely different to the flow of any particles which may carry them. This
is known as the Dynamic Cheshire Cat effect, like the cat in Alice in Wonderland which separates its grin from itself.
Entanglement
Entangled states are the fundamental non-local states in quantum mechanics, giving correlations between experimental outcomes which cannot be reproduced classically. They can be treated as a resource
for quantum information processing tasks such as teleportation or cryptography, and the amount of entanglement in each state can be quantified by using local operations and classical communication to
transform one entangled state into another. Sandu Popescu and I nevertheless showed that entanglement has a close classical analogue – secret classical correlations. By comparing the two we are able
to better understand entanglement, and which features of it are truly quantum.
Non-Local Operations
Going beyond entangled states, N Linden, S Popescu and I developed a resource theory for quantum operations on spatially separated systems. These operations take quantum inputs for Alice and Bob and
give them each a quantum output. We showed how one could quantify them, by relating them to the entanglement of states. Much more recently, CM Ferrera, R Simmons, J Purcell, S Popescu and I defined
non-signalling operations taking classical inputs for Alice and Bob and giving quantum outputs, and contrasted them with the no-signalling PR Boxes which take classical inputs and give classical outputs.
Frames of Reference
Some tasks require a frame of reference to be physically shared between parties. For example, two people cannot agree which is their left hand and which is their right by sending binary code to one
another unless they already have shared knowledge of a left (or right) handed object*. L Diosi, N Gisin, S Massar, S Popescu and I invented Quantum Gloves, states which encode left or right
handedness without encoding any directionality and using a minimum of resources. Frames of reference are also crucial for our research on the conservation laws in each individual outcome.
*Unless you perform a particle physics experiment, as the weak force has a handedness (parity-violation) built in.
Post Quantum Non-locality
One way to understand why quantum mechanics goes beyond classical mechanics, is to go even further, to theories with non-local correlations even stronger than those in quantum mechanics, and try to
see how quantum mechanics looks in comparison. Carolina Moreira Ferrera, Robin Simmons, James Purcell, Sandu Popescu and I investigated non-local boxes with classical inputs and quantum outputs,
which is one particular extension beyond quantum theory. We give explicit constructions for all bi-partite pure state output boxes in this theory using standard quantum mechanics and the original
post-quantum object, PR Boxes.
Weak Measurements
Measuring a weight by putting it on the scale so briefly it barely moves the scale feels useless classically. However in Quantum Mechanics weak measurements allow us to learn something about states
whilst hardly disturbing them. N Brunner, A Acin, N Gisin, V Sclarani and I showed that polarization-mode dispersion in an optical fibre can be viewed as a weak measurement. In this framework
polarization-dependent losses are a post-selection. We applied the quantum formalism for weak measurements and post-selection, which was developed to probe fundamental quantum mechanics, to simplify
the practical problem of optical networks in the telecom limit of weak PMD. | {"url":"https://danielgcollins.com/?page_id=273","timestamp":"2024-11-10T08:38:04Z","content_type":"text/html","content_length":"43193","record_id":"<urn:uuid:8147718e-e86c-49f5-8d12-3537796c209b>","cc-path":"CC-MAIN-2024-46/segments/1730477028179.55/warc/CC-MAIN-20241110072033-20241110102033-00283.warc.gz"} |
How do you write 0.25 million in scientific notation? | HIX Tutor
How do you write 0.25 million in scientific notation?
Answer 1
$0.25 \text{ million} = 2.5 \times 10^{5}$.
In scientific notation, we write a number so that it has a single digit to the left of the decimal sign and is multiplied by an integer power of $10$.
Note that moving the decimal $p$ digits to the right is equivalent to multiplying by $10^p$, and moving the decimal $q$ digits to the left is equivalent to dividing by $10^q$.
Hence, we should either divide the number by $10^p$, i.e. multiply by $10^{-p}$ (if moving the decimal to the right), or multiply the number by $10^q$ (if moving the decimal to the left).
In other words, the number is written as $a \times 10^n$, where $1 \le a < 10$ and $n$ is an integer.
Now $0.25$ million is equivalent to $0.25 \times 10^6$. However, to write this in scientific notation, we need to have the first digit to the left of the decimal, and hence we should move the decimal point one place to the right, which literally means multiplying by $10$, and also divide by $10$, which reduces the power of $10$ to $5$.
Hence in scientific notation, $0.25 \text{ million} = 2.5 \times 10^5$.
Answer 2
A million is $10^6 = 1{,}000{,}000$.
So we have $0.25$ of that amount, which is written $0.25 \times 10^6$.
But scientific notation is such that we have just one non-zero digit to the left of a decimal point and everything else to the right of it.
So the objective is to end up with $0.25$ becoming $2.5$.
But this is a different value, so we include a correction without actually applying it:
$0.25 = 2.5 \times \tfrac{1}{10}$
So instead of writing $0.25 \times 10^6$ we write:
$2.5 \times \dfrac{10^6}{10} = 2.5 \times 10^5$
Answer 3
To write 0.25 million in scientific notation, you first need to express it in standard notation, which is 250,000. Then you move the decimal point to the left until there is only one non-zero digit remaining to its left, counting the number of places you moved it. In this case, you move the decimal point five places to the left to get 2.5. Therefore, in scientific notation, 0.25 million is written as 2.5 × 10^5. | {"url":"https://tutor.hix.ai/question/how-do-you-write-0-25-million-in-scientific-notation-8f9afa4ca2","timestamp":"2024-11-13T05:02:05Z","content_type":"text/html","content_length":"588989","record_id":"<urn:uuid:642aaab5-004f-4a00-bbef-5a901724cce7>","cc-path":"CC-MAIN-2024-46/segments/1730477028326.66/warc/CC-MAIN-20241113040054-20241113070054-00442.warc.gz"}
Inferring the subsurface structural covariance model using cross-borehole ground penetrating radar tomography
We address a fundamental problem inherent in least squares based ground penetrating radar tomography problems, and linear inverse Gaussian problems in general: how should the a priori covariance
model be chosen? The choice of such a prior covariance model is most often a very subjective task that has major implications on the result of the inversion. We present a method that allows
quantification of the likelihood that a given choice of prior covariance model is consistent with the observed tomography data. This is done by comparing statistical properties of samples of the
prior and posterior probability density function of the tomographic inverse problem. In essence, if samples of the posterior are unlikely samples of the prior, then such a choice of a priori
covariance model is deemed unlikely. This enables one to quantify the consistency of a number of equally probable prior covariance models to data observations. A synthetic data set was used to
describe and validate the approach. We determined how a known covariance model could be inferred from a synthetic tomography problem. The methodology was then applied to a nonlinear ground
penetrating radar tomography case study. The covariance model deemed most likely was consistent with nearby ground penetrating radar reflection profiles. The method provides useful results even if
just a subset as small as 10% of the available data is considered.
• Programme area 3: Energy resources
Dive into the research topics of 'Inferring the subsurface structural covariance model using cross-borehole ground penetrating radar tomography'. Together they form a unique fingerprint. | {"url":"https://pub.geus.dk/da/publications/inferring-the-subsurface-structural-covariance-model-using-cross-","timestamp":"2024-11-10T14:33:57Z","content_type":"text/html","content_length":"56297","record_id":"<urn:uuid:03cbd845-f88b-4c72-908e-912b25d612cf>","cc-path":"CC-MAIN-2024-46/segments/1730477028187.60/warc/CC-MAIN-20241110134821-20241110164821-00798.warc.gz"}
When To Buy Stocks (Timing The Market Perfectly) - Business
The best time to buy stocks is at regularly timed intervals, and I've shared all the data to back it up in this video. With the market at all-time highs and everybody talking about a pending recession, it can be scary to get into the market right now. There are lots of theories and advice on how you should invest, and there's no shortage of random people on YouTube telling you what to do with your money, but there's really nothing like data to tell you what is the best strategy. If I'm looking to invest my money in the best way possible, I want a strategy that has been proven by the data to work, and that's why over the weekend I ran some numbers on some historical stock market data to test three different market timing strategies. And yes, that is how I spend my weekends.
Using historical stock market returns from 1988 to 2019, I calculated what would have happened to your money in three different scenarios: one where you timed the market perfectly and bought at exactly the right times, another where you tried to time the market but failed miserably and bought at exactly the wrong times, and finally a scenario where you didn't bother trying to time the market at all and instead invested at regular intervals the entire time.
In the first scenario you would have bought at exactly the lowest of the lows, right after three market crashes: the dot-com bubble bursting in 2000, the financial crisis in 2007-2008, and then the mini crash towards the end of 2018. So in my spreadsheet I tested what would have happened if, in the first scenario, you had completely avoided the crash and gotten in right at the bottom. In the second scenario you would have bought right at the top, right before the crash, so you got in at the worst price. And in the third scenario you invested at regular intervals the entire time, crash or no crash. And let me tell you, the results are shocking.
The vast majority of people make investing decisions based on emotions, but with this study you can be different from everybody else: you're going to know how to look at data and make investment decisions based on what the numbers say, not what some random person on YouTube tells you to do. So if you want some data-backed insights on how to handle your investments given where we are in the market cycle, as well as find out when is the best time to buy stocks, then keep watching.
Okay, so here's the lowdown on the study. I took the monthly returns of the S&P 500 over a 31-year period from 1988 to 2019. Then I assumed that each investor saved $500 a month, every single month, for those entire 31 years. In the first scenario, where you had perfect market timing, you would have always waited to buy your stocks right at the very lows of the market after any one of those three crashes (the 2000, 2008 and 2018 crashes), and whenever you were waiting for that perfect time to buy, you saved up your $500 a month in cash, which would then pile up so you could deploy it when it was time to buy. If you had invested like this, you would have ended up with one million, two hundred thirty-five thousand, seven hundred sixty dollars and ten cents at the end of those 31 years.
In the second scenario you had horrible market timing. You never got the timing right, so you always ended up buying your stocks at the top of the market, right before it crashed. If you had invested like this, you would have ended up with five hundred seventy-seven thousand, five hundred thirty-two dollars at the end of those 31 years.
And finally, in the third scenario, you had a very different approach from the first two. Instead of trying to time the market and picking the best times to buy, you just invested on a recurring basis: $500 a month, every single month. In other words, you invested like a robot; every month you bought the same dollar amount of stocks, whether the market was high or low, crashing or not crashing. This strategy is also known as dollar cost averaging. If you had invested like this, you would have ended up with one million, one hundred eighty-three thousand, two hundred twenty-four dollars and forty-six cents at the end of the 31 years.
Keep in mind all three of these investors contributed five hundred dollars a month, or a total of 192 thousand dollars, over the course of those 31 years, and they all invested in the same S&P 500 index fund. Same investment, same amount of money that they each put in, three very different outcomes. If you want to see for yourself how I came up with these results, you can get my spreadsheet right below the video in the description, so check it out if you want to verify my numbers.
Okay, so what can we learn from this? The number one takeaway from this study is that you should definitely smash that like button. All jokes aside, I think there are three key takeaways.
The first takeaway is that in all three scenarios you ended up with way more money than you started with. In all three situations you contributed $500 a month, or a total of 192 thousand, and you still ended up with more than you actually put in. What this tells me is that even if you have horrible market timing, it's still better to invest than to not invest, because if you don't invest and you just save your cash in a bank account, then at the end of those 31 years you'll just have what you put in, which is the hundred ninety-two thousand dollars. Over the long run, the investor always ends up better than the saver.
The second takeaway is that if you just forget about trying to time the market and make regular purchases instead with a dollar cost averaging strategy, you would have ended up with one point one eight million dollars. You'd be a millionaire. Not too shabby, right? And what's amazing is that the difference between having done a regular dollar cost averaging strategy versus having bought at exactly the best times with perfect market timing is not huge. The person who timed the market perfectly, in other words in that first scenario, ended up with one point two three five million dollars, so that's only a difference of fifty-two thousand dollars from the dollar cost averager. And the thing is, it's actually impossible to time the market (I'll show you some very sobering stats on that in just a minute), but given that making consistent investments at regular monthly intervals is something that anyone can do, and that it's actually doable, I think that is the best approach.
The third takeaway is that a long time horizon is key. Even in the worst-case scenario, where you always got into the market at the very top, if you had a long time horizon (and the period that I looked at was a 31-year time period) you still ended up with more money than you put in. And in the dollar cost averaging scenario, as well as the best-case scenario, you ended up with exponentially more money than you put in. What most people forget to consider is that when you own stocks you also get dividends that entire time, and then those dividends get reinvested. | {"url":"https://lyfreedom.com/when-to-buy-stocks-timing-the-market-perfectly/","timestamp":"2024-11-06T07:55:46Z","content_type":"text/html","content_length":"75673","record_id":"<urn:uuid:a9f68289-2439-43b8-a502-2d61e55b8e10>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00802.warc.gz"}
Adjusted R-squared | FlowHunt
Adjusted R-squared is a statistical measure used to evaluate the goodness of fit of a regression model. It is a modified version of the R-squared (or coefficient of determination) that accounts for
the number of predictors in the model. Unlike R-squared, which can artificially inflate with the addition of more independent variables, Adjusted R-squared adjusts for the number of predictors,
providing a more accurate measure of a model’s explanatory power. It increases only if the new predictor improves the model’s predictive power more than expected by chance, and decreases when a
predictor is not adding significant value.
Understanding the Concept
R-squared vs. Adjusted R-squared
• R-squared: Represents the proportion of variance in the dependent variable that is predictable from the independent variables. It is calculated as the ratio of the explained variance to the total
variance and ranges from 0 to 1, where 1 indicates that the model explains all the variability of the response data around its mean.
• Adjusted R-squared: This metric adjusts the R-squared value based on the number of predictors in the model. The adjustment is made to account for the possibility of overfitting which can occur
when too many predictors are included in a model. Adjusted R-squared is always less than or equal to R-squared and can be negative, indicating that the model is worse than a horizontal line
through the mean of the dependent variable.
Mathematical Formula
The formula for Adjusted R-squared is:
\[ \text{Adjusted } R^2 = 1 - \frac{(1-R^2)(n-1)}{n-k-1} \]
• \( R^2 \) is the R-squared,
• \( n \) is the number of observations,
• \( k \) is the number of independent variables (predictors).
Importance in Regression Analysis
Adjusted R-squared is crucial in regression analysis, especially when dealing with multiple regression models, where several independent variables are included. It helps to determine which variables
contribute meaningful information and which do not. This becomes particularly important in fields like finance, economics, and data science where predictive modeling is key.
Overfitting and Model Complexity
One of the main advantages of Adjusted R-squared is its ability to penalize the addition of non-significant predictors. Adding more variables to a regression model typically increases the R-squared
due to the likelihood of capturing random noise. However, Adjusted R-squared will only increase if the added variable improves the model’s predictive power, thereby avoiding overfitting.
Use Cases and Examples
Use in Machine Learning
In machine learning, Adjusted R-squared is employed to evaluate the performance of regression models. It is particularly useful in feature selection, which is an integral part of model optimization.
By using Adjusted R-squared, data scientists can ensure that only those features that genuinely contribute to the model’s accuracy are included.
Application in Finance
In finance, Adjusted R-squared is often used to compare the performance of investment portfolios against a benchmark index. By adjusting for the number of variables, investors can better understand
how well a portfolio’s returns are explained by various economic factors.
Simple Example
Consider a model predicting house prices based on square footage and the number of bedrooms. Initially, the model shows a high R-squared value, suggesting a good fit. However, when additional
irrelevant variables, such as the color of the front door, are added, the R-squared may remain high. Adjusted R-squared would decrease in this scenario, indicating that the new variables do not
improve the model’s predictive power.
Detailed Example
According to a guide from the Corporate Finance Institute, consider two regression models for predicting the price of a pizza. The first model uses the price of dough as the sole input variable,
yielding an R-squared of 0.9557 and an adjusted R-squared of 0.9493. A second model adds temperature as a second input variable, yielding an R-squared of 0.9573 but a lower adjusted R-squared of
0.9431. The adjusted R-squared correctly indicates that temperature does not improve the model’s predictive power, guiding analysts to prefer the first model.
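Those pizza figures can be reproduced with a few lines of Python. The helper below is ours (not a FlowHunt or library function), and the sample size n = 9 is an inferred value that happens to be consistent with the adjusted figures quoted above:

def adjusted_r2(r2: float, n: int, k: int) -> float:
    """Adjusted R-squared for n observations and k predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# Model 1: dough price only; Model 2: dough price + temperature (n = 9 is an assumption).
print(round(adjusted_r2(0.9557, 9, 1), 4))   # ~0.9494, close to the quoted 0.9493
print(round(adjusted_r2(0.9573, 9, 2), 4))   # ~0.9431, matching the quoted 0.9431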
Comparison with Other Metrics
While both R-squared and Adjusted R-squared serve to measure the goodness of fit for a model, they are not interchangeable and serve different purposes. R-squared may be more appropriate for simple
linear regression with a single independent variable, while Adjusted R-squared is better suited for multiple regression models with several predictors. | {"url":"https://www.flowhunt.io/glossary/adjusted-r-squared/","timestamp":"2024-11-03T07:36:28Z","content_type":"text/html","content_length":"87829","record_id":"<urn:uuid:499ba98d-2114-46ae-b020-02ef5b0d8d37>","cc-path":"CC-MAIN-2024-46/segments/1730477027772.24/warc/CC-MAIN-20241103053019-20241103083019-00580.warc.gz"}
Limit cycle
From Encyclopedia of Mathematics
An isolated closed trajectory in the phase space of an autonomous system of ordinary differential equations. A limit cycle corresponds to a periodic non-constant solution of the system.
Limit cycles represent the simplest (after the steady states) type of behavior of a continuous time dynamical system. Theoretically all properties of limit cycles (their stability and bifurcations)
can be reduced to investigation of the associated Poincaré return map^[1]. In practice, however, the Taylor coefficients of the Poincare map can be obtained only in the form of integrals over the
cycle, which may require some quite detailed knowledge of the shape of the cycle itself.
For instance, in the linear approximation if $\gamma:[0,T]\to\R^n$, $t\mapsto\gamma(t)$, is a limit cycle of period $T>0$ for the vector field $v(x)$ associated with the differential equation $\dot x
=v(x)$, $x\in\R^n$, one obtains a linear (non-autonomous) system of differential equations $$ \dot z=A(t)z,\qquad z\in\R^n, \quad A(t)=\biggl(\frac{\partial v}{\partial x}(\gamma(t)\biggr),\ t\in
[0,T]. $$ The corresponding Cauchy--Floquet linear operator $M:\R^n\to\R^n$ maps a vector $a\in\R_n$ into the vector $Ma=z_a(T)$, where $z_a$ is the solution of the above system with the initial
value $z_a(0)=a$. If this operator is hyperbolic, i.e., has no modulus one eigenvalues ("characteristic exponents"), then the stability pattern of the cycle (dimensions of the corresponding stable
and unstable invariant manifolds) is completely determined (and coincides with that of the iterations $M^k$, $k\in\Z$).
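A small numerical sketch of this construction (our own illustration, not part of the encyclopedia entry): integrate the variational equation $\dot Z = A(t)Z$ along one period of the Van der Pol oscillator's limit cycle to estimate the monodromy operator $M$ and its multipliers. The period and starting point below are rough values for $\mu = 1$.

import numpy as np
from scipy.integrate import solve_ivp

mu = 1.0  # Van der Pol parameter (illustrative choice)

def v(x):
    # Van der Pol field: x1' = x2, x2' = mu*(1 - x1^2)*x2 - x1
    return np.array([x[1], mu * (1.0 - x[0] ** 2) * x[1] - x[0]])

def A(x):
    # Jacobian of v, evaluated along the trajectory
    return np.array([[0.0, 1.0],
                     [-2.0 * mu * x[0] * x[1] - 1.0, mu * (1.0 - x[0] ** 2)]])

def extended(t, u):
    # u = (x, flattened 2x2 fundamental matrix Z), with Z' = A(gamma(t)) Z
    x, Z = u[:2], u[2:].reshape(2, 2)
    return np.concatenate([v(x), (A(x) @ Z).ravel()])

x0 = np.array([2.0, 0.0])   # a point close to the cycle (could be refined via a Poincare section)
T = 6.663                   # approximate period of the mu = 1 limit cycle

sol = solve_ivp(extended, (0.0, T), np.concatenate([x0, np.eye(2).ravel()]),
                rtol=1e-10, atol=1e-12)
M = sol.y[2:, -1].reshape(2, 2)   # monodromy (Cauchy-Floquet) operator
print(np.linalg.eigvals(M))       # one multiplier near 1, the other well inside the unit circle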
Limit cycles of planar vector fields
On the two-dimensional sphere (and plane) the topological restrictions which forbid intersection of phase trajectories, make limit cycles the only possible limit motion not directly related to
singular points (steady states, also known as stationary solutions). More precisely, if the $\Omega$-limit set of a non-periodic point $a\in \R^2$^[2] contains no singular point of the field $v$, then it must be a limit cycle (Poincaré–Bendixson, 1886^[3], 1901^[4]).
If the presence of singular points cannot be excluded, the situation becomes slightly more complicated. Under the assumption of analyticity one can show that the only possible limit sets for vector fields on the sphere^[5] are singular points, limit cycles and separatrix polygons, also known as polycycles, which consist of singular points and arcs of separatrices connecting them.
For the same reasons bifurcations of limit cycles, topological changes of the number of limit cycles, are possible only in annular neighborhoods of existing (multiple) cycles, singular points or separatrix polygons (polycycles).
Complex limit cycles
A polynomial planar vector field after complexification defines a holomorphic singular foliation $\mathscr F$ on the complex projective plane $\C P^2$. Solutions of the differential equation
correspond to leaves of this foliation, yet unlike in the real case, the leaves are topologically two-dimensional and can have much richer topological structure.
A limit cycle after complexification corresponds to a nontrivial loop on a leaf of the foliation $\mathscr F$ with a non-identical holonomy map. This observation may motivate one of the possible
generalizations of the notion of limit cycle for complex ordinary differential equations.
A complex limit cycle is a noncontractible closed loop on the leaf of a singular holomorphic foliation on $\C P^2$ with a non-identical holonomy. Note that according to this definition, the same leaf
may carry many different limit cycles: for instance, generically the infinite line (with deleted singular points) is a multiply connected leaf of a polynomial foliation, and each small loop around
the deleted singularity is a complex limit cycle. However, these limit cycles are homologically dependent: their sum is zero.
Hilbert 16th problem
One of the most challenging problems, which has remained open for over 120 years, is Hilbert's question on the number and position of limit cycles of a polynomial vector field on the plane (Problem 16, second part). Despite considerable progress in the last 25 years, the only known general result states that each polynomial vector field may have only finitely many limit cycles (independently Yu.
Ilyashenko and J. Ecalle, 1991). It is not known whether this number is uniformly bounded over all polynomial fields of degree $\le d$, even for $d=2$ (fields of degree $1$ cannot exhibit limit
cycles at all).
It is worth noting that the Hilbert 16th problem has no nontrivial complex version. A generic polynomial vector field after complexification has countably many homologically independent complex limit
cycles, see [IY, Sect. 28C].
[E] Écalle, J. Introduction aux fonctions analysables et preuve constructive de la conjecture de Dulac, Actualités Mathématiques. Hermann, Paris, 1992. MR1399559
[H] Hilbert, D. Mathematical problems Reprinted from Bull. Amer. Math. Soc. 8 (1902), 437–479. Bull. Amer. Math. Soc. (N.S.) 37 (2000), no. 4, 407--436. MR1779412
[I91] Ilyashenko, Yu. S. Finiteness theorems for limit cycles, Translations of Mathematical Monographs, 94. American Mathematical Society, Providence, RI, 1991. MR1133882
[I02] Ilyashenko, Yu. Centennial history of Hilbert's 16th problem Bull. Amer. Math. Soc. (N.S.) 39 (2002), no. 3, 301--354. MR1898209
[IY] Ilyashenko, Yu. and Yakovenko, S. Lectures on analytic differential equations, Graduate Studies in Mathematics, 86. American Mathematical Society, Providence, RI, 2008. MR2363178
[R] R. Roussarie, Bifurcation of planar vector fields and Hilbert's sixteenth problem , Birkhäuser (1998). MR1628014.
How to Cite This Entry:
Limit cycle. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Limit_cycle&oldid=54065
This article was adapted from an original article by L.A. Cherkas (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
See original article | {"url":"https://encyclopediaofmath.org/wiki/Limit_cycle","timestamp":"2024-11-14T04:08:39Z","content_type":"text/html","content_length":"25815","record_id":"<urn:uuid:8bcb48fd-c3f2-409c-acb4-dcb5f6e63bcf>","cc-path":"CC-MAIN-2024-46/segments/1730477028526.56/warc/CC-MAIN-20241114031054-20241114061054-00093.warc.gz"} |
Best places to take photos in Hamilton?
Your thoughts on where to take photos of my e30 around Hamilton.
Let me know
i took photos of my honda in the car park at the hamilton gardens looked good
interesting background but subtle enough not to take interest of the car
frankton railway station, frankton in general if you want a ghettoish background, iv been looking for ages cant really find anywhere i like except for multi level carparks
frankton railway station, frankton in general if you want a ghettoish background, iv been looking for ages cant really find anywhere i like except for multi level carparks
funny you mention frankton railway. I have driven past it a few times and contemplated taking some pics there. Im looking more for a classy location. Thinking showroom kind of location? Coombes?
Wintec carpark, on collingwood street... nice mix of urban carpark with a wild back ground
It's where i took mine (I lived nextdoor lol)
Seconded, Where is Hamilton ?
South island isnt it Ryan? just above New Plymouth? Think thats correct
I think Hamilton is more well known than New Plymouth. Lol.
The graffiti wall opposite the Warehouse in town.
I think Hamilton is more well known than New Plymouth. Lol.
The graffiti wall opposite the Warehouse in town.
Yer iv seen that,
its pretty sweet
South island isnt it Ryan? just above New Plymouth? Think thats correct
By Christchurch ae? haha
Come to Auckland theres lots of good places for photos if you can be bothered travelling for them which is why mine are usually uninspiring.
The graffiti wall opposite the Warehouse in town.
+1 Ive done that.. Looks good with a white car brook
Indirect Reasoning with LLMs
Zhang et al. (2024) recently proposed an indirect reasoning (IR) method to strengthen the reasoning power of LLMs. It employs the logic of contrapositives and contradictions to tackle IR tasks such as factual reasoning and mathematical proof. It consists of two key steps: 1) enhance the comprehensibility of LLMs by augmenting data and rules (i.e., the logical equivalence of the contrapositive), and 2) design prompt templates to stimulate LLMs to implement indirect reasoning based on proof by contradiction.
Experiments on LLMs like GPT-3.5-turbo and Gemini-pro show that the proposed method enhances the overall accuracy of factual reasoning by 27.33% and mathematical proof by 31.43% compared to traditional
direct reasoning methods.
Below is an example of zero-shot template for proof-by-contradiction.
If a+|a|=0, try to prove that a<0.
Step 1: List the conditions and questions in the original proposition.
Step 2: Merge the conditions listed in Step 1 into one. Define it as wj.
Step 3: Let us think it step by step. Please consider all possibilities. If the intersection between wj (defined in Step 2) and the negation of the question is not empty at least in one possibility, the original proposition is false. Otherwise, the original proposition is true.
Code / API
from openai import OpenAI
client = OpenAI()
# The model name below is an assumption for illustration; the post reports results for GPT-3.5-turbo and Gemini-pro.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "If a+|a|=0, try to prove that a<0.\n\nStep 1: List the conditions and questions in the original proposition.\n\nStep 2: Merge the conditions listed in Step 1 into one. Define it as wj.\n\nStep 3: Let us think it step by step. Please consider all possibilities. If the intersection between wj (defined in Step 2) and the negation of the question is not empty at least in one possibility, the original proposition is false. Otherwise, the original proposition is true.\n\nAnswer:"
    }]
)
Spring 2021: Linear Algebra II (Math 316)
Instructor: Dr. Armin Straub, MSPB 313, (251) 460-7262 (please use e-mail whenever possible)
Office hours: MWF, 9-11am, or by appointment. Held virtually using Zoom; please make an appointment by email at least 2 hours in advance.
Class schedule: MWF, 8:00-8:55am, in MSPB 410. Due to COVID restrictions, you may only attend those meetings assigned to the cohort you signed up for.
Midterm exams (tentative dates): Friday, February 26 and Monday, April 5
Final exam: Monday, May 3, 8:00-10:00am
Online grades: Homework: Scores; Exams: USAonline (Canvas)
Syllabus: syllabus.pdf
Assignments and course material
In order to be able to view the lecture recordings, you need to be logged into USAonline (Canvas). If you are still running into access issues, then please view the recordings through Panopto Video
in our course page on Canvas.
#1: Lecture recordings. Class schedule: 01/20 online using pre-recorded lectures; 01/22 in-person meeting; 01/25 online using pre-recorded lectures.
#2: Lecture recordings. Class schedule: 01/27 in-person meeting; 01/29 in-person meeting; 02/01 online using pre-recorded lectures.
#3: Lecture recordings. Class schedule: 02/03 in-person meeting; 02/05 in-person meeting; 02/08 online using pre-recorded lectures.
#4: Lecture recordings. Class schedule: 02/10 in-person meeting; 02/12 in-person meeting; 02/15 online using pre-recorded lectures.
#5: Lecture recordings. Class schedule: 02/17 in-person meeting; 02/19 in-person meeting; 02/22 online using pre-recorded lectures; 02/24 online via Zoom.
lectures-01-15.pdf (all lecture notes up to now in one big file)
Midterm Exam #1 (02/26): Practice material; Format of the exam.
#6: Lecture recordings. Class schedule: 03/01 online using pre-recorded lectures; 03/03 in-person meeting; 03/05 in-person meeting.
#7: Lecture recordings. Class schedule: 03/08 online using pre-recorded lectures; 03/10 student holiday; 03/12 in-person meeting; 03/15 online using pre-recorded lectures.
#8: Lecture recordings. Class schedule: 03/17 in-person meeting; 03/19 in-person meeting; 03/22 online using pre-recorded lectures.
#9: Lecture recordings. Class schedule: 03/24 in-person meeting; 03/26 in-person meeting; 03/29 online using pre-recorded lectures; 03/31 in-person meeting; 04/02 online via Zoom.
lectures-16-28.pdf (all lecture notes since the previous exam in one big file)
Midterm Exam #2: Practice material.
#10: Lecture recordings. Class schedule: 04/07 in-person meeting; 04/09 in-person meeting; 04/12 online using pre-recorded lectures.
#11: Lecture recordings. Class schedule: 04/14 in-person meeting; 04/16 in-person meeting; 04/19 online using pre-recorded lectures.
#12: Lecture recordings. Class schedule: 04/21 in-person meeting; 04/23 in-person meeting; 04/26 online using pre-recorded lectures; 04/28 in-person meeting; 04/30 online via Zoom.
lectures-29-38.pdf (all lecture notes since the previous exam in one big file)
Final Exam: Practice material.
About the homework
• Homework problems are posted for each unit. Homework is submitted online, and you have an unlimited number of attempts. Only the best score is used for your grade.
Most problems have a random component (which allows you to continue practicing throughout the semester without putting your scores at risk).
• Aim to complete the problems well before the posted due date.
A 15% penalty applies if homework is submitted late.
• Collect a bonus point for each mathematical typo you find in the lecture notes (that is not yet fixed online), or by reporting mistakes in the homework system. Each bonus point is worth 1%
towards a midterm exam.
The homework system is written by myself in the hope that you find it beneficial. Please help make it as useful as possible by letting me know about any issues!
As part of this course, we will explore the open-source free computer algebra system Sage to assist with more involved calculations.
If you just want to run a handful of quick computations (without saving your work), you can use the text box below.
A convenient way to use Sage more seriously is https://cocalc.com. This free cloud service does not require you to install anything, and you can access your files and computations from any computer
as long as you have internet. To do computations, once you are logged in and inside a project, you will need to create a "Sage notebook" as a new file.
Here are some other things to try:
• Sage makes solving least squares problems pleasant. For instance, to solve Example 45 in Lecture 8 (one possible way to finish this computation is sketched after these examples):
A = matrix([[1,2],[1,5],[1,7],[1,8]]); b = vector([1,2,3,3])
• Sage can compute QR decompositions. For instance, we can have it do Example 70 in Lecture 13 for us:
A = matrix(QQbar, [[0,2,1],[3,1,1],[0,0,1],[0,0,1]])
The result is a tuple of the two matrices Q and R. If that is too much at once, A.QR(full=false)[0] will produce Q, and A.QR(full=false)[1] will produce R. (Can you figure out what happens if you
omit the full=false? Check out the comment under "Variations" for the QR decomposition in the lecture sketch. On the other hand, the QQbar is telling Sage to compute with algebraic numbers
(instead of just rational numbers); if omitted, it would complain that square roots are not available.)
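For reference, here is one way to finish the least squares computation from the first bullet above. The solving command is my own suggestion and may differ from the one shown in class; working over QQ keeps the rational answer exact:
A = matrix(QQ, [[1,2],[1,5],[1,7],[1,8]]); b = vector(QQ, [1,2,3,3])
# solve the normal equations A^T * A * x = A^T * b for the least squares solution
xhat = (A.transpose()*A).solve_right(A.transpose()*b)
xhat   # returns (2/7, 5/14) for this data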
Photo by Anton Scherbakov on Unsplash
Clustering is an unsupervised learning method that allows us to group sets of objects based on similar characteristics. In general, it can help you find meaningful structure among your data, group similar data together and discover underlying patterns.
One of the most common clustering methods is the K-means algorithm. The goal of this algorithm is to partition the data into sets such that the total sum of squared distances from each point to the mean point of its cluster is minimized.
K means works through the following iterative process:
1. Pick a value for k (the number of clusters to create)
2. Initialize k ‘centroids’ (starting points) in your data
3. Create your clusters. Assign each point to the nearest centroid.
4. Make your clusters better. Move each centroid to the center of its cluster.
5. Repeat steps 3–4 until your centroids converge.
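Before turning to scikit-learn, here is a minimal from-scratch sketch of those five steps in NumPy. This is my own illustration (names such as kmeans and n_iters are made up for the example), and for simplicity it assumes no cluster ever becomes empty.
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Step 2: initialize k centroids by picking k distinct data points
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iters):
        # Step 3: assign each point to its nearest centroid
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 4: move each centroid to the center (mean) of its cluster
        new_centroids = np.array([X[labels == i].mean(axis=0) for i in range(k)])
        # Step 5: stop once the centroids no longer move
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return centroids, labels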
How to apply it?
For the following example, I am going to use the Iris data set of scikit learn. This data consists of 50 samples from each of three species of Iris (Iris setosa, Iris virginica and Iris versicolor).
It has four features from each sample: length and width of sepals and petals.
1. To start let’s import the following libraries.
from sklearn import datasets
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.cluster import KMeans
2. Load the data.
iris = datasets.load_iris()
3. Define your target and predictors.
X = iris.data[:, :2]
y = iris.target
4. Let’s have a look at our data through a scatter plot.
plt.scatter(X[:,0], X[:,1], c=y, cmap='gist_rainbow')
plt.xlabel('Spea1 Length', fontsize=18)
plt.ylabel('Sepal Width', fontsize=18)
5. Now, let’s instantiate and fit our K means cluster model. We are going to use three clusters and a random state of 21.
km = KMeans(n_clusters = 3, n_jobs = 4, random_state=21)
km.fit(X)
6. With the following code you can identify the center points of the data.
centers = km.cluster_centers_
print(centers)
Output:
[[5.77358491 2.69245283]
 [5.006 3.418 ]
 [6.81276596 3.07446809]]
7. Now, let’s compare our original data versus our clustered results using the following code.
# this will tell us to which cluster the data observations belong
new_labels = km.labels_
# Plot the identified clusters and compare with the answers
fig, axes = plt.subplots(1, 2, figsize=(16,8))
axes[0].scatter(X[:, 0], X[:, 1], c=y, cmap='gist_rainbow', edgecolor='k', s=150)
axes[1].scatter(X[:, 0], X[:, 1], c=new_labels, cmap='jet', edgecolor='k', s=150)
axes[0].set_xlabel('Sepal length', fontsize=18)
axes[0].set_ylabel('Sepal width', fontsize=18)
axes[1].set_xlabel('Sepal length', fontsize=18)
axes[1].set_ylabel('Sepal width', fontsize=18)
axes[0].tick_params(direction='in', length=10, width=5, colors='k', labelsize=20)
axes[1].tick_params(direction='in', length=10, width=5, colors='k', labelsize=20)
axes[0].set_title('Actual', fontsize=18)
axes[1].set_title('Predicted', fontsize=18)
Here is a list of the main advantages and disadvantages of this algorithm.
Advantages:
• K-Means is simple and computationally efficient.
• It is very intuitive and its results are easy to visualize.
Disadvantages:
• K-Means is highly scale dependent and is not suitable for data of varying shapes and densities.
• Evaluating results is more subjective. It requires much more human evaluation than trusted metrics.
Is this supposed to work with NLL?
EDIT: Revised examples to also include "owned" types and avoid overuse of u8
So, I have recently been playing with in-place deserialization (inside abomonation if you are curious), and there I hit some borrow checking edge cases which I thought should have been resolved by
Non-Lexical Lifetimes.
Can someone help me figure it out if I'm asking too much of NLL here or if it's a bug/limitation of the current implementation?
For the purpose of discussion, let us introduce a trait which represents data types that can be deserialized in-place from a slice of bytes of a certain lifetime.
If you have ever played with serde, this notion is closely related to the 'de in its Deserialize<'de> trait:
pub trait InPlaceDeserialize<'bytes> {
/* ... implementation details of deserialization from &'bytes mut [u8] ... */
Many types can be deserialized from any slice of bytes and can therefore have a blanket implementation of this trait for all 'bytes:
impl InPlaceDeserialize<'_> for bool {
/* ... implementation details ... */
However, we need to be careful about this lifetime as soon as references come into play, because we're talking about in-place deserialization, which entails patching pointers within the serialized
data to point at other serialized data.
Any &'target T will be turned into an &'bytes T, and for this substitution to be valid, we must only allow deserialization of references which are outlived by the bytes. Otherwise, the user would be
able to open a memory safety hole by e.g. sneaking out a &'static T after the bytes have been dropped.
In the language of lifetimes, this constraint is expressed as follows:
// Deserialization is only allowed if &'bytes T lives strictly longer than &'target T
impl<'target, 'bytes: 'target> InPlaceDeserialize<'bytes> for &'target str {
/* ... implementation details ... */
Now, let us write a deserialization interface. Intuitively, one could expect this to work...
// Actual signature is more complex, with error handling and other stuff
pub fn deserialize1<'bytes, T>(_bytes: &'bytes mut [u8]) -> &'bytes T
where T: InPlaceDeserialize<'bytes>
...but it doesn't, because it forbids us to deserialize references with any lifetime shorter than 'bytes, which is actually perfectly fine (e.g. it's okay to deserialize &'a T from &'static mut
// Won't compile. T may not live long enough, consider adding a T: 'bytes bound
pub fn client<'bytes, T>(bytes: &'bytes mut [u8])
where T: InPlaceDeserialize<'bytes>
Personally, I already find this surprising. Why doesn't rustc report an error at the point where deserialize1 is defined, but only at the point where client is defined? Maybe a bug report is in order.
But in any case, to handle this use case, it seems we need this signature instead:
pub fn deserialize2<'output, 'bytes, T>(_bytes: &'bytes mut [u8]) -> &'output T
where 'bytes: 'output,
T: InPlaceDeserialize<'bytes>
To check if our deserialize2 signature is right, let's write some clients. At first, the easy stuff seems to work...
pub fn foo1<'bytes, T>(bytes: &'bytes mut [u8])
where T: InPlaceDeserialize<'bytes>
pub fn bar1(bytes: &mut [u8]) {
... but closer examination reveals that we have actually borrowed our bytes forever, and cannot do anything with them even after the &'output T has been dropped
// Won't compile
pub fn foo2<'bytes, T>(bytes: &'bytes mut [u8]) -> &'bytes mut [u8]
where T: InPlaceDeserialize<'bytes>
// Can't borrow "bytes" more than once at a time.
// First borrow ^^^^^
// ^^ second borrow
I find this puzzling. I thought that, especially with NLL, rustc was supposed to only borrow data as long as necessary? Here, the 'output that it's picking is clearly much too greedy for an &'output
T reference that is being dropped immediately...
Can someone more familiar than I am with lifetime puzzles tell me if this is expected behavior from the borrow checker, a known rustc bug, or an unknown rustc bug that I should report?
And, if it's a bug, is there a known workaround?
pub fn foo2<'bytes, T>(bytes: &'bytes mut [u8]) -> &'bytes mut [u8]
where T: InPlaceDeserialize<'bytes>
Since it doesn't solve the error, it doesn't seem to be an NLL issue. In the end you'd want "for all lifetimes shorter than this one", but I don't think you can express it with traits without more work done on higher-kinded types (GATs, for example).
First of all, there is no NLL-issue whatsoever:
• fn deserialize2<'output, 'bytes, T>(
_: &'bytes mut [u8],
) -> &'output T
'bytes : 'output,
T : InPlaceDeserialize<'bytes>,
If the only thing we know about T is that T : InPlaceDeserialize<'bytes> then the borrow on the input bytes will last 'bytes, no matter the return type.
So the choice of 'output plays no role in the choice of 'bytes, which is limited by the InPlaceDeserialize<'bytes> bound.
• fn foo2<'bytes, T> (
bytes: &'bytes mut [u8] // <-------------------------+
) -> &'bytes mut [u8] // |
where // | same as
T : InPlaceDeserialize<'bytes>, // >-----------------+
{ // | leads to
deserialize2::<T>(bytes); // <-----------------------+
// First borrow lasts for `'bytes`, hence for the whole function's body
bytes // Error
So, to fix that function, you need a HRTB on the trait, but for a lifetime 'a where 'bytes : 'a. This is alas not expressible with for<...> quantification, hence the problem.
It feels, however, to me, that you got the lifetime bounds reversed:
IIUC, the InPlaceDeserialize<'bytes> has an implicit &'bytes Self type in mind, and thus an implicit Self : 'bytes bound (which can and should be made explicit); and then with Self = &'target u8,
this leads to 'target : 'bytes (for the type &'bytes &'target u8).
If the return intended type was &'bytes u8 instead of &'bytes &'target u8 (which can become &'target u8 thanks to &'target u8 : Copy), then it feels to me that Self should be u8, and the lifetime
quantification should not appear in the trait's impl but in the deserializing function impl:
trait InPlaceDeserialize<'bytes> /* where Self */ : 'bytes {}
fn deserialize<'bytes, T> (_: &'bytes [u8]) -> &'bytes T
T : InPlaceDeserialize<'bytes>,
unimplemented!("Using `unsafe`, hence the unsafe on the trait")
impl<'bytes> InPlaceDeserialize<'bytes> for u8
// where u8 : 'bytes /* trivial bound */
Seems Yandros beat me to reply. I commented in playground, adds/complements theirs.
@Yandros Sorry, it seems that I over-simplified my actual problem by only providing an &'target u8 example, thus leading you in the wrong direction. To avoid misleading more people, I have updated
the OP to 1/provide another example of serializable data than references and 2/avoiding use of u8 in both "bytes" and the serialized data.
I think I do not want InPlaceDeserialize<'bytes> to have an implicit : 'bytes bound, because if I'm not misunderestood, this would mean that only 'static data can be deserialized from 'static bytes,
which is not the intent.
It is actually okay to deserialize short-lived data from long-lived bytes. What is not okay is to deserialize long-lived data from short-lived bytes. That is because the references will be patched by
the deserializer to point into the neighboring bytes. Therefore, if you were allowed to deserialize an &'static T from &'bytes [u8], you would get an &'bytes T which masquerades as a &'static T, and
then you could make a copy of it, drop the source bytes, and end up with a dangling reference...
Ultimately, though, I agree with you and @leudz that what I really want is some sort of for<'a where 'bytes: 'a> T: InPlaceDeserialize<'a> bound in client functions. Now matter how hard I try to work
around the lack of it, I just keep hitting a wall where I end up borrowing my bytes for too long...
@jonh I'm not fond of returning T: InPlaceDeserialize<'_> from deserialize() because it means that every impl of InPlaceDeserialize must feature a reference instead of just targeting the type being
serialized/deserialized. I suspect that you were also misled by my original problem formulation, where I only implemented InPlaceDeserialize for references, so I would advise you to check the new
and improved version of the OP that I just edited in.
Wait a minute...
Actually, for<'a where 'bytes: 'a> T: DeserializeInPlace<'a> is not what I want.
It requires DeserializeInPlace<'a> to be implemented for all lifetimes 'a smaller than 'bytes. Whereas the impl of DeserializeInPlace for reference types actually sets a lower bound of the set of
lifetimes 'a for which DeserializeInPlace<'a> is implemented. Therefore, this enhanced HRTB syntax still excludes reference types, just like a regular HRTB would!
Another way to look at it is that because deserializing from bytes which are longer-lived than necessary is safe, T: DeserializeInPlace<'a> implies for<'b: 'a> T: DeserializeInPlace<'b>, and
therefore setting a higher bound on the 'a-s for which we want DeserializeInPlace<'a> to be implemented is meaningless (higher bound on lower bound of 'b = no actual bound on 'b).
From this latter perspective, for<'a where 'bytes: 'a> T: DeserializeInPlace<'a> is actually equivalent to for<'a> T: DeserializeInPlace<'a>.
I was misunderstanding both the purpose of the trait and the intent of deserialize that you have. Your real intention is only doing transmutation, so I was getting mixed up early on rather than at the end.
Try this for helping you down the rabbit hole.
You know, I am surprised that the compiler will actually accept this:
pub trait Lo<'a> {
type a: InPlaceDeserialize<'a>;
}
pub trait Hi: for<'a> Lo<'a> {}
impl<'a, 'b> Lo<'a> for &'b str {
type a = &'a str;
}
impl<'b> Hi for &'b str{}
But crazily enough, this actually seems to work
So... err... I think I've found a simpler way.
It's also a darker, somewhat frightening way. But hey, this code is for a crate that has entomb() and exhume() unsafe methods in its public API, after all...
pub fn evil_foo<'bytes, T>(bytes: &'bytes mut [u8]) -> &'bytes mut [u8]
where T: InPlaceDeserialize<'bytes>
unsafe {
// Since Rust won't work for us, we must resort to the C way.
let (ptr, len) = (bytes.as_mut_ptr(), bytes.len());
// Oh, and we're also going to need a scope. Bear with me.
// This is a perfect copy of the original "bytes"
let bytes = std::slice::from_raw_parts_mut::<'bytes, _>(ptr, len);
// The borrow checker can borrow it as long as it likes...
// ...because we're just going to drop it anyhow.
// What we will return is... a _different_ clone of "bytes"
std::slice::from_raw_parts_mut::<'bytes, _>(ptr, len)
Although this code is rather horrifying, I believe that it is sound, because...
1. There is no &mut aliasing. At any point of code, there are only zero or one Rust references to the bytes in flight.
2. Input data lifetime is duly respected. Every copy of bytes that is created is an &'bytes mut [u8], there is no possibility of accidentally creating an &'static mut [u8] or something bad like
3. Slice-cloning unsafe superpowers are not abused to leak extra references to the bytes which would violate Rust's aliasing rules. The references produced by deserialize2 do not exit the scope in
which the function is called.
Forgetting a reference does nothing, so you may still have some unsoundness, although I am not knowledgeable enough in unsafe to know if it is sound.
Let's try to summon @RalfJung then! He's the kind of person who likes reading this sort of code late at night...
1 Like
MIRI complains about your code, playground,
removing the std::mem::forget does pass MIRI (but that still doesn't mean that it is sound, just that it isn't obviously unsound) playground
Without some variant of std::mem::forget, I'm pretty sure that this code violates Rust's aliasing rules, because we have two live &muts to the same data at the same time in the inner scope. Miri most
likely isn't smart enough to notice it yet...
...on the other hand, I could be convinced that the following variant, while still super-shady, does not violate Rust's aliasing rules:
pub fn evil_foo<'bytes, T>(bytes: &'bytes mut [u8]) -> &'bytes mut [u8] {
unsafe {
// Dear borrow checker, please go out of the way for a second
let bytes_uc: &'bytes UnsafeCell<[u8]> = std::mem::transmute(bytes);
// Now, we're going to need a scope to contain some evil deeds
// This is a perfect copy of the original "bytes"
let bytes: &'bytes mut [u8] = &mut *bytes_uc.get();
// The borrow checker can borrow it as long as it likes...
// ...because we're just going to drop it anyhow.
// What we will return is... a _different_ clone of "bytes"
let bytes: &'bytes mut [u8] = &mut *bytes_uc.get();
That's because the transmute moves away "bytes" (if you try to use it afterwards, you will see that rustc complains that it is used after move). And if it's considered to be moved by the compiler, it
probably isn't taken into consideration for the purpose of &mut alias analysis.
...well, after closer examination, my actual use case (which does not involve &'bytes mut [u8] but a more general S: DerefMut<Target=[u8]> + 'bytes generic parameter) is actually taken care of by
straightforward use of UnsafeCell<S>. So my immediate problem is solved:
pub fn less_evil<'bytes, S: DerefMut<Target=[u8]> + 'bytes>(bytes: S) -> S {
unsafe {
let bytes_uc = UnsafeCell::new(bytes);
let bytes: &'bytes mut [u8] = (*bytes_uc.get()).deref_mut();
// ... do whatever horrible stuff is necessary without leaking references ...
I'm still interested in 1/whether the transmute-based variant of evil_foo above was UB and 2/whether the API of either foo2 or deserialize2 can be straightforwardly adjusted to get the intended
semantics without resorting to evil unsafe tricks, though.
For those like me who are still intrigued by the underlying API design puzzle, I think the key question is, why this variant of the deserialize and client API that tries hard to decouple the 'bytes
lifetime from the 'output lifetime won't solve the problem:
pub fn deserialize3<'output, 'bytes, T>(_bytes: &'bytes mut [u8]) -> &'output T
where 'bytes: 'output,
T: InPlaceDeserialize<'output> + 'output
pub fn client<'bytes, 'output, T>(bytes: &'bytes mut [u8]) -> &'bytes mut [u8]
where T: InPlaceDeserialize<'output> + 'output,
'bytes: 'output
I think the answer is that with this design, the caller of "client" is in principle able to pick any 'output lifetime smaller than 'bytes, and therefore the borrow checker must in particular prove
that the code works in the 'bytes == 'output case.
Thus, we effectively go back to a T: InPlaceDeserialize<'bytes> constraint, which as @Yandros pointed out is the heart of the issue.
To fix this, we would need a way to express a kind of "for some 'output lifetime of my choosing, not yours" concept in the API of client().
Higher-ranked trait bounds almost provide this, but their actual meaning is "for any possible 'output lifetime", which is too broad and effectively restricts us to T: InPlaceDeserialize<'null>
implementations (where 'null is an imaginary lifetime which never lives long enough no matter what borrow constraint you are trying to prove => no borrow allowed inside of T).
I think the constraint we actually need is "for some reborrow of the bytes input slice that does not exit this function's stack". But I'm not sure how it could be expressed in a Rust API. Maybe @jonh
was on the right track, but all my attempts at trying to generalize this solution to any InPlaceDeserialize implementation ended up hitting a "either we need GATs or we must write much more code for
each InPlaceDeserialize implementation" wall.
Yes, the more I think about it, the more I am convinced in that the issue here is between you and Rust dealing with lifetime parameters differently. The main thing is that a lifetime paramater bound
to a generic function represents a lifetime that outlives the body of the function.
The only way to get arbitrarily short-lived lifetime parameters is with HRTB, which are usually present in CPS/callback-based APIs:
fn with_deserialize<'bytes, T, R, F> (
bytes: &'bytes mut [u8],
f: F,
) -> R
for<'output> F : FnOnce(&'output T) -> R,
for<'output> T : InPlaceDeserialize<'output>,
fn client<'bytes, T>(bytes: &'bytes mut [u8]) -> &'bytes mut [u8]
for<'output> T : InPlaceDeserialize<'output>,
bytes.with_deserialize(|_thing: &T| {
Now, the HRTB cannot be bounded, but it does not matter, since a function requiring a &'static Something as input will not be able to fulfill the for<'any> &'any Something input bound (non-covariance
w.r.t. function parameters). Only a function that, on the contrary, can handle an input with a small lifetime, no matter how small, will be able to also handle the big lifetimes (by contravariance).
That's why even if there is no 'bytes : 'output bound, there should nevertheless be no soundness issue:
error: borrowed data cannot be stored outside of its closure
--> src/main.rs:53:13
50 | let mut s = "static";
| ----- -------- cannot infer an appropriate lifetime...
| |
| ...so that variable is valid at time of its declaration
51 | let bytes = &mut [];
52 | with_deserialize::<str, _, _>(bytes, |short_s: &str| {
| --------------- borrowed data cannot outlive this closure
53 | s = short_s;
| ^^^^^^^ cannot be stored outside of its closure
Now, I think that instead of having an "existential" small lifetime parameter, having a classic API where the return type is borrowed for as long as its input should be fine too, since variance and
the compiler choosing the smallest possible lifetime at each call site should provide the same flexibility:
/// Types which implement this can be deserialized in-place from &'bytes mut [u8]
trait InPlaceDeserialize {}
impl InPlaceDeserialize for str {}
fn deserialize<'bytes, T : ?Sized + 'bytes> (
bytes: &'bytes mut [u8],
) -> &'bytes T
T : InPlaceDeserialize
fn client<'bytes, T> (
bytes: &'bytes mut [u8],
) -> &'bytes mut [u8]
T : InPlaceDeserialize,
deserialize::<T>(bytes); // borrowed for an ephemeral lifetime
fn main ()
let mut s: &'static str = "static";
let mut bytes = [];
let short_s = deserialize::<str>(&mut bytes);
s = short_s; // Error
Ah, it's too bad you weren't at tonight's Rust Paris meetup (or I somehow missed you in the room), we could have iterated on this more quickly through the magic of face-to-face interaction.
But basically, the problem with the approach that you suggest at the end of your post is that it doesn't work for transitive references. You cannot (safely) deserialize more complex types like those
in place without setting some sort of lifetime bound on the InPlaceDeserialize trait:
struct Composite<'a> {
b: bool,
s: &'a str,
struct AndThenSome<'a, 'b, 'c> {
c: &'b Composite<'a>,
s: &'c str,
The reason is that sooner or later, beyond a certain reference tree depth, if it wants to produce an &T without allocating extra memory, the deserializer will have to resort to the trick of patching
the references within the serialized bytes to point elsewhere into the serialized bytes (thus creating a kind of hidden self-referential struct, FWIW). And when we reach that point, we cannot allow
the user to, say, request deserialization of a Composite<'static> from some bytes of finite lifetime.
This would lead the deserializer to create a fake &'static str reference which actually points into an &'bytes [u8] slice, which would be Very Bad (as it trivially enables memory unsafety by copying
the fake &'static str then dropping the bytes).
As for HRTBs, I have already explained why they don't work for this use case at the end of the opening post's playground example.
"Liveness" for Stacked Borrows (in Miri) right now is defined based on when a reference is actually used. It has nothing to do with its lexical scope. This matches NLL. Curiously, calling mem::forget
extends the liveness of a reference, and passing it to a function counts as a use.
I'm afraid I don't have the time right now to dig into this API any deeper. I am happy to explain Miri's behavior on concrete examples though, and defend it where necessary.
Further Maths - Linear Programming
What are “decision variables”?
The variables in a linear programming problem that can be varied.
In a linear programming problem, what are $x, y$ and $z$ called?
Decision variables.
If you find suitable values for decision variables that fit the constraints, what have you found?
A feasible solution.
In a graphical linear programming problem, what is the area that contains all feasible solutions called?
The feasible region.
What is the optimal solution in a linear programming problem?
A feasible solution that meets the objective.
What are the three parts of formulating something as a linear programming problem?
1. Define decision variables ($x$, $y$, etc.)
2. State the objective and objective function (minimise $3x + …$)
3. Stating the constraints as inequalities
How do you do solve a graphical linear programming problem?
Create an imaginary line with the gradient of the objective function and slide it to the first or last vertex in the feasible region.
What is the vertex testing method for linear programming?
Find the coordinates of each vertex in the feasible region and see which one gives the highest or lowest value for the objective function.
How do you deal with the fact the decision variables have to take on integer values in a linear programming problem?
Try out integer points near the optimal solution and see which one gives you the best value.
When stating the final answers in a linear programming question, what is important?
State the values is the context of the problem (how many hours swimming?).
“Darren has enough dried fruit to make 3kg of $x$, 2kg of $y$ or $6kg$ of $z$”. How do you write this as an inequality?
\[\frac{x}{3} + \frac{y}{2} + \frac{z}{6} \le 1\]
What constraints do you normally forget to specify in a linear programming question?
The non-negativity constraints.
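As an illustration that goes beyond the cards (my own sketch, with made-up objective coefficients), the three formulation steps can be handed to a standard solver such as scipy.optimize.linprog; note that linprog minimises, so a maximisation objective is negated:
from scipy.optimize import linprog

# Decision variables: x, y, z (kg of each product)
# Objective (illustrative only): maximise 3x + 2y + 4z, i.e. minimise the negation
c = [-3, -2, -4]
# Constraint from the dried-fruit card: x/3 + y/2 + z/6 <= 1
A_ub = [[1/3, 1/2, 1/6]]
b_ub = [1]
# Non-negativity constraints (the ones that are easy to forget)
bounds = [(0, None)] * 3

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(result.x, -result.fun)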
Mental Calculation Techniques for Academic Success
Reading time: 11 minutes
Have you ever found yourself staring blankly at a math problem, wishing you could solve it faster? Or perhaps you've watched in awe as someone effortlessly calculates complex sums in their head.
Well, you're in for a treat! Today, we're diving deep into the world of mental calculation techniques that can supercharge your academic success. So, grab your thinking cap, and let's embark on this
exciting journey to unlock the power of your mind!
Why Mental Calculation Matters in Academia
Before we jump into the nitty-gritty of mental math wizardry, let's take a moment to consider why these skills are so crucial for academic success. Think of your brain as a muscle – the more you
exercise it, the stronger it becomes. Mental calculation is like a full-body workout for your mind, helping you:
1. Sharpen your focus and concentration
2. Boost your problem-solving abilities
3. Enhance your memory and recall
4. Build confidence in your mathematical abilities
5. Save time during exams and assignments
Now that we've whetted your appetite, let's explore some powerful mental calculation techniques that'll have you crunching numbers like a pro in no time!
The Building Blocks: Mastering the Basics
1. Know Your Times Tables
Remember those endless drills in elementary school? Well, they weren't for nothing! Having your multiplication tables at your fingertips is like having a secret weapon in your mental math arsenal.
But don't worry if you're a bit rusty – we've got some tricks up our sleeve to help you master them:
The 9 Times Table Trick
Here's a nifty trick for the 9 times table:
1. Hold your hands out in front of you, palms down.
2. For 9 x 4, bend your 4th finger down.
3. Count the fingers to the left (3) and right (6) of the bent finger.
4. The answer is 36!
This works for all single-digit numbers multiplied by 9. Cool, right?
2. Rounding and Adjusting
When faced with tricky numbers, sometimes it's easier to round them to friendly numbers and then adjust. For example:
• To calculate 398 + 753:
1. Round 398 up to 400
2. Round 753 down to 750
3. Add 400 + 750 = 1150
4. Adjust: 1150 - 2 + 3 = 1151
3. Breaking Numbers Apart
Large numbers can be intimidating, but breaking them into smaller, manageable chunks can make calculations a breeze. Let's look at an example:
• To multiply 23 x 11:
1. Break 23 into 20 + 3
2. Calculate (20 x 11) + (3 x 11)
3. 220 + 33 = 253
Learn more about: How Students Can Master Mental Calculation? (Road Map)
Leveling Up: Intermediate Techniques
Now that we've covered the basics, let's dive into some more advanced techniques that'll really impress your teachers and classmates!
4. The "Vedic Square" Method for Multiplication
This ancient Indian technique is a game-changer for multiplying two-digit numbers. Here's how it works:
1. Subtract each number from 100
2. Multiply the results
3. Add the results to 100 times the sum of the original numbers minus 100
Sounds complicated? Let's break it down with an example:
• To multiply 97 x 96:
1. 100 - 97 = 3, 100 - 96 = 4
2. 3 x 4 = 12
3. 97 + 96 = 193, 193 - 100 = 93
4. 93 x 100 = 9300
5. 9300 + 12 = 9312
Voila! 97 x 96 = 9312
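If you would like to check the shortcut on more pairs, here is a tiny Python sketch of the same near-100 method (my own illustration; the function name is made up):
def near_100_product(a, b):
    da, db = 100 - a, 100 - b       # step 1: distances from 100
    right = da * db                 # step 2: multiply the distances
    left = (a + b - 100) * 100      # steps 3-4: 100 times (sum minus 100)
    return left + right             # step 5: combine

print(near_100_product(97, 96))     # 9312, matching the worked example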
5. Squaring Numbers Ending in 5
Here's a quick trick to square any two-digit number ending in 5:
1. Take the first digit and add 1 to it
2. Multiply this by the original first digit
3. Append 25 to the end
Let's try it with 75²:
1. 7 + 1 = 8
2. 8 x 7 = 56
3. Append 25: 5625
75² = 5625. Magic, isn't it?
6. The "Butterfly Method" for Fractions
Adding and subtracting fractions can be a headache, but the butterfly method makes it a breeze. Here's how it works:
1. Draw a "butterfly" shape connecting the numerators and denominators diagonally
2. Multiply along the wings and add the results for the new numerator
3. Multiply the denominators for the new denominator
Let's add 3/8 and 2/5:
3/8 + 2/5
1. Multiply diagonally: (3 x 5) + (2 x 8) = 15 + 16 = 31
2. Multiply denominators: 8 x 5 = 40
3. Result: 31/40
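As a quick sanity check, Python's built-in fractions module confirms the butterfly result (a small illustration of my own, not part of the original article):
from fractions import Fraction

butterfly = Fraction(3 * 5 + 2 * 8, 8 * 5)   # (3x5 + 2x8) over (8x5), the butterfly way
print(butterfly)                             # 31/40
print(butterfly == Fraction(3, 8) + Fraction(2, 5))   # True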
Advanced Techniques: Pushing the Boundaries
Ready to take your mental math skills to the next level? These advanced techniques might seem daunting at first, but with practice, they'll become second nature.
7. The "Trachtenberg System" for Rapid Calculation
Developed by Jakow Trachtenberg while in a Nazi concentration camp, this system includes various methods for lightning-fast calculations. One of the most useful is the method for multiplying by 12:
1. Double the original number
2. Add the original number to this result, moving one place to the left
For example, 12 x 34:
1. Double 34: 68
2. Add 34 to 68, moving one place left: 408
So, 12 x 34 = 408
8. Calculating Cube Roots
While it might seem like something only a computer could do, calculating cube roots mentally is possible with practice. Here's a method for perfect cubes up to 1,000,000:
1. Memorize the cubes of numbers 1-10
2. For a number N, find the largest perfect cube A³ less than or equal to N
3. The cube root of N is A plus the remainder divided by (3A² + 3A + 1)
Let's find the cube root of 857,375:
1. The largest perfect cube less than 857,375 is 94³ = 830,584
2. Remainder: 857,375 - 830,584 = 26,791
3. 3(94²) + 3(94) + 1 = 26,791
4. 26,791 ÷ 26,791 = 1
5. Therefore, the cube root of 857,375 is 94 + 1 = 95
9. The "Casting Out Nines" Method for Checking Calculations
This ancient technique is a quick way to check your work:
1. Add the digits of your original numbers
2. If the sum is greater than 9, add those digits together
3. Repeat until you have a single digit
4. Perform the same operation on your answer
5. If the results match, your calculation is likely correct
For example, let's check 456 x 789 = 359,784:
• 456: 4 + 5 + 6 = 15, 1 + 5 = 6
• 789: 7 + 8 + 9 = 24, 2 + 4 = 6
• 6 x 6 = 36, 3 + 6 = 9
• 359,784: 3 + 5 + 9 + 7 + 8 + 4 = 36, 3 + 6 = 9
The results match, so our calculation is likely correct!
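For readers who like to automate the check, here is a short Python sketch of the same digit-sum test (my own illustration; as noted above, it can only flag likely errors, not prove a result correct):
def digital_root(n):
    # Repeatedly sum the digits until a single digit remains
    while n > 9:
        n = sum(int(d) for d in str(n))
    return n

def nines_check(a, b, claimed_product):
    # The digital roots must agree whenever the multiplication is correct
    return digital_root(digital_root(a) * digital_root(b)) == digital_root(claimed_product)

print(nines_check(456, 789, 359784))   # True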
Putting It All Together: Real-World Applications
Now that we've armed you with an arsenal of mental calculation techniques, let's explore how you can apply these skills in various academic scenarios.
Conquering Standardized Tests
Standardized tests like the SAT, ACT, or GRE often include sections that don't allow calculators. This is where your mental math skills can give you a significant edge. By quickly
estimating answers or performing calculations in your head, you can save precious time and boost your score.
Tip: Estimation is Your Friend
When faced with multiple-choice questions, you don't always need an exact answer. Use rounding and estimation techniques to quickly eliminate unlikely options and zero in on the correct answer.
Excelling in Science Classes
From chemistry to physics, mental calculation skills can be a game-changer in science classes. Quick unit conversions, balancing equations, or calculating molar masses become much easier when you can
perform calculations mentally.
Learn more about: The Science Behind Mental Calculation
Example: Molar Mass Calculations
Let's say you need to find the molar mass of H₂SO₄:
1. H: 2 x 1 = 2
2. S: 32
3. O: 4 x 16 = 64
4. Total: 2 + 32 + 64 = 98 g/mol
With practice, you can perform these calculations in seconds!
Impressing in Math Class
Your newfound mental math skills will undoubtedly shine in math class. Whether you're solving algebraic equations, working with geometry, or tackling calculus problems, the ability to perform quick
mental calculations will give you a significant advantage.
Tip: Show Your Work
While mental calculation is impressive, remember to show your work when required. Your teachers want to see your thought process, not just the final answer.
Training Your Brain: Tips for Improvement
Like any skill, mental calculation improves with mental arithmetic practice. Here are some tips to help you sharpen your mental math abilities:
1. Practice daily: Set aside 10-15 minutes each day for mental math exercises.
2. Use mental math in everyday life: Calculate tips, grocery totals, or time differences in your head.
3. Play math games: Try apps or online games that focus on mental calculation.
4. Challenge yourself: Start with easier problems and gradually increase the difficulty.
5. Teach others: Explaining techniques to friends or family can reinforce your own understanding.
Remember, the key is consistency. With regular practice, you'll be amazed at how quickly your mental calculation skills improve!
Overcoming Mental Blocks: Strategies for Success
Even with all these techniques at your disposal, you might sometimes find yourself hitting a mental wall. Don't worry – it happens to everyone! Here are some strategies to help you overcome mental
1. Take a Deep Breath
When you feel stuck, take a moment to breathe deeply. This simple act can help oxygenate your brain and reduce stress, allowing you to think more clearly.
2. Visualize the Problem
Try to create a mental image of the problem. Sometimes, seeing the numbers or equations in your mind's eye can help you find a solution.
3. Break It Down
If a problem seems overwhelming, break it into smaller, manageable parts. Solve each part separately, then combine the results.
4. Use Mnemonics
Create memorable phrases or acronyms to help you remember formulas or techniques. For example, "Please Excuse My Dear Aunt Sally" for the order of operations (Parentheses, Exponents, Multiplication,
Division, Addition, Subtraction).
5. Talk It Out
Sometimes, explaining the problem out loud – even if you're just talking to yourself – can help you see it from a new perspective and find a solution.
The Road Ahead: Continuing Your Mental Math Journey
Congratulations! You've taken the first steps on an exciting journey to mental math mastery. But remember, this is just the beginning. As you continue to hone your skills, you'll discover new
techniques and shortcuts that work best for you.
Stay Curious
The world of mental calculation is vast and fascinating. Don't be afraid to explore beyond what we've covered here. Look into the history of mathematical prodigies, explore different cultural
approaches to mental math, or dive into the neuroscience behind calculation skills.
Set Goals
Challenge yourself by setting specific goals. Maybe you want to master multiplying two-digit numbers in your head, or perhaps you're aiming to calculate cube roots mentally. Whatever your goals,
write them down and track your progress.
Join a Community
Consider joining online forums or local math clubs where you can share tips, challenge each other, and learn from fellow mental math enthusiasts. Remember, the journey is just as important as the
Conclusion: Unleashing Your Inner Math Genius
As we wrap up our deep dive into mental calculation techniques, take a moment to reflect on how far you've come. From mastering the basics to tackling advanced methods, you've equipped yourself with
powerful tools for academic success.
Remember, becoming a mental math whiz isn't about showing off or replacing calculators entirely. It's about developing a deeper understanding of numbers, sharpening your problem-solving skills, and
building confidence in your mathematical abilities.
So, the next time you're faced with a challenging calculation, don't reach for that calculator right away. Take a deep breath, recall the techniques we've discussed, and give it a shot mentally. You
might just surprise yourself with what you can achieve!
Keep practicing, stay curious, and never stop challenging yourself. Your journey to mental math mastery has only just begun. Who knows? You might be the next mathematical prodigy in the making!
Frequently Asked Questions (FAQs)
1. Can anyone become good at mental calculations, or is it an innate talent?
While some people may have a natural aptitude for mental math, anyone can improve their skills with practice and the right techniques. It's more about developing good habits and strategies than
innate talent.
2. How long does it typically take to see improvements in mental calculation abilities?
With consistent practice, you can start seeing improvements in as little as a few weeks. However, significant progress usually takes a few months of regular practice.
3. Are there any risks associated with relying too much on mental calculations?
While mental calculations are valuable, it's important to balance them with written work and calculator use when appropriate. Always double-check important calculations and show your work when
required in academic settings.
4. Can mental calculation techniques help with subjects other than mathematics?
Absolutely! Mental calculation skills can enhance problem-solving abilities, improve memory, and boost confidence across various subjects, from science to economics.
5. Are there any recommended resources or books for further improving mental calculation skills?
Yes, there are many excellent resources available. Some popular books include "Secrets of Mental Math" by Arthur Benjamin and Michael Shermer, and "Short-Cut Math" by Gerard W. Kelly. Additionally,
many online platforms offer interactive courses and practice exercises for mental math.
2021-11 Conical jar with birds and bird-wing designs
The conical form of this pot makes it unsteady, though visually dramatic. It is not particularly large, but because of its unusual shape virtually all of its surface is visible and available to
decorate. Eleven unattached design motifs float across this broad surface: traditional on top and radically innovative on the sides. Both form and design are done with a high degree of skill. Jar
2021-11 is evidence of Alice Dashee’s exceptional imagination and ability. Alice has been making pots since 1970, is of the Eagle Clan, and lives at Polacca on First Mesa. The collection has only
one other pot painted by her (2014-02) and its design is also exceptional.
From a small 1.75-inch base, the walls of the vessel expand outward 5.75-inches to the waist and then turn abruptly inward to form a 2.875-inch-wide flat surface. This flat top rises slightly at the
mouth, which is 2.00-inches wide. The jar is 4.43 times wider than its base and gives the impression of soaring off a table on which it is placed. This ratio also makes the jar top-heavy and always
ready to tip over and shatter. It’s a beautiful but not very practical form, much the same shape as Nathan Begaye’s extraordinary 2018-10 jar. This impracticality of shape is also shared with Jake Koopee’s tall svelte jar 2013-08, which can barely stand on its own. Rachael Sahmie offers a particularly large version of a small-base jar (2012-01). I think the precariousness of these shapes makes viewers uneasy and heightens attention. It’s a clever artistic strategy, but also a threat to the physical safety of the pots.
The walls of jar 2021-11 are even and substantial. The weight of the jar is surprising given its light form. There seems to be an endless supply of clay dust inside the pot.
Alice ground her paint well. The colors on the jar are deep, solid and rich. Both black and red paint are used, but the red is closer to a brown-maroon color. At Hopi red paint is simply a yellow
“sikyatska” clay that is ground to a powder and mixed with water to the desired consistency. In firing, the yellow turns red. There are multiple sources of sikyatska on the reservation and the clay
color varies, resulting in a variety of bright red tones on the fired pots. I have not seen this brown-maroon color on a pot from Hopi before and am not sure if it is the result of an unusually
colored sikyatska or if Alice chose to mix some black paint into the yellow slip to darken the fired color.
The designs on the top and the sides of the jar have strikingly different formats. The top has 3 design elements, each repeated twice. In contrast, the sides of the pot display 8 patterns of design
and none are repeated; they are simply presented laterally.
Top designs:
Ruth Bunzel spent the summers of 1924 and 1925 interviewing pueblo potters, including those at Hopi and including Nampeyo. The author spent a good deal of time trying to understand the principles of
design used on Native pottery and quotes Nampeyo as saying:
“The best arrangement for the (wide shouldered) water jar is four designs around the top, —two and two, like this (indicating the arrangement). The designs opposite each other should be alike.”
This convention of pairing designs on opposite sides of a design space is still typical today at Hopi and is the convention followed by Alice Dashee on the upper surface of jar 2021-11, although she
places a tiny third element among the larger forms.
The top of the jar carries three designs.
Design #1:
The first design is a quite realistic image of a bird, a frequent motif on pottery from First Mesa. The two depictions are substantially the same, with some variance in detail. Both have elegant,
stippled heads with curved black beaks and triangular black eyes. One displays a curlicue topknot, the other’s topknot is triangular. The stippling slightly branches into four stubs at the base of
the neck. Each stub is capped with a black triangle set against an unpainted surface. At the far end of this unpainted segment five small conjoint hills (“gumdrops”) rest with their bases against
two parallel lines, a “one lane highway.” Following is a rectangular section cut diagonally by a one-lane highway running from upper-right to lower-left. The triangular space above the diagonal
contains a black element with two triangular peaks. Below the diagonal the space is bifurcated by a one-lane highway at right angles to the diagonal. Both sections display “foreground/background
reversal,” where the background and foreground forms keep changing places. The right section contains a red element in the form of a base with three points pointing right, a triple crown. Two large
white triangles form residual elements pointing left. The design can be seen as white triangles against a red background. The left area below the diagonal is most easily seen as an unpainted triangular space with three smaller black triangles in each corner, though it can be seen as a solid black space with a white triangle in the center, its points truncated.
The design in the next section is also complicated. The lower left half of the space is occupied by a large stippled staircase with an unpainted dot in its base. The diagonal corner above contains a
right triangle with a curved hypotenuse. The residual unpainted surface between is thus curved on one edge and has a stepped shaped on the other edge. Following another one-lane highway are four
black blade-shaped tails. The bases of these tails, against the highway are unpainted but take different forms. On the bird with the curlicue topknot these bases are simple roughly-define unpainted
elements. On the other bird, these elements take the form of unpainted “gumdrop” hills.
Design #2:
The second top design is a presentation of tail feathers that extends from the jar’s mouth to the edge of the upper surface. Two red and asymmetric “V” shapes top the design, their tips pointed
upwards, the points of their ends resting on a one-lane highway. Below the highway a large black pyramid points downward, its center an unpainted square. Below a second one-lane highway is a
0.375-inch wide black strip. The design in this strip is inscribed. A parallel line has be scratched a fraction of an inch below the top edge. Below were inscribed 7 or 8 wavy lines, the number
varying between the two renditions. Immediately following (without an intervening highway) are four 1.125-inch rectangular tails. Their base is stippled with inset unpainted right triangles.
Following one-lane highways, they have squared-off black tips.
Design #3:
Just above the tails of the two birds, about halfway to the jar’s mouth, are small solid-black dots. Emerging from these dots in the four cardinal directions are short, thin, lines.
Side designs:
Below thick-over-thin framing lines, a parade of eight discrete design motifs parade around the circumference of the jar. All eight designs emerge from the thin framing line. While they incorporate
a few elements regularly found on Hopi pottery, most of these designs are unique to this jar. Of the eight designs on the side, one is 6-inches wide and the widest on the jar. Our discussion will
start there and move to the left. Note that designs # 4 and #5 contain three red elements and the remaining six side designs are monochromatic.
Design #4:
The bulk of this design is a wide band along the thin framing line. The right segment of this band contains two forms in the shape of wide knife blades, points upwards. The blades are divided
internally by a horizontal unpainted line. The tip of the right blade and the base of the left blade are painted red. A 0.25-inch unpainted space separates the two blades, its edges one-lane
highways. To the left is a section of design that is bisected by a one-lane highway running from upper left to lower right. At right angles and above this diagonal is a tiny unpainted line, dividing
the space into upper and lower sections. The upper section displays a smaller version of the red crown we saw on the top surface. The area below is a black triangle with an unpainted triangle at its
core. The lower triangular section below the diagonal is solid black with an embedded unpainted triangle that mirrors its neighbor to the right.
To the left of these two sections is the largest design element on the pot, a broad stippled expanse whose right lower edge has the form of two triangles, followed to the left by a straight edge
followed by a kiva stepped edge. Based on the stippled triangles and swooping downward and to the left are long, curved, black feathers.
Design #5:
This design is basically box-shaped with two points off its lower edge. The top is a wide black bar, its lower edge a two-lane highway. Below is a space with four red equilateral triangles pointed
downward and thus two halves and three whole residual unpainted triangles pointed upward. Following is a one-lane highway with seven conjoint small black hills (“gumdrops”) based on the highway and
facing a wide unpainted area. Notice also that there is space for another half a gumdrop. Alice seems to have fractionally miscalculated her painting. The middle of this wide unpainted space is
bisected by a thin, intricately designed bar. The bar began as a two-lane highway. Imposed on top of this design is a zig-zag line with 11 points and two half-points. The result looks like a woven
tapestry. Below there is considerable unpainted space but extending from the lower corners of this space are two different feather images.
At this point the design gets oddly complicated: similarity embedded in contradiction. The feathers are different: the right is pointed, the left has a square end. At the base of the right
(pointed) arrow is a one-lane highway. From there to the point the arrow is black. The square end of the left feather has no highway divider and is subdivided by a diagonal, thus creating two
internal triangles. The right internal triangle is black, the left unpainted. Comparing the right and left feathers, they have different shapes, but both display black triangles, though the
orientation of these triangles is different. Comparing the right and left triangles, they have different shapes, but both contain triangles with the same orientation, though their colors are
different. Usually feathers emerging from the same base are of the same shape and paint design. Alice has chosen a different strategy here and the resulting visual pattern is complex.
Design #6:
I’m not sure what word would describe the overall shape of design #6, perhaps “prickly.” It’s a conglomeration of triangular forms. The central element is blade-shaped. Adhering to its outer edge
are additional triangular elements, those on the right quite different than those on the left edge of the blade. Internally the blade is divided into a broad upper section and a pointed lower
section. The upper section displays “foreground/background” reversal. It is perhaps most easily seen as painted solid black with three leaf-like forms growing from its base. Alternatively the
section can be seen as essentially white but with black borders surrounding two downward-pointing black elongated triangles. A one-lane highway divider is expected here, but is missing. The bottom
design fills the end of the blade and is therefore triangular. It is filled with a crosshatch of 12 vertical by 9 horizontal lines, somewhat casually drawn.
The right edge of the central blade supports four isosceles triangles. These are double-walled. The inner triangles are filled with 10 to 14 horizontal lines. The narrow space between inner and
outer triangles is stippled. The left edge of the central blade has four smaller sawteeth and a larger black triangle against the framing line. These are separated from the central blade by a
one-lane highway. The sawteeth have unpainted edges.
Design #7:
The design is a 4.5-inch long sweeping curve, thicker at the base and narrowing to a long, thin point. Attached to the upper end of this point is a substantial rendition of the “bear claw” element,
more often seen as the terminal element on pots with “fine-line migration” design (cf. 2019-10). As is usual, the front edge of the bear claw is a solid black dome, followed by a linear space, then a
one-lane highway followed by a set of four long, thin and conjoint isosceles triangles. The relatively heavy bear claw looks like it might snap off the thin point of the long curve on which it rests.
Design #8:
Within the space encompassed by design #4 is a large (almost) double “D” shape. One “D” form is inside the other, yet neither has the complete shape of the letter. The outer “D” shape is a branch
of the thin framing line; it curves outward as it descends, then turns back parallel to the framing line above. To complete the “D” shape it would need to bend upward, and it does, but transforms into a
thick line. This thick line then rises, turns, descends and turns back to its start, almost completing the form of a “D.” However, just before closure it stops. The dome-shaped space
internal to this heavy-line “D” is patterned with thin lines. The space is cut by a diagonal. Below the diagonal is a roughly triangular space that is crossed by 13 horizontal lines. Zig-zagging
across them is a three-pointed line. The domed space above the diagonal is cut into three wedges, like pieces of pie. All three are decorated with fine lines, but the pattern in each contradicts the
pattern in its neighbors.
Design #9:
This design is slightly wedge-shaped with three feathers at the bottom. The top half of the design has three white lunettes embedded in a black background. Each lunette has two small vertical
lines at its widest point. Below is a substantial stippled area that can be seen as ending with a one-lane highway.
Then things (again) get complicated. The stippled area ends with a one-lane highway followed by triangular, downward-pointing arrows. At the base of each arrow is a black hill (“gumdrop”) followed
by an unpainted arrow-shaped space capped by a sharply-pointed black “V.” Thus there are three arrow images within this one element: 1) the outer form, 2) the internal unpainted space and 3) the
black tip. But the complexity does not end there. Alice has extended the black edges of each arrow until it intercepts the edge of its neighbor arrow and forms a point. The resulting two
upward-thrusting arrows extend past the base of the downward arrows and into the stippled area, calling attention to their presence. Moreover, in the open lower end of these upward-thrusting arrows,
Alice has drawn two parallel lines, suggesting (perhaps) arrow shafts, further focusing a viewer’s eye on these oddly-positioned upward arrows. In short, not only do the downward arrows encompass
three arrow forms, but these open-ended upward arrows add a fourth counter thrusting arrow element.
Design #10:
This is the second widest design on the exterior of the jar and has an overall form somewhat like the widest (Design #4). The top of the design is a wide band of elements that has two wing elements
descending from its lower right edge. The remaining lower edge is distinctly concave. The wide band is a jumble of elements. Its top right corner is stippled and is the largest form in the design.
At its center a tadpole lazily swims upward. The lower stippled edges divide into two points. These points are then mated to black blades to form larger blades that sweep down from the corner of
this design. The top left edge of the stippled area is bounded by a one-lane highway. Based on this highway, and intruding to the right into the stippled space, are two conjoint hills. From the
valley between them emerge a fan of three straight lines. On the left side of the highway is a black rectangle with an unpainted diamond at its center.
Following another one-lane highway is an area containing six elements whose format is almost identical to the last body segment of the bird, immediately above it on the top of the pot. The lower
left half of the space is occupied by a large stippled staircase with a black dot in its base. The diagonal corner above contains a right triangle with a curved hypotenuse. Set into this triangle is an
unpainted wedge. Between the black triangle and the stippled kiva steps is a residual unpainted surface, curved on one edge and stepped shaped on the other edge. The lower left corner of this
design segment is a small black triangle, forming a nascent version of the two wings on the opposite corner.
Overall, design # 10 gives the impression that this was the last design space painted and Alice wanted to use as many leftover motifs in the space as possible: Dada art.
Design #11:
This is a simple, downward-pointing black isosceles triangle with slightly concave sides. Inset into it are a series of four unpainted and upward pointing triangles. Like those images of a string
of bigger fish eating littler fish, the triangles subsume each other. The lowest triangle is almost complete, with only its tip disappearing inside the triangle above. The second triangle is further
engulfed by the triangle above, with each succeeding triangle more truncated than the one before.
Design analysis:
The top surface of jar 2021-11 carries traditional Hopi design elements and has the traditional format of paired elements. The elegant upper surface is all about feathers, attached to birds or
otherwise. In contrast, the side designs are largely unique and are not paired: each design motif appears just once. This contrast creates tension and eye appeal. Moreover the use of empty space
around all elements highlights their form and increases their visual impact.
Dark maroon paint is used sparingly on the pot, so that when casually seen the design looks monochromatic. The maroon color enriches, but the “almost monochromatic” coloring of the design is
understated. Drama on jar 2021-11 is not defined by color, but by the shape of the jar, the contrast between top and side decoration and the unusual design motifs on the vessel’s side.
The minor differences in design of the birds on the top surface (Design #1) are intentional. It’s likely that one is male and the other female. Notice that both sections at the core of these birds
are diagonally subdivided. One body section of the birds has a simple linear diagonal. The design in the next section is also subdivided, but here by the sawtoothed right angles of a kiva-step form.
Variation adding interest.
Design #2 contains several particularly interesting design strategies: 1) The top maroon-red “V” elements point to the mouth from both sides of the jar and draw the viewer’s eye to this opening. 2)
Moreover, these “V” elements reflect the single large black “V” on which they sit, unifying the design. 3) Unexpectedly the middle panel of design #2 is inscribed, adding a glittering element.
While the formats of the top and side designs are strikingly different, there are a couple of linkages. First, the red W crown internal to both birds on the top (design #1) is echoed by a smaller
red version of this crown in design #4 on the side. Second, design # 10 on the side is linked to the bird immediately above it on the top: both incorporate kiva steps next to a residual lunette, also
with stepped edges.
My first impression of design #5 is that it is boringly box-shaped with two pendant feathers. A more considered look sees interesting features, then a surprise ending. The four red triangles are
elegant, surrounded by dark-black elements. The central band with lines and zig-zags pleases my eye and looks like a woven belt. It’s a bit jarring to have the two terminal feathers be of different
shapes, but the interaction of their shape and internal design produces a fascinating pattern. The left feather both doubly reflects and doubly contradicts the feather to its right. They
contradict because the tip of the left feather is square and the tip of the right feather is triangular. Because the left square tip is internally subdivided into triangles, its left unpainted half
has the same triangular orientation as the neighboring right feather but a different color. The right black half of the square tip also has a triangular shape, has a different orientation than the
right feather, but the same color as that feather. Parallels within contrasts: what a treat.
The boxy form of design #5 is in sharp contrast to the prickly, pointed form of its neighbor, design #6, and this change of form keeps a viewer’s eye engaged. Alice has drawn crosshatched areas on both
design #6 and #8, thus visually linking them.
Design #9 again plays with arrows, to great visual effect. The embedded unpainted lenses that occupy the top half of this design are of no particular interest. The lower half, however, is engaging.
The three terminating and black feathers in this design emerge from a stippled area and point downward. What is unusual is that Alice has extended upwards the defining edge line of each of the
three feathers so that each edge line intersects the edge line of its neighbor. They meet above the one-lane highway within the stippled space. As a result, they form reciprocal arrow shapes headed
upward. The downward moving black-tipped arrows are visually predominant but the two lighter reciprocal upward arrows are larger and they compete for attention. The two parallel lines inside these
upward forms are like arrow shafts, further attracting a viewer’s attention to this contradictory design. Downward and upward moving arrows, all in the same design band: it’s an unexpected surprise.
Apparently Alice was fond of playing with arrows. Like design #9, design #11 uses triangular forms to create interest in the design. The overall shape of the motif is a large downward-pointing
black triangle, but the internal white triangles thrust upward. This contradictory movement increases visual interest. Additional tension is created as the internal triangles subsume each other.
Jar 2021-11 has a dramatic shape. The warm glow of the surface and the stark black and subdued red colors are elegant. The traditional designs on the top surface are graceful; the innovative designs
on the side are engaging. Dramatic, warm, elegant, graceful, innovative and engaging: jar 2021-11 is a jewel.
Purchase History:
Purchased early in the morning of 7-14-21 from the website of Charles King Galleries, Scottsdale Az. When asked about provenance, Charles responded: "The Alice Dashee is from around 1997-99. A client
of mine purchased it from her. Sorry for all the clay (dust) inside…I thought I emptied enough out! I got it everywhere! LOL!" | {"url":"https://firstpeoplepots.com/2021-11-conical-jar-with-birds-and-bird-wing-designs/","timestamp":"2024-11-14T13:38:32Z","content_type":"text/html","content_length":"222013","record_id":"<urn:uuid:64654cb0-3d85-4f37-9b5b-19fdd808654c>","cc-path":"CC-MAIN-2024-46/segments/1730477028657.76/warc/CC-MAIN-20241114130448-20241114160448-00328.warc.gz"} |
Rotational effects in the band oscillator strengths and predissociation linewidths for the lowest ^1Π_u - X ^1Σ_g^+ transitions of N_2
A coupled-channel Schrödinger equation (CSE) model of N_2 photodissociation, which includes the effects of all interactions between the b, c, and o ^1Π_u states and the C and C′ ^3Π_u states, is employed to
study the effects of rotation on the lowest-ν ^1Π_u - X ^1Σ_g^+ (ν,0) band oscillator strengths and ^1Π_u predissociation linewidths. Significant rotational dependences are found which are in
excellent agreement with recent experimental results, where comparisons are possible. New extreme-ultraviolet (EUV) photoabsorption spectra of the key b ^1Π_u ← X ^1Σ_g^+ (3,0) transition of N_2 are
also presented and analyzed, revealing a b(ν=3) predissociation linewidth peaking near J=11. This behavior can be reproduced only if the triplet structure of the C state is included explicitly in
the CSE-model calculations, with a spin-orbit constant A ≈ 15 cm^-1 for the diffuse C(ν=9) level which accidentally predissociates b(ν=3). The complex rotational behavior of the b-X (3,0) and other
bands may be an important component in the modeling of EUV transmission through nitrogen-rich planetary atmospheres.
| {"url":"https://researchportalplus.anu.edu.au/en/publications/rotational-effects-in-the-band-oscillator-strengths-and-predissoc","timestamp":"2024-11-11T20:26:50Z","content_type":"text/html","content_length":"54642","record_id":"<urn:uuid:79e20495-5fbf-469c-aa68-ca21b53ce219>","cc-path":"CC-MAIN-2024-46/segments/1730477028239.20/warc/CC-MAIN-20241111190758-20241111220758-00779.warc.gz"}
Data Literacy MUST Be Taught in Public Schools
Doug Wren
“Statistical thinking will one day be as necessary for efficient citizenship as the ability to read and write.” This statement, often misattributed to H. G. Wells (1866-1946), is actually American
mathematician Samuel Stanley Wilks’s (1906-1964) paraphrase of a much longer passage by the sci-fi author. Wells’s original quote is shown at the bottom of this post.
If you saw the phrase “statistical thinking” and thought about not reading any further, it could be a symptom of the larger problem addressed in this short piece. If you care about education or the
economy, I encourage you to continue reading.
Let’s begin by replacing the phrase statistical thinking with the contemporary term data literacy. Data literacy is “the ability to read, work with, analyze and argue with data” (Bhargava &
D’Ignacio, 2015). There are countless resources on data literacy available on the internet; many advocate for a data-literate workforce (see, for example, Miramount, 2019; Ribeiro, 2020; or Stokes,
2021). If one of education’s primary purposes is to prepare students for work (Phi Delta Kappan, 2019), then schools and districts that fail to teach data literacy are failing their students as well.
Like other American institutions, public education is slow to change with the times. In many districts, the secondary math curriculum consists of the “geometry sandwich”—Algebra I, Geometry, Algebra
II—a course of study implemented over 60 years ago in response to the Soviet launch of Sputnik I. Math classes in the US tend to stress formulas and procedures, which is one reason some people have
an aversion to anything mathematical (the symptom I alluded to earlier).
Meanwhile, in countries that excel at math, students are taught to integrate the branches of math and think critically and creatively to solve complex problems. These data-literate children “will use
these skills throughout their lives – having them will make or break a student’s success in the modern world” (Data Science for Everyone, 2020, para. 4).
I mentioned above that (a) this was a brief post, and (b) there are countless resources on data literacy. As we wrap up, here are links to a few data literacy resources:
• The SAS Institute offers six free video modules on data literacy called the Data Literacy Essentials. (To view them, create a profile at https://www.sas.com/profile/ui/#/create. Select “Just
Browsing” in the drop-down menu under “Affiliation With SAS.” There is no obligation.)
• The modules are each 25 minutes long and address the topics listed below. Modules 1-2 provide an excellent overview of data literacy. Modules 3-6 show how data literacy works by using realistic
examples from the pandemic.
1. Why Data Literacy Matters
2. Data Literacy Practices
3. Identifying Reliable Data
4. Discovering the Meaning of Data
5. Making Data-informed Decisions
6. Working with Data Responsibly (i.e., data ethics)
• Data Science for Everyone (DS4E) “is a national movement for advancing data literacy in our K-12 education system.” The DS4E website includes links to activities, units, and courses for K-12
teachers, as well as links to resources for students and parents.
• Borrowing heavily from other sources, I created a free mini-course titled “Developing Children’s Data Literacy: Getting Started.” The course provides an overview of data literacy, its importance,
and ways to begin teaching data literacy to children. There are links to even more data literacy resources embedded in the course. The course can be accessed here.
Finally, here is the entire H. G. Wells quote:
“The great body of physical science, a great deal of the essential fact of financial science, and endless social and political problems are only accessible and only thinkable to those who have had a
sound training in mathematical analysis, and the time may not be very remote when it will be understood that for complete initiation as an efficient citizen of one of the new great complex world-wide
states that are now developing, it is as necessary to be able to compute, to think in averages and maxima and minima, as it is now to be able to read and write” (1903, p. 189).
The story behind the rephrased quote is here. | {"url":"https://www.edjacent.org/post/data-literacy-must-be-taught-in-public-schools","timestamp":"2024-11-12T05:21:16Z","content_type":"text/html","content_length":"1050405","record_id":"<urn:uuid:8fb769fd-62e5-4c04-81a8-59227e905274>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.58/warc/CC-MAIN-20241112045844-20241112075844-00729.warc.gz"} |
On-chain Configurations | Antimatter Finance
Investment Targets / Underlying Assets
At the beginning of the product launch, Antimatter will list two mainstream assets as the underlying assets: BTC and ETH (more assets to be added to the list gradually).
1. Upward Exercise
Deposit Currency - BTC, ETH
Alternate Currencies - USDT
2. Downward Exercise
Deposit Currency - USDT
Alternate Currencies - BTC, ETH
Strike Price
The strike price of each financial product is predetermined, while APY continues to fluctuate;
How to determine the strike price:
Upward Exercise - Strike Price = Current Price * 105%, rounded to thousands. E.g., if the current price is $51,000, the strike prices offered would be $52,000, $53,000, and $54,000.
Downward Exercise - Strike Price = Current Price * 95%, rounded to thousands. E.g., if the current price is $51,000, the strike prices offered would be $50,000, $49,000, and $48,000.
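As a rough sketch (ours, not from the Antimatter documentation), the snippet below reproduces the example strike prices above. The number of tiers (three), the $1,000 rounding unit, and the direction of the tiering are inferred from the examples and may differ from the actual on-chain configuration.

```python
# Hypothetical helper that mirrors the strike-price examples in this section.
def strike_prices(current_price, direction, tiers=3, step=1_000):
    factor = 1.05 if direction == "up" else 0.95
    base = round(current_price * factor / step) * step   # rounded to thousands
    if direction == "up":
        # three strikes ending at the rounded 105% level, e.g. 52k, 53k, 54k
        return [base - (tiers - 1 - i) * step for i in range(tiers)]
    # three strikes ending at the rounded 95% level, e.g. 50k, 49k, 48k
    return [base + (tiers - 1 - i) * step for i in range(tiers)]

print(strike_prices(51_000, "up"))    # [52000, 53000, 54000]
print(strike_prices(51_000, "down"))  # [50000, 49000, 48000]
```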
Settlement Period
Antimatter supports three periods, one day, one week or two weeks.
When a one-week period ends, the remaining period is a one-week INVEST, at which point the contract automatically creates a new two-week INVEST (the exercise price rule is as described above).
Min. & Max. Limit of Single Investment
It is supported for users to configure the Min. and Max. deposit limit of single investment.
Total Allocation
Each investment product has a total allocation (e.g. 100 BTC) and new orders will be suspended when all allocation is sold out (the product status changes to [closed] and new products are
automatically incremented). | {"url":"https://docs.antimatter.finance/antimatter-structured-product/dual-investment/on-chain-configurations","timestamp":"2024-11-03T15:59:54Z","content_type":"text/html","content_length":"216977","record_id":"<urn:uuid:f72694bd-a6ee-4555-9d5f-62aecc29143a>","cc-path":"CC-MAIN-2024-46/segments/1730477027779.22/warc/CC-MAIN-20241103145859-20241103175859-00893.warc.gz"} |
+30 12.3 Buoyancy Worksheet Answers 2023
+30 12.3 Buoyancy Worksheet Answers 2023. You have a choice of vanilla, blueberry, or chocolate. Open the web browser and enter the link above into the address bar.
Friction and buoyancy worksheet from www.liveworksheets.com
Binomial distribution multiple choice identify the letter of the choice that best completes the statement or answers the question. Web chapter 12.3 buoyancy flashcards | quizlet chapter 12.3 buoyancy
term 1 / 2 buoyancy click the card to flip ? definition 1 / 2 the measure of the upward force that a fluid exerts on. Uitmks/ fce/ bcbidaun/ ecw211 3 fb1 cb1 fluid of.
Can You Tell That Air Is Not 100% Oxygen?
Web answers to all these questions, and many others, are based on the fact that pressure increases with depth in a fluid. Buoyancy and displacement worksheet answers, buoyancy and archimedes'
principle worksheet answers, brainpop buoyancy worksheet answers,. Below are the printable assignments for chapter 12.
Web Free Printable Math Worksheets For Algebra 1 Free Printable Math Worksheets For Algebra 1 Created With Infinite Algebra 1 Stop Searching.
The force of gravity on the ball is: This means that the upward force on the bottom of an. Binomial distribution multiple choice identify the letter of the choice that best completes the statement or
answers the question.
Create The Worksheets You Need With Infinite Geometry.
When the simulation page opens click. Web ound your answer to a regular pyramid has a height of 10 inches. Web answer key practice a 1.
Open The Web Browser And Enter The Link Above Into The Address Bar.
The point at which buoyancy acts is the centroid of the displaced volume of fluid. The buoyant force on the ball is simply the weight of water displaced by the ball. It is an equilateral triangle
with a base edge of 12 inches.
Web Chapter 12.3 Buoyancy Flashcards | Quizlet Chapter 12.3 Buoyancy Term 1 / 2 Buoyancy Click The Card To Flip ? Definition 1 / 2 The Measure Of The Upward Force That A Fluid Exerts On.
Web buoyancy and archimedes principle worksheet solved experiment5 archimedes' principle: Create the worksheets you need with. When a body is completely or partially immersed in a fluid, the fluid
exerts an upward force (the “buoyant force”) on the body equal to the weight of the fluid.
| {"url":"http://athensmutualaid.net/12-3-buoyancy-worksheet-answers/","timestamp":"2024-11-03T22:53:03Z","content_type":"text/html","content_length":"128422","record_id":"<urn:uuid:a2370247-febe-49f3-ac2d-1aa9bbe24165>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00662.warc.gz"}
University College London
Residual Galois representations of elliptic curves with image in the normaliser of a non-split Cartan
Heilbronn Number Theory Seminar
11th November 2020, 4:00 pm – 5:00 pm
Due to the work of several mathematicians, it is known that if p is a prime >37, then the image of the residual Galois representation \bar{\rho}_{E,p}: Gal(\overline{Q}/ Q) -> GL_2 (F_p) attached to
an elliptic curve E/Q without complex multiplication is either GL_2(F_p), or is contained in the normaliser of a non-split Cartan subgroup of GL_2(F_p). I will report on a recent joint work with
Samuel Le Fourn, where we improve this result (at least for large enough primes) by showing that if p>1.4 x 10^7, then \bar{\rho}_{E,p} is either surjective, or its image is the normaliser of a
non-split Cartan subgroup of GL_2(F_p). | {"url":"https://www.bristolmathsresearch.org/seminar/pedro-lemos-2/","timestamp":"2024-11-10T12:09:03Z","content_type":"text/html","content_length":"54468","record_id":"<urn:uuid:211a86a4-b2f0-4db6-a45d-d808f95304a8>","cc-path":"CC-MAIN-2024-46/segments/1730477028186.38/warc/CC-MAIN-20241110103354-20241110133354-00776.warc.gz"} |
Recommender System part 5: Embeddings and Similarity Measures
Embedding Space
Embedding refers to a technique used in machine learning to convert discrete elements, such as words, objects, or categories, into continuous vectors composed of real numbers. It involves creating a
connection between discrete variables and continuous values, where each unique element is transformed into a vector containing continuous numerical values. Essentially, embedding allows us to
translate categorical information into a more fluid and mathematically manageable form, enabling algorithms to process and analyze such data more effectively.
In the context of neural networks, the embedding space is usually low-dimensional and captures some latent structure of the item or query set. Similar items, such as movies watched by the same user,
end up close together in the embedding space; this closeness is defined by a similarity measure.
Both content-based and collaborative filtering map each item and each query (context) to an embedding vector in a common embedding space.
Similarity Measures
A similarity measure is a function that takes a pair of embeddings and returns a scalar measuring their similarity. The embeddings can be used for candidate generation as follows: given a query
embedding (q), the system looks for item embeddings (x) that are close to q, that is, embeddings with high similarity s(q, x).
Most recommendation systems rely on one or more of the following similarity measures to determine the degree of similarity:
• cosine: the cosine of the angle between the two vectors
• dot product: the dot product of two vectors
• Euclidean Distance: the usual distance in Euclidean space
Comparing the three measures: cosine favors items with the smallest angle from the query, dot product favors items with the largest norm, and Euclidean distance favors items physically closest to the query.
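To make the difference concrete, here is a small sketch (not from the original post) that scores three toy item embeddings against a query with all three measures; the vectors and item names are purely illustrative.

```python
import numpy as np

def cosine(q, x):
    return np.dot(q, x) / (np.linalg.norm(q) * np.linalg.norm(x))

def dot(q, x):
    return np.dot(q, x)

def euclidean_sim(q, x):
    # Negate the distance so that "higher score = more similar", like the others.
    return -np.linalg.norm(q - x)

query = np.array([1.0, 1.0])
items = {
    "aligned_small_norm": np.array([0.5, 0.5]),  # same direction as the query, small norm
    "popular_large_norm": np.array([4.0, 1.0]),  # large norm, noticeably off-angle
    "nearby":             np.array([1.2, 0.9]),  # physically close to the query
}

for name, x in items.items():
    print(f"{name:20s} cosine={cosine(query, x):5.2f}  "
          f"dot={dot(query, x):5.2f}  euclidean={euclidean_sim(query, x):6.2f}")

# Each measure picks a different winner: cosine ranks the aligned item first,
# dot product ranks the large-norm item first, and Euclidean similarity ranks
# the nearby item first.
```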
Compared to the cosine, the dot product similarity is sensitive to the norm of the embedding and is more likely to recommend items that are already popular, as popular items that appear frequently in
the training set tend to have embeddings with large norms.
To counteract this, you may use other variants of the similarity measure that put less emphasis on the norm of the item. A related issue: if new or rarely updated items are initialized with a large
norm, the system may recommend these rare items over more relevant ones. Being more careful about embedding initialization, and using appropriate regularization, avoids this problem.
For further reading on other similarity measures, check out this article: | {"url":"https://www.joankusuma.com/post/recommender-system-part-5-embeddings-and-similarity-measures","timestamp":"2024-11-08T01:24:12Z","content_type":"text/html","content_length":"1050481","record_id":"<urn:uuid:c683bdd7-59c9-4041-bfc5-abf821552336>","cc-path":"CC-MAIN-2024-46/segments/1730477028019.71/warc/CC-MAIN-20241108003811-20241108033811-00112.warc.gz"} |
Cell-probe lower bounds for the partial match problem
Given a database of n points in {0, 1}^d, the partial match problem is: In response to a query x in {0, 1, *}^d, find a database point y such that for every i whenever x[i] ≠ *, we have x[i] = y[i].
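For concreteness, here is a tiny sketch (ours, not from the paper) of the trivial O(nd) brute-force scan that answers a partial match query; the lower bounds below concern data structures that try to do substantially better than this kind of exhaustive search.

```python
def partial_match(database, query):
    """Return some point y in `database` agreeing with `query` wherever query[i] != '*'."""
    for y in database:
        if all(q == '*' or q == c for q, c in zip(query, y)):
            return y
    return None

points = ["0101", "1100", "0011"]     # n = 3 points in {0,1}^d with d = 4
print(partial_match(points, "0**1"))  # -> "0101"
print(partial_match(points, "11*1"))  # -> None
```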
In this paper we show randomized lower bounds in the cell-probe model for this well-studied problem. Our lower bounds follow from a near-optimal lower bound on the two-party asymmetric randomized
communication complexity of this problem, where we show that either Alice has to send Ω(d/log n) bits or Bob has to send Ω(n^{1-o(1)}) bits. When applied to the cell-probe model, it means that if the
number of cells is restricted to be poly(n, d), where each cell is of size poly(log n, d), then Ω(d/log^2 n) probes are needed. This is an exponential improvement over the previously known lower bounds
for this problem. Our lower bound also leads to new and improved lower bounds for related problems, including a lower bound for the ℓ_∞ c-nearest neighbor problem for c < 3 and an improved
communication complexity lower bound for the exact nearest neighbor problem.
| {"url":"https://cris.huji.ac.il/en/publications/cell-probe-lower-bounds-for-the-partial-match-problem-14","timestamp":"2024-11-09T00:17:03Z","content_type":"text/html","content_length":"48421","record_id":"<urn:uuid:e9076224-835b-4e61-ba3a-2ee8b301add7>","cc-path":"CC-MAIN-2024-46/segments/1730477028106.80/warc/CC-MAIN-20241108231327-20241109021327-00716.warc.gz"}
High Dimensional Statistics | Zan Ahmad
High Dimensional Statistics
High-dimensional statistical learning refers to analyzing data where the number of features or dimensions is very large, often exceeding the number of observations. This approach leverages the fact
that despite their high dimensionality, many real-world datasets exhibit low intrinsic dimensionality. High-dimensional learning methods aim to uncover underlying geometric structures—such as
manifolds—embedded in the high-dimensional space, allowing data to be meaningfully represented in lower dimensions.
Mathematically, manifold learning assumes that high-dimensional data points lie on or near a lower-dimensional manifold $\mathcal{M}$ embedded in a high-dimensional space $\mathbb{R}^D$. If the
intrinsic dimension of the manifold is $d \ll D$, the data can be mapped from $\mathbb{R}^D$ to a lower-dimensional space $\mathbb{R}^d$ without losing significant structure or meaning. The mapping
can be expressed as:
$$ f: \mathbb{R}^D \rightarrow \mathbb{R}^d, \quad \text{where } d = \text{dim}(\mathcal{M}). $$
One of the key goals of manifold learning techniques is to preserve local geometric properties, such as distances or angles, during this mapping. Given two data points $x_i, x_j \in \mathbb{R}^D$,
the geodesic distance between them on the manifold, $d_{\mathcal{M}}(x_i, x_j)$, captures the shortest path along the manifold. A common approximation in manifold learning is to preserve these
geodesic distances in the low-dimensional representation:
$$ d_{\mathcal{M}}(x_i, x_j) \approx \lVert f(x_i) - f(x_j) \rVert. $$
Techniques such as Principal Component Analysis (PCA), Isomap, Locally Linear Embedding (LLE), and t-SNE seek to discover these lower-dimensional structures by projecting the data onto meaningful
manifolds. These methods are crucial in tasks like clustering, classification, and visualization, where meaningful patterns can only be observed after reducing the data to its intrinsic dimensions.
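As a quick illustration (not part of the original page), the sketch below uses scikit-learn's Isomap to recover a 2-dimensional embedding of a 3-dimensional "Swiss roll"; the dataset and the parameter values are illustrative assumptions.

```python
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

# 1000 points on a 2-D surface rolled up in R^3: ambient dimension D = 3, intrinsic d = 2.
X, color = make_swiss_roll(n_samples=1000, noise=0.05, random_state=0)

# Isomap approximates geodesic distances with shortest paths in a k-nearest-neighbor graph,
# then embeds the points in R^2 while trying to preserve those distances.
Y = Isomap(n_neighbors=10, n_components=2).fit_transform(X)

print(X.shape, "->", Y.shape)   # (1000, 3) -> (1000, 2)
```

Other estimators such as LocallyLinearEmbedding or TSNE can be swapped in; they differ in which local geometric properties they try to preserve.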
The reduction to intrinsic dimensions also helps overcome the curse of dimensionality, which refers to the challenges posed by sparse data distributions in high-dimensional spaces. By learning the
low-dimensional manifold $\mathcal{M}$ that underlies the data, high-dimensional statistical learning techniques enable models to generalize better and extract meaningful insights.
In summary, high-dimensional statistical learning leverages the geometry of data manifolds to perform dimensionality reduction, revealing latent structures and facilitating tasks like clustering,
pattern recognition, and visualization:
$$ \{ x_1, x_2, \ldots, x_n \} \subset \mathbb{R}^D \quad \xrightarrow{f} \quad \{ y_1, y_2, \ldots, y_n \} \subset \mathbb{R}^d, $$
where $d \ll D$, and the goal is to preserve the manifold’s structure as much as possible in the low-dimensional space.
My work on random features in machine learning earned first place in the Institute for Data Intensive Engineering and Science (IDIES) Poster Contest in 2023. | {"url":"https://zanahmad.com/research/high-dimensional-statistical-learning/","timestamp":"2024-11-02T11:30:32Z","content_type":"text/html","content_length":"11446","record_id":"<urn:uuid:ed6501d9-6e52-436f-9a9c-3fc4a7c81ce4>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00354.warc.gz"} |
Re: st: -predict , reffects- after -xtmelogit-
Re: st: -predict , reffects- after -xtmelogit-
From Jeph Herrin <[email protected]>
To [email protected]
Subject Re: st: -predict , reffects- after -xtmelogit-
Date Wed, 22 Dec 2010 08:36:22 -0500
Thanks much for the thorough explanation. Very helpful.
On 12/21/2010 3:49 PM, Roberto G. Gutierrez, StataCorp wrote:
Jeph Herrin<[email protected]> asks about obtaining MLE estimates of random
effects after estimation with -xtmelogit-:
Thanks, the typo was that I had -yrwe- and -ywre- and mixed them up. But
this still doesn't tell me how to get the mle random effects, if that is
The posterior modes obtained from
. predict r, reffects
after -xtmelogit- are actually MLE's in and of themselves, because a mode is
the point where the distribution peaks and thus represents the maximum of the
posterior distribution of the random effects. These posterior MLE's condition
on three things:
1. The regression coefficients are as estimated
2. The random effects are normally distributed
3. The standard deviation of the random effects is as estimated
My guess is that Jeph would like estimates that only condition on 1. and not
on 2. and 3., as is discussed in Section 2.9.1 of Rabe-Hesketh and Skrondal
(2008). That discussion uses -xtmixed- and linear models, and thus the MLEs
can be obtained post-estimation by linear regression of the partial residuals
on dummy variables identifying the clusters.
Such an approach is not generalizable to -xtmelogit- because of the
non-linearity of the model. However, you can achieve the same effect by using
-logit- with a linear offset calculated from the regression coefficients as
obtained from -xtmelogit-. To illustrate,
. webuse bangladesh, clear
. xtmelogit c_use age || district: // age introduced to make
// things interesting
. predict r_mode, reffects // Posterior modes
. predict xbeta, xb // Obtain linear predictor
. forvalues i = 1/61 { // Generate dummy variables
. gen dd`i' = district == `i'
. }
Note above that -district- has 60 unique values, but is coded as 1 through
61 with 54 missing. We next obtain the MLEs of the random effects using
-logit- with dummy variables and an offset
. logit c_use dd*, nocons offset(xbeta) // MLEs of random effects
Some dummies are omitted because of perfect prediction, but such is life with
binary data sometimes.
The key here is that by including -xbeta- as an offset we are conditioning on
the regression coefficients as estimated by -xtmelogit-. Not conditioning on
these would be akin to unrestricted logistic regression with covariates and
dummy variables for clusters, in other words, fixed-effects logistic
The only remaining task is to take these logit coefficients and place them in
the data according to the values of -district-. One way is
. gen r_mle = .
. forvalues i = 1/61 {
. replace r_mle = _b[dd`i'] if district == `i'
. }
. replace r_mle = . if r_mle == 0
The last line is necessary because coefficients on dropped variables are
stored as 0 in e(b) rather than missing.
We can now compare with the standard random-effects predictions
. bysort district: gen tolist = _n==1
. list district r_mode r_mle if tolist
Finally, if you compare the standard deviation of the MLEs of the random
effects to that obtained as sd(_cons) from -xtmelogit-
. xtmelogit c_use age || district:
. sum r_mle if tolist
you will still not get matching numbers. The main reason for this is that by
estimating random-effects directly, you are no longer imposing that they are
normally distributed but instead allowing them to "be everything they can be".
I'm not sure how you would estimate individual random effects while at the
same time imposing normality.
[email protected]
Rabe-Hesketh, S. and A. Skrondal. 2008. Multilevel and Longitudinal Modeling
Using Stata. College Station, TX: Stata Press.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"https://www.stata.com/statalist/archive/2010-12/msg00859.html","timestamp":"2024-11-06T17:05:05Z","content_type":"text/html","content_length":"13789","record_id":"<urn:uuid:2d46aadf-8c77-4c37-8af0-b9d6b02e752d>","cc-path":"CC-MAIN-2024-46/segments/1730477027933.5/warc/CC-MAIN-20241106163535-20241106193535-00005.warc.gz"} |
Geometry of Normed Linear Spaces
Geometry of Normed Linear Spaces
eBook ISBN: 978-0-8218-7637-4
Product Code: CONM/52.E
List Price: $125.00
MAA Member Price: $112.50
AMS Member Price: $100.00
• Contemporary Mathematics
Volume: 52; 1986; 171 pp
MSC: Primary 46
These 17 papers result from a 1983 conference held to honor Professor Mahlon Marsh Day upon his retirement from the University of Illinois. Each of the main speakers was invited to take some
aspect of Day's pioneering work as a starting point: he was the first American mathematician to study normed spaces from a geometric standpoint and, for a number of years, pioneered American
research on the structure of Banach spaces.
The material is aimed at researchers and graduate students in functional analysis. Many of the articles are expository and are written for the reader with only a basic background in the theory of
normed linear spaces.
| {"url":"https://bookstore.ams.org/CONM/52","timestamp":"2024-11-08T18:03:48Z","content_type":"text/html","content_length":"79264","record_id":"<urn:uuid:4f3d52d6-7e49-444e-840d-de8923767a-placeholder>","cc-path":"CC-MAIN-2024-46/segments/1730477028070.17/warc/CC-MAIN-20241108164844-20241108194844-00533.warc.gz"}
Free Modular Lattice on 3 Generators
This is the free modular lattice on 3 generators, as drawn by Jesse McKeown. First discovered by Dedekind in 1900, this structure turns out to have an interesting connection to 8-dimensional
Euclidean space.
The set of subspaces of any vector space \(V\) is a lattice: that is, a partially ordered set where every finite subset has a greatest lower bound and a least upper bound.
The ordering is defined by saying \(A \le B\) when the subspace \(A\) is contained in the subspace \(B\). The greatest lower bound of \(A \) and \(B\), denoted \(A \wedge B\), is just the
intersection \(A \cap B\), while the least upper bound of \(A\) and \(B\), denoted \(A \vee B\), is the smallest subspace containing \(A \cup B\). The greatest lower bound of the empty set, or top
element \(\top\), is the whole space \(V\), while the least upper bound of the empty set, or bottom element \(\bot\), is the subspace \(\{0\}\).
A lattice is distributive if
$$ A \vee (B \wedge C) = (A \vee B) \wedge (A \vee C) $$
holds for all \(A,B,C\). (Surprisingly, this is equivalent to requiring the law with the roles of \(\vee\) and \(\wedge\) reversed.) The lattice of subspaces of a finite-dimensional vector space is
not distributive, but it’s always modular, meaning that the distributive law holds when \(A \le B\) or \(A \le C\).
The concept of a lattice can also be defined in a purely algebraic way. Namely, we can define a lattice to be a set with binary operations \( \vee \) and \(\wedge \) that are commutative,
associative, and obey the absorption laws:
$$ A \wedge (A \vee B) = A , \qquad A \vee (A \wedge B) = A $$
together with elements \(\bot\) and \(\top\) that serve as identities for \(\vee\) and \(\wedge\), respectively. Starting from this we may define \(A \le B\) to mean \(A = A \wedge B\), or
equivalently \(B = A \vee B\).
Since a lattice and also a modular lattice is a purely algebraic structure, ideas from universal algebra allow us to define the free modular lattice on any number of generators. This is roughly one
where every element is built from the generators using \(\vee, \wedge, \top\) and \(\bot\) and no relations hold except those that follow from the definition of modular lattice.
A remarkable fact, discovered by Dedekind in 1900, is that the free modular lattice on 3 generators has 30 elements, while the free modular lattice on 4 or more generators is infinite. The Hasse
diagram of the free modular lattice on 3 generators is shown above. This has one dot for each lattice element, and \(A \le B\) iff we can go from the dot for \(A\) to the dot for \(B\) by climbing
upwards along edges.
Here we are simplifying the actual history. In fact, Dedekind worked with a definition of lattice, still widely used, that does not require the existence of \(\bot\) and \(\top\). Thus, his free
modular lattice on 3 generators had only 28 elements: it was missing the bottom and top dots in the picture above. It still had a bottom and top element, but these were not ‘free’: they could be
defined in terms of the generators.
If we call the 3 generators \(X,Y,\) and \(Z\), Dedekind’s 28-element lattice looks like this:
Notice that this picture has 9 levels, or ‘ranks’. The generators \(X,Y,Z\) are on the middle rank. The top 3 ranks form a cube: this consists of \(X \vee Y\), \(X \vee Z\), \(Y \vee Z\) and all
elements formed from these using \(\wedge\) and \(\vee\). Dually, the bottom 3 ranks form a cube consisting of \(X \wedge Y\), \(X \wedge Z\), \(Y \wedge Z\) and the elements formed from these via \
(\wedge\) and \(\vee\).
Dedekind showed that his 28-element lattice could be represented as subspaces of an 8-dimensional vector space. To do this, he chose three subspaces \(X,Y,Z \subseteq \mathbb{R}^8\) and showed that
the lattice they generate has 28 elements.
It’s not a coincidence that 8-dimensional space shows up in this problem! Let us see why.
Start with the Dynkin diagram of \(\mathfrak{so}(8)\), which is the Lie algebra of \(\mathrm{SO}(8)\), the group of rotations in 8 dimensions:
If we draw arrows on its edges like this:
we get a directed graph, also known as a quiver. Let us call this particular directed graph the D[4] quiver, after the name of the Dynkin diagram.
This quiver is closely connected to a famous puzzle: the 3 subspace problem. This problem asks us to classify triples of subspaces of a finite-dimensional vector space \(L\), up to invertible linear
transformations of \(L\). It turns out that for any choice of the dimension of \(L\) there are finitely many possibilities. This is surprisingly nice compared to the 4 subspace problem, where there
are infinitely many possibilities.
One way to solve the 3 subspace problem is to note that 3 subspaces \(L_1,L_2,L_3 \subseteq L\) give a representation of the \(\mathrm{D}_4\) quiver. This fact is trivial: by definition, a
representation of the \(\mathrm{D}_4\) quiver is just a triple of linear maps like this:
and here we are taking those to be inclusions. The nontrivial part is how we can use this viewpoint, together with some quiver representation theory, to solve the 3 subspace problem.
There is an obvious notion of two representations of the \(\mathrm{D}_4\) quiver being isomorphic. We can also take direct sums of quiver representations. We define an indecomposable representation
to be one that is not a direct sum of two others unless one of those others is trivial.
It is a remarkable fact, a spinoff of Gabriel’s theorem, that indecomposable representations of any quiver coming from a Dynkin diagram correspond in a natural one-to-one way with positive roots of
the corresponding Lie algebra. As mentioned, the Lie algebra corresponding to \(\textrm{D}_4\) is \(\mathfrak{so}(8)\), the Lie algebra of the group of rotations in 8 dimensions. This Lie algebra has 12
positive roots. So, the \(\textrm{D}_4\) quiver has 12 indecomposable representations which we list below. The representation coming from any triple of subspaces \(X, Y, Z \subseteq V\) must be a
direct sum of these indecomposable representations, so we can classify the possibilities and solve the 3 subspace problem.
What does this have to do with the free modular lattice on 3 generators?
Given any representation \(f_i : L_i \to L\) of the \(\mathrm{D}_4\) quiver, the images of the maps \(f_i\) generate a sublattice \(\mathcal{L}\) of the lattice of all subspaces of \(L\). \(\mathcal
{L}\) is a modular lattice with 3 generators. So, representations of the \(\mathrm{D}_4\) quiver give modular lattices with 3 generators. Moreover we have:
Theorem 1. If we take a direct sum of indecomposable representations of the \(D_4\) quiver, one from each isomorphism class, we obtain a representation of the \(D_4\) quiver whose corresponding
modular lattice is the free modular lattice on 3 generators. In this representation, say \(f_i : L_i \to L\), the spaces \(\mathrm{im} (f_i)\) have dimension 5 and the space \(L\) has dimension 10.
10 is the smallest possible dimension for a vector space containing subspaces that generate a copy of the free modular lattice on 3 generators.
Proof. This will follow from Theorem 2 below. ▮
Let us call a representation of the \(\mathrm{D}_4\) quiver injective if all 3 maps \(f_i : L_i \to L\) are injective. Of the indecomposable representations of the \(\mathrm{D}_4\) quiver, exactly 3
are not injective:
Note that these representations contribute trivially to the direct sum in Theorem 1. So, we can leave them out of this direct sum without affecting the result. This reduces Theorem 1 to
Theorem 2. If we take a direct sum of injective indecomposable representations of the \(D_4\) quiver, one from each isomorphism class, we obtain an injective representation of the \(D_4\) quiver
whose corresponding modular lattice is the free modular lattice on 3 generators. In this representation, say \(f_i : L_i \to L\), the spaces \(L_i\) have dimension 5 while \(L\) has dimension 10. 10
is the smallest possible dimension for a vector space containing subspaces that generate a copy of the free modular lattice on 3 generators.
Proof. – It is clear that the direct sum of injective representations is injective. The rest will follow from Theorem 3. ▮
If we want to get Dedekind’s 28-element lattice, we need to leave out two injective indecomposable representations from the direct sum. To understand what is special about these two, and explicitly
construct Dedekind’s lattice, let us go ahead and list the 12 indecomposable representations of the \(\mathrm{D}_4\) quiver.
For the 9 injective ones, we can assume without loss of generality that the maps are inclusions of subspaces. One of the injective indecomposable representations involves a triple of subspaces of
zero dimension, while another involves a triple of subspaces that all equal the space they are included in:
The remaining 7 injective indecomposable representations are the ones relevant to Dedekind’s 28-element lattice. We call them \(A, B, C, D, E, F\) and \(G\):
If we take the direct sum of all 7 of these quiver representations we obtain a representation that we call
Here the central dot of the \(D_4\) quiver has been assigned a vector space \(V\) which contains 3 subspaces \(X,Y,Z\). We have:
Theorem 3. The modular lattice generated by \(X,Y,Z \subset V\) is the free modular lattice on 3 generators with its top and bottom removed: that is, Dedekind’s 28-element lattice. \(V\) is
8-dimensional, and Dedekind’s lattice cannot be embedded in the lattice of subspaces of a vector space of dimension \(\lt 8\).
Theorem 3 easily implies Theorem 2. To prove Theorem 3, we shall explicitly compute the vector space \(V\) and its three subspaces \(X,Y,Z\).
Abusing notation a little, let us write
$$ V = A \oplus B \oplus C \oplus D \oplus E \oplus F \oplus G $$
where now \(A, \dots, G\) denote the vector spaces assigned to the central dot by the quiver representations of the same name. Looking at the list above we see that \(A, B, C, D, E\) and \(F\) are
1-dimensional while \(G\) is 2-dimensional. So,
$$ V \cong \mathbb{R}^8 .$$
The next step is to determine the 3 subspaces \(X,Y,Z \subset V\).
We need a bit more notation to name all the subspaces associated to the quiver representation \(G\):
We are calling the copy of \(\mathbb{R}^2\) here \(G\). Let us denote the three copies of \(\mathbb{R}\) by \(G_1, G_2,\) and \(G_3\), starting at the top and going around clockwise:
The vector space $G$ is 2-dimensional, while $G_1, G_2, G_3$ are 1-dimensional and distinct, so
$$ G_1 \vee G_2 = G_2 \vee G_3 = G_3 \vee G_1 = G $$
$$ G_1 \wedge G_2 = G_2 \wedge G_3 = G_3 \wedge G_1 = \{0\} . $$
We can now determine the subspaces \(X,Y,Z \subset V\). The subspace \(X\) is the direct sum of the vector spaces assigned to the top dot of the \(\mathrm{D}_4\) quiver by all 7 quiver
representations under consideration. So, using the notation we have set up,
$$ X = A \oplus E \oplus F \oplus G_1 .$$
$$ Y = B \oplus D \oplus F \oplus G_2 $$
$$ Z = C \oplus D \oplus E \oplus G_3 .$$
Using these facts we can work out the subspace of
$$ V = A \oplus B \oplus C \oplus D \oplus E \oplus F \oplus G \cong \mathbb{R}^8 $$
corresponding to any element of the lattice generated by $X,Y,Z$. For example, we have
$$ X \wedge Y = ( A \oplus E \oplus F \oplus G_1) \wedge (B \oplus D \oplus F \oplus G_2) = F$$
where we used the fact that $G_1 \wedge G_2 = \{0\}$.
Greg Egan has summarized the results in this chart:
Tim Silverman prepared this table of the results:
$$
\begin{array}{ll}
\textrm{dimension 8:} & \\
X\vee Y\vee Z & A\oplus B\oplus C\oplus D\oplus E\oplus F\oplus G\\
\textrm{dimension 7:} & \\
X\vee Y & A\oplus B\oplus D\oplus E\oplus F\oplus G\\
X\vee Z & A\oplus C\oplus D\oplus E\oplus F\oplus G\\
Y\vee Z & B\oplus C\oplus D\oplus E\oplus F\oplus G\\
\textrm{dimension 6:} & \\
(X\vee Y)\wedge (X\vee Z) & A\oplus D\oplus E\oplus F\oplus G\\
(X\vee Y)\wedge (Y\vee Z) & B\oplus D\oplus E\oplus F\oplus G\\
(X\vee Z)\wedge (Y\vee Z) & C\oplus D\oplus E\oplus F\oplus G\\
\textrm{dimension 5:} & \\
(X\vee Y)\wedge (X\vee Z)\wedge (Y\vee Z) & D\oplus E\oplus F\oplus G\\
X\vee (Y\wedge Z) & A\oplus D\oplus E\oplus F\oplus G_1\\
Y\vee (X\wedge Z) & B\oplus D\oplus E\oplus F\oplus G_2\\
Z\vee (X\wedge Y) & C\oplus D\oplus E\oplus F\oplus G_3\\
\textrm{dimension 4:} & \\
(X\vee (Y\wedge Z))\wedge (Y\vee Z) & D\oplus E\oplus F\oplus G_1\\
(Z\vee (X\wedge Y))\wedge (X\vee Y) & D\oplus E\oplus F\oplus G_3\\
(Y\vee (X\wedge Z))\wedge (X\vee Z) & D\oplus E\oplus F\oplus G_2\\
X & A\oplus E\oplus F\oplus G_1\\
Y & B\oplus D\oplus F\oplus G_2\\
Z & C\oplus D\oplus E\oplus G_3\\
\textrm{dimension 3:} & \\
(X\wedge Y)\vee (X\wedge Z)\vee (Y\wedge Z) & D\oplus E\oplus F\\
X\wedge (Y\vee Z) & E\oplus F\oplus G_1\\
Y\wedge (X\vee Z) & D\oplus F\oplus G_2\\
Z\wedge (X\vee Y) & D\oplus E\oplus G_3\\
\textrm{dimension 2:} & \\
(X\wedge Y)\vee (X\wedge Z) & E\oplus F\\
(X\wedge Y)\vee (Y\wedge Z) & D\oplus F\\
(X\wedge Z)\vee (Y\wedge Z) & D\oplus E\\
\textrm{dimension 1:} & \\
X\wedge Y & F\\
X\wedge Z & E\\
Y\wedge Z & D\\
\textrm{dimension 0:} & \\
X\wedge Y\wedge Z & \{0\}
\end{array}
$$
Since all 28 of these subspaces are distinct, the lattice generated by \(X,Y,\) and \(Z\) is the same as the free modular lattice on 3 generators with \(\top\) and \(\bot\) removed: that is,
Dedekind’s original lattice. One can check that Dedekind’s lattice does not embed in the lattice of subspaces of a space of dimension \(\lt 8\), because the corresponding quiver representation needs
to contain copies of all 7 quiver representations \(A,\dots, G\), or extra relations would hold.
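As a rough computational cross-check of this count (again my own sketch, not part of the original argument), one can start from \(X, Y, Z\) in the coordinate realization assumed in the earlier sketch and close the set under sum and intersection. Subspaces are compared here by rounding their orthogonal projectors, which happens to be adequate for this rational example but is not a robust general-purpose technique.

```python
# Close {X, Y, Z} under join (sum) and meet (intersection) and count the
# distinct subspaces; with the assumed coordinate realization the answer is 28.
import numpy as np
from itertools import combinations

TOL = 1e-10

def basis(U):
    """Orthonormal basis for the column space of U (possibly 8 x 0)."""
    if U.shape[1] == 0:
        return U
    Q, s, _ = np.linalg.svd(U, full_matrices=False)
    return Q[:, : int(np.sum(s > TOL))]

def join(U, W):
    return basis(np.hstack([U, W]))

def meet(U, W):
    if U.shape[1] == 0 or W.shape[1] == 0:
        return np.zeros((8, 0))
    _, s, Vt = np.linalg.svd(np.hstack([U, -W]))
    r = int(np.sum(s > TOL))
    a = Vt[r:].T[: U.shape[1], :]        # null-space coefficients on the U side
    return basis(U @ a)

def key(U):
    """Hashable canonical form: the rounded orthogonal projector onto U."""
    return tuple(np.round(U @ U.T, 6).ravel())

e = np.eye(8)                                        # assumed basis, as in the sketch above
X = basis(e[:, [0, 4, 5, 6]])                        # A ⊕ E ⊕ F ⊕ G1
Y = basis(e[:, [1, 3, 5, 7]])                        # B ⊕ D ⊕ F ⊕ G2
Z = basis(np.hstack([e[:, [2, 3, 4]], e[:, [6]] + e[:, [7]]]))   # C ⊕ D ⊕ E ⊕ G3

lattice = {key(U): U for U in (X, Y, Z)}
grew = True
while grew:
    grew = False
    for (_, U), (_, W) in combinations(list(lattice.items()), 2):
        for V in (join(U, W), meet(U, W)):
            k = key(V)
            if k not in lattice:
                lattice[k] = V
                grew = True

print(len(lattice))    # 28, matching Dedekind's count
```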
Puzzle 1. Is it a coincidence that \(V\) is 8-dimensional and the \(\mathrm{D}_4\) quiver is associated to the \(\mathfrak{so}(8)\) Lie algebra? There is no obvious relation visible in the argument above.
Puzzle 2. Is it a coincidence that Dedekind’s lattice has 28 elements and \(\mathfrak{so}(8)\) is 28-dimensional? Again this relation plays no obvious role in the argument above.
As a hint for Puzzle 2, Hugh Thomas pointed out that the portion of the free modular lattice on 3 generators above the middle rank is isomorphic to the poset of positive roots of \(\mathfrak{so}(8)\), with its usual ordering. This poset has 12 elements. Similarly, we can identify the portion below the middle rank with the poset of negative roots. The reason for this is unclear, and this leaves the middle rank somewhat mysterious: it has \(30 - 12 - 12 = 6\) elements. The Lie algebra \(\mathfrak{so}(8)\) is spanned by root vectors for the positive and negative roots together with its Cartan subalgebra, so its Cartan subalgebra has dimension \(28 - 12 - 12 = 4\).
For Dedekind’s original paper, see:
• Richard Dedekind, Über die von drei Moduln erzeugte Dualgruppe, Mathematische Annalen 53 (1900), 371–403.
For the four subspace problem, see:
• I. M. Gelfand and V. A. Ponomarev, Problems of linear algebra and classification of quadruples of subspaces in a finite-dimensional vector space, Coll. Math. Soc. Bolyai 5 (1970), 163–237.
To see the difficulty with the problem, note that starting with 4 generically chosen points on a plane, and repeatedly drawing lines through points and creating new points by intersecting lines, one
can generate infinitely many points and lines. Viewing this in terms of projective geometry, it follows that starting with 4 generically chosen 2-dimensional subspaces in \(\mathbb{F}^3\), where \(\mathbb{F}\) is an infinite field, one can generate infinitely many subspaces using \(\vee\) and \(\wedge\).
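To see this growth concretely, here is a small sketch in homogeneous coordinates over the rationals (the construction and the four starting points are my own arbitrary, generic-looking choice, not taken from the references): the line through two points and the intersection point of two lines are both given by cross products, and alternating the two constructions keeps producing new points and lines.

```python
# Repeatedly join points into lines and intersect lines into points, starting
# from 4 generic points of the projective plane; the counts keep growing.
from fractions import Fraction
from itertools import combinations

def cross(u, v):
    """Line through two points, or intersection of two lines (homogeneous coords)."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def normalize(w):
    """Scale so the first nonzero coordinate is 1; None for the zero triple."""
    for c in w:
        if c != 0:
            return tuple(Fraction(x) / Fraction(c) for x in w)
    return None

# An arbitrary, generic-looking starting configuration (no three collinear).
points = {normalize(p) for p in [(1, 0, 1), (0, 1, 1), (1, 1, 1), (2, 3, 1)]}
lines = set()

for step in range(1, 4):
    lines |= {normalize(cross(p, q)) for p, q in combinations(points, 2)} - {None}
    points |= {normalize(cross(l, m)) for l, m in combinations(lines, 2)} - {None}
    print(step, len(points), len(lines))   # counts never stabilize for a generic start
```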
The new ideas in this post, if any, were obtained collaboratively in discussions here:
• John Baez, The free modular lattice on 3 generators, The n-Category Café, 19 September 2015.
• John Baez, How is the free modular lattice on 3 generators related to 8-dimensional space?, MathOverflow, 20 September 2015.
Visual Insight is a place to share striking images that help explain advanced topics in mathematics. I’m always looking for truly beautiful images, so if you know about one, please drop a comment
here and let me know!
13 thoughts on “Free Modular Lattice on 3 Generators”
1. It seems as though this structure could be realizable (though not necessarily synthesizable) with carbon and hydrogen atoms. I wonder if anyone has tried?
□ Wow, I have no idea. There is a kind of subculture of people who try to synthesize chemicals based on mathematical concepts. Check out these:
• John Baez, Cubane, 5 March 2012.
• John Baez, Dodecahedrane, 5 March 2012.
However, I don’t think those people know about the free modular lattice on 3 generators!
2. This lattice, visually at least, greatly resembles an extended version of the free distributive lattice on three generators. If you cut out the middle layer of the free modular lattice and
identify the corresponding elements of the adjacent two layers, you get the free distributive lattice. Do these lattices continue to be closely related for higher numbers of generators, and if so
is there a clear way of stating the resemblance that doesn’t depend on the number of generators?
□ Intriguing questions!
Is there a picture somewhere of the free distributive lattice on 3 generators? Or a nice description of the free distributive lattice on \(n\) generators?
Since the modular identity is a weakened form of the distributive law, there is a lattice homomorphism from the free modular lattice on \(n\) generators to the free distributive lattice on \
(n\) generators, say
$$ \alpha: \mathrm{FMod}_n \to \mathrm{FDist}_n $$
That must the reason for the resemblance you see. But since \(\alpha\) is onto, each element of \(\mathrm{FDist}_n\) corresponds to some subset of \(\mathrm{FMod}_n\): an equivalence class.
That is, we get \(\mathrm{FDist}_n\) by ‘squashing down’ \(\mathrm{FMod}_n\).
So, ‘cutting out the middle layer’ is not the right way to think of what’s going on, but ‘identifying elements’ is.
With some work I could use this to take Egan’s labelled picture of the \(\mathrm{FMod}_3\), figure out which expressions become the same when we use the distributive law, and get a picture of
\(\mathrm{FDist}_3\). But I don’t have the energy right now!
As mentioned in the blog article, the big difference between \(\mathrm{FMod}_3\) and \(\mathrm{FMod}_4\) is that the latter is infinite.
Is \(\mathrm{FDist}_n\) always finite? I seem to recall it is.
☆ The free distributive lattice on $n$ generators is the lattice of monotone Boolean functions on $n$ variables, with conjunction and disjunction as the lattice operations. (Each generator
maps to the function that outputs one of the variables and ignores the other variables.) There’s an image, with lattice elements labeled by the corresponding functions, at https://
☆ Yes, this is what I mean too. I like how the weaker modular rule gives you a freer object, and enforcing the stricter distributive rule squashes the object down a bit. If you go to the
wiki article on Dedekind numbers, you’ll see a nice picture of the lattice at the top right. The Dedekind numbers grow very fast, and there’s definitely some recursion involved, but as of
right now there isn’t a closed formula. Also, the 4th Dedekind number turns out to be 168. I’m really curious about how \(\mathrm{FMod}_4\) looks. Is it related to the modular group?
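Since the free distributive lattice on \(n\) generators can be identified with monotone Boolean functions of \(n\) variables (as David Eppstein notes above), the small Dedekind numbers are easy to verify by brute force. The sketch below is only an illustration, feasible up to about \(n = 4\); it counts the monotone functions directly, with the two constant functions included, and reproduces 2, 3, 6, 20, 168.

```python
# Count monotone Boolean functions of n variables (the Dedekind numbers).
# Brute force over all 2^(2^n) truth tables, so only practical for n <= 4.
from itertools import product

def is_monotone(table, n):
    # Flipping a single input bit from 0 to 1 must never decrease the output.
    for x in product((0, 1), repeat=n):
        for i in range(n):
            if x[i] == 0:
                y = x[:i] + (1,) + x[i + 1:]
                if table[x] > table[y]:
                    return False
    return True

def dedekind(n):
    inputs = list(product((0, 1), repeat=n))
    return sum(is_monotone(dict(zip(inputs, outputs)), n)
               for outputs in product((0, 1), repeat=len(inputs)))

print([dedekind(n) for n in range(5)])   # [2, 3, 6, 20, 168]
```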
☆ I don’t know what the free modular lattice on 4 generators looks like, but just as the free modular lattice is related to the 3 subspace problem as discussed here, the free modular
lattice on 4 generators is related to the 4 subspace problem.
The 4 subspace problem asks to classify all ways one can choose 4 subspaces \(L_1, \dots, L_4\) of a finite-dimensional vector space \(L\), up to invertible linear transformations of \(L
\). Any choice of 4 subspaces of a finite-dimensional vector spaces gives a ‘linear representation’ of the free modular lattice on 4 generators, and two such representations are
equivalent if they differ by an invertible linear transformation of \(L\).
The 4 subspace problem is famous for being “wild”, unlike the 3 subspace problem. This is related to the free modular lattice on 4 generators being infinite.
If you can get ahold of it, the classical reference is here:
• I. M. Gelfand and V. A. Ponomarev, Problems of linear algebra and classification of quadruples of subspaces in a finite-dimensional vector space, Coll. Math. Soc. Bolyai 5 (1970), 163–237.
It seems easier to get ahold of this paper, since it’s free online:
• Rafael Stekolshchik, Gelfand–Ponomarev and Herrmann constructions for quadruples and sextuples, Journal of Pure and Applied Algebra 211 (2007), 95–202.
I think it will give you a rough idea of what’s going on, though it’s mainly about another problem involving 6 subspaces. It quotes Gian-Carlo Rota as saying
. . . He [Gelfand] took me aside and explained to me why it is important to study the structure consisting of four subspaces of a vector space V… Thus, all invariants of matrices….
are encoded in the invariant theory of four subspaces. It must be added that the invariant theory of four subspaces is more general than the invariant theory of operators. The problem
is that of explicitly describing the lattice of subspaces generated by four subspaces in general position. . . . The lattice generated by four subspaces in general position is
infinite, but nevertheless it should be explicitly described in some sense or other.
This is from Rota’s article “Ten mathematics problems I will never solve”.
By the way, you have to be careful, because the classification of linear representations of a modular lattice depends on a choice of field, while the free modular lattice on \(n\)
generators does not. Worse, it’s possible that when \(n\) is large enough there are distinct elements of the free modular lattice on \(n\) generators that cannot be distinguished by any
linear representation. I forget the story here, but I seem to recall reading something about this.
3. It looks like if you identify nodes up to the logical biconditional, you get a picture that looks like this:
And these are counted by Dedekind numbers. Apparently only the first 8 Dedekind numbers are known. I wonder if this could shed some light.
□ I don’t know what you mean by ‘identify nodes up to the logical biconditional’, since two elements \(a,b\) of a modular lattice both imply each other iff they are equal.
I’m suspecting that you’re actually replacing the free modular lattice on \(n\) generators, \(\mathrm{FMod}_n\) by the lattice of monotone functions \(f : \mathrm{FMod}_n \to \{0,1\} \). If I
remember correctly, this is the same as the lattice of ideals of \(\mathrm{FMod}_n\). And if I remember correctly, this construction—replacing a lattice by its lattice of ideals—is equivalent
to taking that lattice and forcing it to be distributive, by identifying expressions that become equal when you get to use the distributive law.
If I’m right, then, you’re taking \(\mathrm{FMod}_n\) and performing a quotient (just as mentioned in David Eppstein’s comment) to get \(\mathrm{FDist}_n\).
One big piece of evidence for my guesses here is that the \(n\)th Dedekind number is the number of elements of \(\mathrm{FDist}_n\)!
(By the way, an irksome feature of this blog is that even I cannot get pictures to appear in comments. On my other WordPress blog I can: other people can’t make images appear in comments,
perhaps as an anti-spam feature, but if they include the links I can make the images appear. But on this one, despite the name Visual Insight, I cannot! This blog is run by the American
Mathematical Society, so I should force someone there to fix this.)
☆ I also don’t know what “up to the logical biconditional” means but I think the identification of nodes that turns this into the free distributive lattice is the following. At the center
of the diagram there are five lattice elements forming a $K_{2,3}$ subgraph. Collapse the six edges of this subgraph, and also collapse another six edges parallel to the first six (where
by two edges being parallel I mean that they are two opposite sides of a 4-cycle). So the central five elements of the modular lattice map to a single central element of the distributive
lattice (the majority function) and six more pairs of elements of the modular lattice have the same image as each other in the distributive lattice.
The six identified pairs end up being instances of the distributive law (in retrospect, unsurprisingly) e.g. $x\vee(y\wedge z)=(x\vee y)\wedge(x\vee z)$. The identification of the five in
the middle is messier when expressed algebraically, though.
4. Following your invitation, I’ll make an attempt to comment here as if that was the same as commenting on G+… what incites me first to pack together random comments I’d most likely have
interjected separately on a social network comment stream:
(1) The rotating image is reminiscent of a Crookes radiometer.
(2) Does the “three” in “free lattice on three generators” deserve taking as a hint that the latter could serve in a narrative about the color force?
5. The article http://web.mst.edu/~insall/Research/Algebra%20and%20Nonstandard%20Algebra/Lattices/Geom/Geometric%20Conditions%20for%20Local%20Finiteness.pdf gave a simple argument that the lattice
of closed convex sets in the plane is not modular, by finding three closed segments A, B, and C that generate an infinite lattice of closed convex subsets of the plane. Near the end of the
article, a conjecture is made about locally finite lattices of closed convex subsets of Hilbert Spaces, but that conjecture has not borne up under scrutiny.
□ Interesting!
Introduction to Vectors: Magnitude and Direction
⬆️Vectors are represented as directed line segments with magnitude and direction.
🔢Vector notation includes initial and terminal points, conveying direction clearly.
📐The direction angle of a vector helps define its orientation in a coordinate system.
🧮Calculating the magnitude of a vector involves the square root of the sum of squared components.
🔺Understanding quadrant placement is crucial when determining the direction angle of a vector (see the sketch below).
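A minimal computational sketch of the last two points (my own illustration, not taken from the video): the magnitude is the square root of the sum of squared components, and atan2 handles the quadrant automatically when computing the direction angle.

```python
# Magnitude and direction angle of a 2-D vector (illustrative example values).
import math

def magnitude(x, y):
    return math.sqrt(x ** 2 + y ** 2)             # square root of the sum of squares

def direction_angle(x, y):
    """Angle measured counterclockwise from the positive x-axis, in [0, 360)."""
    return math.degrees(math.atan2(y, x)) % 360   # atan2 picks the correct quadrant

# Example: the vector <-3, 4> lies in the second quadrant.
print(magnitude(-3, 4))         # 5.0
print(direction_angle(-3, 4))   # roughly 126.87 degrees
```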
Korrelationstage 2023
chair: Johannes Richter
Annabelle Bohrdt (Universität Regensburg & Harvard University)
How to find and probe pairs in doped quantum
09:00 – 09:30
A key step in unraveling the mysteries of materials exhibiting unconventional superconductivity is to understand the underlying pairing mechanism. While it is widely agreed upon that the pairing glue in many of these systems originates from antiferromagnetic spin correlations, a microscopic description of pairs of charge carriers remains lacking. In this talk, I will present state-of-the-art numerical methods to probe the internal structure and dynamical properties of pairs of charge carriers in quantum antiferromagnets in four-legged cylinders. Exploiting the full momentum resolution in our simulations, we are able to distinguish two qualitatively different types of bound states: a highly mobile, meta-stable pair, which has a dispersion proportional to the hole hopping t, and a heavy pair, which can only move due to spin exchange processes and turns into a flat band in the Ising limit of the model. We find qualitatively good agreement with the semi-analytical geometric string theory. Based on the intuition gained with the geometric string theory, we introduce mixed-dimensional models, which exhibit binding energies of the order of the spin exchange J and highly mobile pairs, and can be realized using cold atoms in optical lattices. We moreover relate the pair spectral function to the properties of Fermi-Hubbard excitons and draw connections to the optical conductivity, thus enabling insights from and connections between theoretical models, quantum simulators, and solid state experiments.
Natalia Chepiga (TU Delft)
Resilient infinite randomness criticality for a disordered chain of interacting Majorana fermions
09:30 – 10:00
The quantum critical properties of interacting fermions in the presence of disorder are still not fully understood. While it is well known that for Dirac fermions, interactions are irrelevant to the non-interacting infinite randomness fixed point, the problem remains largely open in the case of Majorana fermions which further display a much richer disorder-free phase diagram. Pushing the limits of DMRG simulations, we carefully examine the ground-state of a Majorana chain with both disorder and interactions. Building on appropriate boundary conditions and key observables such as entanglement, energy gap, and correlations, we strikingly find that the non-interacting Majorana IRFP is very stable against finite interactions, in contrast with previous
Thomas Barthel (Duke University)
Quantum simulation of condensed matter using Trotterized tensor networks
10:00 – 10:30
First, I will describe a variational quantum eigensolver for the simulation of strongly-correlated quantum matter based on a multi-scale entanglement renormalization ansatz (MERA) and gradient-based optimization. Due to its narrow causal cone, the algorithm can be implemented on noisy intermediate-scale (NISQ) devices and still describe large systems. The number of required qubits is system-size independent and increases only to a logarithmic scaling when using quantum amplitude estimation to speed up gradient evaluations. Translation invariance can be used to make computation costs square-logarithmic in the system size and describe the thermodynamic limit. For the practical implementation, the MERA disentanglers and isometries are Trotterized, i.e., implemented as brickwall circuits. With a few Trotter steps, one recovers the accuracy of the full MERA. Results of benchmark simulations for various critical spin models establish a quantum advantage. Secondly, I will address the question of barren plateaus in the optimization of isometric tensor network states. Barren plateaus correspond to scenarios where the average amplitude of the cost function gradient decreases exponentially with increasing system size. This occurs, for example, for quantum neural networks. We found that, in systems with finite-range interactions, variational optimization problems for matrix product states, tree tensor networks, and MERA are free of barren plateaus. The derived scaling properties of gradient amplitudes establish trainability and bear implications for efficient initialization procedures.
References:
arXiv:2108.13401 - Q. Miao and T. Barthel, "A quantum-classical eigensolver using multiscale entanglement renormalization"
arXiv:2303.08910 - Q. Miao and T. Barthel, "Convergence and quantum advantage of Trotterized MERA for strongly-correlated systems"
arXiv:2304.00161 - T. Barthel and Q. Miao, "Absence of barren plateaus and scaling of gradients in the energy optimization of isometric tensor network states"
arXiv:2304.14320 - Q. Miao and T. Barthel, "Isometric tensor network optimization for extensive Hamiltonians is free of barren plateaus"
10:30 coffee break
chair: Andreas Honecker
Reinhard Noack (Philipps-Universität Marburg)
Finite PEPS and mode-transformation DMRG methods for two-dimensional Hubbard and related models
11:00 – 11:30
I discuss our work in applying tensor-network methods to two-dimensional strongly correlated electron models. First, I discuss our adaptation of the finite-projected-entangled-pair-state algorithm (fPEPS) for the two-dimensional Hubbard model. This adaptation uses projected entangled pair operators (PEPOs), takes full advantage of SU(2) symmetry, carries out both local variational optimization and global gradient-based optimization of the PEPS, and implements a number of other optimizations, so that we can treat lattices of up to 8x8 with PEPS bond dimension of up to 8. I discuss the accuracy and performance of this algorithm and its effectiveness relative to other methods. Second, I discuss work applying the DMRG with integrated mode transformations to Hubbard-like models in general, applying it specifically to a two-dimensional model of spinless fermions with nearest- and next-nearest-neighbor hopping and nearest-neighbor Coulomb repulsion. I discuss the accuracy, computational cost and performance of the method as well as the ground-state phase diagram of the spinless fermion model at half filling.
Attila Szabó (Max Planck Institute for the Physics of Complex Systems)
High-accuracy variational Monte Carlo for frustrated magnets with deep neural networks
11:30 – 12:00
In this talk, I will show that neural quantum states based on very deep (4-16-layered) neural networks can outperform state-of-the-art variational approaches on highly frustrated quantum magnets. In particular, we use group convolutional neural networks (GCNNs), which allow us to impose all space-group symmetries and to target nontrivial symmetry sectors variationally. I will demonstrate the power of our method by obtaining state-of-the-art ground- and excited-state energies for the $J_1-J_2$ Heisenberg model on the square and the triangular lattices. I will also discuss GCNN studies of Heisenberg models on fullerene geometries, where we obtained the spectrum of low-lying excited states resolved by point-group symmetry. On larger fullerenes, these contain distinct “towers of states” akin to those expected in an ordered magnet: Indeed, we are able to reconstruct the corresponding “magnon operators” from the ground-state correlation functions, which show the gradual emergence of honeycomb-like Néel order as the degree of frustration reduces.
Stefan Wessel (RWTH Aachen)
Reduced basis surrogates of quantum many-body systems based on tensor networks
12:00 – 12:30
Within the reduced basis methods approach, an effective low-dimensional subspace of a quantum many-body Hilbert space is constructed in order to investigate, e.g., the ground-state phase diagram. The basis of this subspace is built from solutions of snapshots, i.e., ground states corresponding to particular and well-chosen parameter values. Here, we show how a greedy strategy to assemble the reduced basis and thus to select the parameter points can be implemented based on matrix-product-state (MPS) calculations. Once the reduced basis has been obtained, observables required for the computation of phase diagrams can be computed with a computational complexity independent of the underlying Hilbert space for any parameter value. We illustrate the efficiency and accuracy of this approach for different one-dimensional quantum spin-1 models, including anisotropic as well as biquadratic exchange interactions, leading to rich quantum phase diagrams.
12:30 lunch
13:30 discussion
chair: Matthias Vojta
Mathias Scheurer (University of Stuttgart)
Exotic many-body physics in van der Waals moiré systems
14:00 – 14:30
When two layers of graphene are stacked on top of each other with a finite relative angle of rotation, a moiré pattern forms. Most strikingly, at so-called “magic angles”, the largest of which is around 1 degree, the bands around the Fermi surface flatten significantly; this enhances the density of states and the impact of electron-electron interactions. Soon after the experimental discovery in 2018 that this enhancement can induce superconductivity and other, including magnetic, instabilities, it became clear that twisted bilayer graphene is only one example of an engineered van der Waals moiré system with a complex phase diagram akin to other strongly correlated materials. In this talk, I will provide a brief introduction to the rich and diverse field of moiré superlattices built by stacking and twisting graphene and other van der Waals materials. I will further present recent and ongoing projects – involving a combination of analytics, numerics, machine-learning, and experiment – which explore the exotic quantum many-body phases that can be stabilized in these platforms.
Gaurav Chaudhary (University of Cambridge)
Flat bands and correlations in twisted multilayers of topological insulators
14:30 – 15:00
Twisted bilayer graphene (TBG) near the "magic angles" has emerged as a rich platform for strongly correlated states of two-dimensional (2D) Dirac semimetals. Topological insulator thin films, because of their ability to host low energy Dirac nodes, present another platform where "twistronics" can be used to engineer flat bands. However, topological insulator systems encounter some theoretical difficulties in engineering flat bands using the twistronics approach. I will discuss these issues. Using simple surface state electronic models for thin film magnetic topological insulators, I will show how flat moire bands can still be achieved in these systems. I will discuss the similarity and differences of such moire systems with the twisted multilayers of graphene and more recent developments in twisted multilayers of cuprate superconductors. Finally, I will discuss possible many body phases that can appear in these moire systems.
Wonjune Choi (Technical University of Munich)
Finite temperature entanglement negativity of fermionic topological phases and quantum critical points
15:00 – 15:30
We study the logarithmic entanglement negativity of symmetry-protected topological phases (SPTs) and quantum critical points (QCPs) of one-dimensional noninteracting fermions at finite temperatures. In particular, we consider a free fermion model which realizes not only quantum phase transitions between gapped topological phases but also an exotic topological phase transition between quantum critical states, namely the fermionic Lifshitz transition. We show that the bipartite entanglement negativity between two adjacent blocks of fermions sharply reveals the crossover boundary of the quantum critical fan near the QCP between the gapped phases. Along the critical phase boundary between the gapped phases, the sudden decrease in the entanglement negativity signals the fermionic Lifshitz transition responsible for the change in the topological nature of the QCPs. The high-temperature series expansion of the density operator shows that the entanglement negativity of every gapped and gapless state is converged to zero as $\sim T^{-2}$ in the high-temperature limit. We further demonstrate that the tripartite entanglement negativity between two spatially separated disjoint blocks of fermions can count the number of topologically protected boundary modes for both SPTs and topologically nontrivial QCPs at zero temperature. The long-distance entanglement between the boundary modes vanishes at finite temperatures due to the instability of SPTs protected by on-site symmetries.
15:30 coffee break
chair: Sonia Haddad
Lucile Savary (CNRS, École Normale Supérieure de Lyon)
Thermal Hall transport from phonons in quantum materials
16:00 – 16:30
I will present several approaches to the study of thermal Hall transport in quantum materials (esp. quantum magnets) when the main thermal Hall contribution is due to the transport of phonons coupled with electronic degrees of freedom.
Urban Seifert (University of California, Santa Barbara)
Moiré-Mott insulators in transition metal dichalcogenides: Spin liquids and doping
16:30 – 17:00
Moiré heterostructures of transition metal dichalcogenides (TMD) have been shown to give rise to correlated insulating states at fractional fillings, forming self-organized charge lattices. The combination of spin-orbit coupling and moiré modulation can lead to pseudomagnetic fields and associated flux patterns for electrons. We explore the possibility of spin-liquid states in these systems. We further discuss the effects of doping these correlated insulating states at fractional fillings.
Robin Schäfer (Boston University)
Abundance of hard-hexagon crystals in the quantum pyrochlore antiferromagnet
17:00 – 17:30
We present a novel proposal for potential ground states of the $S=1/2$ and $S=1$ Heisenberg antiferromagnet on the pyrochlore lattice. The ground-state candidates form a simple family that is exponentially numerous in the linear size of the system. They can be visualized as coverings of hard hexagons, with each hexagon representing a resonating valence-bond ring, breaking various lattice symmetries such as rotation, inversion, and translation. By evaluating a simple variational wavefunction based on a single hard-hexagon covering, we achieve a precise variational energy consistent with density matrix renormalization group predictions and a numerical linked cluster expansion technique carried out at zero temperature. The scenario of a hard-hexagon state is backed up by carefully examining excitations on top of the valence-bond crystal as it provides further evidence of its stability. Our findings have broader implications as they extend to other frustrated magnets, such as the two-dimensional ruby and checkerboard lattices. In total, our work offers a new perspective, where the frustration effectively decouples unfrustrated motifs -- the hard hexagons in the pyrochlore lattice -- in quantum magnets.
[1] Robin Schäfer, Benedikt Placke, Owen Benton, Roderich Moessner, arXiv:2210.07235
17:30 informal discussion with senior scientists
dinner
Pappu Murthy - MATLAB Central
Pappu Murthy
Last seen: 1 year ago |  Active since 2014
Followers: 0 Following: 0
Professional Interests: signal processing
67 Questions
2 Answers
0 Files
0 Problems
0 Solutions
How to control the font of Contour "ShowText" option?
I am generating 2-D contour filled plot using the function "contourf". Further I specify levels and mark the levels with "ShowTe...
1 year ago | 3 answers | 0
Deep Neural LSTM Network Issues
I am training a Deep Neural Net with a regression layer in the end. I have 20 inputs and a sequence of output with 10 steps. I ...
2 years ago | 1 answer | 0
What does this Digital filter actually do? and what is it called?
I am using a filter defined by Matrix [1 1 1 1 1;1 2 2 2 1; 1 2 4 2 1; 1 2 2 2 1; 1 1 1 1 1]; on a 2 D image with filter2. I...
2 years ago | 1 answer | 0
What is ICMFrequency? and How do you specify it?
I came across with an issue while fitting ecdf function for a large dataset (>700 ). When I specify the NameValue parameter "ICM...
2 years ago | 1 answer | 0
Sobol First order and total sensitivities
I have 19 input variables and two outputs. I am focusing on one at a time. I have a closed form model flike Y = F(x1, x2, x3.......
2 years ago | 0 answers | 0
Deep Neural Net for Regression
I am training a NN with 19 inputs and two outputs. I have 1000 random observations, and I am using the first 800 for training, t...
3 years ago | 1 answer | 0
Need to find "Knees" in Stress Strain curves where slope change abruptly
I have stress strain curve which is basically peicewise linear. I need to find everytime there is a sudden change in the slope, ...
3 years ago | 1 answer | 0
how to use betafit for a data that is outside the range of 0 to 1.?
I have numbers between 10 to 20. If I try to use betafit it refuses to fit because the range is not from 0 to 1. If I have numbe...
3 years ago | 1 answer | 0
How to test montonousness of X Y data
I have sets of XY data with anywhere from 1 to 10 elements in it. I need to check whether x vs y is montonously increasing or no...
3 years ago | 1 answer | 0
Adding text programmatically to a figure
I have a figure with several subplots in it. I want to place a textbox with some text init which belongs to the entire figure. H...
3 years ago | 2 answers | 0
I am struggling a lot in my NN project
I have 20 inputs and one output. I am trying to train using the Deepnetwork work flow. I tried several combinations of hidden la...
3 years ago | 1 answer | 0
writecell command is not working.
When trying to write cell with only entity it does not work. if I have more than one entity it works fine. For e.g. Hdr1 = {'T...
3 years ago | 1 answer | 0
feedforward net for regression.
I have 1000 sets of data. each set consists of 4 input variables and one output variable. Variables 1, 2, and 3 remain same but ...
3 years ago | 1 answer | 0
Global sensitivity analysis with Input and output data only
I have a Matrix of input variables that were drawn from uniform distribution with in a specified range. My matrix is like [10000...
3 years ago | 1 answer | 0
I want to stop, check and restart an ongoing feedforward neural net.
Suppose i am training a network with net = train(net,x,t) where x inputs, and t targets. let us say I want to check how well th...
3 years ago | 1 answer | 0
How to center Column names in table
I use the uiStyle to make all my table entries to whichever type allignment I desire but the column names always seem to be left...
3 years ago | 1 answer | 0
Answered Stand alone executable using mlapp file
I found the answer. I had name syntax wrong. I had spaces in between which it didn't like. So I replaced the spaces with Undersc...
3 years ago | 0 | accepted
Stand alone executable using mlapp file
I have created an App using the app designer. I am trying to compile. I created a project and specified all the relevant detail...
3 years ago | 2 answers | 0
Matlab barchart question.
I get this strange error (screen shot attached) when I save the figure as ".emf", or ".bmp" but no error at all when I tried to ...
3 years ago | 2 answers | 0
NeuralNET with Categorical Variables.
I am trying to train a NN with a set number of Input variables, say "n", and a set of outputs say "m". I have two categories 1 a...
3 years ago | 0 answers | 0
Neural Net training function parameters
I am trying to use the algorithm trainbr for my feedforward neural net and trying to set up max number of epochs to 5000, using ...
3 years ago | 1 answer | 0
How to add the "Plottools" icon to plot control ribbon.
Whenever I create a plot, it used to have on top left corner a ribbon with icons, one of which is "plottools" when I click it th...
3 years ago | 1 answer | 0
Location of centers of Circular regions
I have a square region digitized to 40x40 grid. Each cell is denoted either by 1 or by 2. The 1's show circular regions for whic...
3 years ago | 2 answers | 0
Heat Map Cell values display
I noticed the cell values are displayed with arbitrary size and with 'normal' fontweight. I wonder whether one can choose, the ...
3 years ago | 1 answer | 0
Display format of numbers shown
I have generated heatmap of 7x7 cells. It shows some floating point numbers on each square like 0.0123 and so on. There are 4 d...
3 years ago | 1 answer | 0
Mixed data types in a Table
I have an app developed using app Designer. I have a table with 5 rows and 3 columns. These are usually double type and my numb...
4 years ago | 1 answer | 0
How to eliminate Repeated NaNs in a array?
I have two vectors X, and Y; both will have NaNs present in them. If X has a NaN, Y will have a NaN as well at the same locatio...
5 years ago | 1 answer | 0
How to use format spec to read in Textscan?
I have a string '-2.554-4'. This when I read using textscan I get two numbers -2.554 and -4. However it should be actually -2.55...
5 years ago | 1 answer | 0
• plot() for performance::check_model() no longer produces a normal QQ plot for GLMs. Instead, it now shows a half-normal QQ plot of the absolute value of the standardized deviance residuals.
• plot() for performance::check_model() and performance::check_predictions() gains a type argument, to either create density plots, or discrete dots resp. interval plots for posterior predictive checks.
• plot() for performance::check_model() gains an n_column argument, to define the number of columns for the diagnostic plots (by default, two columns).
• plot() for performance::check_model() sometimes failed to create the plot under certain conditions, e.g. when the screen or app window was zoomed in. If an error occurs, a much more informative
error message is shown, providing several possible solutions to resolve this problem.
• plot() for parameters::equivalence_test() now aligns the labelling with the print() method. Hence, the legend title is no longer labelled "Decision on H0", but rather "Equivalence", to emphasize
that we can assume practical equivalence for effects, but that we cannot accept the H0 (in a frequentist framework).
• Added some examples and cross references between docs. Furthermore, a vignette about plotting functions for the datawizard package was added.
• Plot for SEM models now has arrows pointing from the latent variables towards the manifest variables.
• The plot() method for check_model() was revised and should now be more consistent regarding titles and subtitles, as well as color schemes and plot order. Furthermore, the plot for influential
observation was changed in order to better catch the potential influential observations.
• The check_heteroscedasticity() plot contains a dashed horizontal line, which makes it easier to assess the homoscedasticity assumption.
• The y-axis label for check_collinearity() plot clarifies that the measure being plotted is VIF. This was unclear when this plot was embedded in a grid of plots from check_model() containing
multiple checks.
• Plotting methods for performance_roc() and performance_accuracy() show correct labels now.
• plot() for parameters::model_parameters() gets a sort-argument to sort coefficients.
• plot() for parameters::model_parameters() now also create forest plots for meta-analysis.
• plot() for bayestestR::bayesfactor_parameters() only plots facets when necessary.
• plot() for performance::check_outliers() now also plot multiple methods in one plot.
• Following plot() methods get a n_columns-argument, so model components like random effects or a zero-inflation component can be plotted in a multi-column layout: bayestestR::p_direction(),
bayestestR::hdi(), bayestestR::rope(), bayestestR::equivalence_test(), parameters::model_parameter(), parameters::parameters_simulate()
• Following plot() methods get priors and priors_alpha arguments, to add a layer with prior samples to the plot: bayestestR::p_direction(), bayestestR::estimate_density(),
Algorithms for persuasion with limited communication
The Bayesian persuasion paradigm of strategic communication models interaction between a privately-informed agent, called the sender, and an ignorant but rational agent, called the receiver. The goal
is typically to design a (near-)optimal communication (or signaling) scheme for the sender. It enables the sender to disclose information to the receiver in a way as to incentivize her to take an
action that is preferred by the sender. Finding the optimal signaling scheme is known to be computationally difficult in general. This hardness is further exacerbated when there is also a constraint
on the size of the message space, leading to NP-hardness of approximating the optimal sender utility within any constant factor. In this paper, we show that in several natural and prominent cases the
optimization problem is tractable even when the message space is limited. In particular, we study signaling under a symmetry or an independence assumption on the distribution of utility values for
the actions. For symmetric distributions, we provide a novel characterization of the optimal signaling scheme. It results in a polynomial-time algorithm to compute an optimal scheme for many
compactly represented symmetric distributions. In the independent case, we design a constant-factor approximation algorithm, which stands in marked contrast to the hardness of approximation in the
general case.
Original language English
Title of host publication ACM-SIAM Symposium on Discrete Algorithms, SODA 2021
Editors Daniel Marx
Pages 637-652
Number of pages 16
ISBN (Electronic) 9781611976465
State Published - 2021
Event 32nd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2021 - Alexandria, Virtual, United States
Duration: 10 Jan 2021 → 13 Jan 2021
Publication series
Name Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms
Conference 32nd Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2021
Country/Territory United States
City Alexandria, Virtual
Period 10/01/21 → 13/01/21
Dive into the research topics of 'Algorithms for persuasion with limited communication'. Together they form a unique fingerprint.
Our Algebra Tutors In Missouri
Algebra Tutors
Success in math class enables students to be successful in many courses. Understanding the basic concepts of Algebra has an impact on chemistry, physics, engineering, technology, computer sciences,
etc. Students in middle school through high school must complete pre-algebra and algebra to graduate. Taking algebra courses is a necessary part of a student’s education. It is very important for a
student struggling in one of these areas to provide them with the help they need; sometimes that answer to helping them is to hire a private tutor. We can provide a private math tutor in the
following algebra classes:
Pre-algebra - The goal of Pre-algebra is to develop fluency with rational numbers and proportional relationships. Students will: extend their elementary skills and begin to learn algebra concepts
that serve as a transition into formal Algebra and Geometry; learn to think flexibly about relationships among fractions, decimals, and percents; learn to recognize and generate equivalent
expressions and solve single-variable equations and inequalities; investigate and explore mathematical ideas and develop multiple strategies for analyzing complex situations; analyze situations
verbally, numerically, graphically, and symbolically; and apply mathematical skills and make meaningful connections to life's experiences.
Algebra I - The main goal of Algebra is to develop fluency in working with linear equations. Students will: extend their experiences with tables, graphs, and equations and solve linear equations and
inequalities and systems of linear equations and inequalities; extend their knowledge of the number system to include irrational numbers; generate equivalent expressions and use formulas; simplify
polynomials and begin to study quadratic relationships; and use technology and models to investigate and explore mathematical ideas and relationships and develop multiple strategies for analyzing
complex situations.
Algebra II - A primary goal of Algebra II is for students to conceptualize, analyze, and identify relationships among functions. Students will: develop proficiency in analyzing and solving quadratic
functions using complex numbers; investigate and make conjectures about absolute value, radical, exponential, logarithmic and sine and cosine functions algebraically, numerically, and graphically,
with and without technology; extend their algebraic skills to compute with rational expressions and rational exponents; work with and build an understanding of complex numbers and systems of
equations and inequalities; analyze statistical data and apply concepts of probability using permutations and combinations; and use technology such as graphing calculators.
College Algebra – Topics for this course include basic concepts of algebra; linear, quadratic, rational, radical, logarithmic, exponential, and absolute value equations; equations reducible to
quadratic form; linear, polynomial, rational, and absolute value inequalities, and complex number system; graphs of linear, polynomial, exponential, logarithmic, rational, and absolute value
functions; conic sections; inverse functions; operations and compositions of functions; systems of equations; sequences and series; and the binomial theorem.
For most students success in any math course comes from regular studying and practicing habits. However, Algebra class can be a foreign language for many students. Whether you are in need of a little
extra help or someone who can teach the subject from scratch, hiring a professional tutor with a strong background in mathematics can make a dramatic impact on a student’s performance and outlook on
all future course work.
Our Tutoring Service
We believe that the most effective tutoring occurs when you have the undivided attention of a highly qualified and experienced tutor by your side. Our exceptional tutors are not only experienced and
educated, but are experts in their field and passionate about teaching others. We will always provide you with a tutor specifically qualified and experienced in the subject matter you need. And for
your peace of mind, we conduct a nation-wide criminal background check, sexual predator check and social security verification on every single tutor we offer you. Before you invest money and time
into tutoring sessions, be sure you have selected the right tutor for you.
Here Are Some Of Our Algebra Tutor Profiles
Davorin D
Teaching Style
From my tutoring experience, I have noticed that students have trouble understanding the meaning of numbers and symbols on paper simply because no one has taught them how to visualize and interpret
them in a real world situation. They also don't realize that they have plethora of resources and tools available to them to help them, yet they rarely utilize them. I try to give hints and clues to
my students and let them obtain the right answers on their own instead of simply solving the problem for them. I believe this gives them a much better understanding of the material.
Experience Summary
Having recently graduated with a Bachelor's degree, I am continuing my education to obtain my Master's degree in mathematics. Number Theory will be my focus, as I intend to get involved in the
encryption field. During my senior year as undergraduate, I have worked on campus as a teacher's assistant and at home as an online tutor for UNCG's iSchool program. I have privately tutored
undergraduates needing help in pre-calculus and calculus. I enjoyed helping the students and showing them some of my own tricks and ways when it comes to solving the problems.
Type Subject Issued-By Level Year
Degree Mathematics with Concentration in Computer Science University of North Carolina at Greensboro BS 2007
Robert R
Teaching Style
I’ve always been interested in the application of math and science to the solution of real world problems. This led me to a very satisfying career in engineering. Therefore my approach to teaching is
very application oriented. I like to relate the subject to problems that the students will encounter in real life situations. I've generally only worked with older students; high school or college
age or older mature adults who have returned to school to get advance training or learn a new trade.
Experience Summary
I’ve always been interested in math and science; especially in their application to solving real world problems. This led me to a very satisfying career in engineering. I have a BS in electrical
engineering from General Motors Institute (now Kettering University) and an MS in electrical engineering from Marquette University. I am a registered professional engineer in Illinois. I have over 30
years of experience in the application, development, and sales/field support of electrical/electronic controls for industrial, aerospace, and automotive applications. I’m currently doing consulting
work at Hamilton-Sundstrand, Delta Power Company, and MTE Hydraulics in Rockford. I also have college teaching and industrial training experience. I have taught several courses at Rock Valley College
in Electronic Technology, mathematics, and in the Continuing Education area. I’ve done industrial technical training for Sundstrand, Barber Colman, and others. I’ve also taught math courses at
Rasmussen College and Ellis College (online course). I’ve also been certified as an adjunct instructor for Embry-Riddle Aeronautical University for math and physics courses. I've tutored my own sons
in home study programs. I'm currently tutoring a home schooled student in math using Saxon Math. I hope to do more teaching/tutoring in the future as I transition into retirement.
Type Subject Issued-By Level Year
Degree Electrical Engineering Marquette University MS 1971
Degree Electrical Engineering GMI (Kettereing University) BS 1971
Redha R
Teaching Style
I teach by example and I am methodical. I show a student how to solve a problem by going very slowly and by following a sequence of steps. I make sure that the student follows and understands each step
before moving on to the next one. I then ask the student to solve almost the exact same problem (with a slight change in numbers for example)so that the student learns the method and how to solve the
problem by herself/himself. I am very patient but expect the student to be willing to learn. I tell students that learning a subject matter is more important than getting an A in a class - this is
because 1. if you learn, chances are you won't forget (at least for a long while) and 2. you will get a good grade as a result.
Experience Summary
I have been employed by the IBM corporation for the last 27 years. I held numerous positions in hardware development, software development, telecommunication network development, project management,
solution architecture, performance analysis, and others. As much as I enjoy my job, I have passion in Mathematics. I develop new ideas related to my work and expand them into U.S. patents and
external technical papers. I informally tutor family and friends attending high school or college. I was a Mathematics teaching assistant at the Univ. of Pittsburgh for 2 years and at the Univ. of
Michigan at Ann Arbor for 3 years. I love teaching and sharing my knowledge with others. I have published numerous technical papers and hold numerous U.S. patents as well.
Type Subject Issued-By Level Year
Other Arabic Native speaker Fluent current
Other French Native speaker Fluent current
Degree Electrical Engineering University of Michigan Ph.D. 1990
Degree Electrical Engineering and Mathematics University of Pittsburgh M.S. 1982
Degree Computer Science and Mathematics Univerisity of Pittsburgh B.S. 1980
Paul P
Teaching Style
I am passionate about teaching, and love to work with school age and college level students to help them achieve their academic goals. I am available to help students learn and grow academically. My
teaching style is facilitative, patient, easygoing and helpful. I strive to create a productive, safe, and caring environment for my students, where their learning experience is nurtured and
productive. I have had the awesome privilege of being educated by many phenomenal teachers, and I believe in providing the same quality of teaching to all my students. It is my goal to make a
profound difference in the lives of each of my students - one which help them to become successful adults in a complex business world.
Experience Summary
As a secondary school and college level teacher, I am passionate about helping my students succeed academically. Having earned two bachelors degrees (A Bachelor of Education in Secondary Education
and a Bachelor of Science in Computer Science/Mathematics), and a Master of Business Administration degree (GPA of 3.81/4.00), I have the academic and professional competencies which enable me to
produce positive outcomes in a teaching environment. During the last 2 years, I have tutored students in Statistics, Mathematics, Biostatistics, and English. Also, I have taught for more than 8 years
at the college level, teaching such subjects as Math, Information Technology, and Management Information Systems.
Type Subject Issued-By Level Year
Degree Business Administration Bellevue University MBA 2005
Certification Secondary School Teacher Newfoundland Department of Education - Canada Secondary Education 1985
Degree Secondary Education Memorial University B. Ed 1984
Degree Computer Science/Mathematics Memorial University B. Sc. 1984
Lucille L
Teaching Style
My students know I love math. I am enthusiastic, patient, and caring. I believe everyone can learn math given the right circumstances. I take interest in my students, I email them, I encourage them
to do their homework. Through this personal interest, my students work to please the teacher. I also use different teaching styles: discovery learning, Look-Do-Learn, one-to-one instruction, critical
thinking. Once a student, always a student for me. I go the extra mile.
Experience Summary
From an early age I loved math and so when I graduated with a BA in Math and Latin, I went straight into teaching math at the high school level. During graduate school years, I taught math at the
University of Toronto. While working with computer programming, I taught math in the evening division of Westbury College in Montreal. I have taught math in different countries: Jamaica, Canada, U.S.
Virgin Islands, Nigeria-West African Educational Council, The Bahamas, and Florida.
Type Subject Issued-By Level Year
Degree Math University of Toronto M.Sc. 1966
Gary G
Teaching Style
My teaching style has been, for the most part, dictated by student response. I am comfortable teaching in a traditional lecture format, in a format that uses a cooperative learning approach
exclusively, or in a hybrid format. The goal is for effective learning to take place, and I believe my strongest quality is to be able to adapt in such a way that best helps students reach their
academic goals.
Experience Summary
For eight years, I taught freshman and sophomore-level mathematics courses at Arizona State University. These courses included College Algebra, Pre-Calculus, Calculus, Finite Mathematics, and
Elementary Mathematics Theory. Additionally, I have tutored students in these courses both in the Mathematics Department Tutor Center, and on my own personal time.
Type Subject Issued-By Level Year
Degree Mathematics Southern Illinois University MS 1999
Degree Mathematics Allegheny College BS 1996
Ashley A
Teaching Style
First and foremost, I believe it is important to establish a trusting relationship with those you teach. Throughout my experiences in the field of education, I have found the most fruitful of those
relationships to be those in which I was able to work with a student or group of students regularly over an extended period of time to develop a routine and a strong relationship in which great
amounts of learning and understanding could be accomplished. Although many people experience obstacles in learning Mathematics, I believe that by presenting multiple ways to approach a problem, every
student can find a method that works for them. Every student should be allowed the resources and opportunity to realize that they CAN achieve their goals. I am here to help students who have had
difficulty with Math in the past to succeed, and feel confident in both their abilities in Math, and in life.
Experience Summary
As a student, I was always committed to learning and to achieving my goals. As a teacher and tutor, I strive to help others share the same love for learning and for understanding as I do. Now, I help
others set goals, and work toward achieving them. I have worked with all ages, all ability levels, and various sized groups and have enjoyed each and every experience. In the past, I have primarily
tutored in the subject of Mathematics, but am also trained by the Literacy Council to tutor reading and writing, and have enjoyed volunteering with that organization as well. I believe that I can
help anyone enjoy and understand math, and help them feel better about themselves for it.
Type Subject Issued-By Level Year
Degree Mathematics UNC Chapel Hill BA 2005
Veronica V
Teaching Style
I love teaching and the reward that it brings. I am a step-by-step oriented teacher. I've had many students return saying how my style of teaching continues to help them in their current studies. In
my 14 years of experience, I've learned that many students learn with many different styles. I believe that every child can learn; however, the teacher must reach them at their level. If I have a
good understanding of where the student is academically, I can help them to grow academically.
Experience Summary
I've taught middle school for 14 years. My goal in becoming a teacher was to reach those students who had, somehow, fallen through the cracks of education. I taught at a drop-out-prevention school
for 7 years. During those years, my student scores continuously rose. I taught 7th and 9th grade at this school. I transferred to a different middle school where I taught 6th & 7th grade for two
years and the remaining five years, I taught Pre-Algebra and Algebra 1. I began to tutor after school during my first year to struggling students. I was also listed on the school boards list of
tutors. I also worked at an after-school program. During these years, I tutored second graders and up to mathematical levels of geometry, Algebra 1 and Algebra II.
Type Subject Issued-By Level Year
Certification Mathematics In-service grades 5-8 current
Degree Mathematics Education Florida State University BS 1993
A Word From Previous Students and Parents
Jeff, B.
Weston, FL
Sharon is doing great... Best so far. Thanks.
Nancy T.
St. Petersburg, FL
I think Richard is Great!!! He helped me raise my grade from doom to a C and I would not have passed without his help. Thanks for responding so quickly your entire office is fantastic!
Calvin & Lori, S.
Antioch, TN
Calvin's tutoring sessions with Ms. Nancy are going well. She has been very helpful to not only Calvin but to me as well. Giving me advice on what to do to help him more.
Advancing students beyond the classroom... across the nation
Tutoring for Other Subjects | {"url":"https://advancedlearners.com/missouri/algebra/tutor/find.aspx","timestamp":"2024-11-03T22:51:58Z","content_type":"text/html","content_length":"59686","record_id":"<urn:uuid:7b1fd5c8-8e6a-4649-a2f3-9e1b2d7878e0>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00245.warc.gz"} |
Simplex algorithm
Colin Neil Jones, Yuning Jiang, Yingzhao Lian
Nonlinear model predictive control (NMPC) has been widely adopted to manipulate bilinear systems with dynamics that include products of the inputs and the states. These systems are ubiquitous in
chemical processes, mechanical systems, and quantum physics, ... | {"url":"https://graphsearch.epfl.ch/en/concept/349458","timestamp":"2024-11-13T14:39:44Z","content_type":"text/html","content_length":"137091","record_id":"<urn:uuid:b452b61d-438e-4842-a804-0ecb52bb1bfc>","cc-path":"CC-MAIN-2024-46/segments/1730477028369.36/warc/CC-MAIN-20241113135544-20241113165544-00511.warc.gz"} |
Addition Table
Addition is an arithmetic operation that is used to find the sum of two or more numbers. The addition table aids in number addition by generating a certain pattern and arranging the
numbers in rows and columns. The addition table allows us to rapidly read off the sum of two numbers without having to compute it each time.
What exactly is an addition table?
The summing of two or more quantities is defined as an addition operation. An addition table is a tabular arrangement of numbers in which the first row and column each contain the same set of
numbers. By observing the pattern in the addition table, we can quickly arrive at the result of the addition of two numbers. The arithmetic operation used to make the addition table is addition, which
is denoted by the symbol (+). Addends are the numbers that are added to complete the addition chart. To fill a certain row of an addition table, we use the logic of adding numbers, keeping
one addend constant while altering the other, so that the entries of that row become 1 + 1 = 2, 1 + 2 = 3, 1 + 3 = 4, and so on. The addition chart is another name for the addition table. In
the next section, we’ll look at the addition chart and how it works.
1–10 Addition Chart
The addition chart uses the addition operation to fill in all of the matching row values in the table by adding a number from the topmost horizontal row with a number from the left-most column. Let’s
have a look at the addition chart 1 – 10 in the image below.
Table 1–10: Addition Table. We choose the first addend from the set of numbers written in the left-most column to fill a particular row of an addition table, and we keep updating the
second addend vertically printed in the top-most row. Each time we modify a row, the associated addend value changes as well, and this process repeats itself for all 10 rows, starting with 1 and
ending with 10. To comprehend the addition chart, we’ll look at a few instances. To fill in the first row, we’ll use 1 as the first element of the numbers written in the leftmost column and keep
changing the second addend horizontally as the numbers written in the topmost row, as follows: 1 + 1 = 2, 1 + 2 = 3, 1 + 3 = 4, 1 + 4 = 5, 1 + 5 = 6, 1 + 6 = 7, 1 + 7 = 8, 1 + 8 = 9, 1 + 9 = 10, and
1 + 10 = 11. As a result, the first row’s entries will be 2, 3, 4, 5, 6, 7, 8, 9, 10, 11.To fill in the second row, we’ll use the first addend, which is 2, as the second element of the left-most
column and keep altering the second addend horizontally as follows: 2 + 1 = 3, 2 + 2 = 4, 2 + 3 = 5, 2 + 4 = 6, 2 + 5 = 7, 2 + 6 = 8, 2 + 7 = 9, 2 + 8 = 10, 2 + 9 = 11, and 2 + 10 = 12. As a result,
the second row’s entries will be 3, 4, 5, 6, 7, 8, 9, 10, 11, and 12.
Now, we’ll use observation to find the result of 7 + 8 on the addition table chart. When we move down the row with the number 7 and across the column with the number 8, we get the result 15, which is
shown in the addition table. We do know, however, that addition obeys the commutative property. As a result, 7 + 8 = 8 + 7. As a result, the row and column to be monitored can be reversed. We can
proceed down the 8-numbered row and across the 7-numbered column. We’ll still get 15 at the junction location if we use the corresponding value. As a result, when reading an addition table, we can
assume that the rows and columns are interchangeable for the addends.
To read the value of 5 + 5, move down the row with the number 5 and across the column with the same number 5, the intersection of these numbers indicates the entry as 10 in the addition chart.
As a result, we can easily calculate the sum of two numbers by observing the intersection of the two addends’ corresponding rows and columns. Individual addition tables for all the numbers 1 through
10 are shown below, based on which the rows of the addition table chart are filled.
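As an aside (not part of the original article), a short Python sketch can generate the same 1–10 chart and look up sums by row and column:
# Build a 1-10 addition chart; entry chart[i][j] holds (i+1) + (j+1).
addends = range(1, 11)
chart = [[a + b for b in addends] for a in addends]
# Look up 7 + 8 by row and column; commutativity means the order doesn't matter.
print(chart[7 - 1][8 - 1])   # 15
print(chart[8 - 1][7 - 1])   # 15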
What is the best way to teach a child the addition table?
Easy Steps to Teach Addition
Use countable manipulatives to introduce the concept. Using countable manipulatives (physical items) will make addition much more concrete and understandable.
Now it’s time to move on to the visuals…
Make use of a number line.
Counting down the days…
Word problems…. Finding the ten.
Make a mental note of the math facts.
An addition table is a tabular representation of numbers arranged in rows and columns that allows us to calculate the sum of two numbers simply by looking at them rather than doing the math. | {"url":"https://unacademy.com/content/jee/study-material/mathematics/addition-table/","timestamp":"2024-11-02T18:05:51Z","content_type":"text/html","content_length":"638986","record_id":"<urn:uuid:23eb10a2-7111-4e72-8c3a-3815ce01c58e>","cc-path":"CC-MAIN-2024-46/segments/1730477027729.26/warc/CC-MAIN-20241102165015-20241102195015-00669.warc.gz"} |
How often do you flop a pair in holdem | Poker HUD | Stats | Tracking
How often do you flop a pair in Holdem?
In this article, we will go through some important probabilities and statistics regarding Texas Holdem such as how often do you flop a pair in Holdem? How many possible flops are there in Texas
Holdem, and other such questions.
Texas Holdem is one of the most played poker variants in online poker. On Facebook alone, Texas Holdem has more than 62 million fans worldwide. If you are one of them, this article will help you
understand some important game scenarios and their possible outcomes.
How often do you flop a pair in Holdem?
The chances of flopping a pair in Holdem are roughly 32%. In other words, you can flop a pair about once in every 3 flops. Out of 52 cards, you have got the two hole cards while 50 cards remain in the
deck. Out of these 50, 6 cards will pair one of your hole cards while 44 won't give you any pairs. If we calculate the total number of flops, there are 19,600 total possible flops, whereas 13,244 possible flops
contain none of those 6 cards. So, doing a little math gives us a percentage of a little over 32%. Hence, the answer to the question "how often do you flop a pair in Holdem" is roughly 32%, or about one flop in three.
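For readers who want to check the arithmetic, here is a small Python sketch (illustrative only) that reproduces the counts used above:
# Probability that an unpaired starting hand flops at least one card
# matching a hole-card rank, using the counts described in the article.
from math import comb
total_flops = comb(50, 3)        # 19,600 possible flops
no_pair_flops = comb(44, 3)      # 13,244 flops that avoid the 6 pairing cards
p_pair = 1 - no_pair_flops / total_flops
print(total_flops, no_pair_flops, round(p_pair * 100, 1))   # 19600 13244 32.4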
How many Possible flops are there in Texas Holdem?
There are 52 cards in a deck and you have two in hand while you are playing. So, for the remaining 50 cards, the total number of possible flops is 19,600. The formula for the total number of possible flops is
C(50, 3) = (50 × 49 × 48) / (3 × 2 × 1) = 19,600.
Number of starting hands in Holdem
A deck of cards has 52 cards, and the total number of starting hands in Holdem is 1,326. The formula for calculating the number of hands is C(52, 2) = (52 × 51) / (2 × 1) = 1,326.
Hands groups and combinations
The 1326 possible hands can be distributed into different types of hands. There are a total of 13 pairs, 78 suited hands, and 78 off-suit hands. Also, for each non-suited hand, there are 12
combinations; and 4 combinations for suited hands. The pairs can be formed in 6 different ways. For example:
– King of Hearts and King of Spades
– King of Spades and King of Diamonds
– King of Hearts and King of Diamonds
– King of Hearts and King of Clubs
– King of Diamonds and King of Clubs
– King of Spades and King of Clubs
Chances of a pocket pair in Holdem
The chances of receiving a pocket pair are roughly 6%. Here is how to calculate this:
There are a total of 13 pairs in Texas Holdem and you can get each pair in 6 different ways. In other words, each pair can be dealt in 6 different ways, so the total number of ways to be dealt a
pocket pair is 6 × 13 = 78. On the other hand, the total number of hand combinations is 1,326. Hence, you can calculate the percentage of being dealt a pocket pair as 78 / 1326 × 100 = 5.88%, or 6% when
rounded off. | {"url":"https://drivehud.com/dwkb/how-often-do-you-flop-a-pair-in-holdem/","timestamp":"2024-11-12T02:07:33Z","content_type":"text/html","content_length":"241350","record_id":"<urn:uuid:d727e34a-43ff-46f1-8db4-c49244319d6e>","cc-path":"CC-MAIN-2024-46/segments/1730477028242.50/warc/CC-MAIN-20241112014152-20241112044152-00655.warc.gz"} |
Jump to navigation Jump to search
The count of factors of n can be found using $\prod_{i=1}^{k}(1+a_i)$, where the $a_i$ are the exponents in the prime factorization of n and k is the number of those exponents, or in J: */ 1 + _ q: n. It is generally common to see
some form of brute-force approach with these problems, so my initial idea was to find i where A000217(i) is the answer. I generally reach for the Do-While construct because it is fairly easy to use.
u^:v^:_ executes u so long as the boolean condition v returns a 1.
A000217 =. -: * >: NB. closed form to find +/ >: i. n
cond =. 500 > [: */ 1 + _ q: A000217
>:^:cond^:_ [ 8 NB. the first 7 triangular numbers can't be it so let's start at 8
A000217 12375x NB. the answer | {"url":"https://code.jsoftware.com/wiki/ShareMyScreen/ProjectEuler/0012","timestamp":"2024-11-02T15:02:42Z","content_type":"text/html","content_length":"17600","record_id":"<urn:uuid:c339fd1b-2d30-4143-84c5-b8ee024534b4>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00213.warc.gz"} |
Numerical questions
From the student perspective, a numerical question looks just like a short-answer question.
The difference is that numerical answers are allowed to have an accepted error. This allows a continuous range of answers to be set.
For example, if the answer is 30 with an accepted error of 5, then any number between 25 and 35 will be accepted as correct.
Numerical questions can also have case-insensitive non-numerical answers. This is useful whenever the answer for a numerical question is something like N/A, +inf, -inf, NaN etc | {"url":"http://fc.coft.cat/help.php?module=quiz&file=numerical.html","timestamp":"2024-11-02T14:58:53Z","content_type":"text/html","content_length":"4204","record_id":"<urn:uuid:230cd964-da8d-443f-8343-da97365a50ad>","cc-path":"CC-MAIN-2024-46/segments/1730477027714.37/warc/CC-MAIN-20241102133748-20241102163748-00331.warc.gz"} |
Zero-sum Game
...what is called a zero-sum game fallacy. The idea of the fallacy is that when you are talking about zero-sum games, the best strategy is to compete. A zero sum game is like a basketball game, a
chess game, a football game. And in those cases of games, you have to compete. But there are not only zero-sum games in...more | {"url":"https://leadermorphosis.co/pages/ideas/zero-sum-game","timestamp":"2024-11-03T23:04:00Z","content_type":"text/html","content_length":"12451","record_id":"<urn:uuid:6946f5e5-3bd9-4153-8dd6-cc9f2390d361>","cc-path":"CC-MAIN-2024-46/segments/1730477027796.35/warc/CC-MAIN-20241103212031-20241104002031-00249.warc.gz"} |
Eigenvalues and Eigenvectors
Eigenvalues and eigenvectors are the most powerful tools in Linear Algebra.
This article is part 7 in the How to Discover Finite Fields series.
We’ve talked about matrices and linear transformations in the previous article in the context of solving systems of equations, but there are often many cases where we have a linear transformation of
a space to itself. In cases like that, we can repeatedly apply the linear transformation.
Repeatedly applying linear transformations can be quite time consuming. Furthermore, while the columns of a matrix indicate where we send the basis vectors, it’s difficult to interpret what’s going
on with the matrices. The solution to both of these problems involves rewriting everything in the eigenbasis.
While this is part of a series, you don’t need to know anything from the previous articles except what was covered in the previous article.
While I’ve given some general motivation in the intro to the article, the motivation will become more clear if we give some examples.
Markov Chains
Say you live in a city where the weather on one day depends on the weather of the previous day with the… | {"url":"https://joseph-mellor1999.medium.com/eigenvalues-and-eigenvectors-2ad8a1fd107?responsesOpen=true&sortBy=REVERSE_CHRON&source=author_recirc-----295188afb8be----1---------------------31936364_9ad3_492c_afbd_9c974928415a-------","timestamp":"2024-11-13T11:53:30Z","content_type":"text/html","content_length":"92869","record_id":"<urn:uuid:dd6ae06e-7bf7-497f-9d13-600971bf0a8a>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00286.warc.gz"} |
We introduce an efficient signed backward substitution for a highly-structured system. More precisely, the problem is to find a vector u of dimension s that solves the system of piecewise affine
equations u = c + L|u|, where L is a strictly lower left triangular s × s matrix, c denotes a given vector of dimension s, and the notation |·| indicates the component-wise absolute value of a
vector. The novel approach is based on a Neumann series reformulation and attempts to exploit a high ... Read more | {"url":"https://torsten-bosse.de/","timestamp":"2024-11-02T11:18:29Z","content_type":"text/html","content_length":"47343","record_id":"<urn:uuid:804194d8-dee1-485d-89b1-9c8fadb5ca56>","cc-path":"CC-MAIN-2024-46/segments/1730477027710.33/warc/CC-MAIN-20241102102832-20241102132832-00040.warc.gz"} |
Economic Order Quantity – Formula, Example, and Explanation
In any business, production costs are the foundation of the pricing strategy, profit margins, and market positioning. The most important costs incurred in any manufacturing operation are material,
labor, and factory overhead.
Material often exists as a cushion between production and consumption of the goods. In any inventory, you will find material in various shapes and sizes: material waiting for
processing, semi-processed material, and finished goods at the site, in transit, at the warehouse, or in retail outlets. In all these forms, there must be a legitimate economic justification for the inventories
or material.
Each unit carried costs the business something, and all of these costs have to be recorded in the financial statements. Therefore, material planning is used to determine material levels and the timing of purchases.
The most critical factors managed by material planning are:
• The quantity of material
• Time to purchase material
To answer these two questions of how much and when is dealt by two conflicting costs that are:
• Cost of carrying inventory
• Cost of inadequate carrying
The cost of carrying includes the variable costs that change with the amount ordered; the most common costs are interest, tax, warehousing, and storage. The cost of inadequate carrying,
on the other hand, is also an important consideration for the calculation of order quantity.
One of the most popular and common methods of calculating the quantity to be ordered is Economic Order Quantity (EOQ). In this article, EOQ, its formula, calculation, importance, and limitations will be discussed.
What is EOQ?
Economic Order Quantity is defined as,
It is the ideal or optimal quantity of inventory that can be ordered at a time to minimize the annual costs of inventory.
This quantity and production-scheduling model was developed by Ford W. Harris in 1913. Over the course of a century, there have been further developments and refinements in the model to make it more accurate and practical.
Suppose a business firm purchases materials once or twice a year, but the order sizes are big. In that case, they are incurring too much cost for carrying the inventory. It increases the annual
inventory cost.
Conversely, if the business firm buys smaller quantities in too many orders during a year, the ordering cost goes up. Ultimately, it will also increase annual inventory costs.
Therefore, an optimal quantity of inventory to be ordered at a time requires balancing two factors of the equation.
• Cost of carrying or possessing material
• Cost of ordering or acquiring material
Assumption Of EOQ
The Economic Order Quantity model works on certain assumptions.
1. In EOQ, it is assumed that the demand for the material is always the same. The constant demand implies that seasonal fluctuations and consumer behavior will not affect the demand over the year.
2. The second assumption is related to constant holding and ordering costs. According to this assumption, ordering and carrying costs are always the same. Changes in transportation costs, interest
rates, warehouse rent do not impact the ordering and holding costs of the material.
3. The final assumption is the absence of any discounts. The EOQ model doesn’t encompass the rebates or trade discounts offered to the business.
Now let’s jump to the formula of EOQ.
Differential calculus has been employed to devise a formula for the calculation of EOQ. The formula is as follows:
EOQ = √(2DS / H)
where D is the annual demand in units, S is the fixed cost of placing one order, and H is the annual holding cost per unit.
The following costs are components of the Economic Order Quantity.
Ordering Cost
Ordering cost (S) represents the fixed cost of placing one order. The number of orders per year is calculated by dividing the annual demand (D) by the order quantity (Q):
Number of Orders = D / Q
The annual ordering costs are found by multiplying the number of orders by the fixed cost of each order.
Annual Ordering Cost = (D/Q) x S
Holding Cost
Holding cost of inventory is often expressed as the per-unit cost multiplied by an interest (carrying) rate. The holding costs can be direct costs of financing the inventory purchase or the opportunity cost of not
investing the money somewhere else.
The formula of Holding cost is expressed as,
Holding cost = H = iC
Since the inventory demand is assumed to be constant in EOQ, the annual holding cost is calculated using the formula.
Annual Holding Cost = (Q/2) X H
Total Cost And Economic Order Quantity
Adding the annual holding cost and the annual ordering cost gives the annual total cost of the inventory. To calculate the EOQ, that is, the optimal quantity, the first derivative of the total cost with respect
to Q is taken and set equal to zero.
Annual Total Cost = [(D/Q) X S] + [(Q/2) X H]
How To Calculate EOQ?
Let’s calculate the EOQ by example.
Suppose a company has an annual demand of 2500 units. The total cost to place one order is $1200. The per-unit cost is $250. According to the calculations, the carrying costs of the company are 12%
of the per-unit cost.
It is required to find the Economic Order Quantity.
If we arrange the data, it will look like this,
Variable Value
D 2500 units
Q to be determined
S $1200
C $250
H $30
I 12%
EOQ = √(2DS / H) = √((2 × 2500 × 1200) / 30) = √200,000 ≈ 447 units per order.
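The same computation in a short Python sketch (illustrative only, using the figures from this example):
# EOQ for the example: D = 2500 units/year, S = $1200 per order,
# C = $250 per unit, i = 12% carrying rate, so H = i * C = $30 per unit per year.
from math import sqrt
D, S, C, i = 2500, 1200, 250, 0.12
H = i * C
eoq = sqrt(2 * D * S / H)
print(round(eoq))           # ~447 units per order
print(round(D / eoq, 1))    # ~5.6 orders per year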
Why Is Economic Order Quantity Model Important?
The economic order quantity model is an important consideration because it helps to find the optimal number of units per order. The firms can minimize their material acquisition costs by applying the
EOQ model.
There can be modifications in the EOQ formula to find other production levels or order intervals. According to the economies of scale, the larger quantities of orders result in decreased per-unit
cost of ordering.
The EOQ is also used by companies as a cash-flow tool. With these calculations, a business firm can control the amount of cash tied up in acquiring inventory. Besides, companies are in a better
position to manage their inventories more efficiently. In the absence of this technique, companies might end up tying up too much cash in large amounts of inventory. Conversely, placing orders that are too
small will result in an unwanted surge in the annual ordering costs.
The investors can also calculate the EOQ for the assessment of a firm’s efficiency in managing its inventory.
Advantages Of EOQ
There are certain benefits the firms can reap by using the EOQ as a cost-scheduling and production-scheduling model. Some of them are mentioned here.
Inventory Costs Are Minimized
Using the EOQ model, companies save themselves from the unnecessary warehousing costs that result from holding extra stocks of inventory. Other factors can also be behind a surge in
inventory costs; for instance, damaged products, unsold inventory, and the pattern of ordering all affect costs. But suppose your business deals in low-velocity products. In that case, EOQ can be a beneficial tool to help
you find an optimal order quantity.
Optimized Inventory Means Minimum Stockouts
EOQ is the most efficient model that tells you how to minimize inventory stockouts without holding unnecessary inventory for long periods. The essence of the EOQ model is how much a firm needs
to re-order and how often to re-order.
Every business is different. Different industries have different requirements. For some businesses ordering smaller amounts more often can be a cost-effective solution. For others, the case might be
the complete opposite. You can optimize your inventory management by EOQ.
Overall Efficiency of Handling Inventory Is Improved
The carrying costs and ordering costs are the two most important considerations in EOQ. When a company uses EOQ, its overall efficiency in handling inventory increases. You can smartly
calculate the EOQ by taking all the important cost variables into consideration.
Limitations Of EOQ
The limitations of the EOQ model are based on the assumptions made in the formula derivation. The EOQ assumes consumer demand, ordering costs, and holding costs to be constant. These assumptions
affect the efficiency of the model.
The unpredictable and uncertain events that every business might face are totally ignored. Changes in transportation fares, consumer demand, economic recession or boom, and seasonal fluctuations
are some of the events that affect the demand as well as the costs of ordering and holding inventory. Any purchase discounts are also not taken into account in the calculation of EOQ.
Final Words
EOQ might not be a 100% accurate tool to calculate the optimal order quantity, but it helps the business improve its inventory management. Despite its limitations, EOQ is a powerful
production-scheduling technique to make inventory-related decision-making more smooth.
However, one thing should be understood that EOQ is just one of the many inventory management techniques used by businesses. The EOQ will be higher if the business’s set-up costs increase or there is
a demand surge. However, the EOQ will be lower when the cost of holding inventory is high. | {"url":"https://www.wikiaccounting.com/economic-order-quantity/","timestamp":"2024-11-05T07:22:04Z","content_type":"text/html","content_length":"241474","record_id":"<urn:uuid:87379173-8e79-4c04-b467-60bcc263593e>","cc-path":"CC-MAIN-2024-46/segments/1730477027871.46/warc/CC-MAIN-20241105052136-20241105082136-00841.warc.gz"} |
RSA-based Key Encapsulation Mechanisms
A key encapsulation mechanism (KEM) can be used to construct a “hybrid” cryptosystems. In these cryptosystems symmetric keys (e.g. for AES) are encrypted using asymmetric keys. The symmetric key is
used for encrypting data.
A naive KEM built using RSA primitives could use “textbook” RSA to encrypt a randomly generated symmetric key but this has some significant flaws:
• If e is small (e.g. e=3), the symmetric key may not be reduced by the modulus after exponentiation. This means the “encrypted” key would be trivially decrypted by taking the eth root of the ciphertext.
• Unpadded RSA ciphertexts can be manipulated in predictable ways. The paper “When Textbook RSA is Used to Protect the Privacy of Hundreds of Millions of Users” describes a fantastic attack on an
unpadded RSA-based KEM where captured encrypted keys were decrypted by replaying ciphertexts with clever bit-shifts.
These issues could be alleviated by using a secure padding scheme like OAEP. However, there is a secure KEM that is just about as simple as the textbook KEM, called RSA-KEM.
RSA-KEM works by generating a random integer r in (0, N-1) (where N is the modulus of the key) and encrypting/encapsulating r. The symmetric key is then derived by throwing r into a key derivation
function (KDF).
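A minimal sketch of the encapsulation step might look like the following (illustrative Python only; a real system should use a vetted library, and the hash-based KDF here is just a stand-in for the KDFs specified for RSA-KEM):
# RSA-KEM encapsulation sketch: pick a random r in [0, n-1], encrypt it with the
# raw RSA operation, and derive the symmetric key from r with a KDF.
import secrets
import hashlib

def kdf(data: bytes, length: int = 32) -> bytes:
    # Stand-in KDF; standardized RSA-KEM uses KDF2/KDF3 with a chosen hash.
    return hashlib.sha256(data).digest()[:length]

def encapsulate(n: int, e: int):
    r = secrets.randbelow(n)                       # random integer in [0, n-1]
    c = pow(r, e, n)                               # raw RSA is acceptable here because r is random over the full range
    k = kdf(r.to_bytes((n.bit_length() + 7) // 8, "big"))
    return c, k                                    # send c; use k as the symmetric key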
As I understand it, OAEP is emulating a construction like RSA-KEM in that it attempts to convert a message into an r-like value. The extra complexity that OAEP introduces is to handle messages that
are not necessarily evenly distributed in (0, N-1) and the padding step needs to be reversible. | {"url":"https://kel.bz/post/kem/","timestamp":"2024-11-13T10:42:17Z","content_type":"text/html","content_length":"5094","record_id":"<urn:uuid:8456e255-7136-45b9-a3f2-305369e1ec08>","cc-path":"CC-MAIN-2024-46/segments/1730477028347.28/warc/CC-MAIN-20241113103539-20241113133539-00216.warc.gz"} |
Question #cec3a
1 Answer
The idea here is that you're performing a serial dilution, so you should be aware of the fact that the overall dilution factor will be equal to the product of the dilution factors of each individual dilution.
DF_overall = DF_1 × DF_2 × ... × DF_n
As you know, the dilution factor can be calculated by dividing the volume of the diluted solution by the volume of the concentrated solution.
DF = V_diluted / V_concentrated
In your case, you're performing 8 identical dilutions that have
DF = (1 + 9) mL / 1 mL = 10
That is the case because, for each dilution, the volume of the concentrated solution is equal to 1 mL. You dilute the concentrated solution by adding 9 mL of water, which makes the total volume of the diluted solution equal to 10 mL.
So, you can say that after 8 dilutions, the overall dilution factor will be
DF_(8 dilutions) = 10 × 10 × ... × 10 (8 times) = 10^8
Now, the dilution factor also tells you the ratio that exists between the concentration of the concentrated solution and the concentration of the diluted solution.
For the overall dilution, you have
DF_(8 dilutions) = c_initial / c_final
This means that the final concentration of the solution will be
c_final = c_initial / 10^8
In your case, this is equivalent to
c_final = 0.1 M / 10^8 = 10^-9 M
Now, for all intended purposes and based on the number of significant figures that you have for your values, you can go ahead and say that the pH of this solution is equal to 7 at room temperature.
It's worth mentioning that the actual pH of this solution--I won't do the calculation here--is approximately equal to 6.998 at room temperature.
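As a rough check of that 6.998 figure (an illustrative sketch, assuming a strong monoprotic acid and Kw = 1.0 × 10^-14 at 25°C), the charge balance can be solved for [H+] directly:
# Solve [H+]^2 - C*[H+] - Kw = 0 for a strong acid diluted to C = 1e-9 M.
from math import sqrt, log10
C = 0.1 / 10**8                    # concentration after eight 1:10 dilutions
Kw = 1.0e-14                       # water autoionization constant at 25 C
h = (C + sqrt(C**2 + 4 * Kw)) / 2  # positive root of the quadratic
print(round(-log10(h), 3))         # ~6.998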
Impact of this question
1562 views around the world | {"url":"https://api-project-1022638073839.appspot.com/questions/598ab5967c01494e989cec3a","timestamp":"2024-11-06T07:52:43Z","content_type":"text/html","content_length":"38789","record_id":"<urn:uuid:74849dd2-07af-4d22-8216-fe4fd2c0a4fd>","cc-path":"CC-MAIN-2024-46/segments/1730477027910.12/warc/CC-MAIN-20241106065928-20241106095928-00795.warc.gz"} |
CBL_EQ performs a binary, bitwise "equals" operation on a series of bytes.
CALL "CBL_EQ"
USING SOURCE, DEST, LENGTH
GIVING STATUS
│ SOURCE PIC X(n) │ The source bytes for the operation. │
│ DEST PIC X(n) │ The destination bytes for the operation. │
│ LENGTH Numeric parameter (optional) │ The number of bytes to combine. If omitted, then CBL_EQ uses the minimum of the size of SOURCE and the size of DEST. │
│ STATUS Any numeric data item │ The return status of the operation. Returns "0" if successful, "1" if not. This routine always succeeds, so STATUS always contains a zero. │
For LENGTH bytes, each byte of SOURCE is combined with the corresponding byte of DEST. The result is stored back into DEST. The runtime combines the bytes by performing an "equals" operation between
each bit of the bytes. The "equals" operation uses the following table to determine the result: | {"url":"https://www.microfocus.com/documentation/extend-acucobol/1051/extend-interoperability-suite/BKPPPPLIBRS012.html","timestamp":"2024-11-07T13:15:30Z","content_type":"text/html","content_length":"16936","record_id":"<urn:uuid:f32f2250-94be-49f9-8361-fe71393cd0c2>","cc-path":"CC-MAIN-2024-46/segments/1730477027999.92/warc/CC-MAIN-20241107114930-20241107144930-00530.warc.gz"} |
4. 1/2 points | Previous Answers SerCP11 15.2.P.008. My Notes Ask Your Teacher Four point charges...
Four point charges are at the corners of a square of side a as shown in the figure below. Determine the magnitude and direction of the resultant electric force on q, with ke, q, and a in symbolic
form. (Let B = 6.0q and C = 2.0q. Assume that the +x-axis is to the right and the +y-axis is up along the page.) magnitude | {"url":"https://www.homeworklib.com/question/1095705/4-12-points-previous-answers-sercp11-152p008-my","timestamp":"2024-11-04T02:38:07Z","content_type":"text/html","content_length":"51914","record_id":"<urn:uuid:9199f509-2661-481e-9584-6e4e1c42fbc5>","cc-path":"CC-MAIN-2024-46/segments/1730477027809.13/warc/CC-MAIN-20241104003052-20241104033052-00077.warc.gz"} |
Epsilon lesson 8---question
my daughter is really struggling with all of the steps to solve unlike denominators for 3 fractions.
how can I explain this to her in an easier to understand way?
I tried to explain the other ways to my son and he said "I understand perfectly well how to do it this way." And that was that. (We don't watch the DVDs, so he hadn't seen the explanations at all). I
don't think learning all the ways are necessary, because eventually Epsilon will teach finding the LCD and adding and subtracting using it.
Of course, I use MUS because DS likes it (not the videos) but I don't so I feel free to change things at will.
Method #2 is just unnecessarily confusing in my opinion. Skip it.
First try to find a common denominator for all three fractions.
For example in this problem: 1/2 + 1/4 + 1/6=
You would use 12 as the common denominator of all three so you would end up with: 6/12 + 3/12 + 2/12 = 11/12
If you can't find a common denominator for all three, as in this problem: 1/2 + 1/3 + 1/7 =
Then find a common denominator of the first two fractions and then a common denominator of that answer and the third fraction.
So you would use 6 as the common denominator of: 1/2 + 1/3 =
Which would give you: 3/6 + 2/6 = 5/6
Then you would use 42 as the common denominator for: 5/6 + 1/7 =
Which would give you: 35/42 + 6/42 = 41/42
Hope that helps!
that DOES help...thanks. I think she might actually get this. | {"url":"https://forums.welltrainedmind.com/topic/424612-epsilon-lesson-8-question/","timestamp":"2024-11-04T08:07:39Z","content_type":"text/html","content_length":"222157","record_id":"<urn:uuid:02ab2d0f-ad6c-4db4-9fc8-99df0aca3dfc>","cc-path":"CC-MAIN-2024-46/segments/1730477027819.53/warc/CC-MAIN-20241104065437-20241104095437-00681.warc.gz"} |
Last Index issues on a Bar Graph / Software / IQAN
Not a bug
4.07.7.4605 and 5.00.27.4689
Maybe I am misunderstanding what the "Last Index" should do on a bar graph, but I have an Array size of 48, and the first 24 contain the data I would like to show on a bar graph. I am basically using
the other 24 for math functions. I figured if I set my last index to 24 I would be all set. But this doesn't seem to do anything. Can someone either tell me I am misunderstanding the intent, or the
software has glitches?
The hint of the last index property says: "Index in the array channel that corresponds to the rightmost value in the graph."
The bar graph always draws all the values in the array. But it does not have to start from index 0 to the left and index 47 to the right. By setting the last index you can determine which value is
drawn as the last (rightmost) bar. This is useful if the array contains historic data and is used as a circular buffer, i.e. you store the latest value at an index that is increased with every value
(restarting at 0 when reaching the length of the array).
In your example setting last index to 24 means that the value at index 24 is drawn to the right, then 23, 22, 21 and so on down to 0. After reaching 0 the drawing will continue at index 47, 46, 45
... 25 until all values has been drawn (index 25 will be the leftmost value).
Thank you, great example. I should have thought of that. | {"url":"https://forum.iqan.se/communities/1/topics/1061-last-index-issues-on-a-bar-graph","timestamp":"2024-11-05T18:50:46Z","content_type":"text/html","content_length":"51059","record_id":"<urn:uuid:eff27a0b-0b76-45b0-ac8c-ebca14578b7a>","cc-path":"CC-MAIN-2024-46/segments/1730477027889.1/warc/CC-MAIN-20241105180955-20241105210955-00066.warc.gz"} |
Efficient fixed-point multiplication by 2^K
b = pow2(a,K) returns the value of a shifted by K bits where K is an integer and a and b are fi objects. The output b always has the same word length and fraction length as the input a.
In fixed-point arithmetic, shifting by K bits is equivalent to, and more efficient than, computing b = a*2^K.
If K is not an integer, the pow2 function rounds it down to the nearest integer (floor) before performing the calculation.
The scaling of a must be equivalent to binary point-only scaling; in other words, it must have a power of 2 slope and a bias of 0.
a can be real or complex. If a is complex, pow2 operates on both the real and complex portions of a.
The pow2 function obeys the OverflowAction and RoundingMethod properties associated with a. If obeying the RoundingMethod property associated with a is not important, try using the bitshift function.
The pow2 function does not support fi objects of data type Boolean.
The function also does not support the syntax b = pow2(a) when a is a fi object.
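For intuition only (a small Python sketch, not MATLAB), the equivalence between multiplying by 2^K and shifting the stored integer can be seen directly for a binary point-only scaled value:
# A fixed-point value with F fraction bits is stored as round(value * 2**F).
# Shifting that stored integer left by K bits multiplies the value by 2**K.
F = 8                                  # fraction length, as in fi(pi, 1, 16, 8)
stored = round(3.14159 * 2**F)         # integer representation of ~pi (804)
shifted = stored << 3                  # corresponds to pow2(a, 3)
print(stored / 2**F, shifted / 2**F)   # ~3.1406 and ~25.125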
Example 1
In the following example, a is a real-valued fi object, and K is a positive integer.
The pow2 function shifts the bits of a 3 places to the left, effectively multiplying a by 2^3.
a = fi(pi,1,16,8)
b = pow2(a,3)
binary_a = bin(a)
binary_b = bin(b)
a =
DataTypeMode: Fixed-point: binary point scaling
Signedness: Signed
WordLength: 16
FractionLength: 8
b =
DataTypeMode: Fixed-point: binary point scaling
Signedness: Signed
WordLength: 16
FractionLength: 8
binary_a =
binary_b =
Example 2
In the following example, a is a real-valued fi object, and K is a negative integer.
The pow2 function shifts the bits of a 4 places to the right, effectively multiplying a by 2^–4.
a = fi(pi,1,16,8)
b = pow2(a,-4)
binary_a = bin(a)
binary_b = bin(b)
a =
DataTypeMode: Fixed-point: binary point scaling
Signedness: Signed
WordLength: 16
FractionLength: 8
b =
DataTypeMode: Fixed-point: binary point scaling
Signedness: Signed
WordLength: 16
FractionLength: 8
binary_a =
binary_b =
Example 3
The following example shows the use of pow2 with a complex fi object:
format long g
P = fipref('NumericTypeDisplay', 'short');
a = fi(57 - 2i, 1, 16, 8)
a =
57 - 2i
pow2(a, 2)
ans =
127.99609375 - 8i
Extended Capabilities
C/C++ Code Generation
Generate C and C++ code using MATLAB® Coder™.
GPU Code Generation
Generate CUDA® code for NVIDIA® GPUs using GPU Coder™.
Version History
Introduced before R2006a | {"url":"https://au.mathworks.com/help/fixedpoint/ref/pow2.html","timestamp":"2024-11-08T07:58:56Z","content_type":"text/html","content_length":"75762","record_id":"<urn:uuid:40c095a8-b206-408a-b909-6dbd6517d2bc>","cc-path":"CC-MAIN-2024-46/segments/1730477028032.87/warc/CC-MAIN-20241108070606-20241108100606-00130.warc.gz"} |
4. Technical Basics 2 - Capacitance
• Understand that the capacitance of a capacitor is influenced by the area and separation of the plates and the permittivity of the dielectric. Understand the formula C=KA/d.
What is a capacitor ?
A capacitor is a device used to store electrical charge and electrical energy. Capacitors are generally made of two electrical conductors separated by a distance. (Note that such electrical conductors
are sometimes referred to as “electrodes,” but more correctly, they are “capacitor plates.”) The space between capacitors may simply be a vacuum, and, in that case, a capacitor is then known as a
“vacuum capacitor.” However, the space is usually filled with an insulating material known as a dielectric. (You will learn more about dielectrics in the sections on dielectrics later in this
chapter.) The amount of storage in a capacitor is determined by a property called capacitance, which you will learn more about a bit later in this section.
Capacitors have applications ranging from filtering static from radio reception to energy storage in heart defibrillators. Typically, commercial capacitors have two conducting parts close to one
another but not touching, such as those in the figure below. Most of the time, a dielectric is used between the two plates. When battery terminals are connected to an initially uncharged capacitor,
the battery potential moves a small amount of charge of magnitude Q from the positive plate to the negative plate. The capacitor remains neutral overall, but with charges
+Q and -Q residing on opposite plates.
A system composed of two identical parallel-conducting plates separated by a distance is called a parallel-plate capacitor (see below). The magnitude of the electrical field in the space between the
parallel plates is E = σ/ε0 (where ε0 is the permittivity of free space), and σ denotes the surface charge density on one plate (recall that σ is the charge Q per the surface area A, i.e. σ = Q/A). Thus, the magnitude of the field is directly proportional to Q.
Capacitors with different physical characteristics (such as shape and size of their plates) store different amounts of charge for the same applied voltage V across their plates. The capacitance C of
a capacitor is defined as the ratio of the maximum charge Q that can be stored in a capacitor to the applied voltage V across its plates. In other words, capacitance is the largest amount of charge
per volt that can be stored on the device:
C = Q / V
The SI unit of capacitance is the farad (F), named after Michael Faraday (1791–1867). Since capacitance is the charge per unit voltage, one farad is one coulomb per one volt, or 1 F = 1 C/V.
By definition, a 1.0-F capacitor is able to store 1.0 C of charge (a very large amount of charge) when the potential difference between its plates is only 1.0 V. One farad is therefore a very large
capacitance. Typical capacitance values range from picofarads (1 pF = 10^-12 F) to millifarads (1 mF = 10^-3 F), which also includes microfarads (1 μF = 10^-6 F).
Capacitors can be produced in various shapes and sizes.
Calculating Capacitance
The capacitance of a capacitor is calculated using the formulae:
C = (K × A / d) × 8.85E-12
Where C is the capacitance in Farads; K is the dielectric constant; A is the (overlapping) area of one plate in square metres; and d is the distance between the plates in metres. The constant
8.85E-12 is the absolute permittivity of free space.
Calculate C for an air capacitor with two plates, each with an area of 2 square metres, separated by 1 centi metre (1cm=10mm).
C = KA/d * 8∙85E-12
= 1*2 / 10E-3 * 8∙85E-12
= 200 *8∙85E-12
= 1770E-12
= 1770pF
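The same calculation as a quick Python sketch (illustrative only):
# Parallel-plate capacitance: C = (K * A / d) * 8.85e-12, with K the dielectric
# constant, A the overlapping plate area in square metres, d the separation in metres.
def capacitance(K, area_m2, separation_m):
    return K * area_m2 / separation_m * 8.85e-12
# Air dielectric (K = 1), 2 square metre plates, separated by 1 cm (0.01 m).
print(capacitance(1, 2.0, 0.01))   # ~1.77e-09 F, i.e. 1770 pF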
Increasing/Reducing capacity
The capacitance of a capacitor can be increased as follows,
• Increasing the overlapping area of it’s plates
• Increasing the number of plates connected in parallel
• Decreasing the distance between the plates
• Using an insulating material with a high dielectric constant.
The value of capacitance will be reduced if,
• the overlapping area of the plates is reduced
• the number of plates connected in parallel is reduced
• the distance between the plates is increased
• the insulator has a low dielectric constant
NOTE: The area of the plates refers to the area overlap not the physical size of each plate. In most cases this is the same but, as in the tuning capacitor, it is the overlapping area that decides
the capacitance not the overall size of each plate.
The dielectric is the material between the capacitors plates. In tuning or adjustable capacitors this is very often air but in fixed value capacitors many other substances may be used.
As seen above the material between the plates determines the capacitance of the device. As well as acting as an insulator this dielectric concentrates the electrical force lines. All dielectrics are
compared to air to give them a value. This is known as the dielectric constant or the dielectric permittivity of the dielectric.
Some common dielectric constants are:
Material Approx Constant Used in
Air 1 Variable Capacitors
Oil 2∙2 Power Capacitors
Aluminium Oxide 8 Electrolytic capacitors
Silvered Mica 7.5 High stability RF capacitors
Tantalum Oxide 28 Physically small capacitors
Ceramic 100 RF Capacitors
Dielectric Losses
Because there is no such thing as a perfect insulator all dielectrics will leak some charge, although this may be very small. These losses are known as dielectric losses and in some dielectrics
increase with the frequency of the charge/discharge cycle.
• Understand that capacitors have a breakdown voltage and that they need to be used within that voltage.
Voltage rating
All capacitors have a voltage rating which must not be exceeded if the capacitor is to function reliably. The voltage rating of the capacitor is determined by the insulating properties of its
dielectric. If the maximum operating voltage is exceeded, even for a fraction of a milli-second the excess voltage will arc between the plates of the capacitor effectively making it a short circuit.
This will not only destroy the capacitor but can damage other components the capacitor is connected to. When choosing a capacitor for a specific application it is good practice to choose one that has
a voltage rating of ten to fifteen percent higher than the highest voltage it will have to handle under normal operating conditions.
• Recall that different dielectrics are used for different purposes. Recall that with some dielectrics, losses increase with increasing frequency.
Types of dielectric
Electrolytic Capacitors
Electrolytic capacitors use a semi-liquid or paste type of dielectric. The advantage of this is that the dielectric itself is extremely thin, often only a few molecules thick, allowing many plates to
be packaged in a small physical size. This enables capacitors of several thousand micro-Farads, or more, to be made that occupy very little space. A typical 1000 micro-Farad 50 Volt electrolytic
capacitor may measure approximately 15 millimeters long with a diameter of eight millimeters. To produce this same value of capacitance using a conventional dielectric would result in a capacitor
several hundred times larger.
The dielectric in an electrolytic capacitor does not become effective until a DC voltage is applied to the capacitor. This is where the capacitor gets its name from; the dielectric is formed due to
an electrolytic process that takes place when a voltage is applied to the capacitor.
As a result electrolytic capacitors must be connected with the correct polarity. If an electrolytic capacitor is connected with reverse polarity it will draw a large amount of current which causes it
to heat up very quickly until it explodes, giving off toxic fumes and damaging other components. When using electrolytic capacitors be certain that you have connected them with the correct polarity.
Working Voltages, Types and Applications
Care must be taken to use a capacitor of a type suited to its application and with the correct voltage rating.
Type of dielectric Typical DC working voltage Use
Polystyrene 250 High stability audio & RF
Metallised Polyester 100 to 630 Audio & RF
Electrolytic 10 to 200 Audio & Power supplies
Tantalum 3 to 50 Low leakage, high capacitance
Ceramic 50 to 3000 High stability, high frequency range
Air Up to several thousand Variable tunning capacitors
Polypropylene 30 to 100 Variable for tunning receivers
• Understand the formula for time constant (T=CR) in relation to the charge and discharge of a capacitor in a CR circuit.
RC Time constant
When we charge a capacitor with a voltage level, it's not surprising to find that it takes some time for the cap to adjust to that new level. Exactly how much time it takes to adjust is defined not
only by the size of the capacitor, but also by the resistance of the circuit.
The RC time constant is a measure that helps us figure out how long it will take a cap to charge to a certain voltage level. The RC constant will also have some handy uses in filtering that we'll see
later on.
Calculating the RC is straight forward -- multiply the capacitance C, in Farads, by the resistance R, in Ohms. Remember to take care of your powers of 10 -- a micro-Farad is 10^-6F, while a
pico-Farad is 10^-9F. In the circuits we'll look at, RC constants often come out to be in the hundreths of a second or millisecond range.
Here's an RC circuit getting charged with a 10V source:
Assuming the cap started with 0V, its rise of voltage over time will look like this:
The key points here are to note that after 1 RC, the cap will have reached about 2/3 of the V-in, and after 5 RC's, the cap will be very close to V-in.
If, after charging the cap in our RC circuit to 10V, we brought V+ down to ground, the cap would discharge. And here again, the discharge time would be determined by the RC time constant. The RC
curve for discharging looks like this:
The key points on the discharge curve are at 1 RC, where the voltage is about a third of the original, and at 5 RC, where the voltage across the cap is nearly 0.
The length of time taken for the capacitor to charge to 63∙2% of the difference between the supply voltage and the voltage across the capacitor is known as the time constant of the circuit and is
calculated by the formulae:
τ = RC
where τ is the time constant in seconds, R is the circuit resistance in Ohms and C is the capacitor value in Farads.
To determine the time constant of a circuit containing a 20μF capacitor and 5kΩ of resistance:
τ = R*C
= 20E-6*5E3
= 0∙1 seconds
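A small sketch of the resulting charge curve (illustrative Python, assuming an ideal 10 Volt source as in the earlier diagram):
# Voltage across a charging capacitor: Vc(t) = Vin * (1 - exp(-t / (R * C))).
from math import exp
R, C, Vin = 5e3, 20e-6, 10.0
tau = R * C                              # 0.1 seconds
for t in (tau, 2 * tau, 5 * tau):
    vc = Vin * (1 - exp(-t / tau))
    print(round(t, 2), round(vc, 2))     # about 63%, 86% and 99% of Vin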
• Recall the dangers of stored charges on large or high voltage capacitors.
• Recall that large value resistors can be used to provide leakage paths for these stored charges.
The Dangers of Charged Capacitors
High value capacitors of several tens of thousands of micro-Farads can hold their charge for several hours or even days. Even if these capacitors are only low voltage types eg 25 Volts, when fully
charged they are capable of delivering very high currents for a short period of time. Such high currents are capable of causing severe burns or generating enough heat to be a potential fire hazard.
Capacitors that are used in high voltage circuits can retain sufficient charge to cause serious, sometimes fatal, electric shocks. While well designed equipment will have bleeder resistors connected
across each capacitor, to drain away accumulated charge, it is possible that a bleeder resistor has gone open circuit and will not discharge the capacitor it is connected across.
Never assume that turning off a piece of electronic equipment will render that equipment safe. Modern capacitors can maintain their charge for long periods of time and represent a shock hazard even
when the equipment is turned off.
• Understand and apply the formulae for calculating the combined values of capacitors in series and in parallel and in series-parallel combinations.
Capacitors in series
When capacitors are connected in series, the total capacitance is less than any one of the series capacitors’ individual capacitances. If two or more capacitors are connected in series, the overall
effect is that of a single (equivalent) capacitor having the sum total of the plate spacings of the individual capacitors. As we’ve just seen, an increase in plate spacing, with all other factors
unchanged, results in decreased capacitance.
Thus, the total capacitance is less than any one of the individual capacitors’ capacitances. The formula for calculating the series total capacitance is the same form as for calculating parallel resistances: 1/C_total = 1/C_1 + 1/C_2 + ... + 1/C_n.
Capacitors in parallel
When capacitors are connected in parallel, the total capacitance is the sum of the individual capacitors’ capacitances. If two or more capacitors are connected in parallel, the overall effect is that
of a single equivalent capacitor having the sum total of the plate areas of the individual capacitors. As we’ve just seen, an increase in plate area, with all other factors unchanged, results in
increased capacitance.
Thus, the total capacitance is more than any one of the individual capacitors’ capacitances. The formula for calculating the parallel total capacitance is the same form as for calculating series resistances: C_total = C_1 + C_2 + ... + C_n.
As you will no doubt notice, this is exactly the opposite of the phenomenon exhibited by resistors. With resistors, series connections result in additive values while parallel connections result in
diminished values. With capacitors, its the reverse: parallel connections result in additive values while series connections result in diminished values. | {"url":"https://vk6gmd.com.au/index.php/technical-basics-2-capacitance/","timestamp":"2024-11-07T19:37:04Z","content_type":"text/html","content_length":"157088","record_id":"<urn:uuid:82c24deb-ed3b-4f8e-88b4-3805119ccc5d>","cc-path":"CC-MAIN-2024-46/segments/1730477028009.81/warc/CC-MAIN-20241107181317-20241107211317-00356.warc.gz"} |
Mensuration is one of the branches of mathematics. The word means measurement. Measurement is done in many situations in our daily life.
For example,
the length of cloth we need for stitching, the area of a wall that is being painted, the perimeter of a circular garden to be fenced, or the quantity of water needed to fill a tank. For these kinds of
activities, we take measurements for our further needs.
Here, we are going to cover three areas.
Apart from the examples and practice questions in the above three areas, we also provide calculators in this topic, which students can use to check the answers they have found for their questions.
You can use any of the given calculators to get an answer for your question in seconds.
For example, Archimedes, in his development of integration and calculus, tried to find a value for π by using circumscribed and inscribed polygons, eventually using 96-sided polygons inside and outside a
circle to generate a value for π of between 3 1/7 (approximately 3.1429) and 3 10/71 (approximately 3.1408). This range of values is extremely accurate, as the actual value is 3.1416. This is just one
example of his inventions.
In this topic we are going to study the perimeter, area and volume of different shapes like the cylinder, cone, sphere, hemisphere etc. These shapes are called geometric shapes.
Area of circle = πr^2
Circumference of circle = 2πr
Example problems of area of circle
Example problems of circumference of circle
Semi Circle
Area of semicircle = πr^2/2
Circumference (curved edge) of semicircle = πr
Example problems on semi circle
Area of quadrant = πr^2/4
Equilateral Triangle
Area of equilateral triangle = (√3/4)a^2
Perimeter of equilateral triangle = 3a
Example problems on equilateral triangle
Scalene Triangle
Area of scalene triangle = √[s(s - a)(s - b)(s - c)]
Perimeter of scalene triangle = a + b + c
Example problems on scalene triangle
Right Triangle
Area of right triangle = (b x h)/2
Area of parallelogram = b x h
Perimeter of parallelogram = 2(a + b)
Example problems of perimeter of parallelogram
Area of quadrilateral = (1/2) x d x (h[1 ]+ h[2])
Perimeter of quadrilateral = a + b + c + d
Area of rectangle = l x w
Perimeter of rectangle = 2(l + w)
Examples of perimeter of rectangle
Area of square = a x a
Perimeter of square = 4a
Examples problems on area of square
Examples problems on perimeter of square
Area of rhombus = (d[1] x d[2])/2
Perimeter of rhombus = 4a
Examples problems on area of rhombus
Examples problems on perimeter of rhombus
Area of trapezoid = h(a + b)/2
Perimeter of trapezoid = a + b + c + d
Examples problems on area of trapezoid
Length of arc (l) = (θ/360) ⋅ 2πr
When we know the radius "r" of the circle and central angle "θ" of the sector :
Area of the sector = (θ/360°) ⋅ πr^2
When we know the radius "r" of the circle and arc length "l":
Area of the sector = (l ⋅ r)/2
Worksheet for length of arc
Curved surface area of cylinder = 2πrh
Total surface area of cylinder = 2πr(h + r)
Volume of cylinder = πr^2h
Curved surface area of cone = πrl
Total surface area of cone = πr(l + r)
Volume of cone = πr^2h/3
Surface area of sphere = 4πr^2
Volume of sphere = 4πr^3/3
Curved surface area of hemisphere = 2πr^2
Total surface area of hemisphere = 3πr^2
Volume of hemisphere = 2πr^3/3
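To illustrate a few of these formulas, here is a short Python sketch (sample dimensions chosen only for demonstration):
# A handful of the mensuration formulas above, applied to sample dimensions.
from math import pi, sqrt
r, h, a = 3.0, 5.0, 4.0
print(pi * r**2)                 # area of a circle of radius r
print((sqrt(3) / 4) * a**2)      # area of an equilateral triangle of side a
print(2 * pi * r * (h + r))      # total surface area of a cylinder
print(pi * r**2 * h / 3)         # volume of a cone
print((4 / 3) * pi * r**3)       # volume of a sphere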
Mensuration Calculators
Heron's Triangle Area Calculator
Regular Polygon Area Calculator
Circle Sector Area Calculator
How to Use StandardScaler() to Standardize the Data
Ensuring consistency in the numerical input data is crucial to enhancing the performance of machine learning algorithms. To achieve this uniformity, it is necessary to adjust the data to a
standardized range.
Standardization and Normalization are both widely used techniques for adjusting data before feeding it into machine learning models.
In this article, you will learn how to utilize the StandardScaler class to scale the input data.
What is Standardization?
Before diving into the fundamentals of the StandardScaler class, you need to understand the standardization of the data.
Standardization is a data preparation method that involves adjusting the input (features) by first centering them (subtracting the mean from each data point) and then dividing them by the standard
deviation, resulting in the data having a mean of 0 and a standard deviation of 1.
The formula for standardization can be written like the following:
• standardized_val = ( input_value - mean ) / standard_deviation
Assume you have a mean value of 10.4 and a standard deviation value of 4. To standardize the value of 15.9, put the given values into the equation as follows:
• standardized_val = ( 15.9 - 10.4 ) / 4
• standardized_val = ( 5.5 ) / 4
• standardized_val = 1.375
The StandardScaler stands out as a widely used tool for implementing data standardization.
What is StandardScaler?
The StandardScaler class provided by Scikit Learn applies the standardization on the input (features) variable, making sure they have a mean of approximately 0 and a standard deviation of
approximately 1.
It adjusts the data to have a standardized distribution, making it suitable for modeling and ensuring that no single feature disproportionately influences the algorithm due to differences in scale.
Why Bother Using it?
Well, so far you've already understood the idea of using StandardScaler in machine learning but just to highlight, here are the primary reasons why you should use StandardScaler:
• For the betterment of the performance of the machine learning models
• Maintains the consistency of data points
• Useful when working with machine learning algorithms that can be negatively influenced by differences in the scale of the features of the data.
How to Use StandardScaler?
First, import the StandardScaler class from the sklearn.preprocessing module. After that, create an instance of the StandardScaler class with StandardScaler(). Finally, call the instance's
fit_transform method on the input data.
# Imported required libs
import numpy as np
from sklearn.preprocessing import StandardScaler
# Creating a 2D array
arr = np.asarray([[12, 0.007],
[45, 1.5],
[75, 2.005],
[7, 0.8],
[15, 0.045]])
print("Original Array: \n", arr)
# Instance of StandardScaler class
scaler = StandardScaler()
# Fitting and then transforming the input data
arr_scaled = scaler.fit_transform(arr)
print("Scaled Array: \n", arr_scaled)
An instance of the StandardScaler class is created and stored in the variable scaler. This instance will be used to standardize the data.
The fit_transform method of the StandardScaler object (scaler) is called with the original data arr as the input.
The fit_transform method computes the mean and standard deviation of each feature (column) in the input data arr and then applies the standardization to every value in those columns.
Here's the original array and the standardized version of the original array.
Original Array:
[[1.200e+01 7.000e-03]
[4.500e+01 1.500e+00]
[7.500e+01 2.005e+00]
[7.000e+00 8.000e-01]
[1.500e+01 4.500e-02]]
Scaled Array:
[[-0.72905466 -1.09507083]
[ 0.55066894 0.79634605]
[ 1.71405403 1.43610862]
[-0.92295217 -0.09045356]
[-0.61271615 -1.04693028]]
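As a quick sanity check (not part of the original snippet), you can confirm that each column of arr_scaled now has a mean of roughly 0 and a standard deviation of roughly 1:

# Each column (feature) of the scaled array should have ~0 mean and ~1 std
print(arr_scaled.mean(axis=0))   # approximately [0. 0.]
print(arr_scaled.std(axis=0))    # approximately [1. 1.]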
Does Standardization Affect the Accuracy of the Model?
In this section, you'll see how the model's performance is affected after applying standardization to features of the dataset.
Let's see how the model will perform on the raw dataset without standardizing the feature variables.
# Evaluate KNN on the breast cancer dataset
from sklearn import datasets
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from numpy import mean
# load dataset
df = datasets.load_breast_cancer()
X = df.data
y = df.target
# Instantiating the model
model = KNeighborsClassifier()
# Evaluating the model
scores = cross_val_score(model, X, y, scoring='accuracy', cv=10, n_jobs=-1)
# Model's average score
print(f'Accuracy: {mean(scores):.2f}')
The breast cancer dataset is loaded from the sklearn.datasets and then the features (df.data) and target (df.target) are stored inside the X and y variables.
The K-nearest neighbors classifier (KNN) model is instantiated using the KNeighborsClassifier class and stored inside the model variable.
The cross_val_score function is used to evaluate the KNN model's performance. It passes the model (KNeighborsClassifier()), features (X), target (y), and specifies that accuracy (scoring='accuracy')
should be used as the evaluation metric.
This evaluates the accuracy by splitting the dataset into 10 equal folds (cv=10), so the model is trained and tested 10 times, each time holding out a different fold. Here, n_jobs=-1 means using all the
available CPU cores for faster cross-validation.
Finally, the average of the accuracy scores (mean(scores)) is printed.
Accuracy: 0.93
Without standardizing the dataset's feature variables, the average accuracy score is 93%.
Using StandardScaler for Applying Standardization
# Evaluate KNN on the breast cancer dataset
from sklearn import datasets
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.preprocessing import StandardScaler
from numpy import mean
# loading dataset and configuring features and target variables
df = datasets.load_breast_cancer()
X = df.data
y = df.target
# Standardizing features
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
# Instantiating model
model = KNeighborsClassifier()
# Evaluating the model
scores = cross_val_score(model, X_scaled, y, scoring='accuracy', cv=10, n_jobs=-1)
# Model's average score
print(f'Accuracy: {mean(scores):.2f}')
The dataset's features undergo scaling with the StandardScaler(), and the resulting scaled dataset is stored in the X_scaled variable.
Next, this scaled dataset is used as input for the cross_val_score function to compute and subsequently display the accuracy.
Accuracy: 0.97
It is noticeable that the accuracy score has significantly increased to 97% when compared to the previous accuracy score of 93%.
The application of StandardScaler(), which standardized the data's features, has notably improved the model's performance.
StandardScaler is used to standardize the input data in a way that ensures that the data points have a balanced scale, which is crucial for machine learning algorithms, especially those that are
sensitive to differences in feature scales.
Standardization transforms the data such that the mean of each feature becomes zero (centered at zero), and the standard deviation becomes one.
Let's recall what you've learned:
• What actually is StandardScaler
• What is standardization and how it is applied to the data points
• Impact of StandardScaler on the model's performance
That's all for now
Keep Coding✌✌
Did you find this article valuable?
Support Team - GeekPython by becoming a sponsor. Any amount is appreciated! | {"url":"https://teamgeek.geekpython.in/how-to-use-standardscaler-to-standardize-the-data","timestamp":"2024-11-12T23:20:28Z","content_type":"text/html","content_length":"240727","record_id":"<urn:uuid:868ac668-16b3-47fb-9494-77de48152cb9>","cc-path":"CC-MAIN-2024-46/segments/1730477028290.49/warc/CC-MAIN-20241112212600-20241113002600-00780.warc.gz"} |
limit of noncontinuous function Archives - Math Research of Victor Porton
No root of -1? No limit of a discontinuous function? Just as roots were once generalized to negative numbers, I succeeded in generalizing limits to arbitrary discontinuous functions. The formula for the
limit of discontinuous function is based on algebraic general topology, my generalization of general topology in an algebraic way. The formula that defines limit of […] | {"url":"https://math.portonvictor.org/tag/limit-of-noncontinuous-function/","timestamp":"2024-11-10T22:05:13Z","content_type":"text/html","content_length":"91013","record_id":"<urn:uuid:cb74191a-0490-4b35-833b-fe9cdbad0f7a>","cc-path":"CC-MAIN-2024-46/segments/1730477028191.83/warc/CC-MAIN-20241110201420-20241110231420-00219.warc.gz"} |
A pedestrian introduction to Monad and Guix
Note: After discussing this topic with Clément, Mathieu and Pierre at FOSDEM 2020, I dropped them an email; three, indeed. Here is an edited version of it and a long-overdue reply to: «How much time
did you spend on that email? Maybe you should think about writing an article.» One year later, done. Thanks to them for helping me clear my mind. Thanks to Domagoj Stolfa for reading an
earlier version.
A monad is just a monoid in the category of endofunctors. How does that sound? A joke, right? The aim of this post is to provide an intuition for the concept; a first step to let you then jump into more
formal discussions, while trying to avoid the monad tutorial fallacy. It represents my digest of the concept; I am not convinced I understand it well enough to explain it well. Let's give it a try.
The aim of the monad concept is to be able to compose operations in a structured way. That's all. Let's work through two examples and end with a rough definition.
First example: list
First things first, launch guix repl to run the code snippets. We will extensively use pattern matching, so I recommend taking a look at the examples from the Guile documentation. That said, let's go.
Let's naively implement a nap function (normally called map) that applies a function f to each element of the list lst.
(use-modules (ice-9 match))
(define (nap f lst)
(match lst
((x xs ...)
(cons (f x) (nap f xs)))
(_ '())))
If two functions have type-signature compatibility, then one can easily compose them,
(define (g x) (string->number x))
(define (h x) (* x x))
(nap (compose h g) (list "1" "2" "3"))
;; <->
(nap h (nap g (list "1" "2" "3")))
The flow reads from right to left; though using infix notation we should read: the data (list "1" "2" "3") flows via g, then flows via h. In short, the two ingredients that make the composition work are:
• match which somehow unwraps (or unpacks),
• cons which somehow wraps back (or packs).
So far, so good. Now, consider the repeat function which takes a seed and generates a list,
(define (repeat seed) (list seed seed))
(repeat "hello") ;; => ("hello" "hello")
and we would like to compute the result ("hello" "hello" "hello" "hello" "world" "world" "world" "world") from the input (list "hello" "world"). In other words, we would like something able to compose
repeat. If repeat is composed with itself using compose as previously, then the result is (("hello" "hello") ("hello" "hello")); not what we want. We want a simple flat list back. Therefore concatenate from
SRFI-1 seems the thing we need.
(use-modules (srfi srfi-1))
(define (bind-list f lst)
(concatenate (nap f lst)))
(bind-list repeat (bind-list repeat (list "hello" "world")))
;; => ("hello" "hello" "hello" "hello" "world" "world" "world" "world")
Wow! First, please note the same pattern as when composing previously with nap. Second, bind-list somehow acts as an
unwrap-rewrapper. What could somehow act as a wrapper? Pick the simplest:
(define (return-list x)
(list x))
An attentive reader might notice nice properties relating bind-list and return-list; it is left as an exercise to verify^1 all these instances:
(bind-list repeat (return-list "one")) ;; (repeat "one")
(bind-list return-list (list "two")) ;; (list "two")
(bind-list repeat (bind-list repeat (list "three")))
;; eq?
(lambda (x) (bind-list repeat (repeat x)))
(list "three"))
Do you still follow? If yes, congrats! Because we have just defined the list monad. The terms bind and return are the common names for these unwrap-rewrapper and wrapper. Not convinced yet by the
concept? Let's jump into another example.
Second example: maybe
Let's consider a tiny calculator. The example is borrowed from Graham Hutton at Computerphile. We start with expressions and a recursive evaluator. Here we oversimplify the underlying datatype to stay
on track and thus avoid the introduction of unrelated Guile concepts. An expression is represented by a nested sequence of “nodes” (neg, add, div) where the “leaf” is represented by val with an
associated value.
(define expr '(add (val 1) (neg (val 1))))
(define bang `(div (val 1) ,expr))
(define (evaluator e)
(match e
(('val x) x)
(('neg x) (- 0 (evaluator x)))
(('add e1 e2) (+ (evaluator e1) (evaluator e2)))
(('div e1 e2) (/ (evaluator e1) (evaluator e2)))))
The evaluator walks the sequence and finally returns a value (number). Nothing fancy. The well-known issue is the division by zero. For instance, (evaluator bang) throws the error ("/" "Numerical
overflow" #f #f). The question is thus: how to prevent this error? One solution is to define a datatype and a function catching the potential error. Let's write it:
(define (safe-div n m)
(match m
((? = 0) 'nothing)
(_ `(just ,(/ n m)))))
This function now returns maybe the result: if the division makes sense, then it just returns it, else it returns nothing. So far, so good.
However, this function safe-div can no longer be composed with the usual arithmetic operations; for instance, (+ 1 (safe-div 4 2)) does not make sense at the type level. Somehow, we need a way to
unwrap the value. The evaluator has to be adjusted accordingly, i.e., it cannot return a number anymore; instead it has to return either nothing (something went wrong) or either (just number) (the
computation is just that).
(define (evaluator-bis e)
(match e
(('val x) `(just ,x))
(('neg x)
(match (evaluator-bis x)
('nothing 'nothing)
(('just y) `(just ,(- 0 y)))))
(('add e1 e2)
(match (evaluator-bis e1)
('nothing 'nothing)
(('just x1)
(match (evaluator-bis e2)
('nothing 'nothing)
(('just x2) `(just ,(+ x1 x2)))))))
(('div e1 e2)
(match (evaluator-bis e1)
('nothing 'nothing)
(('just x1)
(match (evaluator-bis e2)
('nothing 'nothing)
(('just x2) (safe-div x1 x2))))))))
Here we are: (evaluator-bis bang) now returns nothing. Wait! Are we seeing again some unwrapper-wrapper? An attentive reader might notice that, first, a pattern emerges, doesn't it? And second, the issue
is about composition, right? We already know what the next step is. Let's define this pattern and re-write the evaluator.
(define (bind-maybe f m)
(match m
('nothing 'nothing)
(('just x) (f x))))
(define (return-maybe x)
`(just ,x))
(define (safe-evaluator e)
(match e
(('val x) (return-maybe x))
(('neg x)
(bind-maybe (lambda (y) (return-maybe (- 0 y)))
(safe-evaluator x)))
(('add e1 e2)
(bind-maybe (lambda (x1)
(bind-maybe (lambda (x2)
(return-maybe (+ x1 x2)))
(safe-evaluator e2)))
(safe-evaluator e1)))
(('div e1 e2)
(bind-maybe (lambda (x1)
(bind-maybe (lambda (x2)
(safe-div x1 x2))
(safe-evaluator e2)))
(safe-evaluator e1)))))
This safe-evaluator is better because it only depends on how the datatype of an expression is represented. The internal representation of an impossible computation is captured by another datatype;
this other datatype could capture more information about the context than the computation itself. The main point is to be still able to compose, i.e., unwrap-rewrap transparently without worrying
about the structure of the richer datatype. It is tempting to add syntactic sugar to abstract the remaining pattern; that is left as an exercise. Again, an attentive reader might notice these identities:
(define (f x) (if (< x 0) 'nothing `(just ,x)))
(bind-maybe f (return-maybe 1)) ; (f 1) ;; => (just 1)
(bind-maybe f (return-maybe -1)) ; (f -1) ;; => nothing
(bind-maybe return-maybe '(just 2)) ; => (just 2)
(bind-maybe return-maybe '(just -2)) ; => (just -2)
(bind-maybe f (bind-maybe f '(just 3)))
;; eq? => (just 3)
(lambda (x) (bind-maybe f (f x)))
'(just 3))
(bind-maybe f (bind-maybe f '(just -3)))
;; eq? => nothing
(lambda (x) (bind-maybe f (f x)))
'(just -3))
Oh, are we speaking about the maybe monad? Yes! The monad concept is not so complicated after all, is it?
And so, what is monad?
A monad is a mathematical object composed by:
1. a type constructor,
2. a function return,
3. a function bind.
Considering a type a, the monadic type reads m a. For instance, the type a means number and m means list; or number and “ maybe ”. The signature is: return :: a \(\rightarrow\) m a; from a value of
type a is returned a monadic value of type m a. The function bind is less intuitive; the signature^2 is:
\begin{equation*} \texttt{bind}:: (\texttt{a} \rightarrow \texttt{m b}) \rightarrow \texttt{m a} \rightarrow \texttt{m b} \end{equation*}
which means bind takes first a function which takes a normal value and returns a monadic value (value with context) and second a monadic value, then this bind returns a monadic value (value with
context). Well, I hope that framing these words with the above example using the maybe monad helps, maybe.
Somehow, a monad encapsulates an object to another object owning more information. And the two associated functions provide the composition.
These two functions return and bind are not randomly picked. They also must respect the three laws to fully qualify as a monad:
1. return is a left-identity for bind,
2. return is a right-identity for bind,
3. bind is associative.
The test suite of Guix checks these three laws (for specific monad). They are strong invariant properties but attached to concrete types. In other words, the term monad refers to a generic
mathematical object defined by 3+3 items, then attached to specific types, i.e., once defined concrete constructor, return and bind, one can define specific monads as the list monad, maybe monad and
many more as the state monad.
To end, let challenge our parser: these three laws formally written read,
\begin{eqnarray} (\texttt{bind}~f~(\texttt{return}~x)) \leftrightarrow (f~x) &\quad& \forall x\in\texttt{a}.~f::\texttt{a} \rightarrow \texttt{m b} \\ (\texttt{bind}~\texttt{return}~x) \leftrightarrow x &\quad& \forall x\in\texttt{m a} \\ (\texttt{bind}~f~(\texttt{bind}~g~x)) \leftrightarrow (\texttt{bind}~(\lambda.y~(\texttt{bind}~f~(g~y)))~x) &\quad& \forall x\in\texttt{m a}.~f,g::\texttt{a} \rightarrow \texttt{m b} \end{eqnarray}
where bind is usually denoted >>= flipping the argument order and with an infix notation: \(x~\texttt{>>=}~f\) means \((\texttt{>>=}~x~f) = (\texttt{bind}~f~x)\), therefore the three laws now read^3,
\begin{eqnarray*} \texttt{return}~x~\texttt{>>=}~f &\leftrightarrow& f~x \\ x~\texttt{>>=}~\texttt{return} &\leftrightarrow& x \\ (x~\texttt{>>=}~g)~\texttt{>>=}~f &\leftrightarrow& x~\texttt{>>=}~\lambda.y~(g~y)~\texttt{>>=}~f \end{eqnarray*}
Don’t fear the monad!
Opinionated afterword
Monad provides a framework to compose effects (or context) under control (or purity). In Guix terminology, these effects are the various states (profiles, generations, etc.) kept under control by
using isolated environments. Functional package management^4 sees the package definition (recipe) as a pure mathematical function \(\texttt{a}~\rightarrow~\texttt{b}\) but that alone is useless for
practical purposes. Building a package could somehow be seen as a function \(\texttt{a}~\rightarrow~\texttt{m b}\) where m is a controlled context. However, composing such functions, e.g., building a
profile, is not straightforward, as we have shown with tiny examples. What Guix calls the store monad is the framework that makes functional package management useful and safe; at least an attempt.
[Figure: a small diagram plotting “Effects” (Unsafe → Safe) against “Useful” – Apt and Conda sit at useful-but-unsafe, Guix at safe, with arrows suggesting Guix moving up (↑) and Apt/Conda moving across (→ ?) towards the “Nirvana” of both useful and safe.]
Take the red pill
Reader, if you are still there, I recommend three extensive presentations:
• give a look at Category theory for Scientists by David Spivak who exposes bottom-up the material targeting broader scientific community;
• Category theory for Programmers by Bartosz Milewski who deeply introduces core concepts targeting programmers;
• a bit aged but still classic Categories for the Working Mathematician by Saunders Mac Lane who provides mathematical details for people with a strong background about abstraction.
Although I have never fully completed all these materials. :-)
More or less straightforward from their definitions.
Grab a pen and make it back to envelope if needed. | {"url":"https://www.tournier.info/posts/2021-02-03-monad.html","timestamp":"2024-11-02T02:08:24Z","content_type":"text/html","content_length":"32544","record_id":"<urn:uuid:c57e7761-9d95-4df9-8902-cc9221f6f213>","cc-path":"CC-MAIN-2024-46/segments/1730477027632.4/warc/CC-MAIN-20241102010035-20241102040035-00735.warc.gz"} |
Tolcap Stacks
In this video we take a look at why statistical tolerancing is invoked and examine the assumptions underlying statistical tolerance analysis. We show how Tolcap calculates capable tolerances for the
parts in a "stack" and introduce a simple equation that you can use to correctly stack your assemblies component tolerances.
If you would like to comment please join in the conversation on YouTube.
Tolerance Stacks
This presentation is about tolerance stacks and particularly statistical tolerancing. An earlier presentation explained why you need Tolcap for checking and setting the tolerances you put on drawings.
If Tolcap is good for individual tolerances, then hopefully you will see that Tolcap is, well, essential for tolerances in combination – in tolerance stacks.
Tolcap for Tolerance Stacks
Statistical tolerancing is almost always misapplied! We'll have a look shortly at why statistical tolerancing is invoked, and how it is usually applied with no consideration of any underlying
assumptions. In fact there is a general lack of awareness that there are any underlying assumptions.
Tolerance stacks are not easy – the theory is essentially maths, and I apologise in advance, but the explanation will involve some maths, but it really isn't very advanced, and I will take it slowly.
Maths is necessary to be able to explain:
• the assumptions of statistical tolerancing;
• why Tolcap is vital for valid statistical tolerancing;
• and to show a straightforward method of calculating a sound statistical stack tolerance using Tolcap.
The Tolerance Stack Problem
First let's look at what we mean by a tolerance stack.
We set tolerances for components, but it is often necessary to ensure the parts fit properly when they are assembled together:
The diagram represents an assembly -
three ‘blocks’ dimensioned d[1]+/-t[1], d[2]+/-t[2] and d[3]+/-t[3]
are fitted side by side into a ‘cradle’ dimensioned d[4]+/-t[4].
Will they fit?
OK this example looks unrealistically simple, and your real designs will be more complex, but often they do essentially reduce to this problem statement.
Clearly we need d[4] to be greater than d[1] + d[2] + d[3] to allow for the blocks being larger than nominal size, but if the difference is too large, the assembly may be too ‘slack‘ when the blocks
are smaller than nominal.
So what should we specify for d[4]?
Tolerance Stack Calculation
Always start with a worst case analysis:
The assembly will always fit if the parts are in tolerance and:
if (d[4] – t[4]) is greater than (d[1] + t[1] + d[2] + t[2] + d[3] + t[3]),
or re-arranging that: d[4] must be at least (d[1] + d[2] + d[3]) + (t[1] + t[2] + t[3] + t[4])
note we add t[4].
More often than not when the dimensions are at the opposite ends of the tolerance band, the assembly would be unacceptably slack -
when all the dimensions come out on nominal, the gap is the sum of the tolerances,
and when d[4] is at maximum tolerance and the blocks on minimum, the gap is twice the sum of the tolerances.
At this point, someone will usually say – “Let's use statistical tolerancing!”:
We don't have to add up the individual tolerances: statistical tolerancing lets you ‘root-sum-square’ them (that is square them all, add the squares up and take the square root).
So now d[4] equals (d[1] + d[2] + d[3] + √(t[1]^2 + t[2]^2 + t[3]^2 + t[4]^2) )
where √ is ‘the square root of’.
This gives a smaller answer for d[4], so less ‘slack’ - but is it justified?
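Just to put numbers on that (this snippet is our own illustration, not part of the Tolcap presentation, and the tolerances are made up), compare the worst-case sum with the root-sum-square for four equal tolerances of 0.1:

import math

tolerances = [0.1, 0.1, 0.1, 0.1]                   # assumed example values
worst_case = sum(tolerances)                        # t1 + t2 + t3 + t4
rss = math.sqrt(sum(t ** 2 for t in tolerances))    # root-sum-square
print(worst_case)   # 0.4
print(rss)          # 0.2 - half the worst-case figure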
Statistical Tolerancing Justification
We know that ‘root-sum-squaring’ numbers gives a smaller answer than just adding them up. Pythagoras knew that – we can see it is a shorter distance along the hypotenuse of a right angled triangle
than going round the other two sides, but does it mean anything in this situation?
The basis of statistical tolerancing is - as I said - a mathematical theorem:
Suppose we were to make up a variable - let's call it ‘stack’:
- constructed by adding or subtracting a number of component variables that are all independent (that means the value of one variable is not in any way related to any of the others);
- and further suppose that the probability of finding a particular value obeys a ‘normal’ or ‘Gaussian’ distribution;
- and variable i has a mean Xbar[i] and standard deviation σ[i].
Now it is a property of the normal distribution that if you do add or subtract variables that are independent & normally distributed (as we want to do to make up this variable ‘stack’), then the
result follows a normal distribution.
Further, this variable ‘stack’ will have:
• a mean which is the sum of the component means
• and a standard deviation which is the root sum square of the component standard deviations.
The means have to be added algebraically, that is added or subtracted as appropriate – but the sigma-squareds of course all add, as the quick numerical check below illustrates.
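That property is easy to check numerically. The following sketch is our own illustration (the sigma values are made up): it simulates four independent, normally distributed dimensions, combines them as in the stack, and compares the observed standard deviation with the root-sum-square prediction:

import numpy as np

rng = np.random.default_rng(0)
sigmas = [0.03, 0.02, 0.05, 0.04]                      # assumed component sigmas
parts = [rng.normal(0.0, s, 100_000) for s in sigmas]  # independent normal samples

stack = parts[3] - (parts[0] + parts[1] + parts[2])    # e.g. d4 - (d1 + d2 + d3)
print(stack.std())                                     # observed sigma of the stack
print(np.sqrt(sum(s ** 2 for s in sigmas)))            # root-sum-square prediction, about 0.0735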
Statistical Tolerancing Justification
So that is the theory. If we could travel forward in time:
• we could measure the dimensions of the components in the stack;
• assure ourselves they are indeed independent;
• check that they are normally distributed;
• and measure Xbar and σ for each one.
Then we could properly calculate the stack tolerance
- and if our tolerances happen to be the same multiple of the sigma's,
- and if the means all come out at the nominal dimension,
- then ‘root-sum-squaring’ the tolerances will work!
But how do we know that?
And if we don't know – and I ask again how we could know - there is no justification for ‘root-sum-squaring’!
But we have Tolcap!...
** If your head hurts, this might be a good point to pause this presentation and walk round the office!
Statistical Tolerancing with Tolcap
OK, if you're back now, let's continue to use the same example to explore and try to understand the traditional approach to statistical tolerancing and then we'll take a look at a method for
calculating stack tolerances using Tolcap.
We will compare and explore how stack tolerances are calculated as a function of the component tolerances:
t[stack] is a function of the individual tolerances.
This just means there is some equation, such that t[stack] is the square root of the sum of all the individual tolerances squared - which just means square each of the tolerances, add them all
together and then take the square root.
The algorithms or equations will be given and explained in the general case, but it is helpful to compare approaches. Purely to do that, we will take the specific (if implausible) case where all the
component tolerances happen to come out equal: so t[1] = t[2] = t[3] = t[4] = t. This is done just to make the comparison computations easy. Let's call the stack tolerance in that case t*[stack].
Some Maths
The following slides will use some maths, so let's get some of the manipulations clear and out of the way:
As explained above, Σd[i] means add up all the values (and don't forget they may be plus or minus). So in our example, to calculate the nominal ‘gap’: Σd[i] = – d[1] – d[2] – d[3] + d[4]
Recall that √Σσ[i]^2 means:
• square all the sigmas,
• add them all up (no minuses unfortunately!),
• and take the square root of the sum.
Note that if we have a constant inside the expression such as ‘c’ in √Σ(cσ[i])^2, that is equal to √Σc^2σ[i]^2 and equals c√Σσ[i]^2, you can put the c squared outside the sigmas, or you can take the
c entirely outside of the brackets:
√Σ (cσ[i])^2 = √Σ c^2σ[i]^2 = c√Σ σ[i]^2.
That piece of maths enables us to say for example that:
If t[i] = 6σ[i],
then √Σσ[i]^2 = (√Σt[i]^2)/6
or even √ Σ(4.5σ[i])^2 = (√Σt[i]^2)x 4.5/6
Finally, for the purpose of the comparative example, note that for four equal tolerances:
Σt[i] is 4t but √Σt[i]^2 is 2t
Traditional Statistical Tolerancing
When statistical tolerancing was conceived, a tolerance of just 3σ was considered entirely adequate!
Assuming component tolerances were set at 3σ, a three sigma stack tolerance was calculated as √Σt[i]^2 i.e. root-sum-square the t[i]'s squared.
This procedure was found to give optimistic results, and in 1962 Arthur Bender Jr published a paper which proposed adding a 50% margin to the stack tolerance, thus
t[stack] = 1.5 times the root-sum-square of the tolerances,
and ‘Benderizing’ is still a mainline traditional approach.
Tolcap predicts Cpk rather than σ, but we can readily make use of Tolcap to analyse the traditional method and then look at how it can be developed and improved.
A Six Sigma Stack Tolerance
While traditional statistical tolerancing works with three sigma tolerances, let's start from the rather more up to date Design for Six Sigma.
DFSS says to set the tolerance to six sigma, allowing 1.5σ for the ‘process shift’, i.e. to allow for the possibility that the other assumption of the normal-distribution maths fails – that the mean of
the parts may not match the nominal dimension.
Does this mean that the process shift for every manufacturing process really is 1.5σ ?
No! There is no law of nature that would cause that. It is actually derived from the cunning of the manufacturing people – the shift should be less than 1.5σ, but if necessary a shift any greater
than that could be detected in production (using a four-sample Xbar-R control chart). A control chart cannot be required too often, so this implies that 1.5σ is generally sufficient.
A Six Sigma Stack Tolerance
The empirical data in Tolcap reflects the reality of the various manufacturing processes and confirms that 1.5σ is sufficient - but not always necessary. The process shift in Tolcap may be as much as
1.5σ, but it will be smaller if appropriate to the process.
Extracting the process shift from Tolcap is no simple matter - it varies across the maps, and the effect of the wizards depends on which issues are being compensated – allowing for a different
material will most probably have a different effect from compensating for difficult geometry. So let's use Tolcap taking the Six Sigma approach, that is that 1.5σ gives a reasonable conservatively
large process shift.
That is for now - later on we can do a sensitivity analysis to show what happens when the process shift is less than 1.5σ.
Traditional Statistical Tolerancing
To analyse tolerancing algorithms we will get our tolerances from Tolcap.
We will use Tolcap in the mode to find the tolerance we need to achieve a target process capability.
We want sigma values for these tolerances. Let's start as I said with the ‘Six Sigma’ approach and tolerances:
• Open Tolcap
• Select a map
• And select Cpk rather than Tolerance
• Enter the nominal dimension
• Set Target Cpk to 1.5
• Apply the wizards and find what tolerance Tolcap gives
• Repeat that for each tolerance in the stack
Now we can analyse the traditional approach!
Traditional Statistical Tolerancing
Assume Tolcap has given us ‘Six Sigma’ tolerances:
Each t[i] = 6σ[i], based on:
4.5σ[i] to give Cpk = 1.5 plus 1.5σ process shift.
So if we did want three sigma tolerances for the traditional approach, we could halve the six sigma tolerances and get 3σ[i] = t[i]/2.
Our (three sigma!) stack tolerance is then
t[stack] = √Σ(t[i]/2)^2 (the root-sum-square of half the tolerances)
or 0.5 times the root-sum-square of the tolerances.
The ‘Benderized’ tolerance would be 50% larger, i.e. 0.75 times the root-sum-square of the tolerances.
In the specific comparison case where all the tolerances are equal, remembering that
√Σt[i]^2 = 2t
t*[stack] = √Σ(t[i]/2) ^2 which comes to t
and the Benderized tolerance would be 1.5t.
But do remember that t is a six sigma component tolerance ... and don't we want a six sigma stack tolerance to go with that?
A Six Sigma Stack Tolerance
How could we work out a six sigma stack tolerance?
Maybe we just root-sum-square the component tolerances?
That works for components.
And then t[stack] would be the root-sum-square of the tolerances, and for equal tolerances, t*[stack] would come out to 2t.
Or maybe we still need to ‘Benderize’ the tolerance?
Do we need the full 50% extra?
t*[stack] would then come out at 3t ....
Or is the traditional absolute correction for three sigma tolerances enough?
That would be an extra 25%, so t*[stack] would be 2.5t.
To find out, we're going to look at the process shift allowance more closely. This is maybe again a good point to pause and clear your head. Then I'll tell you how we do that.
A Six Sigma Stack Tolerance
OK let's look at the process shift allowance more closely.
The process shift recognises that the mean dimension of the parts is not necessarily equal to the nominal dimension on the drawing. Thinking real process shifts:
• for some processes, such as turning, the shift will depend on how well the setter has set up the batch;
• for processes such as moulding, the process shift will to a large extent by drift over time as the tool wears.
So the process shift thus includes at least an element which is not variable from part to part but fixed from batch to batch, or drifts very slowly over time. So while it makes sense to root sum
square the ‘random’ part to part element of the tolerances (provided they are independent and normal), it may be prudent to combine the process shifts worst case – and simply add them up rather than
root-sum-square them.
A Six Sigma Stack Tolerance
On this basis; our six sigma tolerance t[i] is 6σ[i].
Process shift is 1.5σ[i] which is a quarter of the tolerance.
Part-to-part variation is 4.5σ[i] which is three quarters of the tolerance.
Then t[stack] would be the sum of one quarter the sum of the tolerances plus three quarters of the root-sum-square of the tolerances.
Simplifying the equation to make computation easier using the bit of maths we did before:
t[stack] is sum of the tolerances times 1/4 + the root-sum-square of the tolerances times 3/4.
In the specific comparison case where all the tolerances are equal, t*[stack] comes out to be 2.5t, and it's tempting to say that this lines up with one of our Benderised projections, but remember
this is a special artificial example that happens to use four components.
But here at last is a method! Find 6σ tolerances from Tolcap, add one quarter the sum of the tolerances to three quarters the root-sum-square of the tolerances.
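As a sketch only (the function name, the shift-fraction parameter and the example tolerances are ours, not Tolcap's), the method is a few lines of Python: treat the process-shift fraction of each tolerance worst case and root-sum-square the rest:

import math

def stack_tolerance(tolerances, shift_fraction=0.25):
    # shift_fraction = 1.5/6 = 0.25 for 'six sigma' (Cpk = 1.5) tolerances
    shift = shift_fraction * sum(tolerances)                                      # worst-case part
    random_part = (1 - shift_fraction) * math.sqrt(sum(t ** 2 for t in tolerances))  # RSS part
    return shift + random_part

print(stack_tolerance([0.1, 0.1, 0.1, 0.1]))   # 0.25, i.e. 2.5t as in the comparison case above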
Sensitivity Analysis
Now for a sensitivity analysis the analysis above assumed a ‘Design for Six Sigma’ 1.5σ process shift in the tolerances obtained from Tolcap.
If we knew the process shifts were smaller we would modify our calculation.
For example, suppose the data in Tolcap reflected only 0.5σ process shifts for all the components in the stack. Then a Cpk = 1.5 tolerance will comprise 4.5σ for the short term variation plus only
0.5σ for the process shift: a five sigma tolerance where we expected a six sigma tolerance!
What is the effect of this?
Well the process shift at 0.5σ is now the tolerance divided by 10, and the part-to-part 4.5σ is nine tenths of the tolerance.
And now the t[stack] is the sum of a tenth of the [sum of the] tolerance
plus the root-sum-square of nine tenths of the tolerances,
which comes out to 0.1 of the sum of the tolerances plus 0.9 of the root-sum-square of the tolerances.
And then if we go to the specific comparison case where all the tolerances are equal, that comes out to
t*[stack] at 2.2 times the tolerance.
Sensitivity Analysis
So we didn't know we had a five sigma tolerance, but we can have some confidence that our computation assuming t[i] = 6σ[i] is conservative, and the margin in t*[stack] would be 12%.
If there were processes such that all the component process shifts were zero, then each t[i] would be 4.5σ, and we would want to simply root-sum-square the tolerances.
In this case we would find t*[stack] = 2t.
The margin in t*[stack] would be 20%.
A 5.5σ Stack Tolerance
The analysis used can readily be applied to Tolcap's default Cpk = 1.33 tolerances.
For the same process and dimension as in the ‘six sigma’ case, we still assume the process shifts are 1.5σ, and for Cpk = 1.33, our tolerance needs another 4σ for part-to-part variation:
So now this is ‘Design for 5.5σ’!
So t[i] is 5.5σ[i], 1.5σ[i] is 3/11 of t[i] and 4σ[i] is 8/11 of t[i]. So t[stack] comes to 3/11 of the sum of the tolerances plus 8/11 of the root-sum-square of the tolerances.
So we have a simple algorithm for tolerance stacks with a minimum Cpk of 1.5 or 1.33 to match our component Cpk.
Using Tolcap
We hope this presentation has explained:
• the assumptions of statistical tolerancing,
• why Tolcap is vital for valid statistical tolerancing,
• and a straightforward method of calculating a sound statistical stack tolerance using Tolcap.
For ‘six sigma’ tolerances, use Tolcap to set component tolerances at Cpk = 1.5, and then:
t[stack] is one quarter the sum of the tolerances plus three quarters of the root-sum-square of the tolerances.
For Tolcap ‘default’ Cpk = 1.33 tolerances, use Tolcap to set component tolerances at Cpk = 1.33, and then t[stack] is 3/11 of the sum of the tolerances plus 8/11 of the root-sum-square of the tolerances.
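Using the hypothetical stack_tolerance sketch from earlier, the Cpk = 1.33 rule is the same calculation with a shift fraction of 3/11:

print(stack_tolerance([0.1, 0.1, 0.1, 0.1], shift_fraction=3/11))   # about 0.255 for the same four equal tolerances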
Other videos in this series Tolcap videos | {"url":"https://www.tolcap.com/resources/videos/tolerance-stacks","timestamp":"2024-11-06T10:35:43Z","content_type":"text/html","content_length":"48466","record_id":"<urn:uuid:0b45531c-e7f1-47df-87f2-5190fb304ece>","cc-path":"CC-MAIN-2024-46/segments/1730477027928.77/warc/CC-MAIN-20241106100950-20241106130950-00268.warc.gz"} |
How to get a head count at the parade
This is another in the series of “Best Practices” posts from my archives from the Herald News. This was also written by Jonathan Maslow, and offers some solid street reporting tips for counting the
crowd at any large public event.
News reporting often involves telling how many people took part in or attended an event. Sometimes the seating capacity is known, as in the case of a sports stadium. Sometimes an official estimates
the crowd size, sometimes the organizers estimate. Sometimes they don’t agree. Sometimes no one estimates.
It’s a good practice for the reporter to do his or her own estimation in all cases.
For inside events, a quick head count with a qualifying “about” is good. (“about 75 people attended the planning board session.”) The tried and true method of estimating crowd size is as follows: 1)
make a careful measurement of the area in which the event takes place. Simply put, that’s length times width equals square feet. In some cases, it may take a slightly more sophisticated algorithm
(the crowd covers the sidewalks on six blocks. Ask officials how long each block is, walk off the width of sidewalk. Multiply).
2) Decide whether it’s a dense crowd or a loose crowd. People packed into a subway car, for example, are a dense crowd and occupy 2 square feet per person. A loose crowd, such as a parade of
journalists on May Day, has one person per five square feet.
3) Divide the area by the crowd density, and you’ve got your estimated crowd. Example: A political rally takes place in the square outside City Hall. You pace it off and find it’s 150 feet long and
100 feet wide, or 15,000 square feet. You wade into the crowd to test the density. Some places toward the front are packed, but in the back it’s loose. You choose a density in between of one person
per three square feet. Divide 15,000 by 3 = 5,000 people.
To recap:
area: 150 x 100 = 15,000
estimate of density: 3 sq. feet per person
15,000 divided by 3 = 5,000 attendees
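If you'd rather let a computer do the arithmetic, the whole estimate is a couple of lines of Python (our own illustration, using the numbers from the example above):

def estimate_crowd(length_ft, width_ft, sq_ft_per_person):
    # area divided by the space each person occupies
    return (length_ft * width_ft) / sq_ft_per_person

print(estimate_crowd(150, 100, 3))   # 5000.0 people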
If AP is covering an event, check your estimate against theirs, which is based on aerial/satellite photography and should be accurate. If yours and theirs are at great odds, dig in your heels and
insist the wire services are an ass.
Note: [Then] Passaic County Sherriff’s communications man Bill Maer says the county uses a slight variation:
1) Count how many people are actually in a 10 x 10 foot area (that’s 100 square feet).
2) Measure on foot or get a good estimate of the size of the entire area.
3) divide the total area by 100 (the number of 10 by 10 squares)
4) Multiply the result by how many people are in the sample 10 x 10 foot square.
Personally, I think this adds an unnecessary step, but it does make the reckoning of density a bit more accurate and hence, perhaps, the crowd estimate itself.
• Jonathan Maslow, Herald News, Nov. 2003 | {"url":"https://www.tommeagher.com/blog/2012/07/crowd-estimates.html","timestamp":"2024-11-06T06:14:00Z","content_type":"text/html","content_length":"11289","record_id":"<urn:uuid:55f34377-b2eb-44c6-8778-0b1207b8a769>","cc-path":"CC-MAIN-2024-46/segments/1730477027909.44/warc/CC-MAIN-20241106034659-20241106064659-00507.warc.gz"} |